WorldWideScience

Sample records for wavelet-based texture classification

  1. Support Vector Machine and Parametric Wavelet-Based Texture Classification of Stem Cell Images

    National Research Council Canada - National Science Library

    Jeffreys, Christopher

    2004-01-01

    .... Since colony texture is a major discriminating feature in determining quality, we introduce a non-invasive, semi-automated texture-based stem cell colony classification methodology to aid researchers...

  2. Mimicking human texture classification

    NARCIS (Netherlands)

    Rogowitz, B.E.; van Rikxoort, Eva M.; van den Broek, Egon; Pappas, T.N.; Schouten, Theo E.; Daly, S.J.

    2005-01-01

    In an attempt to mimic human (color) texture classification with a clustering algorithm, three lines of research were pursued, using a test set of 180 texture images (both color and gray-scale equivalents) drawn from the OuTex and VisTex databases. First, a k-means algorithm was

  3. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    Science.gov (United States)

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities [Formula: see text]. As large amounts of data are obtained and it is becoming unrealistic to process a massive number of images manually, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-acquisition conditions are variable, so many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment the BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of the texture pattern classification into a graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method combining graph-cuts and texture pattern classification performs well without manual inputs by a user.
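
    As a rough illustration of the kind of pipeline this abstract describes (wavelet texture features fed to an SVM), the following minimal Python sketch computes sub-band energy features with PyWavelets and trains a support vector machine on toy patches; it is not the authors' implementation, and the wavelet, decomposition level, and data are placeholder choices.

```python
# Hypothetical sketch: wavelet-energy texture features per image patch,
# classified with an SVM (not the paper's implementation).
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_texture_features(patch, wavelet="db2", level=2):
    """Mean absolute coefficient of each wavelet sub-band of a 2-D patch."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]            # approximation band
    for detail in coeffs[1:]:                       # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(d)) for d in detail)
    return np.array(feats)

rng = np.random.default_rng(0)
# Toy data: class 0 = low-amplitude patches, class 1 = high-amplitude patches
X = [wavelet_texture_features(rng.normal(0, s, (32, 32))) for s in [0.1] * 50 + [1.0] * 50]
y = [0] * 50 + [1] * 50
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```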

  4. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    Directory of Open Access Journals (Sweden)

    Vijay G. S.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme in terms of SNR and RMSE was identified. In the second part, vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
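
    The denoising-and-evaluation step can be sketched as follows: soft-threshold wavelet denoising of a synthetic noisy signal with PyWavelets, scored by SNR and RMSE. The wavelet, threshold rule, and signal below are illustrative stand-ins, not the seven schemes compared in the paper.

```python
# Illustrative sketch (not the paper's exact schemes): soft-threshold wavelet
# denoising of a synthetic noisy signal, evaluated with SNR and RMSE.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2048)
clean = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)    # decaying tone, stand-in for a defect signature
noisy = clean + rng.normal(0, 0.3, t.size)

coeffs = pywt.wavedec(noisy, "db8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))           # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, "db8")[: noisy.size]

rmse = np.sqrt(np.mean((denoised - clean) ** 2))
snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
print(f"RMSE={rmse:.4f}  SNR={snr:.2f} dB")
```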

  5. Pigmented skin lesion detection using random forest and wavelet-based texture

    Science.gov (United States)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been increasing rapidly over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward a reduction in mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, in order to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objective of this paper is to propose a hybrid method using a random forest and the Gabor wavelet transform to accurately differentiate lesion regions from non-lesion regions in dermoscopy photographs and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees was used for classification. Gabor wavelets are a mathematical model of the visual cortical cells of the mammalian brain, and an image can be decomposed into multiple scales and multiple orientations by using them. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on the Gabor wavelet transform are computed from the Gabor-filtered image. Experimental results indicate the following: (1) the proposed algorithm based on random forest outperformed the state of the art in pigmented skin lesion detection, and (2) the inclusion of Gabor-wavelet-based texture features improved segmentation accuracy significantly.
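
    A hedged sketch of the general idea (per-pixel Gabor-magnitude features classified by a random forest) is given below; the frequencies, orientations, and toy "lesion" image are assumptions for illustration, not the paper's settings.

```python
# Hedged sketch: per-pixel Gabor-magnitude features fed to a random forest,
# loosely following the paper's idea (parameter values are illustrative).
import numpy as np
from skimage.filters import gabor
from sklearn.ensemble import RandomForestClassifier

def gabor_feature_stack(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            feats.append(np.hypot(real, imag))      # magnitude response
    return np.stack(feats, axis=-1)                 # H x W x n_features

rng = np.random.default_rng(2)
img = rng.random((64, 64))
img[20:45, 20:45] += 0.8                            # toy "lesion" region
mask = np.zeros_like(img, dtype=int)
mask[20:45, 20:45] = 1

F = gabor_feature_stack(img)
X = F.reshape(-1, F.shape[-1])
y = mask.ravel()
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("training accuracy:", rf.score(X, y))
```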

  6. Seismic texture classification. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Vinther, R.

    1997-12-31

    The seismic texture classification method is a seismic attribute that can both recognize general reflectivity styles and locate variations from them. The seismic texture classification performs a statistical analysis of the seismic section (or volume) aimed at describing the reflectivity. Based on a set of reference reflectivities, the seismic textures are classified. The result of the seismic texture classification is a display of seismic texture categories showing both the styles of reflectivity from the reference set and interpolations and extrapolations from these. The display is interpreted as statistical variations in the seismic data. The seismic texture classification is applied to seismic sections and volumes from the Danish North Sea representing both horizontal stratifications and salt diapirs. The attribute succeeded in recognizing both the general structure of successions and variations from these. Also, the seismic texture classification is not only able to display variations in prospective areas (1-7 sec. TWT) but can also be applied to deep seismic sections. The seismic texture classification is tested on a deep reflection seismic section (13-18 sec. TWT) from the Baltic Sea. Applied to this section, the seismic texture classification succeeded in locating the Moho, which could not be located using conventional interpretation tools. The seismic texture classification is a seismic attribute which can display general reflectivity styles and deviations from these and enhance variations not found by conventional interpretation tools. (LN)

  7. Texture classification using autoregressive filtering

    Science.gov (United States)

    Lawton, W. M.; Lee, M.

    1984-01-01

    A general theory of image texture models is proposed and its applicability to the problem of scene segmentation using texture classification is discussed. An algorithm, based on half-plane autoregressive filtering, which optimally utilizes second-order statistics to discriminate between texture classes represented by arbitrary wide-sense stationary random fields, is described. Empirical results of applying this algorithm to natural and synthesized scenes are presented and future research is outlined.

  8. Textural features for image classification

    Science.gov (United States)

    Haralick, R. M.; Dinstein, I.; Shanmugam, K.

    1973-01-01

    Description of some easily computable textural features based on gray-tone spatial dependences, and illustration of their application in category-identification tasks of three different kinds of image data - namely, photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used - one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
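
    The gray-tone spatial-dependence (co-occurrence) features this record introduces can be illustrated with a small NumPy sketch; the quantization level, displacement, and the particular feature formulas shown are simplified stand-ins for the full Haralick feature set.

```python
# Minimal sketch of gray-tone spatial-dependence (co-occurrence) features,
# computed directly with NumPy (skimage.feature.graycomatrix offers the same idea).
import numpy as np

def glcm(image, dr=0, dc=1, levels=8):
    """Normalized gray-level co-occurrence matrix for one displacement (dr, dc)."""
    q = np.clip((image * levels).astype(int), 0, levels - 1)   # quantize gray tones
    src = q[: q.shape[0] - dr, : q.shape[1] - dc]
    dst = q[dr:, dc:]
    P = np.zeros((levels, levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1)
    return P / P.sum()

def haralick_like_features(P):
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    energy = np.sum(P ** 2)
    return contrast, homogeneity, entropy, energy

rng = np.random.default_rng(3)
smooth = np.tile(np.linspace(0, 1, 64), (64, 1))     # slowly varying gray tones
rough = rng.random((64, 64))                         # uncorrelated gray tones
for name, img in [("smooth", smooth), ("rough", rough)]:
    print(name, [round(float(v), 3) for v in haralick_like_features(glcm(img))])
```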

  9. A Discrete Wavelet Based Feature Extraction and Hybrid Classification Technique for Microarray Data Analysis

    Directory of Open Access Journals (Sweden)

    Jaison Bennet

    2014-01-01

    Cancer classification by doctors and radiologists was traditionally based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which has stimulated progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM) classifiers. Feature selection prior to classification plays a vital role, and a feature selection technique which combines the discrete wavelet transform (DWT) and a moving window technique (MWT) is used. The performance of the proposed method is compared with the conventional classifiers, namely support vector machine, nearest neighbor, and naive Bayes. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. The approach serves as an automated system for the classification of cancer that can assist doctors in real cases, and it further reduces the misclassification of cancers, which is critical in cancer detection.
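
    The overall flow (DWT-based feature reduction followed by a KNN/naive Bayes/SVM ensemble) might look roughly like the sketch below on synthetic data; the moving-window selection step and the real microarray sets are omitted, and all parameters are illustrative assumptions.

```python
# Hedged sketch of the general idea: DWT approximation coefficients as features
# for a simple majority vote of KNN, naive Bayes, and SVM (synthetic data only).
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, g = 120, 512                                    # samples x "genes"
X_raw = rng.normal(size=(n, g))
y = rng.integers(0, 2, n)
X_raw[y == 1, :40] += 1.0                          # class-informative block

def dwt_features(row, wavelet="db4", level=3):
    return pywt.wavedec(row, wavelet, level=level)[0]   # keep the approximation band

X = np.array([dwt_features(r) for r in X_raw])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

clfs = [KNeighborsClassifier(3), GaussianNB(), SVC()]
preds = np.array([c.fit(Xtr, ytr).predict(Xte) for c in clfs])
vote = (preds.mean(axis=0) >= 0.5).astype(int)     # majority vote of the three
print("ensemble accuracy:", (vote == yte).mean())
```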

  10. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    Science.gov (United States)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach in order to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
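
    One pass of the per-component denoising step (separable 2D DWT, coefficient thresholding, inverse reconstruction) can be sketched as follows; the paper applies this recursively to each EMP component on GPU, which is not reproduced here, and the wavelet and threshold are assumptions.

```python
# Simplified sketch of a single denoising pass on one component:
# 2-D DWT, hard thresholding of detail coefficients, inverse 2-D DWT.
import numpy as np
import pywt

rng = np.random.default_rng(5)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                           # toy spatial structure
noisy = clean + rng.normal(0, 0.2, clean.shape)

coeffs = pywt.wavedec2(noisy, "db2", level=2)
cA, details = coeffs[0], coeffs[1:]
thr = 3 * np.std(details[-1][-1])                   # threshold from the finest diagonal band
den_details = [tuple(pywt.threshold(d, thr, mode="hard") for d in lvl) for lvl in details]
denoised = pywt.waverec2([cA] + den_details, "db2")

rmse = lambda a: np.sqrt(np.mean((a - clean) ** 2))
print(f"RMSE noisy={rmse(noisy):.3f}  denoised={rmse(denoised):.3f}")
```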

  11. Cloud field classification based on textural features

    Science.gov (United States)

    Sengupta, Sailes Kumar

    1989-01-01

    An essential component in global climate research is accurate cloud cover and type determination. Of the two approaches to texture-based classification (statistical and structural), only the former is effective in the classification of natural scenes such as land, ocean, and atmosphere. In the statistical approach that was adopted, parameters characterizing the stochastic properties of the spatial distribution of grey levels in an image are estimated and then used as features for cloud classification. Two types of textural measures were used. One is based on the distribution of the grey level difference vector (GLDV), and the other on a set of textural features derived from the MaxMin cooccurrence matrix (MMCM). The GLDV method looks at the difference D of grey levels at pixels separated by a horizontal distance d and computes several statistics based on this distribution. These are then used as features in subsequent classification. The MaxMin textural features, on the other hand, are based on the MMCM, a matrix whose (I,J)th entry gives the relative frequency of occurrences of the grey level pair (I,J) that are consecutive and thresholded local extremes separated by a given pixel distance d. Textural measures are then computed based on this matrix in much the same manner as in texture computation using the grey level cooccurrence matrix. The database consists of 37 cloud field scenes from LANDSAT imagery using a near-IR visible channel. The classification algorithm used is the well-known Stepwise Discriminant Analysis. The overall accuracy was estimated by the percentage of correct classifications in each case. It turns out that both types of classifiers, at their best combination of features and at any given spatial resolution, give approximately the same classification accuracy. A neural network based classifier with a feed-forward architecture and a back-propagation training algorithm is used to increase the classification accuracy, using these two classes

  12. Fast Image Texture Classification Using Decision Trees

    Science.gov (United States)

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
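
    The two main ingredients, integral-image box sums as cheap features and a decision-tree classifier, are illustrated below; the flight implementation's integer arithmetic and k-means-derived tree construction are not reproduced, and the features and data are toy choices.

```python
# Hedged illustration: integral-image box sums as cheap texture features,
# classified with a decision tree (not the flight implementation).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) from the integral image."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0: total -= ii[r - 1, c + w - 1]
    if c > 0: total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0: total += ii[r - 1, c - 1]
    return total

rng = np.random.default_rng(6)

def sample(texture):                                # 16x16 patches of two toy textures
    img = rng.random((16, 16)) * (0.2 if texture == 0 else 1.0)
    ii = integral_image(img)
    # features: mean brightness of a few boxes at different scales/positions
    return [box_sum(ii, 0, 0, 16, 16) / 256,
            box_sum(ii, 0, 0, 8, 8) / 64,
            box_sum(ii, 8, 8, 8, 8) / 64]

X = [sample(t) for t in [0] * 60 + [1] * 60]
y = [0] * 60 + [1] * 60
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("training accuracy:", tree.score(X, y))
```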

  13. Adaptive Matrices for Color Texture Classification

    NARCIS (Netherlands)

    Bunte, Kerstin; Giotis, Ioannis; Petkov, Nicolai; Biehl, Michael; Real, P; DiazPernil, D; MolinaAbril, H; Berciano, A; Kropatsch, W

    2011-01-01

    In this paper we introduce an integrative approach towards color texture classification learned by a supervised framework. Our approach is based on the Generalized Learning Vector Quantization (GLVQ), extended by an adaptive distance measure which is defined in the Fourier domain and 2D Gabor

  14. Texture classification by texton: statistical versus binary.

    Directory of Open Access Journals (Sweden)

    Zhenhua Guo

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint), and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, there are two limitations when using these methods. First, a training stage is needed to build a texton library, so the recognition accuracy is highly dependent on the training samples; second, during feature extraction, a local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time consuming when the library size is big and the feature dimension is high. To address these two issues, three binary texton counterpart methods are proposed in this paper: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode the local feature into a binary representation directly. The experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons can obtain sound results with fast feature extraction, especially when the image size is not large and the image quality is not poor.

  15. A Novel Texture Classification Procedure by using Association Rules

    Directory of Open Access Journals (Sweden)

    L. Jaba Sheela

    2008-11-01

    Texture can be defined as a local statistical pattern of texture primitives in the observer's domain of interest. Texture classification aims to assign texture labels to unknown textures, according to training samples and classification rules. Association rules have been used in various applications during the past decades. Association rules capture both structural and statistical information, and automatically identify the structures that occur most frequently and the relationships that have significant discriminative power. Association rules can therefore be adapted to capture frequently occurring local structures in textures. This paper describes the use of association rules for the texture classification problem. The experimental studies performed show the effectiveness of the association rules. The overall success rate is about 98%.

  16. Combining fine texture and coarse color features for color texture classification

    Science.gov (United States)

    Wang, Junmin; Fan, Yangyu; Li, Ning

    2017-11-01

    Color texture classification plays an important role in computer vision applications because texture and color are two fundamental visual features. To classify the color texture via extracting discriminative color texture features in real time, we present an approach of combining the fine texture and coarse color features for color texture classification. First, the input image is transformed from RGB to HSV color space to separate texture and color information. Second, the scale-selective completed local binary count (CLBC) algorithm is introduced to extract the fine texture feature from the V component in HSV color space. Third, both H and S components are quantized at an optimal coarse level. Furthermore, the joint histogram of H and S components is calculated, which is considered as the coarse color feature. Finally, the fine texture and coarse color features are combined as the final descriptor and the nearest subspace classifier is used for classification. Experimental results on CUReT, KTH-TIPS, and New-BarkTex databases demonstrate that the proposed method achieves state-of-the-art classification performance. Moreover, the proposed method is fast enough for real-time applications.
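
    A loose sketch of the descriptor construction is shown below: an LBP histogram from the V channel stands in for the paper's scale-selective CLBC ("fine texture"), concatenated with a coarsely quantized H-S joint histogram ("coarse color"); the bin counts and LBP parameters are illustrative assumptions.

```python
# Loose sketch of a combined fine-texture / coarse-color descriptor
# (plain LBP replaces the paper's scale-selective CLBC).
import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern

def color_texture_descriptor(rgb, hs_bins=8, lbp_points=8, lbp_radius=1):
    hsv = rgb2hsv(rgb)
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # fine texture: uniform LBP histogram of the V component
    lbp = local_binary_pattern(v, lbp_points, lbp_radius, method="uniform")
    tex_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    # coarse color: joint histogram of quantized H and S components
    col_hist, _, _ = np.histogram2d(h.ravel(), s.ravel(), bins=hs_bins,
                                    range=[[0, 1], [0, 1]], density=True)
    return np.concatenate([tex_hist, col_hist.ravel()])

rng = np.random.default_rng(7)
img = rng.random((64, 64, 3))                      # toy color texture
print(color_texture_descriptor(img).shape)
```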

  17. Forest Classification Based on Forest texture in Northwest Yunnan Province

    Science.gov (United States)

    Wang, Jinliang; Gao, Yan; Wang, Xiaohua; Fu, Lei

    2014-03-01

    Forest texture is an intrinsic characteristic and an important visual feature of a forest ecological system. Full utilization of forest texture is a great help in increasing the accuracy of forest classification based on remotely sensed data. Taking Shangri-La as the study area, forest classification was carried out based on texture. The results show that: (1) in terms of texture abundance, texture boundaries, entropy, and visual interpretation, the combination of the gray-gradient co-occurrence matrix and the wavelet transform is much better than either method alone for forest texture information extraction; (2) during forest texture information extraction, the size of the most suitable texture window, determined by the semi-variogram method, depends on the forest type (3×3 for evergreen broadleaf forest, 5×5 for deciduous broadleaf forest, etc.); (3) when classifying forest based on forest texture information, the texture factor assembly differs among window sizes: Variance, Heterogeneity, and Correlation should be selected when the window is between 3×3 and 5×5; Mean, Correlation, and Entropy should be used when the window is in the range of 7×7 to 19×19; and Correlation, Second Moment, and Variance should be used when the window is larger than 21×21.

  18. Forest Classification Based on Forest texture in Northwest Yunnan Province

    International Nuclear Information System (INIS)

    Wang, Jinliang; Gao, Yan; Fu, Lei; Wang, Xiaohua

    2014-01-01

    Forest texture is an intrinsic characteristic and an important visual feature of a forest ecological system. Full utilization of forest texture is a great help in increasing the accuracy of forest classification based on remotely sensed data. Taking Shangri-La as the study area, forest classification was carried out based on texture. The results show that: (1) in terms of texture abundance, texture boundaries, entropy, and visual interpretation, the combination of the gray-gradient co-occurrence matrix and the wavelet transform is much better than either method alone for forest texture information extraction; (2) during forest texture information extraction, the size of the most suitable texture window, determined by the semi-variogram method, depends on the forest type (3×3 for evergreen broadleaf forest, 5×5 for deciduous broadleaf forest, etc.); (3) when classifying forest based on forest texture information, the texture factor assembly differs among window sizes: Variance, Heterogeneity, and Correlation should be selected when the window is between 3×3 and 5×5; Mean, Correlation, and Entropy should be used when the window is in the range of 7×7 to 19×19; and Correlation, Second Moment, and Variance should be used when the window is larger than 21×21.

  19. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Despite the fact that two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation-invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than the LBP, the latter's weaknesses may appear with the LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation-invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves an impressive classification accuracy compared to the CLBP and CLBC descriptors.
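
    The local ternary pattern idea that CLTP builds on can be sketched as follows: each neighbour is coded -1/0/+1 against the centre within a tolerance t, and the ternary code is split into "upper" and "lower" binary patterns. This is the plain LTP, not the completed (CLTP) modeling, and the tolerance and image are illustrative.

```python
# Minimal sketch of the local ternary pattern (LTP) that CLTP extends.
import numpy as np

def ltp_codes(img, t=0.05):
    # 8-neighbour offsets, clockwise
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1]
    upper = np.zeros_like(c, dtype=int)
    lower = np.zeros_like(c, dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        n = img[1 + dr: img.shape[0] - 1 + dr, 1 + dc: img.shape[1] - 1 + dc]
        upper += (n >= c + t).astype(int) << bit    # ternary +1 half
        lower += (n <= c - t).astype(int) << bit    # ternary -1 half
    return upper, lower

rng = np.random.default_rng(8)
img = rng.random((32, 32))
u, l = ltp_codes(img)
# simple LTP descriptor: concatenated histograms of the two binary halves
hist = np.concatenate([np.bincount(u.ravel(), minlength=256),
                       np.bincount(l.ravel(), minlength=256)])
print(hist.shape)
```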

  20. Non-Hodgkin lymphoma response evaluation with MRI texture classification

    Directory of Open Access Journals (Sweden)

    Heinonen Tomi T

    2009-06-01

    Background: To show the change in magnetic resonance imaging (MRI) texture appearance in non-Hodgkin lymphoma (NHL) during treatment, with response controlled by quantitative volume analysis. Methods: A total of 19 patients having NHL with an evaluable lymphoma lesion were scanned at three imaging timepoints with a 1.5T device during clinical treatment evaluation. Texture characteristics of the images were analyzed and classified with the MaZda application and statistical tests. Results: NHL tissue MRI texture imaged before treatment and under chemotherapy was classified within several subgroups, showing the best discrimination, with 96% correct classification, in non-linear discriminant analysis of T2-weighted images. Texture parameters of the MRI data were tested statistically to assess the separability of the parameters in evaluating chemotherapy response in lymphoma tissue. Conclusion: Texture characteristics of MRI data were classified successfully; this shows texture analysis to be a potential quantitative means of representing lymphoma tissue changes during chemotherapy response monitoring.

  1. Texture classification of vegetation cover in high altitude wetlands zone

    International Nuclear Information System (INIS)

    Wentao, Zou; Bingfang, Wu; Hongbo, Ju; Hua, Liu

    2014-01-01

    The aim of this study was to investigate the utility of datasets composed of texture measures and other features for the classification of vegetation cover, specifically wetlands. A QUEST decision tree classifier was applied to a SPOT-5 image sub-scene covering a typical wetlands area in the Three River Sources region in Qinghai province, China. The dataset used for the classification comprised: (1) spectral data and the components of a principal component analysis; (2) texture measures derived on a pixel basis; (3) a DEM and other ancillary data covering the research area. Image texture is an important characteristic of remote sensing images; it can represent spatial variations in spectral brightness in digital numbers. When the spectral information is not enough to separate the different land covers, texture information can be used to increase the classification accuracy. The texture measures used in this study were calculated from the GLCM (Gray Level Co-occurrence Matrix); eight frequently used measures were chosen for the classification procedure. The results showed that variance, mean, and entropy calculated by GLCM with a 9×9 window were effective in distinguishing different vegetation types in the wetlands zone. The overall accuracy of this method was 84.19% and the Kappa coefficient was 0.8261. The results indicate that the introduction of texture measures can improve the overall accuracy by 12.05% and the overall Kappa coefficient by 0.1407, compared with the result using only spectral and ancillary data.

  2. Heterogeneous patterns enhancing static and dynamic texture classification

    International Nuclear Information System (INIS)

    Silva, Núbia Rosa da; Martinez Bruno, Odemir

    2013-01-01

    Some mixtures, such as colloids like milk, blood, and gelatin, have a homogeneous appearance when viewed with the naked eye; however, observing them at the nanoscale makes it possible to understand the heterogeneity of their components. The same phenomenon can occur in pattern recognition, in which it is possible to see heterogeneous patterns in texture images. However, current methods of texture analysis cannot adequately describe such heterogeneous patterns. Common methods used by researchers analyse the image information in a global way, taking all its features in an integrated manner. Furthermore, multi-scale analysis verifies the patterns at different scales, but still preserves the homogeneous analysis. On the other hand, various methods use textons to represent the texture, breaking the texture down into its smallest unit. To tackle this problem, we propose a method to identify texture patterns, not as small as textons, at distinct scales, enhancing the separability among different types of texture. We find sub-patterns of texture according to the scale and then group similar patterns for a more refined analysis. Tests were performed on four static texture databases and one dynamic one. Results show that our method provides a better classification rate compared with conventional approaches for both static and dynamic textures.

  3. Texture operator for snow particle classification into snowflake and graupel

    Science.gov (United States)

    Nurzyńska, Karolina; Kubo, Mamoru; Muramoto, Ken-ichiro

    2012-11-01

    In order to improve the estimation of precipitation, the coefficients of the Z-R relation should be determined for each snow type. Therefore, it is necessary to identify the type of falling snow. Consequently, this research addresses the problem of snow particle classification into snowflake and graupel in an automatic manner (as these types are the most common in the study region). Having correctly classified precipitation events, it is believed that it will be possible to estimate the related parameters accurately. The automatic classification system presented here describes the images with texture operators. Some of them are well known from the literature: first-order features, the co-occurrence matrix, the grey-tone difference matrix, the run-length matrix, and the local binary pattern; a novel approach to designing simple local statistical operators is also introduced. In this work the following texture operators are defined: mean histogram, min-max histogram, and mean-variance histogram. Moreover, building a feature vector based on the structure created in many of the mentioned algorithms is also suggested. For classification, the k-nearest neighbour classifier was applied. The results showed that it is possible to achieve correct classification accuracy above 80% with most of the techniques. The best result, 86.06%, was achieved for an operator built from a structure obtained at an intermediate stage of the co-occurrence matrix calculation. Next, it was noticed that describing an image with two texture operators does not improve the classification results considerably. In the best case, the correct classification efficiency was 87.89% for a pair of texture operators created from the local binary pattern and a structure built at an intermediate stage of the grey-tone difference matrix calculation. This also suggests that the information gathered by each texture operator is redundant. Therefore, principal component analysis was applied in order to remove the unnecessary information and

  4. Adaptive Matrices and Filters for Color Texture Classification

    NARCIS (Netherlands)

    Giotis, Ioannis; Bunte, Kerstin; Petkov, Nicolai; Biehl, Michael

    In this paper we introduce an integrative approach towards color texture classification and recognition using a supervised learning framework. Our approach is based on Generalized Learning Vector Quantization (GLVQ), extended by an adaptive distance measure, which is defined in the Fourier domain,

  5. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  6. Edge detection and texture classification by cuttlefish.

    Science.gov (United States)

    Zylinski, Sarah; Osorio, Daniel; Shohet, Adam J

    2009-12-14

    Cephalopod mollusks including octopus and cuttlefish are adept at adaptive camouflage, varying their appearance to suit the surroundings. This behavior allows unique access into the vision of a non-human species because one can ask how these animals use spatial information to control their coloration pattern. There is particular interest in factors that affect the relative levels of expression of the Mottle and the Disruptive body patterns. Broadly speaking, the Mottle is displayed on continuous patterned surfaces whereas the Disruptive is used on discrete objects such as pebbles. Recent evidence from common cuttlefish, Sepia officinalis, suggests that multiple cues are relevant, including spatial scale, contrast, and depth. We analyze the body pattern responses of juvenile cuttlefish to a range of checkerboard stimuli. Our results suggest that the choice of camouflage pattern is consistent with a simple model of how cuttlefish classify visual textures, according to whether they are Uniform or patterned, and whether the pattern includes visual edges. In particular, cuttlefish appear to detect edges by sensing the relative spatial phases of two spatial frequency components (e.g., fundamental and the third harmonic Fourier component in a square wave). We discuss the relevance of these findings to vision and camouflage in aquatic environments.

  7. Classification of interstitial lung disease patterns with topological texture features

    Science.gov (United States)

    Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel

    2010-03-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative of the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest of both healthy and pathological (89) lung tissue was identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions, and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction. The best classification results were obtained by the MF features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for MF.euler (97.5%, 96.6%; for the k-NN and RBFN classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced topological texture features can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.

  8. Parametric classification of handvein patterns based on texture features

    Science.gov (United States)

    Al Mahafzah, Harbi; Imran, Mohammad; Supreetha Gowda H., D.

    2018-04-01

    In this paper, we have developed a biometric recognition system adopting the hand-based modality of the hand vein, which has a unique pattern for each individual and is impossible to counterfeit or fabricate as it is an internal feature. We opted for feature extraction algorithms such as LBP (a visual descriptor), LPQ (a blur-insensitive texture operator), and Log-Gabor (a texture descriptor). We chose well-known classifiers, namely KNN and SVM, for classification. We have experimented and tabulated the single-algorithm recognition rates for the hand vein under different distance measures and kernel options. Feature-level fusion was carried out, which increased the performance level.

  9. Texture Feature Extraction and Classification for Iris Diagnosis

    Science.gov (United States)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis, and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical diseases, namely alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.

  10. Woven fabric defects detection based on texture classification algorithm

    International Nuclear Information System (INIS)

    Ben Salem, Y.; Nasri, S.

    2011-01-01

    In this paper we compare two well-known texture classification methods to solve the problem of recognition and classification of defects occurring in textile manufacturing. We compare the local binary patterns method with the co-occurrence matrix. The classifier used is the support vector machine (SVM). The system has been tested using the TILDA database. The results obtained are interesting and show that LBP is a good method for the problems of defect recognition and classification, and it gives a good running time, especially for real-time applications.

  11. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  12. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  13. Texture classification using non-Euclidean Minkowski dilation

    Science.gov (United States)

    Florindo, Joao B.; Bruno, Odemir M.

    2018-03-01

    This study presents a new method to extract meaningful descriptors of gray-scale texture images using Minkowski morphological dilation based on the Lp metric. The proposed approach is motivated by the success previously achieved by Bouligand-Minkowski fractal descriptors in texture classification. In essence, such descriptors are directly derived from the morphological dilation of a three-dimensional representation of the gray-level pixels using the classical Euclidean metric. In this way, we generalize the dilation for different values of p in the Lp metric (the Euclidean metric is the particular case p = 2) and obtain the descriptors from the cumulative distribution of the distance transform computed over the texture image. The proposed method is compared to other state-of-the-art approaches (such as local binary patterns and textons, for example) in the classification of two benchmark data sets (UIUC and Outex). The proposed descriptors outperformed all the other approaches in terms of the rate of images correctly classified. These interesting results suggest the potential of these descriptors for this type of task, with a wide range of possible applications to real-world problems.

  14. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To best address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance, robust to gray-scale variations, rotation changes, and noise, but at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
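
    A rough sketch of the central idea, comparing median-filtered neighbourhood samples against a regional median instead of raw intensities, is given below; it is a simplification, not the full multiscale MRELBP descriptor, and the radius and patch size are assumptions.

```python
# Rough sketch of the median-comparison idea behind MRELBP (simplified).
import numpy as np
from scipy.ndimage import median_filter

def median_lbp(img, radius=2, patch=3):
    med = median_filter(img, size=patch)            # regional medians replace raw intensities
    H, W = img.shape
    c = med[radius:H - radius, radius:W - radius]
    code = np.zeros_like(c, dtype=int)
    offsets = [(-radius, 0), (-radius, radius), (0, radius), (radius, radius),
               (radius, 0), (radius, -radius), (0, -radius), (-radius, -radius)]
    for bit, (dr, dc) in enumerate(offsets):
        n = med[radius + dr:H - radius + dr, radius + dc:W - radius + dc]
        code += (n >= c).astype(int) << bit
    return code

rng = np.random.default_rng(9)
img = rng.random((64, 64))
img[rng.random(img.shape) < 0.05] = 1.0             # salt noise the medians should absorb
hist = np.bincount(median_lbp(img).ravel(), minlength=256)
print(hist[:8])
```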

  15. Cellular automata rule characterization and classification using texture descriptors

    Science.gov (United States)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odemir M.

    2018-05-01

    The spatio-temporal patterns of cellular automata (CA) have attracted the attention of many researchers, since they exhibit emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose a texture image analysis approach to characterize and classify CA rules. The proposed method converts the CA spatio-temporal patterns into a gray-scale image. The gray scale is obtained by creating a binary number based on the 8-connected neighborhood of each dot of the CA spatio-temporal pattern. We demonstrate that this technique enhances the CA rule characterization and allows the use of different texture image analysis algorithms. Thus, various texture descriptors were evaluated in a supervised training approach aiming to characterize the CA's global evolution. Our results show the efficiency of the proposed method for the classification of the elementary CA (ECAs), reaching a maximum accuracy of 99.57% according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88-rule scheme. Moreover, within the image analysis context, we found that the method performs better when the binary states are transformed to gray scale.
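
    The conversion step can be sketched as follows: generate an elementary CA pattern, then encode each cell's 8-connected binary neighbourhood as a byte to obtain the gray-scale image on which texture descriptors would be computed; the rule, size, and random initial row are illustrative choices, not the paper's settings.

```python
# Hedged sketch: elementary CA pattern -> gray-scale image via 8-neighbour byte codes.
import numpy as np

def eca_pattern(rule=110, width=128, steps=128, seed=0):
    rng = np.random.default_rng(seed)
    rows = [rng.integers(0, 2, width)]
    table = [(rule >> i) & 1 for i in range(8)]      # rule lookup table
    for _ in range(steps - 1):
        prev = rows[-1]
        idx = 4 * np.roll(prev, 1) + 2 * prev + np.roll(prev, -1)   # neighbourhood index
        rows.append(np.array([table[i] for i in idx]))
    return np.array(rows)

def to_grayscale(binary):
    """Byte code of the 8-connected neighbourhood of every interior cell."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    H, W = binary.shape
    g = np.zeros((H - 2, W - 2), dtype=int)
    for bit, (dr, dc) in enumerate(offsets):
        g += binary[1 + dr:H - 1 + dr, 1 + dc:W - 1 + dc] << bit
    return g

gray = to_grayscale(eca_pattern())
print(gray.shape, gray.min(), gray.max())
```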

  16. Application of texture analysis method for mammogram density classification

    Science.gov (United States)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
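
    The selection-and-classification stage (ANOVA feature selection followed by an ANN or SVM) might be sketched as below with scikit-learn; the data are synthetic stand-ins, not the MIAS mammograms or the paper's actual texture feature set.

```python
# Sketch of the ANOVA selection + classifier comparison stage only (toy data).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
X = rng.normal(size=(200, 60))                     # 60 texture features per image
y = rng.integers(0, 3, 200)                        # fatty / glandular / dense
X[y == 2, :5] += 1.5                               # a few informative features

for name, clf in [("ANN", MLPClassifier(max_iter=1000, random_state=0)),
                  ("SVM", SVC())]:
    pipe = make_pipeline(SelectKBest(f_classif, k=10), clf)   # ANOVA keeps the top 10 features
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```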

  17. Deep neural networks for texture classification-A theoretical analysis.

    Science.gov (United States)

    Basu, Saikat; Mukhopadhyay, Supratik; Karki, Manohar; DiBiano, Robert; Ganguly, Sangram; Nemani, Ramakrishna; Gayaka, Shreekant

    2018-01-01

    We investigate the use of Deep Neural Networks for the classification of image datasets where texture features are important for generating class-conditional discriminative representations. To this end, we first derive the size of the feature space for some standard textural features extracted from the input dataset and then use the theory of Vapnik-Chervonenkis dimension to show that hand-crafted feature extraction creates low-dimensional representations which help in reducing the overall excess error rate. As a corollary to this analysis, we derive for the first time upper bounds on the VC dimension of Convolutional Neural Network as well as Dropout and Dropconnect networks and the relation between excess error rate of Dropout and Dropconnect networks. The concept of intrinsic dimension is used to validate the intuition that texture-based datasets are inherently higher dimensional as compared to handwritten digits or other object recognition datasets and hence more difficult to be shattered by neural networks. We then derive the mean distance from the centroid to the nearest and farthest sampling points in an n-dimensional manifold and show that the Relative Contrast of the sample data vanishes as dimensionality of the underlying vector space tends to infinity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Classification of brain signals associated with imagination of hand grasping, opening and reaching by means of wavelet-based common spatial pattern and mutual information.

    Science.gov (United States)

    Amanpour, Behzad; Erfanian, Abbas

    2013-01-01

    An important issue in designing a practical brain-computer interface (BCI) is the selection of the mental tasks to be imagined. Different types of mental tasks have been used in BCIs, including left, right, foot, and tongue motor imageries. However, these mental tasks differ from the actions to be controlled by the BCI. It is desirable to select a mental task that is consistent with the desired action to be performed by the BCI. In this paper, we investigated detecting the imagination of hand grasping, hand opening, and hand reaching in one hand using electroencephalographic (EEG) signals. The results show that the ERD/ERS patterns associated with the imagination of hand grasping, opening, and reaching are different. For feature extraction and classification of the brain signals associated with these mental tasks, a method based on wavelet packets, regularized common spatial patterns (CSP), and mutual information is proposed. The results of an offline analysis on five subjects show that the two-class mental tasks can be classified with an average accuracy of 77.6% using the proposed method. In addition, we examine the proposed method on datasets IVa from BCI Competition III and IIa from BCI Competition IV.
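
    A compact sketch of a plain (non-regularized) common spatial pattern computation for two-class trials is shown below; the paper additionally uses wavelet packets, regularization, and mutual-information feature selection, which are omitted, and the synthetic EEG-like data are assumptions.

```python
# Compact sketch of non-regularized CSP for two-class trials (toy EEG-like data).
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])   # extreme eigenvalues
    return vecs[:, picks].T

rng = np.random.default_rng(11)
A = rng.normal(size=(30, 8, 256)); A[:, 0] *= 3           # class A: channel 0 strong
B = rng.normal(size=(30, 8, 256)); B[:, 1] *= 3           # class B: channel 1 strong
W = csp_filters(A, B)
# log-variance of the spatially filtered trials as classification features
feat = lambda trial: np.log(np.var(W @ trial, axis=1))
print(feat(A[0]).round(2), feat(B[0]).round(2))
```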

  19. A CNN Based Approach for Garments Texture Design Classification

    Directory of Open Access Journals (Sweden)

    S.M. Sofiqul Islam

    2017-05-01

    Identifying garment texture designs automatically for recommending fashion trends is important nowadays because of the rapid growth of online shopping. By learning the properties of images efficiently, a machine can achieve better classification accuracy. Several hand-engineered feature coding schemes exist for identifying garment design classes. Recently, Deep Convolutional Neural Networks (CNNs) have shown better performance for different object recognition tasks. A deep CNN uses multiple levels of representation and abstraction that help a machine to understand the types of data more accurately. In this paper, a CNN model for identifying garment design classes is proposed. Experimental results on two different datasets show better results than two existing well-known CNN models (AlexNet and VGGNet) and some state-of-the-art hand-engineered feature extraction methods.

  20. EFFECTIVE MULTI-RESOLUTION TRANSFORM IDENTIFICATION FOR CHARACTERIZATION AND CLASSIFICATION OF TEXTURE GROUPS

    Directory of Open Access Journals (Sweden)

    S. Arivazhagan

    2011-11-01

    Texture classification is important in computer image analysis for the characterization or classification of images based on local spatial variations of intensity or color. Texture can be defined as consisting of mutually related elements. This paper proposes an experimental approach for identifying the most suitable multi-resolution transform for the characterization and classification of different texture groups, based on statistical and co-occurrence features derived from multi-resolution transformed sub-bands. The statistical and co-occurrence feature sets are extracted for various multi-resolution transforms such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Double Density Wavelet Transform (DDWT), and Dual Tree Complex Wavelet Transform (DTCWT), and then the transform that maximizes the texture classification performance for the particular texture group is identified.

  1. Image segmentation and particles classification using texture analysis method

    Directory of Open Access Journals (Sweden)

    Mayar Aly Atteya

    Introduction: The ingredients of oily fish include a large amount of polyunsaturated fatty acids, which are important elements in various human metabolic processes and have also been used to prevent diseases. However, in an attempt to reduce cost, recent developments are starting to replace fish oil ingredients with products of microalgae, which also produce polyunsaturated fatty acids. To do so, it is important to closely monitor morphological changes in algae cells and monitor their age in order to achieve the best results. This paper describes an advanced vision-based system to automatically detect, classify, and track organic cells using a recently developed SOPAT system (Smart On-line Particle Analysis Technology), a photo-optical image acquisition device combined with innovative image analysis software. Methods: The proposed method includes image de-noising, binarization, and enhancement, as well as object recognition, localization, and classification based on the analysis of particle size and texture. Results: The method allowed the cell size of each particle to be computed separately. By computing an area histogram for the input images (1 h, 18 h, and 42 h), the variation could be observed, showing a clear increase in cell size. Conclusion: The proposed method allows algae particles to be correctly identified with accuracies up to 99% and classified correctly with accuracies up to 100%.

  2. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In the classification of high resolution data, combining texture features with spectral bands can effectively improve classification accuracy. However, the window size, which is difficult to choose, is regarded as an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images with fixed window sizes from 3×3 to 15×15 and the comparison of all posterior probability values for every pixel; the class with the largest posterior probability is then assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves classification accuracy compared with methods based on fixed window size textural classification.
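
    A toy sketch of the fusion rule summarized above, under the assumption that per-scale posterior probability maps (e.g., from maximum likelihood classification with 3×3 ... 15×15 texture windows) are already available: for each pixel, the class whose posterior is largest across all window sizes wins. The array shapes and data below are synthetic placeholders.

```python
# Toy fusion of multiscale posterior probabilities: pick, per pixel, the label
# from the window size that produced the single largest posterior.
import numpy as np

def fuse_multiscale_posteriors(posteriors):
    """posteriors: (n_scales, n_pixels, n_classes) posterior probabilities."""
    best_scale = posteriors.max(axis=-1).argmax(axis=0)   # scale with the largest posterior, per pixel
    labels_per_scale = posteriors.argmax(axis=-1)         # winning class at each scale
    return labels_per_scale[best_scale, np.arange(posteriors.shape[1])]

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(5), size=(7, 100))              # 7 window sizes, 100 pixels, 5 classes
print(fuse_multiscale_posteriors(p)[:10])                 # fused label for the first 10 pixels
```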

  3. Shape and Texture Based Classification of Fish Species

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Ólafsdóttir, Hildur; Ersbøll, Bjarne Kjær

    2009-01-01

    In this paper we conduct a case study of fish species classification based on shape and texture. We consider three fish species: cod, haddock, and whiting. We derive shape and texture features from an appearance model of a set of training data. The fish in the training images were manually outlined..., and a few features including the eye and backbone contour were also annotated. From these annotations an optimal MDL curve correspondence and a subsequent image registration were derived. We have analyzed a series of shape, texture, and combined shape and texture modes of variation for their ability...

  4. Ethnicity prediction and classification from iris texture patterns: A survey on recent advances

    CSIR Research Space (South Africa)

    Mabuza-Hocquet, Gugulethu

    2017-03-01

    Full Text Available The prediction and classification of ethnicity based on iris texture patterns using image processing, artificial intelligence and computer vision techniques is still a recent topic in iris biometrics. While the large body of knowledge and research...

  5. Hardwood species classification with DWT based hybrid texture ...

    Indian Academy of Sciences (India)

    to decompose the image up to 7 levels using Daubechies (db3) wavelet as decom ... binary pattern (DWTFOSLBPu2) texture features at the 4th level of image decomposi ...... In addition, inclusion of further levels of image decomposition gives rise to ..... Texture analysis of SAR sea ice imagery using gray level co-occurrence.

  6. Soil texture classification algorithm using RGB characteristics of soil images

    Science.gov (United States)

    Soil texture has an important influence on agriculture, affecting crop selection, movement of nutrients and water, soil electrical conductivity, and crop growth. Soil texture has traditionally been determined in the laboratory using pipette and hydrometer methods that require a considerable amount o...

  7. Wavelet-based prediction of oil prices

    International Nuclear Information System (INIS)

    Yousefi, Shahriar; Weinreich, Ilona; Reinarz, Dominik

    2005-01-01

    This paper illustrates an application of wavelets as a possible vehicle for investigating the issue of market efficiency in futures markets for oil. The paper provides a short introduction to the wavelets and a few interesting wavelet-based contributions in economics and finance are briefly reviewed. A wavelet-based prediction procedure is introduced and market data on crude oil is used to provide forecasts over different forecasting horizons. The results are compared with data from futures markets for oil and the relative performance of this procedure is used to investigate whether futures markets are efficiently priced

  8. Hydrologic-Process-Based Soil Texture Classifications for Improved Visualization of Landscape Function

    Science.gov (United States)

    Groenendyk, Derek G.; Ferré, Ty P.A.; Thorp, Kelly R.; Rice, Amy K.

    2015-01-01

    Soils lie at the interface between the atmosphere and the subsurface and are a key component that control ecosystem services, food production, and many other processes at the Earth’s surface. There is a long-established convention for identifying and mapping soils by texture. These readily available, georeferenced soil maps and databases are used widely in environmental sciences. Here, we show that these traditional soil classifications can be inappropriate, contributing to bias and uncertainty in applications from slope stability to water resource management. We suggest a new approach to soil classification, with a detailed example from the science of hydrology. Hydrologic simulations based on common meteorological conditions were performed using HYDRUS-1D, spanning textures identified by the United States Department of Agriculture soil texture triangle. We consider these common conditions to be: drainage from saturation, infiltration onto a drained soil, and combined infiltration and drainage events. Using a k-means clustering algorithm, we created soil classifications based on the modeled hydrologic responses of these soils. The hydrologic-process-based classifications were compared to those based on soil texture and a single hydraulic property, Ks. Differences in classifications based on hydrologic response versus soil texture demonstrate that traditional soil texture classification is a poor predictor of hydrologic response. We then developed a QGIS plugin to construct soil maps combining a classification with georeferenced soil data from the Natural Resource Conservation Service. The spatial patterns of hydrologic response were more immediately informative, much simpler, and less ambiguous, for use in applications ranging from trafficability to irrigation management to flood control. The ease with which hydrologic-process-based classifications can be made, along with the improved quantitative predictions of soil responses and visualization of landscape
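
    As a rough illustration of the clustering step described above, the snippet below groups soils by a few summary metrics of their simulated hydrologic response using k-means. The response features and numbers are synthetic stand-ins, not HYDRUS-1D output, and the number of clusters is an illustrative choice.

```python
# Minimal sketch: cluster soils by simulated hydrologic-response metrics with k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical per-soil response features, e.g. cumulative drainage, infiltration
# depth, and time to steady state (columns are illustrative, not from the study).
responses = rng.random((12, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)
print(kmeans.labels_)   # a hydrologic-process-based class label for each soil
```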

  9. Construction of a class of Daubechies type wavelet bases

    International Nuclear Information System (INIS)

    Li Dengfeng; Wu Guochang

    2009-01-01

    Extensive work has been done in the theory and the construction of compactly supported orthonormal wavelet bases of L2(R). Some of the most distinguished work was done by Daubechies, who constructed a whole family of such wavelet bases. In this paper, we construct a class of orthonormal wavelet bases by using the principle of Daubechies, and investigate the length of support and the regularity of these wavelet bases.

  10. SAR Image Classification Based on Its Texture Features

    Institute of Scientific and Technical Information of China (English)

    LI Pingxiang; FANG Shenghui

    2003-01-01

    SAR images not only have all-day, all-weather acquisition characteristics, but also provide object information that is different from that of visible and infrared sensors. However, SAR images have some drawbacks, such as more speckle and fewer bands. The authors conducted texture statistics analysis experiments on SAR image features in order to improve the accuracy of SAR image interpretation. It is found that texture analysis is an effective method for improving the accuracy of SAR image interpretation.

  11. New Statistics for Texture Classification Based on Gabor Filters

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2007-09-01

    Full Text Available The paper introduces a new method for evaluating the efficiency of texture segmentation. One of the well-known texture segmentation methods is based on Gabor filters because of their orientation and spatial frequency characteristics. Several statistics are used to extract more information from the results obtained by Gabor filtering. The large number of input parameters produces a wide set of results that need to be evaluated. The evaluation method is based on assessing the intersection of the Gaussian curves of the normal distributions and provides a new point of view for segmentation method selection.
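
    For context, a minimal sketch of the kind of Gabor-filter texture statistics mentioned above: filter a patch at a few frequencies and orientations and keep the mean and standard deviation of each response magnitude. It uses scikit-image's gabor(); all parameter values are illustrative, not those of the paper.

```python
# Minimal sketch: Gabor-filter response statistics as texture features.
import numpy as np
from skimage.filters import gabor

def gabor_statistics(img, frequencies=(0.1, 0.2, 0.3),
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    feats = []
    for f in frequencies:
        for theta in thetas:
            real, imag = gabor(img, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)          # magnitude of the complex response
            feats.extend([magnitude.mean(), magnitude.std()])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(gabor_statistics(rng.random((64, 64))).shape)   # 3 x 4 x 2 = 24 features
```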

  12. Texture-based classification of different gastric tumors at contrast-enhanced CT

    Energy Technology Data Exchange (ETDEWEB)

    Ba-Ssalamah, Ahmed, E-mail: ahmed.ba-ssalamah@meduniwien.ac.at [Department of Radiology, Medical University of Vienna (Austria); Muin, Dina; Schernthaner, Ruediger; Kulinna-Cosentini, Christiana; Bastati, Nina [Department of Radiology, Medical University of Vienna (Austria); Stift, Judith [Department of Pathology, Medical University of Vienna (Austria); Gore, Richard [Department of Radiology, University of Chicago Pritzker School of Medicine, Chicago, IL (United States); Mayerhoefer, Marius E. [Department of Radiology, Medical University of Vienna (Austria)

    2013-10-01

    Purpose: To determine the feasibility of texture analysis for the classification of gastric adenocarcinoma, lymphoma, and gastrointestinal stromal tumors on contrast-enhanced hydrodynamic-MDCT images. Materials and methods: The arterial phase scans of 47 patients with adenocarcinoma (AC) and a histologic tumor grade of [AC-G1, n = 4, G1, n = 4; AC-G2, n = 7; AC-G3, n = 16]; GIST, n = 15; and lymphoma, n = 5, and the venous phase scans of 48 patients with AC-G1, n = 3; AC-G2, n = 6; AC-G3, n = 14; GIST, n = 17; lymphoma, n = 8, were retrospectively reviewed. Based on regions of interest, texture analysis was performed, and features derived from the gray-level histogram, run-length and co-occurrence matrix, absolute gradient, autoregressive model, and wavelet transform were calculated. Fisher coefficients, probability of classification error, average correlation coefficients, and mutual information coefficients were used to create combinations of texture features that were optimized for tumor differentiation. Linear discriminant analysis in combination with a k-nearest neighbor classifier was used for tumor classification. Results: On arterial-phase scans, texture-based lesion classification was highly successful in differentiating between AC and lymphoma, and GIST and lymphoma, with misclassification rates of 3.1% and 0%, respectively. On venous-phase scans, texture-based classification was slightly less successful for AC vs. lymphoma (9.7% misclassification) and GIST vs. lymphoma (8% misclassification), but enabled the differentiation between AC and GIST (10% misclassification), and between the different grades of AC (4.4% misclassification). No texture feature combination was able to adequately distinguish between all three tumor types. Conclusion: Classification of different gastric tumors based on textural information may aid radiologists in establishing the correct diagnosis, at least in cases where the differential diagnosis can be narrowed down to two

  13. Texture-based classification of different gastric tumors at contrast-enhanced CT

    International Nuclear Information System (INIS)

    Ba-Ssalamah, Ahmed; Muin, Dina; Schernthaner, Ruediger; Kulinna-Cosentini, Christiana; Bastati, Nina; Stift, Judith; Gore, Richard; Mayerhoefer, Marius E.

    2013-01-01

    Purpose: To determine the feasibility of texture analysis for the classification of gastric adenocarcinoma, lymphoma, and gastrointestinal stromal tumors on contrast-enhanced hydrodynamic-MDCT images. Materials and methods: The arterial phase scans of 47 patients with adenocarcinoma (AC) and a histologic tumor grade of [AC-G1, n = 4, G1, n = 4; AC-G2, n = 7; AC-G3, n = 16]; GIST, n = 15; and lymphoma, n = 5, and the venous phase scans of 48 patients with AC-G1, n = 3; AC-G2, n = 6; AC-G3, n = 14; GIST, n = 17; lymphoma, n = 8, were retrospectively reviewed. Based on regions of interest, texture analysis was performed, and features derived from the gray-level histogram, run-length and co-occurrence matrix, absolute gradient, autoregressive model, and wavelet transform were calculated. Fisher coefficients, probability of classification error, average correlation coefficients, and mutual information coefficients were used to create combinations of texture features that were optimized for tumor differentiation. Linear discriminant analysis in combination with a k-nearest neighbor classifier was used for tumor classification. Results: On arterial-phase scans, texture-based lesion classification was highly successful in differentiating between AC and lymphoma, and GIST and lymphoma, with misclassification rates of 3.1% and 0%, respectively. On venous-phase scans, texture-based classification was slightly less successful for AC vs. lymphoma (9.7% misclassification) and GIST vs. lymphoma (8% misclassification), but enabled the differentiation between AC and GIST (10% misclassification), and between the different grades of AC (4.4% misclassification). No texture feature combination was able to adequately distinguish between all three tumor types. Conclusion: Classification of different gastric tumors based on textural information may aid radiologists in establishing the correct diagnosis, at least in cases where the differential diagnosis can be narrowed down to two

  14. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Full Text Available Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide the discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  15. A Spectral-Texture Kernel-Based Classification Method for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-11-01

    Full Text Available Classification of hyperspectral images always suffers from high dimensionality and very limited labeled samples. Recently, spectral-spatial classification has attracted considerable attention and can achieve higher classification accuracy and smoother classification maps. In this paper, a novel spectral-spatial classification method for hyperspectral images using kernel methods is investigated. For a given hyperspectral image, the principal component analysis (PCA) transform is first performed. Then, the first principal component of the input image is segmented into non-overlapping homogeneous regions by using the entropy rate superpixel (ERS) algorithm. Next, the local spectral histogram model is applied to each homogeneous region to obtain the corresponding texture features. Because this step is performed within each homogeneous region, instead of within a fixed-size image window, the obtained local texture features in the image are more accurate, which can effectively benefit the improvement of classification accuracy. In the following step, a contextual spectral-texture kernel is constructed by combining spectral information in the image and the extracted texture information using the linearity property of the kernel methods. Finally, the classification map is achieved by the support vector machines (SVM) classifier using the proposed spectral-texture kernel. Experiments on two benchmark airborne hyperspectral datasets demonstrate that our method can effectively improve classification accuracies, even though only a very limited number of training samples is available. Specifically, our method achieves from 8.26% to 15.1% higher overall accuracy than the traditional SVM classifier. The performance of our method was further compared to several state-of-the-art classification methods of hyperspectral images using objective quantitative measures and a visual qualitative evaluation.
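
    The contextual spectral-texture kernel above exploits the fact that a non-negative weighted sum of valid kernels is itself a valid kernel. A toy sketch of that idea with scikit-learn follows, assuming precomputed spectral and texture feature matrices per sample; the data, gamma values and the weight mu are placeholders, not the paper's settings.

```python
# Toy composite kernel: weighted sum of a spectral RBF kernel and a texture RBF kernel.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 60
X_spec = rng.random((n, 30))            # e.g. spectral values per superpixel (synthetic)
X_tex = rng.random((n, 20))             # e.g. local spectral histogram features (synthetic)
y = rng.integers(0, 2, size=n)

mu = 0.6                                 # balance between the spectral and texture terms
K = mu * rbf_kernel(X_spec, gamma=0.5) + (1 - mu) * rbf_kernel(X_tex, gamma=0.5)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                   # training accuracy on the toy data
```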

  16. Digitisation of films and texture analysis for digital classification of pulmonary opacities

    International Nuclear Information System (INIS)

    Desaga, J.F.; Dengler, J.; Wolf, T.; Engelmann, U.; Scheppelmann, D.; Meinzer, H.P.

    1988-01-01

    The study aimed at evaluating the effect of different methods of digitisation of radiographic films on the digital classification of pulmonary opacities. Test sets from the standard of the International Labour Office (ILO) Classification of Radiographs of Pneumoconiosis were prepared by film digitisation using a scanning microdensitometer or a video digitiser based on a personal computer equipped with a real-time digitiser board and a vidicon or a Charge Coupled Device (CCD) camera. Seven different algorithms were used for texture analysis, resulting in 16 texture parameters for each region. All methods used for texture analysis were independent of the mean grey value level and the size of the image analysed. Classification was performed by discriminant analysis using the classes from the ILO classification. A hit ratio of at least 85% was achieved for digitisation by the scanning microdensitometer or the vidicon, while the corresponding results of the CCD camera were significantly worse. Classification by texture analysis of opacities in chest X-rays of pneumoconiosis digitised by a personal-computer-based video digitiser and a vidicon is of equal quality to digitisation by a scanning microdensitometer. Correct classification of 90% was achieved via the described statistical approach. (orig.) [de

  17. Classification of Textures Using Filter Based Local Feature Extraction

    Directory of Open Access Journals (Sweden)

    Bocekci Veysel Gokhan

    2016-01-01

    Full Text Available In this work, local features are used in the feature extraction process for texture images. The local binary pattern feature extraction method for textures is introduced. Filtering is also used during feature extraction to obtain discriminative features. To show the effectiveness of the algorithm, three different types of noise are added to both training and test images before extraction. Wiener and median filters are used to remove the noise from the images. We evaluate the performance of the method with a Naïve Bayesian classifier. We conduct a comparative analysis on a benchmark dataset with different filters and sizes. Our experiments demonstrate that the feature extraction process combined with filtering gives promising results on noisy images.
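
    A small sketch of the pipeline described above: denoise with a median filter, extract a local binary pattern (LBP) histogram, and classify with naive Bayes. It uses scikit-image and scikit-learn; the two synthetic "texture classes" and all parameter choices are illustrative stand-ins for the benchmark data.

```python
# Minimal sketch: median filtering + uniform LBP histogram + naive Bayes classification.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from skimage.feature import local_binary_pattern
from sklearn.naive_bayes import GaussianNB

def lbp_histogram(img, P=8, R=1):
    """Median-filter an image, then return a normalized uniform-LBP histogram."""
    img_u8 = np.clip(img * 255, 0, 255).astype(np.uint8)
    filtered = median_filter(img_u8, size=3)                       # simple noise removal
    codes = local_binary_pattern(filtered, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
fine = [rng.random((64, 64)) for _ in range(10)]                               # fine-grained texture
smooth = [gaussian_filter(rng.random((64, 64)), sigma=2) for _ in range(10)]   # smoother texture
X = np.array([lbp_histogram(im) for im in fine + smooth])
y = np.repeat([0, 1], 10)
print(GaussianNB().fit(X, y).score(X, y))                          # training accuracy on toy data
```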

  18. Textural pattern classification for oral squamous cell carcinoma.

    Science.gov (United States)

    Rahman, T Y; Mahanta, L B; Chakraborty, C; DAS, A K; Sarma, J D

    2018-01-01

    Despite being among the cancers with the highest worldwide incidence, oral cancer has yet to be widely researched. Studies on computer-aided analysis of pathological slides of oral cancer contribute a great deal to the diagnosis and treatment of the disease. Some research in this direction has been carried out on oral submucous fibrosis. In this work, an approach for analysing abnormality based on textural features present in squamous cell carcinoma histological slides has been considered. Histogram and grey-level co-occurrence matrix approaches are used here for the extraction of textural features from biopsy images with normal and malignant cells. Further, we have used a linear support vector machine classifier for automated diagnosis of oral cancer, which gives 100% accuracy. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  19. Textural kinetics: a novel dynamic contrast-enhanced (DCE)-MRI feature for breast lesion classification.

    Science.gov (United States)

    Agner, Shannon C; Soman, Salil; Libfeld, Edward; McDonald, Margie; Thomas, Kathleen; Englander, Sarah; Rosen, Mark A; Chin, Deanna; Nosher, John; Madabhushi, Anant

    2011-06-01

    Dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) of the breast has emerged as an adjunct imaging tool to conventional X-ray mammography due to its high detection sensitivity. Despite the increasing use of breast DCE-MRI, specificity in distinguishing malignant from benign breast lesions is low, and interobserver variability in lesion classification is high. The novel contribution of this paper is in the definition of a new DCE-MRI descriptor that we call textural kinetics, which attempts to capture spatiotemporal changes in breast lesion texture in order to distinguish malignant from benign lesions. We qualitatively and quantitatively demonstrated on 41 breast DCE-MRI studies that textural kinetic features outperform signal intensity kinetics and lesion morphology features in distinguishing benign from malignant lesions. A probabilistic boosting tree (PBT) classifier in conjunction with textural kinetic descriptors yielded an accuracy of 90%, sensitivity of 95%, specificity of 82%, and an area under the curve (AUC) of 0.92. Graph embedding, used for qualitative visualization of a low-dimensional representation of the data, showed the best separation between benign and malignant lesions when using textural kinetic features. The PBT classifier results and trends were also corroborated via a support vector machine classifier which showed that textural kinetic features outperformed the morphological, static texture, and signal intensity kinetics descriptors. When textural kinetic attributes were combined with morphologic descriptors, the resulting PBT classifier yielded 89% accuracy, 99% sensitivity, 76% specificity, and an AUC of 0.91.

  20. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.

  1. Reliable Fault Classification of Induction Motors Using Texture Feature Extraction and a Multiclass Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Jia Uddin

    2014-01-01

    Full Text Available This paper proposes a method for the reliable fault detection and classification of induction motors using two-dimensional (2D) texture features and a multiclass support vector machine (MCSVM). The proposed model first converts time-domain vibration signals to 2D gray images, resulting in texture patterns (or repetitive patterns), and extracts these texture features by generating the dominant neighborhood structure (DNS) map. The principal component analysis (PCA) is then used for the purpose of dimensionality reduction of the high-dimensional feature vector including the extracted texture features due to the fact that the high-dimensional feature vector can degrade classification performance, and this paper configures an effective feature vector including discriminative fault features for diagnosis. Finally, the proposed approach utilizes the one-against-all (OAA) multiclass support vector machines (MCSVMs) to identify induction motor failures. In this study, the Gaussian radial basis function kernel cooperates with OAA MCSVMs to deal with nonlinear fault features. Experimental results demonstrate that the proposed approach outperforms three state-of-the-art fault diagnosis algorithms in terms of fault classification accuracy, yielding an average classification accuracy of 100% even in noisy environments.
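
    A hedged sketch of the first step described above: folding a 1-D vibration signal into a 2-D grey-level image so that texture descriptors (DNS, co-occurrence features, etc.) can be applied. The image width and the 8-bit scaling are illustrative choices, not necessarily those of the paper.

```python
# Minimal sketch: reshape a 1-D vibration signal into a 2-D grey-level "texture" image.
import numpy as np

def signal_to_gray_image(signal, width=64):
    n = (len(signal) // width) * width
    block = np.asarray(signal[:n], dtype=float).reshape(-1, width)
    # Scale amplitudes to 8-bit grey levels
    block = (block - block.min()) / (np.ptp(block) + 1e-12)
    return (block * 255).astype(np.uint8)

if __name__ == "__main__":
    t = np.linspace(0, 1, 64 * 64)
    vib = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
    img = signal_to_gray_image(vib)
    print(img.shape, img.dtype)          # (64, 64) uint8, ready for texture analysis
```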

  2. Orthonormal Wavelet Bases for Quantum Molecular Dynamics

    International Nuclear Information System (INIS)

    Tymczak, C.; Wang, X.

    1997-01-01

    We report on the use of compactly supported, orthonormal wavelet bases for quantum molecular-dynamics (Car-Parrinello) algorithms. A wavelet selection scheme is developed and tested for prototypical problems, such as the three-dimensional harmonic oscillator, the hydrogen atom, and the local density approximation to atomic and molecular systems. Our method shows systematic convergence with increased grid size, along with improvement on compression rates, thereby yielding an optimal grid for self-consistent electronic structure calculations. copyright 1997 The American Physical Society

  3. Three-dimensional textural features of conventional MRI improve diagnostic classification of childhood brain tumours.

    Science.gov (United States)

    Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N

    2015-09-01

    The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1 - and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performances to those trained with conventional 2D features. For instance, a neural network classifier showed 12% improvement in area under the receiver operator characteristics curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1 - and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.

  4. A Comparative Study of Land Cover Classification by Using Multispectral and Texture Data

    Directory of Open Access Journals (Sweden)

    Salman Qadri

    2016-01-01

    Full Text Available The main objective of this study is to find out the importance of a machine vision approach for the classification of five types of land cover data, such as bare land, desert rangeland, green pasture, fertile cultivated land, and Sutlej river land. A novel spectra-statistical framework is designed to classify the subjective land cover data types accurately. Multispectral data of these land covers were acquired by using a handheld device named multispectral radiometer in the form of five spectral bands (blue, green, red, near infrared, and shortwave infrared), while texture data were acquired with a digital camera by the transformation of acquired images into 229 texture features for each image. The most discriminant 30 features of each image were obtained by integrating three statistical feature selection techniques: Fisher, Probability of Error plus Average Correlation, and Mutual Information (F + PA + MI). Selected texture data clustering was verified by nonlinear discriminant analysis, while the linear discriminant analysis approach was applied for multispectral data. For classification, the texture and multispectral data were fed to an artificial neural network (ANN: n-class). By implementing a cross-validation method (80-20), we obtained an accuracy of 91.332% for texture data and 96.40% for multispectral data, respectively.

  5. Wavelet-Based Signal Processing of Electromagnetic Pulse Generated Waveforms

    National Research Council Canada - National Science Library

    Ardolino, Richard S

    2007-01-01

    This thesis investigated and compared alternative signal processing techniques that used wavelet-based methods instead of traditional frequency domain methods for processing measured electromagnetic pulse (EMP) waveforms...

  6. An Active Patch Model for Real World Texture and Appearance Classification.

    Science.gov (United States)

    Mao, Junhua; Zhu, Jun; Yuille, Alan L

    2014-09-06

    This paper addresses the task of natural texture and appearance classification. Our goal is to develop a simple and intuitive method that performs at state of the art on datasets ranging from homogeneous texture (e.g., material texture), to less homogeneous texture (e.g., the fur of animals), and to inhomogeneous texture (the appearance patterns of vehicles). Our method uses a bag-of-words model where the features are based on a dictionary of active patches. Active patches are raw intensity patches which can undergo spatial transformations (e.g., rotation and scaling) and adjust themselves to best match the image regions. The dictionary of active patches is required to be compact and representative, in the sense that we can use it to approximately reconstruct the images that we want to classify. We propose a probabilistic model to quantify the quality of image reconstruction and design a greedy learning algorithm to obtain the dictionary. We classify images using the occurrence frequency of the active patches. Feature extraction is fast (about 100 ms per image) using the GPU. The experimental results show that our method improves the state of the art on a challenging material texture benchmark dataset (KTH-TIPS2). To test our method on less homogeneous or inhomogeneous images, we construct two new datasets consisting of appearance image patches of animals and vehicles cropped from the PASCAL VOC dataset. Our method outperforms competing methods on these datasets.

  7. Classification of Weed Species Using Artificial Neural Networks Based on Color Leaf Texture Feature

    Science.gov (United States)

    Li, Zhichen; An, Qiu; Ji, Changying

    The potential impact of herbicide utilization compels people to use new methods of weed control. Selective herbicide application is an optimal method to reduce herbicide usage while maintaining weed control. The key to selective herbicide application is how to discriminate weeds exactly. The HSI color co-occurrence method (CCM) texture analysis technique was used to extract four texture parameters: angular second moment (ASM), entropy (E), inertia quadrature (IQ), and inverse difference moment or local homogeneity (IDM). The weed species selected for study were Arthraxon hispidus, Digitaria sanguinalis, Petunia, Cyperus, Alternanthera philoxeroides and Corchoropsis psilocarpa. The neuroshell2 software was used for designing the structure of the neural network and for training and testing the data. It was found that the 8-40-1 artificial neural network provided the best classification performance and was capable of classification accuracies of 78%.
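
    The four co-occurrence texture measures named above (ASM, entropy, inertia, and inverse difference moment) can all be computed from a normalized grey-level co-occurrence matrix. A minimal sketch with scikit-image follows; the offset, angle and grey-level quantization are illustrative, and the paper works on colour-channel co-occurrence matrices rather than a single grey image.

```python
# Minimal sketch: ASM, entropy, inertia and IDM from a normalized co-occurrence matrix.
import numpy as np
from skimage.feature import graycomatrix

def ccm_features(img_u8, levels=32):
    q = (img_u8 // (256 // levels)).astype(np.uint8)            # quantize grey levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    p = glcm[:, :, 0, 0]
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                                          # angular second moment
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))               # entropy
    inertia = np.sum(p * (i - j) ** 2)                            # inertia / contrast
    idm = np.sum(p / (1.0 + (i - j) ** 2))                        # inverse difference moment
    return asm, entropy, inertia, idm

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(ccm_features((rng.random((64, 64)) * 255).astype(np.uint8)))
```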

  8. Texture-based classification for characterizing regions on remote sensing images

    Science.gov (United States)

    Borne, Frédéric; Viennois, Gaëlle

    2017-07-01

    Remote sensing classification methods mostly use only the physical properties of pixels or complex texture indexes but do not lead to recommendation for practical applications. Our objective was to design a texture-based method, called the Paysages A PRIori method (PAPRI), which works both at pixel and neighborhood level and which can handle different spatial scales of analysis. The aim was to stay close to the logic of a human expert and to deal with co-occurrences in a more efficient way than other methods. The PAPRI method is pixelwise and based on a comparison of statistical and spatial reference properties provided by the expert with local properties computed in varying size windows centered on the pixel. A specific distance is computed for different windows around the pixel and a local minimum leads to choosing the class in which the pixel is to be placed. The PAPRI method brings a significant improvement in classification quality for different kinds of images, including aerial, lidar, high-resolution satellite images as well as texture images from the Brodatz and Vistex databases. This work shows the importance of texture analysis in understanding remote sensing images and for future developments.

  9. Analysis of SURRGO Data and Obtaining Soil Texture Classifications for Simulating Hydrologic Processes

    Science.gov (United States)

    2016-07-01

    the general texture classifications. 2. Another source for soil information, such as the Food and Agriculture Organization of the United Nations (FAO...is to use another soils dataset that contains soil properties for the areas of interest, such as the Digital Soil Map of the World provided by the...www.nrcs.usda.gov/wps/portal/nrcs/main/soils/survey/

  10. Analysis of the effect of spatial resolution on texture features in the classification of breast masses in mammograms

    International Nuclear Information System (INIS)

    Rangayyan, R.M.; Nguyen, T.M.; Ayres, F.J.; Nandi, A.K.

    2007-01-01

    The present study investigates the effect of spatial resolution on co-occurrence matrix-based texture features in discriminating breast lesions as benign masses or malignant tumors. The highest classification result, in terms of the area under the receiver operating characteristics (ROC) curve, of Az = 0.74, was obtained at the spatial resolution of 800 μm using all 14 of Haralick's texture features computed using the margins, or ribbons, of the breast masses as seen on mammograms. Furthermore, our study indicates that texture features computed using the ribbons resulted in higher classification accuracy than the same texture features computed using the corresponding regions of interest within the mass boundaries drawn by an expert radiologist. Classification experiments using each single texture feature showed that the texture feature F8, sum entropy, gives consistently high classification results with an average Az of 0.64 across all levels of resolution. At certain levels of resolution, the texture features F5, F9, and F11 individually gave the highest classification result with Az = 0.70. (orig.)

  11. Automatic detection and classification of breast tumors in ultrasonic images using texture and morphological features.

    Science.gov (United States)

    Su, Yanni; Wang, Yuanyuan; Jiao, Jing; Guo, Yi

    2011-01-01

    Due to the severe presence of speckle noise, poor image contrast and irregular lesion shape, it is challenging to build a fully automatic detection and classification system for breast ultrasonic images. In this paper, a novel and effective computer-aided method including generation of a region of interest (ROI), segmentation and classification of breast tumors is proposed without any manual intervention. By incorporating local features of texture and position, a ROI is first detected using a self-organizing map neural network. Then a modified Normalized Cut approach considering the weighted neighborhood gray values is proposed to partition the ROI into clusters and get the initial boundary. In addition, a regional-fitting active contour model is used to adjust the few inaccurate initial boundaries for the final segmentation. Finally, three texture and five morphologic features are extracted from each breast tumor; a highly efficient Affinity Propagation clustering is then used to perform the benign/malignant classification for an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant) with its performance compared to traditional methods such as level set segmentation, artificial neural network classifiers, and so forth. Experimental results show that the proposed system, which needs no training procedure or manual interference, performs best in the detection and classification of ultrasonic breast tumors, while having the lowest computational complexity.

  12. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    Directory of Open Access Journals (Sweden)

    Jin Xie

    2012-06-01

    Full Text Available Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system is developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton learning based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as the skin texture feature for classification. Experiments on personal identification and gender classification are performed by using the established HBST database. The results show that HBST can be used to assist human identification and gender classification.
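
    A rough sketch of the texton-learning idea described above: learn a small dictionary over patch vectors with a sparse coder, then describe an image by a histogram of which atoms its patches are assigned to. scikit-learn's dictionary learning stands in here for the paper's l1-minimization over filter-bank responses; the patch size, dictionary size and sparsity settings are illustrative assumptions.

```python
# Rough sketch: sparse dictionary (texton) learning and a coefficient-histogram descriptor.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
train_img = rng.random((64, 64))                               # stand-in for a training image
patches = extract_patches_2d(train_img, (5, 5), max_patches=500, random_state=0)
X = patches.reshape(len(patches), -1)

dico = MiniBatchDictionaryLearning(n_components=16, transform_algorithm="lasso_lars",
                                   transform_alpha=0.1, random_state=0).fit(X)

def texton_histogram(img):
    p = extract_patches_2d(img, (5, 5), max_patches=300, random_state=1).reshape(-1, 25)
    codes = dico.transform(p)                                  # sparse coefficients per patch
    labels = np.abs(codes).argmax(axis=1)                      # dominant atom for each patch
    hist, _ = np.histogram(labels, bins=16, range=(0, 16), density=True)
    return hist

print(texton_histogram(rng.random((64, 64))).shape)            # (16,) texture descriptor
```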

  13. Statistical analysis of textural features for improved classification of oral histopathological images.

    Science.gov (United States)

    Muthu Rama Krishnan, M; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K

    2012-04-01

    The objective of this paper is to provide an improved technique which can assist oncopathologists in the correct screening of oral precancerous conditions, especially oral submucous fibrosis (OSF), with significant accuracy on the basis of collagen fibres in the sub-epithelial connective tissue. The proposed scheme is composed of collagen fibre segmentation, textural feature extraction and selection, screening performance enhancement under Gaussian transformation, and finally classification. In this study, collagen fibres are segmented on the R, G, B color channels using a back-propagation neural network from 60 normal and 59 OSF histological images, followed by histogram specification for reducing the stain intensity variation. Henceforth, textural features of the collagen area are extracted using fractal approaches, viz., differential box counting and Brownian motion curve. Feature selection is done using the Kullback-Leibler (KL) divergence criterion, and the screening performance is evaluated based on various statistical tests to confirm Gaussian nature. Here, the screening performance is enhanced under Gaussian transformation of the non-Gaussian features using a hybrid distribution. Moreover, the routine screening is designed based on two statistical classifiers, viz., Bayesian classification and support vector machines (SVM), to classify normal and OSF. It is observed that SVM with a linear kernel function provides better classification accuracy (91.64%) as compared to the Bayesian classifier. The addition of fractal features of collagen under Gaussian transformation improves the Bayesian classifier's performance from 80.69% to 90.75%. Results are studied and discussed here.
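
    Of the fractal descriptors mentioned above, box counting is the simplest to illustrate. The sketch below estimates a box-counting dimension of a binary mask (a stand-in for segmented collagen fibres); the paper's differential box counting for grey-level images is more involved, so treat this as a schematic only, with illustrative grid sizes and synthetic data.

```python
# Simplified sketch: box-counting fractal dimension of a binary texture mask.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Count boxes of side s that contain at least one foreground pixel
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    # Slope of log(count) vs log(1/box size) approximates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = rng.random((128, 128)) > 0.7          # stand-in for a segmented fibre mask
    print(round(box_counting_dimension(mask), 3))
```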

  14. Breast tissue classification in digital breast tomosynthesis images using texture features: a feasibility study

    Science.gov (United States)

    Kontos, Despina; Berger, Rachelle; Bakic, Predrag R.; Maidment, Andrew D. A.

    2009-02-01

    Mammographic breast density is a known breast cancer risk factor. Studies have shown the potential to automate breast density estimation by using computerized texture-based segmentation of the dense tissue in mammograms. Digital breast tomosynthesis (DBT) is a tomographic x-ray breast imaging modality that could allow volumetric breast density estimation. We evaluated the feasibility of distinguishing between dense and fatty breast regions in DBT using computer-extracted texture features. Our long-term hypothesis is that DBT texture analysis can be used to develop 3D dense tissue segmentation algorithms for estimating volumetric breast density. DBT images from 40 women were analyzed. The dense tissue area was delineated within each central source projection (CSP) image using a thresholding technique (Cumulus, Univ. Toronto). Two (2.5 cm)² ROIs were manually selected: one within the dense tissue region and another within the fatty region. Corresponding (2.5 cm)³ ROIs were placed within the reconstructed DBT images. Texture features, previously used for mammographic dense tissue segmentation, were computed. Receiver operating characteristic (ROC) curve analysis was performed to evaluate feature classification performance. Different texture features appeared to perform best in the 3D reconstructed DBT compared to the 2D CSP images. Fractal dimension was superior in DBT (AUC=0.90), while contrast was best in CSP images (AUC=0.92). We attribute these differences to the effects of tissue superimposition in CSP and the volumetric visualization of the breast tissue in DBT. Our results suggest that novel approaches, different than those conventionally used in projection mammography, need to be investigated in order to develop DBT dense tissue segmentation algorithms for estimating volumetric breast density.

  15. Using geometrical, textural, and contextual information of land parcels for classification of detailed urban land use

    Science.gov (United States)

    Wu, S.-S.; Qiu, X.; Usery, E.L.; Wang, L.

    2009-01-01

    Detailed urban land use data are important to government officials, researchers, and businesspeople for a variety of purposes. This article presents an approach to classifying detailed urban land use based on geometrical, textural, and contextual information of land parcels. An area of 6 by 14 km in Austin, Texas, with land parcel boundaries delineated by the Travis Central Appraisal District of Travis County, Texas, is tested for the approach. We derive fifty parcel attributes from relevant geographic information system (GIS) and remote sensing data and use them to discriminate among nine urban land uses: single family, multifamily, commercial, office, industrial, civic, open space, transportation, and undeveloped. Half of the 33,025 parcels in the study area are used as training data for land use classification and the other half are used as testing data for accuracy assessment. The best result with a decision tree classification algorithm has an overall accuracy of 96 percent and a kappa coefficient of 0.78, and two naive, baseline models based on the majority rule and the spatial autocorrelation rule have overall accuracy of 89 percent and 79 percent, respectively. The algorithm is relatively good at classifying single-family, multifamily, commercial, open space, and undeveloped land uses and relatively poor at classifying office, industrial, civic, and transportation land uses. The most important attributes for land use classification are the geometrical attributes, particularly those related to building areas. Next are the contextual attributes, particularly those relevant to the spatial relationship between buildings, then the textural attributes, particularly the semivariance texture statistic from 0.61-m resolution images.

  16. Wavelet-based characterization of gait signal for neurological abnormalities.

    Science.gov (United States)

    Baratin, E; Sugavaneswaran, L; Umapathy, K; Ioana, C; Krishnan, S

    2015-02-01

    Studies conducted by the World Health Organization (WHO) indicate that over one billion people suffer from neurological disorders worldwide, and lack of efficient diagnosis procedures affects their therapeutic interventions. Characterizing certain pathologies of motor control for facilitating their diagnosis can be useful in quantitatively monitoring disease progression and efficient treatment planning. As a suitable directive, we introduce a wavelet-based scheme for effective characterization of gait associated with certain neurological disorders. In addition, since the data were recorded from a dynamic process, this work also investigates the need for gait signal re-sampling prior to identification of signal markers in the presence of pathologies. To benefit automated discrimination of gait data, certain characteristic features are extracted from the wavelet-transformed signals. The performance of the proposed approach was evaluated using a database consisting of 15 Parkinson's disease (PD), 20 Huntington's disease (HD), 13 Amyotrophic lateral sclerosis (ALS) and 16 healthy control subjects, and an average classification accuracy of 85% is achieved using an unbiased cross-validation strategy. The obtained results demonstrate the potential of the proposed methodology for computer-aided diagnosis and automatic characterization of certain neurological disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
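
    A minimal sketch of wavelet-based feature extraction from a 1-D gait series along the lines described above: decompose the signal with a discrete wavelet transform and keep simple statistics of each coefficient set. The wavelet, decomposition level, statistics and the synthetic stride-interval signal are illustrative assumptions, not the paper's choices.

```python
# Minimal sketch: DWT-based features from a 1-D gait (stride-interval) signal.
import numpy as np
import pywt

def gait_wavelet_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:                                  # approximation + detail coefficients
        feats.extend([np.mean(np.abs(c)), np.std(c)])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    strides = 1.1 + 0.05 * rng.standard_normal(256)   # synthetic stride intervals (seconds)
    print(gait_wavelet_features(strides).shape)       # (level + 1) * 2 = 10 features
```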

  17. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm.

    Science.gov (United States)

    Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza

    2018-06-01

    There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.

  18. Comparison of models of automatic classification of textural patterns of mineral presents in Colombian coals

    International Nuclear Information System (INIS)

    Lopez Carvajal, Jaime; Branch Bedoya, John Willian

    2005-01-01

    The automatic classification of objects is of interest across several problem domains. This paper outlines some results obtained with different classification models used to categorize textural patterns of minerals in real digital images. The data set used was characterized by its small size and the presence of noise. The implemented models were a Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree and a 3-nearest-neighbors classifier. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results, because they could be improved by increasing the amount of data used, while the decision tree (55%) and K-NN (54%) did not seem to be adequate for this problem, because of their sensitivity to noise.
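
    The comparison described above can be reproduced in miniature with scikit-learn: the same small, noisy feature set is scored under cross-validation with a Bayesian classifier, a small neural network, an SVM, a decision tree and a 3-NN classifier. The data and classifier settings below are synthetic placeholders, so the printed numbers say nothing about the study's results.

```python
# Sketch: cross-validated comparison of several classifiers on a small, noisy feature set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((60, 8)) + 0.15 * rng.standard_normal((60, 8))   # small, noisy texture features
y = rng.integers(0, 3, size=60)                                  # three mineral texture classes

models = {
    "Bayesian": GaussianNB(),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0),
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:12s} mean accuracy = {scores.mean():.2f}")
```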

  19. Application of texture analysis method for classification of benign and malignant thyroid nodules in ultrasound images.

    Science.gov (United States)

    Abbasian Ardakani, Ali; Gharbali, Akbar; Mohammadi, Afshin

    2015-01-01

    The aim of this study was to evaluate a computer-aided diagnosis (CAD) system with texture analysis (TA) to improve radiologists' accuracy in identifying thyroid nodules as malignant or benign. A total of 70 cases (26 benign and 44 malignant) were analyzed in this study. We extracted up to 270 statistical texture features as descriptors for each selected region of interest (ROI) under three normalization schemes (default, 3s and 1%-99%). The features were then reduced to the 10 best and most effective features using the lowest probability of classification error and average correlation coefficients (POE+ACC) and the Fisher coefficient (Fisher). These features were analyzed under standard and nonstandard states. For TA of the thyroid nodules, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA) were applied. A First Nearest-Neighbour (1-NN) classifier was applied to the features resulting from PCA and LDA. NDA features were classified by an artificial neural network (A-NN). Receiver operating characteristic (ROC) curve analysis was used to examine the performance of the TA methods. The best results were obtained with 1%-99% normalization, with features extracted by the POE+ACC algorithm and analyzed by NDA, with an area under the ROC curve (Az) of 0.9722, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Our results indicate that TA is a reliable method that can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.

  20. Breast tissue classification in digital tomosynthesis images based on global gradient minimization and texture features

    Science.gov (United States)

    Qin, Xulei; Lu, Guolan; Sechopoulos, Ioannis; Fei, Baowei

    2014-03-01

    Digital breast tomosynthesis (DBT) is a pseudo-three-dimensional x-ray imaging modality proposed to decrease the effect of tissue superposition present in mammography, potentially resulting in an increase in clinical performance for the detection and diagnosis of breast cancer. Tissue classification in DBT images can be useful in risk assessment, computer-aided detection and radiation dosimetry, among other aspects. However, classifying breast tissue in DBT is a challenging problem because DBT images include complicated structures, image noise, and out-of-plane artifacts due to limited angular tomographic sampling. In this project, we propose an automatic method to classify fatty and glandular tissue in DBT images. First, the DBT images are pre-processed to enhance the tissue structures and to decrease image noise and artifacts. Second, a global smooth filter based on L0 gradient minimization is applied to eliminate detailed structures and enhance large-scale ones. Third, the similar structure regions are extracted and labeled by fuzzy C-means (FCM) classification. At the same time, the texture features are also calculated. Finally, each region is classified into different tissue types based on both intensity and texture features. The proposed method is validated using five patient DBT images using manual segmentation as the gold standard. The Dice scores and the confusion matrix are utilized to evaluate the classified results. The evaluation results demonstrated the feasibility of the proposed method for classifying breast glandular and fat tissue on DBT images.

  1. Macroscopic Rock Texture Image Classification Using a Hierarchical Neuro-Fuzzy Class Method

    Directory of Open Access Journals (Sweden)

    Laercio B. Gonçalves

    2010-01-01

    Full Text Available We used a Hierarchical Neuro-Fuzzy Class Method based on binary space partitioning (NFHB-Class Method) for macroscopic rock texture classification. The relevance of this study is in helping geologists in the diagnosis and planning of oil reservoir exploration. The proposed method is capable of generating its own decision structure, with automatic extraction of fuzzy rules. These rules are linguistically interpretable, thus explaining the obtained data structure. The presented image classification for macroscopic rocks is based on texture descriptors, such as the spatial variation coefficient, Hurst coefficient, entropy, and co-occurrence matrix. Four rock classes have been evaluated by the NFHB-Class Method: gneiss (two subclasses), basalt (four subclasses), diabase (five subclasses), and rhyolite (five subclasses). These four rock classes are of great interest in the evaluation of oil boreholes, which is considered a complex task by geologists. We present a computer method to solve this problem. In order to evaluate system performance, we used 50 RGB images for each rock subclass, thus producing a total of 800 images. For all rock classes, the NFHB-Class Method achieved a percentage of correct hits over 73%. The proposed method converged for all tests presented in the case study.

  2. Metrics and textural features of MRI diffusion to improve classification of pediatric posterior fossa tumors.

    Science.gov (United States)

    Rodriguez Gutierrez, D; Awwad, A; Meijer, L; Manita, M; Jaspan, T; Dineen, R A; Grundy, R G; Auer, D P

    2014-05-01

    Qualitative radiologic MR imaging review affords limited differentiation among types of pediatric posterior fossa brain tumors and cannot detect histologic or molecular subtypes, which could help to stratify treatment. This study aimed to improve current posterior fossa discrimination of histologic tumor type by using support vector machine classifiers on quantitative MR imaging features. This retrospective study included preoperative MRI in 40 children with posterior fossa tumors (17 medulloblastomas, 16 pilocytic astrocytomas, and 7 ependymomas). Shape, histogram, and textural features were computed from contrast-enhanced T2WI and T1WI and diffusivity (ADC) maps. Combinations of features were used to train tumor-type-specific classifiers for medulloblastoma, pilocytic astrocytoma, and ependymoma types separately and as a joint posterior fossa classifier. A tumor-subtype classifier was also produced for classic medulloblastoma. The performance of different classifiers was assessed and compared by using randomly selected subsets of training and test data. ADC histogram features (25th and 75th percentiles and skewness) yielded the best classification of tumor type (on average >95.8% of medulloblastomas, >96.9% of pilocytic astrocytomas, and >94.3% of ependymomas by using 8 training samples). The resulting joint posterior fossa classifier correctly assigned >91.4% of the posterior fossa tumors. For subtype classification, 89.4% of classic medulloblastomas were correctly classified on the basis of ADC texture features extracted from the Gray-Level Co-Occurrence Matrix. Support vector machine-based classifiers using ADC histogram features yielded very good discrimination among pediatric posterior fossa tumor types, and ADC textural features show promise for further subtype discrimination. These findings suggest an added diagnostic value of quantitative feature analysis of diffusion MR imaging in pediatric neuro-oncology. © 2014 by American Journal of Neuroradiology.
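    A minimal sketch of the central idea, ADC histogram features (25th/75th percentiles and skewness) classified with a support vector machine, is given below; the ADC maps, tumour masks and labels are hypothetical placeholders and the pipeline is only loosely modelled on the study.

        # Hedged sketch: ADC histogram features fed to an SVM classifier.
        import numpy as np
        from scipy.stats import skew
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        def adc_histogram_features(adc_map, mask):
            voxels = adc_map[mask > 0].ravel()
            return np.array([np.percentile(voxels, 25),
                             np.percentile(voxels, 75),
                             skew(voxels)])

        # Placeholder data: 40 "patients" with random ADC maps and full masks.
        rng = np.random.default_rng(1)
        adc_maps = [rng.normal(1.0, 0.3, size=(32, 32, 16)) for _ in range(40)]
        masks = [np.ones((32, 32, 16), dtype=bool) for _ in range(40)]
        labels = rng.integers(0, 3, size=40)      # 0/1/2 ~ three tumour types

        X = np.array([adc_histogram_features(a, m) for a, m in zip(adc_maps, masks)])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())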

  3. Using texture analysis to improve per-pixel classification of very high resolution images for mapping plastic greenhouses

    Science.gov (United States)

    Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.

    The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combinations of bands used significantly influenced some of the classification quality indexes used in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for mean and angular second moment texture parameters. The optimum window size in these texture parameters was 3×3 for IK images, while for QB images it depended on the quality index studied, but the optimum window size was around 15×15. With regard to the grey level, the optimum was 128. Thus, the

  4. Classification of grass pollen through the quantitative analysis of surface ornamentation and texture.

    Science.gov (United States)

    Mander, Luke; Li, Mao; Mio, Washington; Fowlkes, Charless C; Punyasena, Surangi W

    2013-11-07

    Taxonomic identification of pollen and spores uses inherently qualitative descriptions of morphology. Consequently, identifications are restricted to categories that can be reliably classified by multiple analysts, resulting in the coarse taxonomic resolution of the pollen and spore record. Grass pollen represents an archetypal example; it is not routinely identified below family level. To address this issue, we developed quantitative morphometric methods to characterize surface ornamentation and classify grass pollen grains. This produces a means of quantifying morphological features that are traditionally described qualitatively. We used scanning electron microscopy to image 240 specimens of pollen from 12 species within the grass family (Poaceae). We classified these species by developing algorithmic features that quantify the size and density of sculptural elements on the pollen surface, and measure the complexity of the ornamentation they form. These features yielded a classification accuracy of 77.5%. In comparison, a texture descriptor based on modelling the statistical distribution of brightness values in image patches yielded a classification accuracy of 85.8%, and seven human subjects achieved accuracies between 68.33 and 81.67%. The algorithmic features we developed directly relate to biologically meaningful features of grass pollen morphology, and could facilitate direct interpretation of unsupervised classification results from fossil material.

  5. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore each object presented in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the hyper-correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested by using the ISODATA classification method.
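    The three steps can be sketched roughly as follows (assuming numpy, PyWavelets and scikit-learn); the correlation threshold, greedy grouping and component count are illustrative assumptions rather than the authors' algorithm.

        # Hedged sketch: band grouping by cross-correlation, wavelet transform of
        # each group, then PCA on the coefficients. Synthetic data throughout.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(2)
        cube = rng.random((64, 64, 100))                 # rows x cols x bands (synthetic)
        bands = cube.reshape(-1, cube.shape[-1]).T       # one row per band

        # Step 1: correlation matrix between bands, greedy grouping of similar bands.
        corr = np.corrcoef(bands)
        groups, assigned = [], np.zeros(len(bands), dtype=bool)
        for b in range(len(bands)):
            if assigned[b]:
                continue
            members = np.where((corr[b] > 0.95) & ~assigned)[0]
            assigned[members] = True
            groups.append(members)

        # Steps 2-3: 2-D wavelet transform of each group's mean band, PCA on coefficients.
        features = []
        for g in groups:
            mean_band = cube[:, :, g].mean(axis=2)
            cA, _ = pywt.dwt2(mean_band, "db2")          # keep the approximation subband
            features.append(cA.ravel())
        components = PCA(n_components=min(8, len(features))).fit_transform(np.array(features))
        print(f"{len(groups)} band groups compressed to {components.shape}")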

  6. Non-negative matrix factorization in texture feature for classification of dementia with MRI data

    Science.gov (United States)

    Sarwinda, D.; Bustamam, A.; Ardaneswari, G.

    2017-07-01

    This paper investigates applications of non-negative matrix factorization as a feature selection method to select features from the gray level co-occurrence matrix. The proposed approach is used to classify dementia using MRI data. In this study, texture analysis using the gray level co-occurrence matrix is performed for feature extraction. In the feature extraction process of the MRI data, we obtained seven features from the gray level co-occurrence matrix. Non-negative matrix factorization selected the three most influential of all the features produced by feature extraction. A Naïve Bayes classifier is adopted to classify dementia, i.e. Alzheimer's disease, Mild Cognitive Impairment (MCI) and normal control. The experimental results show that non-negative matrix factorization as a feature selection method is able to achieve an accuracy of 96.4% for classification of Alzheimer's and normal control. The proposed method is also compared with other feature selection methods, i.e. Principal Component Analysis (PCA).
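    A hedged sketch of a comparable pipeline is shown below: GLCM texture features (via scikit-image's graycomatrix/graycoprops) reduced with NMF and classified with naive Bayes. Here NMF is used as a dimensionality reduction step standing in for the paper's feature selection, and the images and labels are placeholders.

        # Hedged sketch: GLCM features -> NMF (3 components) -> GaussianNB.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.decomposition import NMF
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_score

        def glcm_features(img, levels=32):
            q = (img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
            return np.array([graycoprops(glcm, p)[0, 0] for p in props])

        rng = np.random.default_rng(3)
        images = [rng.integers(0, 256, size=(64, 64)) for _ in range(60)]
        labels = rng.integers(0, 3, size=60)          # AD / MCI / normal (placeholder)

        X = np.array([glcm_features(im) for im in images])
        X = np.clip(X, 0, None)                        # NMF requires non-negative input
        X_reduced = NMF(n_components=3, init="nndsvda", max_iter=500).fit_transform(X)
        print("CV accuracy:", cross_val_score(GaussianNB(), X_reduced, labels, cv=5).mean())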

  7. Classification of Carotid Plaque Echogenicity by Combining Texture Features and Morphologic Characteristics.

    Science.gov (United States)

    Huang, Xiaowei; Zhang, Yanling; Qian, Ming; Meng, Long; Xiao, Yang; Niu, Lili; Zheng, Rongqin; Zheng, Hairong

    2016-10-01

    Anechoic carotid plaques on sonography have been used to predict future cardiovascular or cerebrovascular events. The purpose of this study was to investigate whether carotid plaque echogenicity could be assessed objectively by combining texture features extracted by MaZda software (Institute of Electronics, Technical University of Lodz, Lodz, Poland) and morphologic characteristics, which may provide a promising method for early prediction of acute cardiovascular disease. A total of 268 plaque images were collected from 136 volunteers and classified into 85 hyperechoic, 83 intermediate, and 100 anechoic plaques. About 300 texture features were extracted from histogram, absolute gradient, run-length matrix, gray-level co-occurrence matrix, autoregressive model, and wavelet transform algorithms by MaZda. The morphologic characteristics, including degree of stenosis, maximum plaque intima-media thickness, and maximum plaque length, were measured by B-mode sonography. Statistically significant features were selected by analysis of covariance. The most discriminative features were obtained from statistically significant features by linear discriminant analysis. The K-nearest neighbor classifier was used to classify plaque echogenicity based on statistically significant and most discriminative features. A total of 30 statistically significant features were selected among the plaques, and 2 most discriminative features were obtained from the statistically significant features. The classification accuracy rates for 3 types of plaques based on statistically significant and most discriminative features were 72.03% (κ= 0.571; P MaZda and morphologic characteristics.

  8. Idiopathic interstitial pneumonias and emphysema: detection and classification using a texture-discriminative approach

    Science.gov (United States)

    Fetita, C.; Chang-Chien, K. C.; Brillet, P. Y.; Prêteux, F.; Chang, R. F.

    2012-03-01

    Our study aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on three-dimensional (3-D) mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). According to a preliminary evaluation on an extended database, the proposed method can overcome the drawbacks of a previously developed approach and achieve higher sensitivity and specificity.

  9. Classification of mass and normal breast tissue: A convolution neural network classifier with spatial domain and texture images

    International Nuclear Information System (INIS)

    Sahiner, B.; Chan, H.P.; Petrick, N.; Helvie, M.A.; Adler, D.D.; Goodsitt, M.M.; Wei, D.

    1996-01-01

    The authors investigated the classification of regions of interest (ROI's) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a back-propagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROI's using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROI's containing biopsy-proven masses and 504 ROI's containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false positive fraction of 31%. The results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.

  10. Wavelet Based Diagnosis and Protection of Electric Motors

    OpenAIRE

    Khan, M. Abdesh Shafiel Kafiey; Rahman, M. Azizur

    2010-01-01

    In this chapter, a short review of conventional Fourier transforms and new wavelet-based fault diagnostic and protection techniques for electric motors is presented. A new hybrid wavelet packet transform (WPT) and neural network (NN) based fault diagnostic algorithm is developed and implemented for electric motors. The proposed WPT and NN

  11. 3D Wavelet-Based Filter and Method

    Science.gov (United States)

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  12. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    Science.gov (United States)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been extensively used in the recent past for gear fault diagnosis. The vibration signals extracted are usually contaminated with noise and may lead to wrong interpretation of results. The denoising of extracted vibration signals helps the fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised by selecting a proper denoising scheme in order to prevent the loss of signal information along with the noise. An approach has been made in this work to show the effectiveness of Principal Component Analysis (PCA) in denoising the gear vibration signal. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and Neighcoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above-mentioned four denoising schemes. The fault identification capability as well as the SNR, kurtosis and RMSE for the four denoising schemes have been compared. Features extracted from the denoised signals have been used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes have been evaluated based on the performance of the ANN models. The best denoising scheme has been identified based on the classification accuracy results. PCA is effective in all regards as the best denoising scheme.
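    For orientation, the sketch below implements one common wavelet denoising scheme (universal soft thresholding with PyWavelets) and the SNR/RMSE figures used to compare schemes; the gear vibration signal is simulated and the scheme is not necessarily one of the four compared in the paper.

        # Hedged sketch: wavelet soft-threshold denoising plus RMSE/SNR metrics.
        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        t = np.linspace(0, 1, 4096)
        clean = np.sin(2 * np.pi * 60 * t) + 0.4 * np.sin(2 * np.pi * 300 * t)
        noisy = clean + 0.3 * rng.normal(size=t.size)

        coeffs = pywt.wavedec(noisy, "db8", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest level
        thr = sigma * np.sqrt(2 * np.log(noisy.size))            # universal threshold
        denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(denoised_coeffs, "db8")[: noisy.size]

        rmse = np.sqrt(np.mean((denoised - clean) ** 2))
        snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
        print(f"RMSE = {rmse:.4f}, SNR = {snr:.2f} dB")

    Features computed from the denoised signal could then be passed to an ANN classifier, as the paper describes.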

  13. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  14. Sparse data structure design for wavelet-based methods

    Directory of Open Access Journals (Sweden)

    Latu Guillaume

    2011-12-01

    Full Text Available This course gives an introduction to the design of efficient datatypes for adaptive wavelet-based applications. It presents some code fragments and benchmark techniques useful for learning about the design of sparse data structures and adaptive algorithms. Material and practical examples are given, and they provide a good introduction for anyone involved in the development of adaptive applications. An answer will be given to the question: how to implement and efficiently use the discrete wavelet transform in computer applications? A focus will be made on time-evolution problems, and the use of wavelet-based schemes for adaptively solving partial differential equations (PDEs). One crucial issue is that the benefits of the adaptive method in terms of algorithmic cost reduction must not be wasted by the overheads associated with sparse data management.

  15. An automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images

    International Nuclear Information System (INIS)

    Park, Seong Hoon; Seo, Joon Beom; Kim, Nam Kug; Lee, Young Kyung; Kim, Song Soo; Chae, Eun Jin; Lee, June Goo

    2007-01-01

    To develop an automated classification system for the differentiation of obstructive lung diseases based on the textural analysis of HRCT images, and to evaluate the accuracy and usefulness of the system. For textural analysis, histogram features, gradient features, run length encoding, and a co-occurrence matrix were employed. A Bayesian classifier was used for automated classification. The images (image number n = 256) were selected from the HRCT images obtained from 17 healthy subjects (n = 67), 26 patients with bronchiolitis obliterans (n = 70), 28 patients with mild centrilobular emphysema (n = 65), and 21 patients with panlobular emphysema or severe centrilobular emphysema (n = 63). A five-fold cross-validation method was used to assess the performance of the system. Class-specific sensitivities were analyzed and the overall accuracy of the system was assessed with kappa statistics. The sensitivity of the system for each class was as follows: normal lung 84.9%, bronchiolitis obliterans 83.8%, mild centrilobular emphysema 77.0%, and panlobular emphysema or severe centrilobular emphysema 95.8%. The overall performance for differentiating each disease and the normal lung was satisfactory with a kappa value of 0.779. An automated classification system for the differentiation between obstructive lung diseases based on the textural analysis of HRCT images was developed. The proposed system discriminates well between the various obstructive lung diseases and the normal lung.
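    A toy version of such a pipeline, simple histogram and gradient ROI features classified with a (naive) Bayesian classifier and summarised with a kappa value, might look as follows; the ROIs and class labels are synthetic placeholders.

        # Hedged sketch: histogram + gradient features, naive Bayes, 5-fold CV, kappa.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import cohen_kappa_score

        def roi_features(roi):
            gy, gx = np.gradient(roi.astype(float))
            grad = np.hypot(gx, gy)
            return np.array([roi.mean(), roi.std(), np.percentile(roi, 10),
                             np.percentile(roi, 90), grad.mean(), grad.std()])

        rng = np.random.default_rng(5)
        classes = rng.integers(0, 4, size=256)                    # 4 tissue classes, 256 ROIs
        rois = [rng.normal(loc=c * 20, scale=15, size=(32, 32)) for c in classes]

        X = np.array([roi_features(r) for r in rois])
        pred = cross_val_predict(GaussianNB(), X, classes, cv=5)
        print("kappa:", cohen_kappa_score(classes, pred))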

  16. Enhanced ATM Security using Biometric Authentication and Wavelet Based AES

    Directory of Open Access Journals (Sweden)

    Sreedharan Ajish

    2016-01-01

    Full Text Available The traditional ATM terminal customer recognition systems rely only on bank cards, passwords and such identity verification methods are not perfect and functions are too single. Biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. This paper presents a highly secured ATM banking system using biometric authentication and wavelet based Advanced Encryption Standard (AES algorithm. Two levels of security are provided in this proposed design. Firstly we consider the security level at the client side by providing biometric authentication scheme along with a password of 4-digit long. Biometric authentication is achieved by considering the fingerprint image of the client. Secondly we ensure a secured communication link between the client machine to the bank server using an optimized energy efficient and wavelet based AES processor. The fingerprint image is the data for encryption process and 4-digit long password is the symmetric key for the encryption process. The performance of ATM machine depends on ultra-high-speed encryption, very low power consumption, and algorithmic integrity. To get a low power consuming and ultra-high speed encryption at the ATM machine, an optimized and wavelet based AES algorithm is proposed. In this system biometric and cryptography techniques are used together for personal identity authentication to improve the security level. The design of the wavelet based AES processor is simulated and the design of the energy efficient AES processor is simulated in Quartus-II software. Simulation results ensure its proper functionality. A comparison among other research works proves its superiority.

  17. Wavelet-based verification of the quantitative precipitation forecast

    Science.gov (United States)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by the two indices for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to represent strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.
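    As an illustration of scale-aware verification, the sketch below decomposes observed and forecast precipitation fields with a 2-D wavelet transform and reports a coefficient error per scale; the fields are synthetic and the statistic is a stand-in, not the paper's exact verification measure.

        # Hedged sketch: per-scale comparison of forecast vs observed wavelet coefficients.
        import numpy as np
        import pywt

        rng = np.random.default_rng(13)
        observed = rng.gamma(shape=0.5, scale=2.0, size=(128, 128))   # spotty rain field
        forecast = np.roll(observed, 4, axis=1) + 0.3 * rng.normal(size=observed.shape)

        obs_c = pywt.wavedec2(observed, "haar", level=4)
        fct_c = pywt.wavedec2(forecast, "haar", level=4)

        for lvl, (od, fd) in enumerate(zip(obs_c[1:], fct_c[1:]), start=1):
            err = np.sqrt(np.mean([(o - f) ** 2 for o, f in zip(od, fd)]))
            print(f"detail level {lvl} (coarse to fine): RMSE of coefficients = {err:.3f}")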

  18. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
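    The classification stage can be sketched with a distance-weighted k-nearest-neighbour model as below; the 45-feature matrix and cloud-type labels are placeholders, and the texture extraction itself is not reproduced.

        # Hedged sketch: weighted KNN on a matrix of textural features.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 45))            # 300 sky images x 45 texture features
        y = rng.integers(0, 5, size=300)          # cloud-type labels from a human observer

        knn = make_pipeline(StandardScaler(),
                            KNeighborsClassifier(n_neighbors=5, weights="distance"))
        print("CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())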

  19. Enhanced land use/cover classification of heterogeneous tropical landscapes using support vector machines and textural homogeneity

    Science.gov (United States)

    Paneque-Gálvez, Jaime; Mas, Jean-François; Moré, Gerard; Cristóbal, Jordi; Orta-Martínez, Martí; Luz, Ana Catarina; Guèze, Maximilien; Macía, Manuel J.; Reyes-García, Victoria

    2013-08-01

    Land use/cover classification is a key research field in remote sensing and land change science as thematic maps derived from remotely sensed data have become the basis for analyzing many socio-ecological issues. However, land use/cover classification remains a difficult task and it is especially challenging in heterogeneous tropical landscapes where nonetheless such maps are of great importance. The present study aims at establishing an efficient classification approach to accurately map all broad land use/cover classes in a large, heterogeneous tropical area, as a basis for further studies (e.g., land use/cover change, deforestation and forest degradation). Specifically, we first compare the performance of parametric (maximum likelihood), non-parametric (k-nearest neighbor and four different support vector machines - SVM), and hybrid (unsupervised-supervised) classifiers, using hard and soft (fuzzy) accuracy assessments. We then assess, using the maximum likelihood algorithm, what textural indices from the gray-level co-occurrence matrix lead to greater classification improvements at the spatial resolution of Landsat imagery (30 m), and rank them accordingly. Finally, we use the textural index that provides the most accurate classification results to evaluate whether its usefulness varies significantly with the classifier used. We classified imagery corresponding to dry and wet seasons and found that SVM classifiers outperformed all the rest. We also found that the use of some textural indices, but particularly homogeneity and entropy, can significantly improve classifications. We focused on the use of the homogeneity index, which has so far been neglected in land use/cover classification efforts, and found that this index along with reflectance bands significantly increased the overall accuracy of all the classifiers, but particularly of SVM. We observed that improvements in producer's and user's accuracies through the inclusion of homogeneity were different
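    A hedged sketch of adding a GLCM homogeneity band to the reflectance bands before classification is given below (scikit-image and scikit-learn assumed); window size, quantisation and the SVM settings are illustrative choices, and the imagery is random placeholder data.

        # Hedged sketch: moving-window GLCM homogeneity stacked with reflectance bands.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def homogeneity_band(band, win=7, levels=32):
            q = np.floor(band / band.max() * (levels - 1)).astype(np.uint8)
            half, out = win // 2, np.zeros_like(band, dtype=float)
            for i in range(half, band.shape[0] - half):
                for j in range(half, band.shape[1] - half):
                    patch = q[i - half:i + half + 1, j - half:j + half + 1]
                    glcm = graycomatrix(patch, [1], [0], levels=levels,
                                        symmetric=True, normed=True)
                    out[i, j] = graycoprops(glcm, "homogeneity")[0, 0]
            return out

        rng = np.random.default_rng(7)
        bands = rng.random((4, 60, 60))                  # R, G, B, NIR placeholders
        homog = homogeneity_band(bands[0])
        stack = np.concatenate([bands, homog[None]], axis=0)     # 5 layers per pixel

        X = stack.reshape(5, -1).T                       # pixels as samples
        y = rng.integers(0, 3, size=X.shape[0])          # land-cover labels (placeholder)
        print(SVC().fit(X, y).score(X, y))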

  20. Different approaches for the texture classification of a remote sensing image bank

    Science.gov (United States)

    Durand, Philippe; Brunet, Gerard; Ghorbanzadeh, Dariush; Jaupi, Luan

    2018-04-01

    In this paper, we summarize and compare two different approaches used by the authors to classify different natural textures. The first approach, which is simple and inexpensive in computing time, uses an image data bank and an expert system able to classify different textures from a number of rules established by discipline specialists. The second method uses the same database and a neural network approach.

  1. 3-D Solid Texture Classification Using Locally-Oriented Wavelet Transforms

    OpenAIRE

    Dicente Cid Yashin; Müller Henning; Platon Alexandra; Poletti Pierre-Alexandre; Depeursinge Adrien

    2017-01-01

    Many image acquisition techniques used in biomedical imaging, material analysis and structural geology are capable of acquiring 3D solid images. Computational analysis of these images is complex but necessary, since it is difficult for humans to visualize and quantify their detailed 3D content. One of the most common methods to analyze 3D data is to characterize the volumetric texture patterns. Texture analysis generally consists of encoding the local organization of image scales and directions...

  2. Classification of JERS-1 Image Mosaic of Central Africa Using A Supervised Multiscale Classifier of Texture Features

    Science.gov (United States)

    Saatchi, Sassan; DeGrandi, Franco; Simard, Marc; Podest, Erika

    1999-01-01

    In this paper, a multiscale approach is introduced to classify the Japanese Research Satellite-1 (JERS-1) mosaic image over the Central African rainforest. A series of texture maps are generated from the 100 m mosaic image at various scales. Using a quadtree model and relating classes at each scale by a Markovian relationship, the multiscale images are classified from coarse to finer scale. The results are verified at various scales and the evolution of classification is monitored by calculating the error at each stage.

  3. A Wavelet-Based Approach to Fall Detection

    Directory of Open Access Journals (Sweden)

    Luca Palmerini

    2015-05-01

    Full Text Available Falls among older people are a widely documented public health problem. Automatic fall detection has recently gained huge importance because it could allow for the immediate communication of falls to medical assistance. The aim of this work is to present a novel wavelet-based approach to fall detection, focusing on the impact phase and using a dataset of real-world falls. Since recorded falls result in a non-stationary signal, a wavelet transform was chosen to examine fall patterns. The idea is to consider the average fall pattern as the “prototype fall”. In order to detect falls, every acceleration signal can be compared to this prototype through wavelet analysis. The similarity of the recorded signal with the prototype fall is a feature that can be used in order to determine the difference between falls and daily activities. The discriminative ability of this feature is evaluated on real-world data. It outperforms other features that are commonly used in fall detection studies, with an Area Under the Curve of 0.918. This result suggests that the proposed wavelet-based feature is promising and future studies could use this feature (in combination with others) considering different fall phases in order to improve the performance of fall detection algorithms.
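    A minimal sketch of the prototype-comparison idea, correlating the wavelet detail coefficients of a recorded window with those of an averaged prototype fall, is given below; the prototype, the test signals, and the wavelet settings are assumptions for illustration.

        # Hedged sketch: similarity to a "prototype fall" in the wavelet domain.
        import numpy as np
        import pywt

        def wavelet_detail(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.concatenate(coeffs[1:])             # detail coefficients only

        rng = np.random.default_rng(8)
        t = np.linspace(-1, 1, 256)
        prototype = np.exp(-(t * 8) ** 2) * 3.0           # idealised impact spike (in g)

        candidate_fall = prototype + 0.3 * rng.normal(size=t.size)
        daily_activity = 0.5 * np.sin(2 * np.pi * 2 * t) + 0.3 * rng.normal(size=t.size)

        ref = wavelet_detail(prototype)
        for name, sig in [("fall-like", candidate_fall), ("walking-like", daily_activity)]:
            score = np.corrcoef(ref, wavelet_detail(sig))[0, 1]
            print(f"{name}: similarity to prototype = {score:.2f}")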

  4. Wavelet-based moment invariants for pattern recognition

    Science.gov (United States)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  5. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    Science.gov (United States)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever increasing role in computer aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, the performance of texture features for CADx has lagged behind, primarily because of the more similar features among different polyp types. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and diagnoses confirmed through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin, and our adaptive method. We compute a set of texture features from the scaled images including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance based on the area under the receiver operating characteristic curve by up to 4.6%.
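    The adaptive (equal-frequency) binning can be sketched with quantile-based bin edges as below; the ROI data are placeholders and the GLCM step uses scikit-image rather than the authors' feature code.

        # Hedged sketch: equal-frequency gray-level bins from the pooled ROI histogram,
        # then GLCM features on the rescaled image.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def adaptive_bin_edges(all_voxels, n_levels=32):
            """Quantile-based edges: roughly equal voxel counts per gray-level bin."""
            qs = np.linspace(0, 1, n_levels + 1)[1:-1]
            return np.quantile(all_voxels, qs)

        def rescale(roi, edges):
            return np.digitize(roi, edges).astype(np.uint8)      # values in 0..n_levels-1

        rng = np.random.default_rng(9)
        rois = [rng.normal(40, 25, size=(24, 24)) for _ in range(392)]   # HU-like values
        edges = adaptive_bin_edges(np.concatenate([r.ravel() for r in rois]), n_levels=32)

        roi_scaled = rescale(rois[0], edges)
        glcm = graycomatrix(roi_scaled, [1], [0], levels=32, symmetric=True, normed=True)
        print("contrast:", graycoprops(glcm, "contrast")[0, 0])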

  6. From cardinal spline wavelet bases to highly coherent dictionaries

    International Nuclear Information System (INIS)

    Andrle, Miroslav; Rebollo-Neira, Laura

    2008-01-01

    Wavelet families arise by scaling and translations of a prototype function, called the mother wavelet. The construction of wavelet bases for cardinal spline spaces is generally carried out within the multi-resolution analysis scheme. Thus, the usual way of increasing the dimension of the multi-resolution subspaces is by augmenting the scaling factor. We show here that, when working on a compact interval, the identical effect can be achieved without changing the wavelet scale but reducing the translation parameter. By such a procedure we generate a redundant frame, called a dictionary, spanning the same spaces as a wavelet basis but with wavelets of broader support. We characterize the correlation of the dictionary elements by measuring their 'coherence' and produce examples illustrating the relevance of highly coherent dictionaries to problems of sparse signal representation. (fast track communication)

  7. Adaptive Image Transmission Scheme over Wavelet-Based OFDM System

    Institute of Scientific and Technical Information of China (English)

    GAO Xinying; YUAN Dongfeng; ZHANG Haixia

    2005-01-01

    In this paper an adaptive image transmission scheme is proposed over a Wavelet-based OFDM (WOFDM) system with Unequal error protection (UEP) through the design of a non-uniform signal constellation in MLC. Two different data division schemes, byte-based and bit-based, are analyzed and compared. In the bit-based data division scheme, different bits are protected unequally according to their different contributions to the image quality, which makes UEP combined with this scheme more powerful than with the byte-based scheme. Simulation results demonstrate that image transmission by UEP with the bit-based data division scheme presents much higher PSNR values and surprisingly better image quality. Furthermore, considering the tradeoff between complexity and BER performance, the Haar wavelet, with the shortest compactly supported filter length, is the most suitable one among the orthogonal Daubechies wavelet series in our proposed system.

  8. Research on Wavelet-Based Algorithm for Image Contrast Enhancement

    Institute of Scientific and Technical Information of China (English)

    Wu Ying-qian; Du Pei-jun; Shi Peng-fei

    2004-01-01

    A novel wavelet-based algorithm for image enhancement is proposed in this paper. On the basis of multiscale analysis, the proposed algorithm efficiently solves the problem of noise over-enhancement, which commonly occurs in traditional methods for contrast enhancement. The decomposed coefficients at the same scale are processed by a nonlinear method, and the coefficients at different scales are enhanced to different degrees. During the procedure, the method takes full advantage of the properties of the human visual system so as to achieve better performance. The simulations demonstrate that these characteristics of the proposed approach enable it to fully enhance the content in images, to efficiently alleviate the enhancement of noise and to achieve a much better enhancement effect than the traditional approaches.
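    The general idea, amplifying detail coefficients with a scale-dependent gain and a smaller gain at the finest scale to limit noise amplification, can be sketched as follows; the gains and wavelet are assumptions, not the authors' operator.

        # Hedged sketch: scale-dependent gain on 2-D wavelet detail coefficients.
        import numpy as np
        import pywt

        def wavelet_enhance(image, wavelet="db2", level=3, gains=(1.2, 1.6, 2.0)):
            # gains[0] applies to the finest detail scale, gains[-1] to the coarsest.
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            new_coeffs = [coeffs[0]]
            for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
                g = gains[::-1][i - 1]          # coarser detail levels come first in coeffs
                new_coeffs.append((g * cH, g * cV, g * cD))
            out = pywt.waverec2(new_coeffs, wavelet)
            return np.clip(out, 0, 255)

        rng = np.random.default_rng(10)
        img = np.clip(rng.normal(120, 10, size=(128, 128)), 0, 255)
        enhanced = wavelet_enhance(img)
        print("input std:", img.std().round(2), "enhanced std:", enhanced.std().round(2))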

  9. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek; Mü nch, Andreas; Sü li, Endre; Wagner, Barbara

    2016-01-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg--Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  10. Anisotropy in wavelet-based phase field models

    KAUST Repository

    Korzec, Maciek

    2016-04-01

    When describing the anisotropic evolution of microstructures in solids using phase-field models, the anisotropy of the crystalline phases is usually introduced into the interfacial energy by directional dependencies of the gradient energy coefficients. We consider an alternative approach based on a wavelet analogue of the Laplace operator that is intrinsically anisotropic and linear. The paper focuses on the classical coupled temperature/Ginzburg--Landau type phase-field model for dendritic growth. For the model based on the wavelet analogue, existence, uniqueness and continuous dependence on initial data are proved for weak solutions. Numerical studies of the wavelet based phase-field model show dendritic growth similar to the results obtained for classical phase-field models.

  11. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this tradeoff is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not taken into consideration. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.

  12. Wavelet based free-form deformations for nonrigid registration

    Science.gov (United States)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  13. A vegetation height classification approach based on texture analysis of a single VHR image

    International Nuclear Information System (INIS)

    Petrou, Z I; Manakos, I; Stathaki, T; Tarantino, C; Adamo, M; Blonda, P

    2014-01-01

    Vegetation height is a crucial feature in various applications related to ecological mapping, enhancing the discrimination among different land cover or habitat categories and facilitating a series of environmental tasks, ranging from biodiversity monitoring and assessment to landscape characterization, disaster management and conservation planning. Primary sources of information on vegetation height include in situ measurements and data from active satellite or airborne sensors, which, however, may often be non-affordable or unavailable for certain regions. Alternative approaches on extracting height information from very high resolution (VHR) satellite imagery based on texture analysis, have recently been presented, with promising results. Following the notion that multispectral image bands may often be highly correlated, data transformation and dimensionality reduction techniques are expected to reduce redundant information, and thus, the computational cost of the approaches, without significantly compromising their accuracy. In this paper, dimensionality reduction is performed on a VHR image and textural characteristics are calculated on its reconstructed approximations, to show that their discriminatory capabilities are maintained up to a large degree. Texture analysis is also performed on the projected data to investigate whether the different height categories can be distinguished in a similar way

  14. A statistical-textural-features based approach for classification of solid drugs using surface microscopic images.

    Science.gov (United States)

    Tahir, Fahima; Fahiem, Muhammad Abuzar

    2014-01-01

    The quality of pharmaceutical products plays an important role in the pharmaceutical industry as well as in our lives. Usage of defective tablets can be harmful for patients. In this research we propose a nondestructive method to identify defective and nondefective tablets using their surface morphology. Three different environmental factors (temperature, humidity and moisture) are analyzed to evaluate the performance of the proposed method. Multiple textural features are extracted from the surface of the defective and nondefective tablets. These textural features are the gray level co-occurrence matrix, run length matrix, histogram, autoregressive model and Haar wavelet. In total, 281 textural features are extracted from the images. We performed analyses on all 281 features, the top 15 features, and the top 2 features. The top 15 features are selected using three different feature reduction techniques: chi-square, gain ratio and relief-F. We used three different classifiers (support vector machine, K-nearest neighbors and naïve Bayes) to calculate the accuracies of the proposed method using two experiments, that is, the leave-one-out cross-validation technique and train-test models. We tested each classifier against all selected features and then compared their results. The experimental work showed that in most cases SVM performed better than the other two classifiers.
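    The evaluation protocol, chi-square feature reduction followed by SVM, k-NN and naive Bayes under leave-one-out cross-validation, can be sketched as below; the 281-feature matrix and labels are placeholders.

        # Hedged sketch: chi-square feature selection + three classifiers under LOOCV.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.preprocessing import MinMaxScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.svm import SVC
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(11)
        X = rng.random((80, 281))                 # 80 tablets x 281 textural features
        y = rng.integers(0, 2, size=80)           # defective / non-defective

        for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier(3)), ("NB", GaussianNB())]:
            pipe = make_pipeline(MinMaxScaler(),               # chi2 needs non-negative input
                                 SelectKBest(chi2, k=15), clf)
            acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
            print(f"{name}: leave-one-out accuracy = {acc:.3f}")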

  15. Automatic classification of cardioembolic and arteriosclerotic ischemic strokes from apparent diffusion coefficient datasets using texture analysis and deep learning

    Science.gov (United States)

    Villafruela, Javier; Crites, Sebastian; Cheng, Bastian; Knaack, Christian; Thomalla, Götz; Menon, Bijoy K.; Forkert, Nils D.

    2017-03-01

    Stroke is a leading cause of death and disability in the western hemisphere. Acute ischemic strokes can be broadly classified based on the underlying cause into atherosclerotic strokes, cardioembolic strokes, small vessel disease, and stroke with other causes. The ability to determine the exact origin of an acute ischemic stroke is highly relevant for optimal treatment decision and preventing recurrent events. However, the differentiation of atherosclerotic and cardioembolic phenotypes can be especially challenging due to similar appearance and symptoms. The aim of this study was to develop and evaluate the feasibility of an image-based machine learning approach for discriminating between arteriosclerotic and cardioembolic acute ischemic strokes using 56 apparent diffusion coefficient (ADC) datasets from acute stroke patients. For this purpose, acute infarct lesions were semi-automatically segmented and 30,981 geometric and texture image features were extracted for each stroke volume. To improve the performance and accuracy, categorical Pearson's χ2 test was used to select the most informative features while removing redundant attributes. As a result, only 289 features were finally included for training of a deep multilayer feed-forward neural network without bootstrapping. The proposed method was evaluated using a leave-one-out cross validation scheme. The proposed classification method achieved an average area under receiver operator characteristic curve value of 0.93 and a classification accuracy of 94.64%. These first results suggest that the proposed image-based classification framework can support neurologists in clinical routine in differentiating between atherosclerotic and cardioembolic phenotypes.

  16. Textural Classification of Mammographic Parenchymal Patterns with the SONNET Selforganizing Neural Network

    Directory of Open Access Journals (Sweden)

    Daniel Howard

    2008-01-01

    Full Text Available In nationwide mammography screening, thousands of mammography examinations must be processed. Each consists of two standard views of each breast, and each mammogram must be visually examined by an experienced radiologist to assess it for any anomalies. The ability to detect an anomaly in mammographic texture is important to successful outcomes in mammography screening and, in this study, a large number of mammograms were digitized with a highly accurate scanner; and textural features were derived from the mammograms as input data to a SONNET selforganizing neural network. The paper discusses how SONNET was used to produce a taxonomic organization of the mammography archive in an unsupervised manner. This process is subject to certain choices of SONNET parameters, in these numerical experiments using the craniocaudal view, and typically produced O(10) mammogram classes (for example, 39), by analysis of features from O(10^3) mammogram images. The mammogram taxonomy captured typical subtleties to discriminate mammograms, and it is submitted that this may be exploited to aid the detection of mammographic anomalies, for example, by acting as a preprocessing stage to simplify the task for a computational detection scheme, or by ordering mammography examinations by mammogram taxonomic class prior to screening in order to encourage more successful visual examination during screening. The resulting taxonomy may help train screening radiologists and conceivably help to settle legal cases concerning a mammography screening examination because the taxonomy can reveal the frequency of mammographic patterns in a population.

  17. Fast Schemes for Computing Similarities between Gaussian HMMs and Their Applications in Texture Image Classification

    Directory of Open Access Journals (Sweden)

    Chen Ling

    2005-01-01

    Full Text Available An appropriate definition and efficient computation of similarity (or distance) measures between two stochastic models are of theoretical and practical interest. In this work, a similarity measure, that is, a modified "generalized probability product kernel," of Gaussian hidden Markov models is introduced. Two efficient schemes for computing this similarity measure are presented. The first scheme adopts a forward procedure analogous to the approach commonly used in probability evaluation of observation sequences on HMMs. The second scheme is based on the specially defined similarity transition matrix of two Gaussian hidden Markov models. Two scaling procedures are also proposed to solve the out-of-precision problem in the implementation. The effectiveness of the proposed methods has been evaluated on simulated observations with predefined model parameters, and on natural texture images. Promising experimental results have been observed.

  18. Structural classification of proteins using texture descriptors extracted from the cellular automata image.

    Science.gov (United States)

    Kavianpour, Hamidreza; Vasighi, Mahdi

    2017-02-01

    Nowadays, having knowledge about cellular attributes of proteins has an important role in pharmacy, medical science and molecular biology. These attributes are closely correlated with the function and three-dimensional structure of proteins. Knowledge of protein structural class is used by various methods for better understanding the protein functionality and folding patterns. Computational methods and intelligence systems can have an important role in performing structural classification of proteins. Most protein sequences are stored in databanks as characters and strings, and a numerical representation is essential for applying machine learning methods. In this work, a binary representation of protein sequences is introduced based on reduced amino acid alphabets according to the surrounding hydrophobicity index. Many important features which are hidden in these long binary sequences can be clearly displayed through their cellular automata images. The extracted features from these images are used to build a classification model by support vector machine. Compared to previous studies on several benchmark datasets, the promising classification rates obtained by tenfold cross-validation imply that the current approach can help in revealing some inherent features deeply hidden in protein sequences and improve the quality of predicting protein structural class.

  19. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Science.gov (United States)

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701

  20. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    Full Text Available One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
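    A hedged sketch of 3D wavelet texture features, subband energies from a 3D Haar decomposition reduced with PCA and classified with an SVM, is shown below; the volumes, grades and classifier choice are illustrative placeholders.

        # Hedged sketch: 3-D Haar wavelet subband energies -> PCA -> SVM grading.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        def haar3d_energies(volume, level=2):
            coeffs = pywt.wavedecn(volume, "haar", level=level)
            feats = [np.mean(coeffs[0] ** 2)]                    # approximation energy
            for detail in coeffs[1:]:                            # dict of detail subbands
                feats.extend(np.mean(arr ** 2) for arr in detail.values())
            return np.array(feats)

        rng = np.random.default_rng(12)
        grades = np.repeat(np.arange(4), 10)                     # 4 grades x 10 volumes
        volumes = [rng.normal(size=(32, 32, 16)) * (g + 1) for g in grades]

        X = np.array([haar3d_energies(v) for v in volumes])
        clf = make_pipeline(PCA(n_components=5), SVC())
        print("CV accuracy:", cross_val_score(clf, X, grades, cv=4).mean())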

  1. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Baofeng Li

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, the BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining the computational logic, parallelizing the resampling process and the correlation coefficient calculation, and parallelizing memory access. A proof-of-concept implementation of the architecture with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed to the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  2. FPGA Accelerator for Wavelet-Based Automated Global Image Registration

    Directory of Open Access Journals (Sweden)

    Li Baofeng

    2009-01-01

    Full Text Available Wavelet-based automated global image registration (WAGIR) is fundamental for most remote sensing image processing algorithms and extremely computation-intensive. With more and more algorithms migrating from ground computing to onboard computing, an efficient dedicated architecture for WAGIR is desired. In this paper, the BWAGIR architecture is proposed based on a block resampling scheme. BWAGIR achieves significant performance by pipelining the computational logic, parallelizing the resampling process and the correlation coefficient calculation, and parallelizing memory access. A proof-of-concept implementation of the architecture with 1 BWAGIR processing unit performs at least 7.4X faster than the CL cluster system with 1 node, and at least 3.4X faster than the MPM massively parallel machine with 1 node. Further speedup can be achieved by parallelizing multiple BWAGIR units. The architecture with 5 units achieves a speedup of about 3X against the CL with 16 nodes and a comparable speed to the MPM with 30 nodes. More importantly, the BWAGIR architecture can be deployed onboard economically.

  3. An image adaptive, wavelet-based watermarking of digital images

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency properties and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high-frequency DWT components of a specific sub-image and is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
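    WM2.0 itself is not published in this record, so the sketch below only illustrates the generic family it belongs to: additive embedding of a watermark in high-frequency DWT coefficients and correlation-based detection. The Haar wavelet, the embedding strength alpha and the plain correlation test are assumptions; the paper's sub-image selection, perceptual weighting and Neyman-Pearson threshold are omitted.

```python
# Generic sketch of additive watermark embedding in a high-frequency DWT subband
# and correlation-based detection; not a reproduction of WM2.0.
import numpy as np
import pywt

rng = np.random.default_rng(0)

def embed(image, watermark, alpha=2.0):
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    cD_marked = cD + alpha * watermark               # embed in the diagonal detail band
    return pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")

def detect(image, watermark):
    _, (_, _, cD) = pywt.dwt2(image, "haar")
    # Normalised correlation between detail coefficients and the watermark signal.
    return float(np.corrcoef(cD.ravel(), watermark.ravel())[0, 1])

image = rng.random((256, 256))
watermark = rng.standard_normal((128, 128))          # same shape as the level-1 detail band
marked = embed(image, watermark)
print(detect(marked, watermark), detect(image, watermark))  # high vs. near-zero correlation
```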

  4. High Order Wavelet-Based Multiresolution Technology for Airframe Noise Prediction, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a novel, high-accuracy, high-fidelity, multiresolution (MRES), wavelet-based framework for efficient prediction of airframe noise sources and...

  5. Wavelet-Based Bayesian Methods for Image Analysis and Automatic Target Recognition

    National Research Council Canada - National Science Library

    Nowak, Robert

    2001-01-01

    .... We have developed two new techniques. First, we have developed a wavelet-based approach to image restoration and deconvolution problems using Bayesian image models and an alternating-maximization method...

  6. Identification of immune cell infiltration in hematoxylin-eosin stained breast cancer samples: texture-based classification of tissue morphologies

    Science.gov (United States)

    Turkki, Riku; Linder, Nina; Kovanen, Panu E.; Pellinen, Teijo; Lundin, Johan

    2016-03-01

    The characteristics of immune cells in the tumor microenvironment of breast cancer capture clinically important information. Despite the heterogeneity of tumor-infiltrating immune cells, it has been shown that the degree of infiltration assessed by visual evaluation of hematoxylin-eosin (H and E) stained samples has prognostic and possibly predictive value. However, quantification of the infiltration in H and E-stained tissue samples is currently dependent on visual scoring by an expert. Computer vision enables automated characterization of the components of the tumor microenvironment, and texture-based methods have successfully been used to discriminate between different tissue morphologies and cell phenotypes. In this study, we evaluate whether local binary pattern texture features with superpixel segmentation and classification with a support vector machine can be utilized to identify immune cell infiltration in H and E-stained breast cancer samples. Guided by the pan-leukocyte CD45 marker, we annotated training and test sets from 20 primary breast cancer samples. In the training set of arbitrarily sized image regions (n=1,116), a 3-fold cross-validation resulted in 98% accuracy and an area under the receiver-operating characteristic curve (AUC) of 0.98 to discriminate between immune cell-rich and -poor areas. In the test set (n=204), we achieved an accuracy of 96% and an AUC of 0.99 in labeling cropped tissue regions correctly into immune cell-rich and -poor categories. The obtained results demonstrate strong discrimination between immune cell-rich and -poor tissue morphologies. The proposed method can provide a quantitative measurement of the degree of immune cell infiltration and can be applied to digitally scanned H and E-stained breast cancer samples for diagnostic purposes.
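    A minimal sketch of the pipeline named above (local binary patterns, superpixels and an SVM) is given below, assuming grayscale input and placeholder labels; the LBP parameters, superpixel count and training data are illustrative and do not reproduce the study's CD45-guided annotation.

```python
# Sketch: superpixel segmentation, per-superpixel LBP histograms, SVM classification
# into immune cell-rich vs. -poor regions. Parameter values are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_lbp_features(gray_image, n_segments=200, P=8, R=1.0):
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    segments = slic(gray_image, n_segments=n_segments, compactness=0.1, channel_axis=None)
    feats = []
    for label in np.unique(segments):
        hist, _ = np.histogram(lbp[segments == label], bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.array(feats), segments

# Hypothetical data: per-superpixel LBP histograms with placeholder rich/poor labels.
rng = np.random.default_rng(1)
image = rng.random((512, 512))
X, segments = superpixel_lbp_features(image)
y = rng.integers(0, 2, size=len(X))              # 1 = immune cell-rich, 0 = -poor (placeholder)
clf = SVC(kernel="rbf").fit(X, y)
```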

  7. Detection of High-Density Crowds in Aerial Images Using Texture Classification

    Directory of Open Access Journals (Sweden)

    Oliver Meynberg

    2016-06-01

    Full Text Available Automatic crowd detection in aerial images is certainly a useful source of information to prevent crowd disasters in large complex scenarios of mass events. A number of publications employ regression-based methods for crowd counting and crowd density estimation. However, these methods work only when a correct manual count is available to serve as a reference. Therefore, it is the objective of this paper to detect high-density crowds in aerial images, where counting- or regression-based approaches would fail. We compare two texture-classification methodologies on a dataset of aerial image patches which are grouped into ranges of different crowd density. These methodologies are: (1) a Bag-of-Words (BoW) model with two alternative local features encoded as Improved Fisher Vectors and (2) features based on a Gabor filter bank. Our results show that a classifier using either BoW or Gabor features can detect crowded image regions with 97% classification accuracy. In our tests of four classes of different crowd-density ranges, BoW-based features have 5%-12% better accuracy than Gabor features.
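    The Gabor-filter-bank alternative can be sketched as follows; the frequencies, orientations, patch size and linear SVM are illustrative assumptions rather than the paper's configuration, and the BoW/Fisher-vector pipeline is not shown.

```python
# Sketch of Gabor-filter-bank texture features for image patches plus a linear SVM.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import LinearSVC

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(patch, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            feats.extend([magnitude.mean(), magnitude.var()])
    return np.array(feats)

# Hypothetical patches grouped into four crowd-density classes.
rng = np.random.default_rng(2)
patches = [rng.random((64, 64)) for _ in range(20)]
labels = rng.integers(0, 4, size=20)
X = np.vstack([gabor_features(p) for p in patches])
clf = LinearSVC().fit(X, labels)
```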

  8. TextureCam Field Test Results from the Mojave Desert, California: Autonomous Instrument Classification of Sediment and Rock Surfaces

    Science.gov (United States)

    Castano, R.; Abbey, W. J.; Bekker, D. L.; Cabrol, N. A.; Francis, R.; Manatt, K.; Ortega, K.; Thompson, D. R.; Wagstaff, K.

    2013-12-01

    TextureCam is an intelligent camera that uses integrated image analysis to classify sediment and rock surfaces into basic visual categories. This onboard image understanding can improve the autonomy of exploration spacecraft during the long periods when they are out of contact with operators. This could increase the number of science activities performed in each command cycle by, for example, autonomously targeting science features of opportunity with narrow field-of-view remote sensing, identifying clean surfaces for autonomous placement of arm-mounted instruments, or detecting high-value images for prioritized downlink. TextureCam incorporates image understanding directly into embedded hardware with a Field Programmable Gate Array (FPGA). This allows the instrument to perform the classification in real time without taxing the primary spacecraft computing resources. We use a machine learning approach in which operators train a statistical model of surface appearance using examples from previously acquired images. A random forest model extrapolates from these training cases, using the statistics of small image patches to characterize the texture of each pixel independently. Applying this model to each pixel in a new image yields a map of surface units. We deployed a prototype instrument in the Cima Volcanic Fields during a series of experiments in May 2013. We imaged each environment with a tripod-mounted RGB camera connected directly to the FPGA board for real-time processing. Our first scenario assessed ground surface cover on open terrain atop a weathered volcanic flow. We performed a transect consisting of 16 forward-facing images collected at 1 m intervals. We trained the system to categorize terrain into four classes: sediment, basalt cobbles, basalt pebbles, and basalt with iron oxide weathering. Accuracy rates with regard to the fraction of the actual feature that was labeled correctly by the automated system were calculated. Lower accuracy rates were

  9. Cloud field classification based upon high spatial resolution textural features. II - Simplified vector approaches

    Science.gov (United States)

    Chen, D. W.; Sengupta, S. K.; Welch, R. M.

    1989-01-01

    This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
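    The sum-and-difference-histogram idea can be sketched as below: for a given pixel offset, only two one-dimensional histograms are accumulated instead of an N_g x N_g co-occurrence matrix, which is where the run-time and storage savings come from. The offset, gray-level count and the particular Unser-style statistics are illustrative assumptions.

```python
# Sketch of sum-and-difference-histogram (SADH) texture features for one pixel
# offset; only two 1-D histograms are stored instead of a full co-occurrence matrix.
import numpy as np

def sadh_features(image, dx=1, dy=0, levels=256):
    a = image[:image.shape[0] - dy, :image.shape[1] - dx].astype(int)
    b = image[dy:, dx:].astype(int)
    s = (a + b).ravel()                                   # sums in [0, 2*(levels-1)]
    d = (a - b).ravel()                                   # differences in [-(levels-1), levels-1]
    hs, _ = np.histogram(s, bins=np.arange(0, 2 * levels))
    hd, _ = np.histogram(d, bins=np.arange(-(levels - 1), levels + 1))
    hs, hd = hs / hs.sum(), hd / hd.sum()
    i_s = np.arange(2 * levels - 1)
    i_d = np.arange(-(levels - 1), levels)
    mean = float(np.sum(i_s * hs)) / 2.0                  # image mean
    contrast = float(np.sum(i_d ** 2 * hd))               # GLCM-style contrast
    homogeneity = float(np.sum(hd / (1.0 + i_d ** 2)))    # GLCM-style homogeneity
    return mean, contrast, homogeneity

img = (np.random.default_rng(3).random((128, 128)) * 255).astype(np.uint8)
print(sadh_features(img))
```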

  10. Wavelet-based ground vehicle recognition using acoustic signals

    Science.gov (United States)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will

  11. Embedded wavelet-based face recognition under variable position

    Science.gov (United States)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
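    The compact representation described above can be sketched as follows: only the level-K approximation subband of a 2-D wavelet decomposition is kept, which shrinks the stored descriptor by roughly a factor of 2^(2K) (64 for K = 3). The Haar wavelet, the ROI size and the nearest-neighbour matching are illustrative assumptions standing in for the paper's recognition stage.

```python
# Sketch of a compact wavelet-domain face descriptor: keep only the level-K
# approximation subband, then match by nearest neighbour (illustrative stand-in).
import numpy as np
import pywt

def wavelet_descriptor(face_roi, K=3, wavelet="haar"):
    coeffs = pywt.wavedec2(face_roi, wavelet, level=K)
    approx = coeffs[0]                        # low-frequency approximation at level K
    return approx.ravel()

def recognise(probe, gallery_descriptors):
    d = wavelet_descriptor(probe)
    dists = [np.linalg.norm(d - g) for g in gallery_descriptors]
    return int(np.argmin(dists))

rng = np.random.default_rng(4)
gallery = [rng.random((128, 128)) for _ in range(40)]    # hypothetical 40-face database
descriptors = [wavelet_descriptor(f) for f in gallery]
print(recognise(gallery[7], descriptors))                # -> 7
```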

  12. Heart Rate Variability and Wavelet-based Studies on ECG Signals from Smokers and Non-smokers

    Science.gov (United States)

    Pal, K.; Goel, R.; Champaty, B.; Samantray, S.; Tibarewala, D. N.

    2013-12-01

    The current study deals with the heart rate variability (HRV) and wavelet-based ECG signal analysis of smokers and non-smokers. The results of HRV indicated dominance towards the sympathetic nervous system activity in smokers. The heart rate was found to be higher in case of smokers as compared to non-smokers ( p smokers from the non-smokers. The results indicated that when RMSSD, SD1 and RR-mean features were used concurrently a classification efficiency of > 90 % was achieved. The wavelet decomposition of the ECG signal was done using the Daubechies (db 6) wavelet family. No difference was observed between the smokers and non-smokers which apparently suggested that smoking does not affect the conduction pathway of heart.

  13. Identification and classification of similar looking food grains

    Science.gov (United States)

    Anami, B. S.; Biradar, Sunanda D.; Savakar, D. G.; Kulkarni, P. V.

    2013-01-01

    This paper describes a comparative study of Artificial Neural Network (ANN) and Support Vector Machine (SVM) classifiers by taking a case study of identification and classification of four pairs of similar looking food grains, namely Finger Millet, Mustard, Soyabean, Pigeon Pea, Aniseed, Cumin-seeds, Split Greengram and Split Blackgram. Algorithms are developed to acquire and process color images of these grain samples. The developed algorithms are used to extract 18 color (Hue-Saturation-Value, HSV) features and 42 wavelet-based texture features. A Back Propagation Neural Network (BPNN)-based classifier is designed using three feature sets, namely color-HSV, wavelet-texture and their combination. An SVM model for the color-HSV features is designed for the same set of samples. Classification accuracies ranging from 93% to 96% for color-HSV, from 78% to 94% for the wavelet-texture model and from 92% to 97% for the combined model are obtained for the ANN-based models. A classification accuracy ranging from 80% to 90% is obtained for the color-HSV-based SVM model. The training time required for the SVM-based model is substantially less than that for the ANN on the same set of images.

  14. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    Science.gov (United States)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with a wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.

  15. Classification of brain tumors using texture based analysis of T1-post contrast MR scans in a preclinical model

    Science.gov (United States)

    Tang, Tien T.; Zawaski, Janice A.; Francis, Kathleen N.; Qutub, Amina A.; Gaber, M. Waleed

    2018-02-01

    Accurate diagnosis of tumor type is vital for effective treatment planning. Diagnosis relies heavily on tumor biopsies and other clinical factors. However, biopsies do not fully capture the tumor's heterogeneity due to sampling bias and are only performed if the tumor is accessible. An alternative approach is to use features derived from routine diagnostic imaging such as magnetic resonance (MR) imaging. In this study we aim to establish the use of quantitative image features to classify brain tumors and extend the use of MR images beyond tumor detection and localization. To control for interscanner, acquisition and reconstruction protocol variations, the established workflow was performed in a preclinical model. Using glioma (U87 and GL261) and medulloblastoma (Daoy) models, T1-weighted post-contrast scans were acquired at different time points post-implant. The central, middle, and peripheral tumor regions were analyzed using in-house software to extract 32 different image features consisting of first- and second-order features. The extracted features were used to construct a decision tree, which could predict tumor type with 10-fold cross-validation. Results from the final classification model demonstrated that the middle tumor region had the highest overall accuracy at 79%, while the AUC accuracy was over 90% for GL261 and U87 tumors. Our analysis further identified image features that were unique to certain tumor regions, although GL261 tumors were more homogeneous, with no significant differences between the central and peripheral tumor regions. In conclusion, our study shows that texture features derived from MR scans can be used to classify tumor type with high success rates. Furthermore, the algorithm we have developed can be implemented with any imaging dataset and may be applicable to multiple tumor types to determine diagnosis.
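    The classification stage described above can be sketched with scikit-learn as below; the 32 per-region features (first- and second-order statistics of the T1 post-contrast ROIs) are replaced by placeholder data, so only the decision-tree-plus-10-fold-cross-validation structure is illustrated.

```python
# Sketch of the classification stage: a decision tree on per-region image features,
# evaluated with 10-fold cross-validation. Feature extraction is not reproduced.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((120, 32))                   # hypothetical: 120 tumour regions x 32 features
y = rng.integers(0, 3, size=120)            # hypothetical labels: U87, GL261, Daoy

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
print(scores.mean())
```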

  16. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    Science.gov (United States)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data caused by frequent and abrupt market changes and noise. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  17. A New Wavelet-Based Document Image Segmentation Scheme

    Institute of Scientific and Technical Information of China (English)

    赵健; 李道京; 俞卞章; 耿军平

    2002-01-01

    Document image segmentation is very useful for printing, faxing and data processing. An algorithm is developed for segmenting and classifying document images. The feature used for classification is based on the histogram distribution pattern of the different image classes. An important attribute of the algorithm is the use of a wavelet correlation image to enhance the raw image's pattern, so that classification accuracy is improved. In this paper the document image is divided into four types: background, photo, text and graph. Firstly, the document image background is distinguished easily by conventional methods; secondly, the three remaining image types are distinguished by their typical histograms; to make the histogram features clearer, each resolution's HH wavelet subimage is added to the raw image at its resolution. Finally, the photo, text and graph regions are separated according to how well the feature fits the Laplacian distribution, as measured by χ2 and L. Simulations show that classification accuracy is significantly improved. The comparison with related work shows that our algorithm provides both lower classification error rates and better visual results.

  18. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  19. A New Wavelet-Based ECG Delineator for the Evaluation of the Ventricular Innervation

    DEFF Research Database (Denmark)

    Cesari, Matteo; Mehlsen, Jesper; Mehlsen, Anne-Birgitte

    2017-01-01

    T-wave amplitude (TWA) has been proposed as a marker of the innervation of the myocardium. Until now, TWA has been calculated manually or with poor algorithms, making its use inefficient in a clinical environment. We introduce a new wavelet-based algorithm for the delineation of QRS complexes...

  20. Enhancement of Tropical Land Cover Mapping with Wavelet-Based Fusion and Unsupervised Clustering of SAR and Landsat Image Data

    Science.gov (United States)

    LeMoigne, Jacqueline; Laporte, Nadine; Netanyahuy, Nathan S.; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    The characterization and the mapping of land cover/land use of forest areas, such as the Central African rainforest, is a very complex task. This complexity is mainly due to the extent of such areas and, as a consequence, to the lack of full and continuous cloud-free coverage of those large regions by a single remote sensing instrument. In order to provide improved vegetation maps of Central Africa and to develop forest monitoring techniques for applications at the local and regional scales, we propose to utilize multi-sensor remote sensing observations coupled with in-situ data. Fusion and clustering of multi-sensor data are the first steps towards the development of such a forest monitoring system. In this paper, we will describe some preliminary experiments involving the fusion of SAR and Landsat image data of the Lope Reserve in Gabon. As in previous fusion studies, our fusion method is wavelet-based. The fusion provides a new image data set which contains more detailed texture features and preserves the large homogeneous regions that are observed by the Thematic Mapper sensor. The fusion step is followed by unsupervised clustering and provides a vegetation map of the area.
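    A simple wavelet-based fusion rule in this spirit is sketched below: the approximation subband of the optical (TM) band is kept, preserving its large homogeneous regions, while the larger-magnitude detail coefficients of either sensor are taken, injecting SAR texture. The wavelet, decomposition depth and fusion rule are assumptions for illustration; the study's actual rule is not specified in the record, and both inputs are assumed coregistered and equally sized.

```python
# Sketch of wavelet-based fusion of two coregistered bands (approximation from one
# sensor, max-magnitude detail coefficients from either sensor).
import numpy as np
import pywt

def fuse(tm_band, sar_band, wavelet="db2", level=2):
    c_tm = pywt.wavedec2(tm_band, wavelet, level=level)
    c_sar = pywt.wavedec2(sar_band, wavelet, level=level)
    fused = [c_tm[0]]                                    # approximation taken from the TM band
    for tm_det, sar_det in zip(c_tm[1:], c_sar[1:]):
        fused.append(tuple(np.where(np.abs(t) >= np.abs(s), t, s)
                           for t, s in zip(tm_det, sar_det)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(6)
tm = rng.random((256, 256))
sar = rng.random((256, 256))
fused_image = fuse(tm, sar)                              # input to unsupervised clustering
```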

  1. Classification of pre-sliced pork and Turkey ham qualities based on image colour and textural features and their relationships with consumer responses.

    Science.gov (United States)

    Iqbal, Abdullah; Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2010-03-01

    Images of three qualities of pre-sliced pork and Turkey hams were evaluated for colour and textural features to characterize and classify them, and to model the ham appearance grading and preference responses of a group of consumers. A total of 26 colour features and 40 textural features were extracted for analysis. Using Mahalanobis distance and feature inter-correlation analyses, the two best colour features [mean of S (saturation in HSV colour space), std. deviation of b*, which indicates blue to yellow in L*a*b* colour space] and three textural features [entropy of b*, contrast of H (hue of HSV colour space), entropy of R (red of RGB colour space)] for pork, and three colour features (mean of R, mean of H, std. deviation of a*, which indicates green to red in L*a*b* colour space) and two textural features [contrast of B, contrast of L* (luminance or lightness in L*a*b* colour space)] for Turkey hams were selected as the features with the highest discriminant power. High classification performances were reached for both types of hams (>99.5% for pork and >90.5% for Turkey) using the best selected features or combinations of them. In spite of the poor/fair agreement among ham consumers, as determined by Kappa analysis of the sensorial attributes (texture appearance and acceptability), a dichotomous logistic regression model using the best image features was able to explain the variability of consumers' responses for all sensorial attributes with accuracies higher than 74.1% for pork hams and 83.3% for Turkey hams. Copyright 2009 Elsevier Ltd. All rights reserved.

  2. Target Identification Using Harmonic Wavelet Based ISAR Imaging

    Science.gov (United States)

    Shreyamsha Kumar, B. K.; Prabhakar, B.; Suryanarayana, K.; Thilagavathi, V.; Rajagopal, R.

    2006-12-01

    A new approach has been proposed to reduce the computations involved in ISAR imaging, which uses a harmonic wavelet (HW) based time-frequency representation (TFR). Since the HW-based TFR falls into the category of nonparametric time-frequency (T-F) analysis tools, it is computationally efficient compared to parametric T-F analysis tools such as the adaptive joint time-frequency transform (AJTFT), the adaptive wavelet transform (AWT), and the evolutionary AWT (EAWT). Further, the performance of the proposed method of ISAR imaging is compared with ISAR imaging by other nonparametric T-F analysis tools such as the short-time Fourier transform (STFT) and the Choi-Williams distribution (CWD). In ISAR imaging, the use of the HW-based TFR provides similar or better results with a significant (92%) computational advantage compared to that obtained by the CWD. The ISAR images thus obtained are identified using a neural network-based classification scheme with a feature set invariant to translation, rotation, and scaling.

  3. Construction of Orthonormal Piecewise Polynomial Scaling and Wavelet Bases on Non-Equally Spaced Knots

    Directory of Open Access Journals (Sweden)

    Jean Pierre Astruc

    2007-01-01

    Full Text Available This paper investigates the mathematical framework of multiresolution analysis based on an irregularly spaced knot sequence. Our presentation is based on the construction of nested nonuniform spline multiresolution spaces. From these spaces, we present the construction of orthonormal scaling and wavelet basis functions on bounded intervals. For any arbitrary degree of the spline function, we provide an explicit generalization allowing the construction of the scaling and wavelet bases on nontraditional sequences. We show that the orthogonal decomposition is implemented using filter banks where the coefficients depend on the location of the knots on the sequence. Examples of orthonormal spline scaling and wavelet bases are provided. This approach can be used to interpolate irregularly sampled signals in an efficient way, by keeping the multiresolution approach.

  4. Cloud field classification based upon high spatial resolution textural features. I - Gray level co-occurrence matrix approach

    Science.gov (United States)

    Welch, R. M.; Sengupta, S. K.; Chen, D. W.

    1988-01-01

    Stratocumulus, cumulus, and cirrus clouds were identified on the basis of cloud textural features which were derived from a single high-resolution Landsat MSS NIR channel using a stepwise linear discriminant analysis. It is shown that, using this method, it is possible to distinguish high cirrus clouds from low clouds with high accuracy on the basis of spatial brightness patterns. The largest probability of misclassification is associated with confusion between the stratocumulus breakup regions and the fair-weather cumulus.

  5. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based compression of images corrupted by multiplicative noise is suggested. The method chooses the best thresholding rule and threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration quantization errors. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in terms of criteria such as PSNR, SSIM, etc.

  6. Wavelet based Image Registration Technique for Matching Dental x-rays

    OpenAIRE

    P. Ramprasad; H. C. Nagaraj; M. K. Parasuram

    2008-01-01

    Image registration plays an important role in the diagnosis of dental pathologies such as dental caries, alveolar bone loss and periapical lesions. This paper presents a new wavelet-based algorithm for registering noisy and poor-contrast dental x-rays. The proposed algorithm has two stages. The first stage is a preprocessing stage that removes noise from the x-ray images; a Gaussian filter has been used. The second stage is a geometric transformation stage. The proposed work uses two l...

  7. A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks

    OpenAIRE

    WenZhu Sun; HongYu Wang; DaXing Qian

    2012-01-01

    This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resilience of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...

  8. Wavelet-based partial volume effect correction for simultaneous MR/PET of the carotid arteries

    Energy Technology Data Exchange (ETDEWEB)

    Bini, Jason; Eldib, Mootaz [Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, NY, NY (United States); Department of Biomedical Engineering, The City College of New York, NY, NY (United States); Robson, Philip M; Fayad, Zahi A [Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, NY, NY (United States)

    2014-07-29

    Simultaneous MR/PET scanners allow for the exploration and development of novel PVE correction techniques without the challenges of coregistration of MR and PET. The development of a wavelet-based PVE correction method, to improve PET quantification, has proven successful in brain PET [2]. We report here the first attempt to apply these methods to simultaneous MR/PET imaging of the carotid arteries.

  9. Model-free stochastic processes studied with q-wavelet-based informational tools

    International Nuclear Information System (INIS)

    Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2007-01-01

    We undertake a model-free investigation of stochastic processes employing q-wavelet-based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in this way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework

  10. Performance Analysis of Wavelet Based MC-CDMA System with Implementation of Various Antenna Diversity Schemes

    OpenAIRE

    Islam, Md. Matiqul; Kabir, M. Hasnat; Ullah, Sk. Enayet

    2012-01-01

    The impact of using a wavelet-based technique on the performance of an MC-CDMA wireless communication system has been investigated. The system under study incorporates Walsh Hadamard codes to discriminate the message signal for each individual user. A computer program is developed in Matlab, and this simulation study is made with the implementation of various antenna diversity schemes and fading (Rayleigh and Rician) channels. Computer simulation results demonstrate that the p...

  11. Wavelet-based partial volume effect correction for simultaneous MR/PET of the carotid arteries

    International Nuclear Information System (INIS)

    Bini, Jason; Eldib, Mootaz; Robson, Philip M; Fayad, Zahi A

    2014-01-01

    Simultaneous MR/PET scanners allow for the exploration and development of novel PVE correction techniques without the challenges of coregistration of MR and PET. The development of a wavelet-based PVE correction method, to improve PET quantification, has proven successful in brain PET [2]. We report here the first attempt to apply these methods to simultaneous MR/PET imaging of the carotid arteries.

  12. Detection of Dendritic Spines Using Wavelet-Based Conditional Symmetric Analysis and Regularized Morphological Shared-Weight Neural Networks

    Directory of Open Access Journals (Sweden)

    Shuihua Wang

    2015-01-01

    Full Text Available Identification and detection of dendritic spines in neuron images are of high interest in the diagnosis and treatment of neurological and psychiatric disorders (e.g., Alzheimer’s disease, Parkinson’s disease, and autism). In this paper, we have proposed a novel automatic approach using wavelet-based conditional symmetric analysis and regularized morphological shared-weight neural networks (RMSNN) for dendritic spine identification involving the following steps: backbone extraction, localization of dendritic spines, and classification. First, a new algorithm based on the wavelet transform and conditional symmetric analysis has been developed to extract the backbone and locate the dendrite boundary. Then, the RMSNN has been proposed to classify the spines into three predefined categories (mushroom, thin, and stubby). We have compared our proposed approach against the existing methods. The experimental result demonstrates that the proposed approach can accurately locate the dendrite and accurately classify the spines into three categories with an accuracy of 99.1% for “mushroom” spines, 97.6% for “stubby” spines, and 98.6% for “thin” spines.

  13. Texture analysis applied to second harmonic generation image data for disease classification and development of a multi-view second harmonic generation imaging platform

    Science.gov (United States)

    Wen, Lianggong

    Many diseases, e.g. ovarian cancer, breast cancer and pulmonary fibrosis, are commonly associated with drastic alterations in surrounding connective tissue, and changes in the extracellular matrix (ECM) are associated with the vast majority of cellular processes in disease progression and carcinogenesis: cell differentiation, proliferation, biosynthetic ability, polarity, and motility. We use second harmonic generation (SHG) microscopy for imaging the ECM because it is a non-invasive, non-linear laser scanning technique with high sensitivity and specificity for visualizing fibrillar collagen. In this thesis, we are interested in developing imaging techniques to understand how the ECM, especially the collagen architecture, is remodeled in disease. To quantitate remodeling, we implement a 3D texture analysis to delineate the collagen fibrillar morphology observed in SHG microscopy images of human normal and high-grade malignant ovarian tissues. In the learning stage, a dictionary of "textons"---frequently occurring texture features that are identified by measuring the image response to a filter bank of various shapes, sizes, and orientations---is created. By calculating a representative model based on the texton distribution for each tissue type using a training set of respective images, we then perform classification between normal and high-grade malignant ovarian tissues based on the area under receiver operating characteristic curves (true positives versus false positives). The local analysis algorithm is a more general method to probe rapidly changing fibrillar morphologies than global analyses such as the FFT. It is also more versatile than other texture approaches, as the filter bank can be highly tailored to specific applications (e.g., different disease states) by creating customized libraries based on common image features. Further, we describe the development of a multi-view 3D SHG imaging platform. Unlike fluorescence microscopy, SHG excites
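    The texton-dictionary step can be sketched roughly as follows: filter-bank responses are collected at every pixel of a set of training images and clustered with k-means, so that each cluster centre becomes a texton; a new image is then described by the histogram of its pixels' nearest textons. The Gabor-kernel filter bank, the cluster count and the random training data are illustrative assumptions, not the thesis' customized 3D library.

```python
# Sketch of building a texton dictionary from filter-bank responses via k-means.
import numpy as np
from skimage.filters import gabor_kernel
from scipy.ndimage import convolve
from sklearn.cluster import KMeans

def filter_bank():
    kernels = []
    for frequency in (0.1, 0.2, 0.3):
        for k in range(4):
            kernels.append(np.real(gabor_kernel(frequency, theta=k * np.pi / 4)))
    return kernels

def responses(image, kernels):
    # One response vector per pixel: the image filtered by every kernel in the bank.
    stack = [convolve(image, k, mode="reflect") for k in kernels]
    return np.stack(stack, axis=-1).reshape(-1, len(kernels))

rng = np.random.default_rng(8)
training_images = [rng.random((64, 64)) for _ in range(4)]
kernels = filter_bank()
all_responses = np.vstack([responses(im, kernels) for im in training_images])
textons = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_responses)
# A new image is then represented by the histogram of its pixels' nearest textons.
```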

  14. WAVELET-BASED ALGORITHM FOR DETECTION OF BEARING FAULTS IN A GAS TURBINE ENGINE

    Directory of Open Access Journals (Sweden)

    Sergiy Enchev

    2014-07-01

    Full Text Available Presented is a gas turbine engine bearing diagnostic system that integrates information from various advanced vibration analysis techniques to achieve robust bearing health state awareness. This paper presents a computational algorithm for identifying power frequency variations and integer harmonics by using a wavelet-based transform. The continuous wavelet transform with the complex Morlet wavelet is adopted to detect the harmonics present in a power signal. An algorithm based on the discrete stationary wavelet transform is adopted to denoise the wavelet ridges.
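    The harmonic-detection step can be sketched with a continuous wavelet transform and a complex Morlet wavelet as below; the sampling rate, wavelet parameters, target frequency grid and the simple energy-ranking stand in for the paper's ridge extraction and stationary-wavelet denoising, which are not reproduced.

```python
# Sketch of harmonic detection with the CWT and a complex Morlet wavelet.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)  # 50 Hz + 3rd harmonic

wavelet = "cmor1.5-1.0"                         # complex Morlet: bandwidth 1.5, centre frequency 1.0
fc = pywt.central_frequency(wavelet)            # wavelet centre frequency in cycles per sample
target_hz = np.arange(20.0, 300.0, 5.0)
scales = fc * fs / target_hz                    # scales whose pseudo-frequency matches target_hz
coefs, freqs = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)

energy = np.abs(coefs).mean(axis=1)             # average magnitude per analysed frequency
dominant = freqs[np.argsort(energy)[-2:]]       # the two strongest components (~50 Hz and ~150 Hz)
print(sorted(dominant))
```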

  15. Automated segmentation of ultrasonic breast lesions using statistical texture classification and active contour based on probability distance.

    Science.gov (United States)

    Liu, Bo; Cheng, H D; Huang, Jianhua; Tian, Jiawei; Liu, Jiafeng; Tang, Xianglong

    2009-08-01

    Because of its complicated structure, low signal/noise ratio, low contrast and blurry boundaries, fully automated segmentation of a breast ultrasound (BUS) image is a difficult task. In this paper, a novel segmentation method for BUS images without human intervention is proposed. Unlike most published approaches, the proposed method handles the segmentation problem by using a two-step strategy: ROI generation and ROI segmentation. First, a well-trained texture classifier categorizes the tissues into different classes, and the background knowledge rules are used for selecting the regions of interest (ROIs) from them. Second, a novel probability distance-based active contour model is applied for segmenting the ROIs and finding the accurate positions of the breast tumors. The active contour model combines both global statistical information and local edge information, using a level set approach. The proposed segmentation method was performed on 103 BUS images (48 benign and 55 malignant). To validate the performance, the results were compared with the corresponding tumor regions marked by an experienced radiologist. Three error metrics, true-positive ratio (TP), false-negative ratio (FN) and false-positive ratio (FP) were used for measuring the performance of the proposed method. The final results (TP = 91.31%, FN = 8.69% and FP = 7.26%) demonstrate that the proposed method can segment BUS images efficiently, quickly and automatically.

  16. Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network.

    Science.gov (United States)

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min

    2015-01-01

    This paper focuses on the improvement of the diagnostic accuracy of focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first-order statistics, 18 gray level co-occurrence matrix features, 18 Laws' features, and echogenicity, were extracted. A total of 29 key features that were selected by principal component analysis were used as a set of inputs for a feed-forward neural network. For each lesion, the performance of the diagnosis was evaluated by using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method exhibits great performance, with a high diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy was slightly increased when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically.

  17. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    Science.gov (United States)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  18. Traffic characterization and modeling of wavelet-based VBR encoded video

    Energy Technology Data Exchange (ETDEWEB)

    Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales; the top-level wavelet in this structure contains most of the signal energy. They first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between these subimages at the same level of resolution and those wavelets located at an immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by various statistical properties such as moments and correlations, is then utilized to validate their model.

  19. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    Directory of Open Access Journals (Sweden)

    Suyi Li

    2017-01-01

    Full Text Available The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects’ PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.

  20. A Wavelet-Based Finite Element Method for the Self-Shielding Issue in Neutron Transport

    International Nuclear Information System (INIS)

    Le Tellier, R.; Fournier, D.; Ruggieri, J. M.

    2009-01-01

    This paper describes a new approach for treating the energy variable of the neutron transport equation in the resolved resonance energy range. The aim is to avoid recourse to a case-specific spatially dependent self-shielding calculation when considering a broad group structure. This method consists of a discontinuous Galerkin discretization of the energy using wavelet-based elements. A Σt-orthogonalization of the element basis is presented in order to make the approach tractable for spatially dependent problems. First numerical tests of this method are carried out in a limited framework under the Livolant-Jeanpierre hypotheses in an infinite homogeneous medium. They are mainly focused on the way to construct the wavelet-based element basis. Indeed, the prior selection of these wavelet functions by a thresholding strategy applied to the discrete wavelet transform of a given quantity is a key issue for the convergence rate of the method. The Canuto thresholding approach applied to an approximate flux is found to yield a nearly optimal convergence in many cases. In these tests, the capability of such a finite element discretization to represent the flux depression in a resonant region is demonstrated; a relative accuracy of 10^-3 on the flux (in L2-norm) is reached with less than 100 wavelet coefficients per group. (authors)

  1. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    Science.gov (United States)

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
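    The core operation behind such filtering, soft thresholding of wavelet detail coefficients followed by inverse transformation, can be sketched as below; the wavelet, decomposition level, universal-threshold rule and the synthetic spectrum are assumptions, and the paper's sequential iteration and stopping criterion are not reproduced.

```python
# Minimal sketch of wavelet denoising by soft thresholding of detail coefficients,
# the basic operation underlying the filtering method described above.
import numpy as np
import pywt

def wavelet_denoise(spectrum, wavelet="sym8", level=4):
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # Universal threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(spectrum)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Hypothetical noisy spectrum: a Gaussian peak on a decaying background with Poisson noise.
channels = np.arange(1024)
clean = 50 * np.exp(-0.5 * ((channels - 400) / 6.0) ** 2) + 200 * np.exp(-channels / 500.0)
noisy = np.random.default_rng(7).poisson(clean).astype(float)
denoised = wavelet_denoise(noisy)
```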

  2. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI)

    International Nuclear Information System (INIS)

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-01-01

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting

  3. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    Science.gov (United States)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space. In this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF), in which the Kalman filter has been substituted for a WKF. The WMKF then obtains M estimated state vectors. Next, the M state-estimates, each of which is weighted by its possibility that is also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the decomposition of wavelet transform, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.

  4. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals.

    Science.gov (United States)

    Li, Suyi; Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji; Diao, Shu

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO 2 ) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO 2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis.
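    The two steps above can be sketched roughly as follows: baseline drift is removed by zeroing the coarsest approximation of a multiresolution decomposition, and peaks are then picked from the corrected signal. scipy's find_peaks is used here only as a simple stand-in for the quadratic-spline modulus-maximum detector; the sampling rate, wavelet and synthetic pulse wave are illustrative assumptions.

```python
# Sketch: wavelet-based baseline correction of a PPG signal followed by peak picking.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 100.0                                             # assumed PPG sampling rate (Hz)
t = np.arange(0, 30, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * t / 30.0     # toy pulse wave plus baseline drift

def remove_baseline(signal, wavelet="db4", level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])               # drop the coarse approximation (drift)
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

corrected = remove_baseline(ppg)
peaks, _ = find_peaks(corrected, distance=int(0.4 * fs))   # at most ~150 beats per minute
pulse_rate_bpm = 60.0 * len(peaks) / (len(ppg) / fs)
```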

  5. A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals

    Science.gov (United States)

    Jiang, Shanqing; Jiang, Shan; Wu, Jiang; Xiong, Wenji

    2017-01-01

    The noninvasive peripheral oxygen saturation (SpO2) and the pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction is directly affected by the quality of the signal obtained and the peak of the signal identified; therefore, a hybrid wavelet-based method is proposed in this study. Firstly, we suppressed the partial motion artifacts and corrected the baseline drift by using a wavelet method based on the principle of wavelet multiresolution. And then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire ten subjects' PPG signals under sitting, raising hand, and gently walking postures, and the peak recognition results on the raw signal and on the corrected signal were compared, respectively. The results showed that the hybrid method not only corrected the morphologies of the signal well but also optimized the peaks identification quality, subsequently elevating the measurement accuracy of SpO2 and the pulse rate. As a result, our hybrid wavelet-based method profoundly optimized the evaluation of respiratory function and heart rate variability analysis. PMID:29250135

  6. MR textural analysis on T2 FLAIR images for the prediction of true oligodendroglioma by the 2016 WHO genetic classification.

    Science.gov (United States)

    Rui, Wenting; Ren, Yan; Wang, Yin; Gao, Xinyi; Xu, Xiao; Yao, Zhenwei

    2017-11-15

    The genetic status of 1p/19q is important for differentiating oligodendroglioma, isocitrate-dehydrogenase (IDH)-mutant, and 1p/19q-codeleted from diffuse astrocytoma, IDH-mutant according to the 2016 World Health Organization (WHO) criteria. To assess the value of magnetic resonance textural analysis (MRTA) on T2 fluid-attenuated inversion recovery (FLAIR) images for making a genetically integrated diagnosis of true oligodendroglioma by WHO guidelines. Retrospective case control. In all, there were 54 patients with a histopathological diagnosis of diffuse glioma (grade II). All were tested for IDH and 1p/19q. 3.0T, including T2 FLAIR sequence, axial T1-weighted, and T2-weighted sequence. MRTA on a representative tumor region of interest (ROI) was made on preoperative T2 FLAIR images around the area that had the largest diameter of solid tumor using Omni Kinetics software. Differences between IDH-mutant and 1p/19q-codeleted and IDH-mutant and 1p/19q-intact gliomas were analyzed by the Mann-Whitney rank sum test. Receiver operating characteristic curves (ROC) were created to assess MRTA diagnostic performance. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated with a cutoff value according to the Youden Index. Comparisons demonstrated significant differences in kurtosis (P = 0.007), energy (0.008), entropy (0.008), and mean deviation (MD). First-order features comprising entropy (area under the curve [AUC] = 0.718, sensitivity = 97.1%) and energy (0.719, 94.1%) had the highest sensitivity but lower specificity (both 45%). Second-order features such as HGLRE (AUC = 0.750, sensitivity = 73.5%, specificity = 80.0%) and sum average (0.751, 70.6%, 80.0%) had relatively higher specificity, and all had AUC >0.7. MD had the highest diagnostic performance, with AUC = 0.878, sensitivity = 94.1%, specificity = 75.0%, PPV = 86.5%, and NPV = 88.2%. MRTA on T2 FLAIR images may be

  7. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  8. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy,” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle… and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly…

  9. Symmetric textures

    International Nuclear Information System (INIS)

    Ramond, P.

    1993-01-01

    The Wolfenstein parametrization is extended to the quark masses in the deep ultraviolet, and an algorithm to derive symmetric textures which are compatible with existing data is developed. It is found that there are only five such textures

  10. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Vidhya Seran

    2007-02-01

    The fluctuation of quality over time is a problem that exists in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. The wavelet filter properties are also explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper, and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.

  11. Quality Variation Control for Three-Dimensional Wavelet-Based Video Coders

    Directory of Open Access Journals (Sweden)

    Seran Vidhya

    2007-01-01

    The fluctuation of quality over time is a problem that exists in motion-compensated temporal filtering (MCTF)-based video coding. The goal of this paper is to design a solution for overcoming the distortion fluctuation challenges faced by wavelet-based video coders. We propose a new technique for determining the number of bits to be allocated to each temporal subband in order to minimize the fluctuation in the quality of the reconstructed video. The wavelet filter properties are also explored to design suitable scaling coefficients with the objective of smoothing the temporal PSNR. The biorthogonal 5/3 wavelet filter is considered in this paper, and experimental results are presented for 2D+t and t+2D MCTF wavelet coders.

  12. Neuro-Fuzzy Wavelet Based Adaptive MPPT Algorithm for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Syed Zulqadar Hassan

    2017-03-01

    An intelligent control of photovoltaics is necessary to ensure fast response and high efficiency under different weather conditions. This is often arduous to accomplish using traditional linear controllers, as photovoltaic systems are nonlinear and contain several uncertainties. Based on an analysis of the existing literature on Maximum Power Point Tracking (MPPT) techniques, a high-performance neuro-fuzzy indirect wavelet-based adaptive MPPT control is developed in this work. The proposed controller combines the reasoning capability of fuzzy logic, the learning capability of neural networks, and the localization properties of wavelets. In the proposed system, the Hermite Wavelet-embedded Neural Fuzzy (HWNF)-based gradient estimator is adopted to estimate the gradient term and makes the controller indirect. The performance of the proposed controller is compared with different conventional and intelligent MPPT control techniques. MATLAB results show its superiority over other existing techniques in terms of fast response, power quality, and efficiency.

  13. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  14. Fast and robust wavelet-based dynamic range compression and contrast enhancement model with color restoration

    Science.gov (United States)

    Unaldi, Numan; Asari, Vijayan K.; Rahman, Zia-ur

    2009-05-01

    Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured from high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving the local contrast and tonal rendition, is also a good candidate for real-time video processing applications. Although the colors of the enhanced images produced by the proposed algorithm are consistent with the colors of the original image, the algorithm fails to produce color-constant results for some "pathological" scenes that have very strong spectral characteristics in a single band. The linear color restoration process is the main reason for this drawback; hence, a different approach is required for the final color restoration process. In this paper the latest version of the proposed algorithm, which deals with this issue, is presented. The results obtained by applying the algorithm to numerous natural images show strong robustness and high image quality.
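    A toy sketch of the underlying idea (compress the slowly varying, illumination-like approximation band while leaving the detail bands, hence local contrast, untouched) is given below using PyWavelets. The wavelet, level, and gamma value are assumptions, and the paper's actual algorithm and color restoration step are not reproduced:

        import numpy as np
        import pywt

        def wavelet_drc(luminance, wavelet="db2", level=3, gamma=0.5):
            # compress the approximation band (slowly varying illumination) with a
            # power law; detail bands (local contrast) are kept untouched
            lum = luminance.astype(float) / 255.0
            coeffs = pywt.wavedec2(lum, wavelet, level=level)
            coeffs[0] = np.sign(coeffs[0]) * np.abs(coeffs[0]) ** gamma
            out = pywt.waverec2(coeffs, wavelet)
            out = np.clip(out / out.max(), 0.0, 1.0)
            return (out * 255.0).astype(np.uint8)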

  15. Wavelet-based tracking of bacteria in unreconstructed off-axis holograms.

    Science.gov (United States)

    Marin, Zach; Wallace, J Kent; Nadeau, Jay; Khalil, Andre

    2018-03-01

    We propose an automated wavelet-based method of tracking particles in unreconstructed off-axis holograms to provide rough estimates of the presence of motion and particle trajectories in digital holographic microscopy (DHM) time series. The wavelet transform modulus maxima segmentation method is adapted and tailored to extract Airy-like diffraction disks, which represent bacteria, from DHM time series. In this exploratory analysis, the method shows potential for estimating bacterial tracks in low-particle-density time series, based on a preliminary analysis of both living and dead Serratia marcescens, and for rapidly providing a single-bit answer to whether a sample chamber contains living or dead microbes or is empty.

  16. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  17. Wavelet-based spectral finite element dynamic analysis for an axially moving Timoshenko beam

    Science.gov (United States)

    Mokhtari, Ali; Mirdamadi, Hamid Reza; Ghayour, Mostafa

    2017-08-01

    In this article, a wavelet-based spectral finite element (WSFE) model is formulated for time-domain and wave-domain dynamic analysis of an axially moving Timoshenko beam subjected to axial pretension. The formulation is similar to the conventional FFT-based spectral finite element (SFE) model except that Daubechies wavelet basis functions are used for temporal discretization of the governing partial differential equations into a set of ordinary differential equations. The localized nature of the Daubechies wavelet basis functions helps to rule out problems of the SFE model due to the periodicity assumption, especially during inverse Fourier transformation back to the time domain. The high accuracy of the WSFE model is then evaluated by comparing its results with conventional finite element and SFE results. The effects of moving beam speed and axial tensile force on the vibration and wave characteristics, and on the static and dynamic stabilities of the moving beam, are investigated.

  18. Wavelet-Based Poisson Solver for Use in Particle-in-Cell Simulations

    CERN Document Server

    Terzic, Balsa; Mihalcea, Daniel; Pogorelov, Ilya V

    2005-01-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions. The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modelling of the Fermilab/NICADD and AES/JLab photoinjectors.

  19. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2017-05-01

    This paper presents a wavelet-based Gaussian method (WGM) for the peak intensity estimation of energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of the Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak position is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested on simulated and measured spectra from an energy dispersive X-ray spectrometer and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from background information, and can also effectively distinguish overlapping peaks in EDXRF spectra.
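    The following sketch, assuming a synthetic Gaussian peak and PyWavelets' Mexican hat ("mexh") continuous wavelet, only illustrates the kind of peak-point wavelet coefficients the WGM works from; it does not reproduce the paper's closed-form parameter estimator, and all numbers are placeholders:

        import numpy as np
        import pywt

        # synthetic EDXRF-like Gaussian peak on a flat background (all values assumed)
        x = np.arange(1024, dtype=float)
        amplitude, center, sigma = 500.0, 400.0, 12.0
        spectrum = amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + 50.0

        # continuous wavelet transform with the Mexican hat wavelet
        scales = np.arange(2, 64)
        coeffs, _ = pywt.cwt(spectrum, scales, "mexh")

        # coefficients at the (known) peak position, one per scale; their dependence on
        # scale encodes the Gaussian width and amplitude that the WGM works back from
        peak_coeffs = coeffs[:, int(center)]
        best_scale = scales[np.argmax(peak_coeffs)]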

  20. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Torres, C.; Streppa, L. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d'Italie, Université de Lyon, 69007 Lyon (France); Arneodo, A.; Argoul, F. [CNRS, UMR5672, Laboratoire de Physique, Ecole Normale Supérieure de Lyon, 46 Allée d'Italie, Université de Lyon, 69007 Lyon (France); CNRS, UMR5798, Laboratoire Ondes et Matière d'Aquitaine, Université de Bordeaux, 351 Cours de la Libération, 33405 Talence (France); Argoul, P. [Université Paris-Est, Ecole des Ponts ParisTech, SDOA, MAST, IFSTTAR, 14-20 Bd Newton, Cité Descartes, 77420 Champs sur Marne (France)

    2016-01-18

    Compared to active microrheology where a known force or modulation is periodically imposed to a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  1. Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations

    International Nuclear Information System (INIS)

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-01-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors

  2. SpotCaliper: fast wavelet-based spot detection with accurate size estimation.

    Science.gov (United States)

    Püspöki, Zsuzsanna; Sage, Daniel; Ward, John Paul; Unser, Michael

    2016-04-15

    SpotCaliper is a novel wavelet-based image-analysis software tool providing a fast automatic detection scheme for circular patterns (spots), combined with the precise estimation of their size. It is implemented as an ImageJ plugin with a friendly user interface. The user is allowed to edit the results by modifying the measurements (in a semi-automated way) and to extract data for further analysis. The fine tuning of the detections includes the possibility of adjusting or removing the original detections, as well as adding further spots. The main advantage of the software is its ability to capture the size of spots in a fast and accurate way. Availability: http://bigwww.epfl.ch/algorithms/spotcaliper/ Contact: zsuzsanna.puspoki@epfl.ch Supplementary data are available at Bioinformatics online.

  3. Wavelet-based linear-response time-dependent density-functional theory

    International Nuclear Information System (INIS)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.

    2012-01-01

    Highlights: ► LR-TD-DFT has been implemented in the pseudopotential wavelet-based program. ► We have compared the results against an all-electron Gaussian-type program. ► Orbital energies converge significantly faster for BigDFT than for DEMON2K. ► We report the X-ray crystal structure of the small organic molecule flugi6. ► The measured and calculated absorption spectrum of flugi6 is also reported. - Abstract: Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However, the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

  4. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    Science.gov (United States)

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human-versus-animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust feature (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to a SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has a slightly degraded performance but almost an order of magnitude improvement in computation time, enabling real-time implementation.

  5. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Science.gov (United States)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the
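    A minimal, hypothetical sketch of the core correlation-and-threshold idea (not WAVDETECT itself, which uses Poisson sampling distributions and exposure-corrected background maps) might look as follows in Python with SciPy; the kernel size rule and threshold are assumptions:

        import numpy as np
        from scipy.ndimage import convolve

        def mexican_hat_2d(scale, size=None):
            # 2-D Mexican hat kernel, forced to zero mean (a vanishing-moment wavelet)
            if size is None:
                size = int(8 * scale) | 1              # odd kernel width
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            r2 = (x ** 2 + y ** 2) / (2.0 * scale ** 2)
            kernel = (1.0 - r2) * np.exp(-r2)
            return kernel - kernel.mean()

        def detect_sources(image, scale=2.0, nsigma=5.0):
            # correlate with the wavelet and keep pixels whose coefficient exceeds
            # nsigma times a robust (MAD-based) estimate of the coefficient noise
            corr = convolve(image.astype(float), mexican_hat_2d(scale), mode="nearest")
            noise = 1.4826 * np.median(np.abs(corr - np.median(corr)))
            return corr > nsigma * noise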

  6. Wavelet based artificial neural network applied for energy efficiency enhancement of decoupled HVAC system

    International Nuclear Information System (INIS)

    Jahedi, G.; Ardehali, M.M.

    2012-01-01

    Highlights: ► In HVAC systems, temperature and relative humidity are coupled and the dynamic mathematical models are non-linear. ► A wavelet-based ANN is used in series with an infinite impulse response filter for self-tuning of a PD controller. ► Energy consumption is evaluated for a decoupled bi-linear HVAC system with variable air volume and variable water flow. ► Substantial enhancement in energy efficiency is realized when the gain coefficients of the PD controllers are tuned adaptively. - Abstract: Control methodologies could lower energy demand and consumption of heating, ventilating and air conditioning (HVAC) systems and, simultaneously, achieve better comfort conditions. However, the application of classical controllers is unsatisfactory, as HVAC systems are non-linear and the control variables such as temperature and relative humidity (RH) inside the thermal zone are coupled. The objective of this study is to develop and simulate a wavelet-based artificial neural network (WNN) for self-tuning of a proportional-derivative (PD) controller for a decoupled bi-linear HVAC system with variable air volume and variable water flow responsible for controlling temperature and RH of a thermal zone, where thermal comfort and energy consumption of the system are evaluated. To achieve the objective, a WNN is used in series with an infinite impulse response (IIR) filter for faster and more accurate identification of system dynamics, as needed for on-line use and off-line batch-mode training. The WNN-IIR algorithm is used for self-tuning of two PD controllers for temperature and RH. The simulation results show that the WNN-IIR controller performance is superior compared with the classical PD controller. The enhancement in efficiency of the HVAC system is accomplished due to substantially lower consumption of energy during transient operation, when the gain coefficients of the PD controllers are tuned in an adaptive manner, as the steady state setpoints for temperature and

  7. Control of equipment isolation system using wavelet-based hybrid sliding mode control

    Science.gov (United States)

    Huang, Shieh-Kung; Loh, Chin-Hsiung

    2017-04-01

    The aim of this paper is to develop a hybrid control algorithm that controls both structures and equipment simultaneously, overcoming the limitations of classical feedback control by combining the advantages of classical LQR and SMC. To suppress vibrations whose frequency content in strong earthquakes differs from the natural frequencies of civil structures, the hybrid control algorithms are integrated with a wavelet-based vibration control algorithm. The performance of the classical, hybrid, and wavelet-based hybrid control algorithms, as well as the responses of the structure and non-structural components, are evaluated and discussed through numerical simulation in this study.

  8. TEXTURAL FRACTOGRAPHY

    Directory of Open Access Journals (Sweden)

    Hynek Lauschmann

    2011-05-01

    The reconstitution of the history of a fatigue process is based on the knowledge of correspondences between the morphology of the crack surface and the velocity of crack growth (crack growth rate - CGR). Textural fractography is oriented to mesoscopic SEM magnifications (30 to 500x). Images contain complicated textures without distinct borders. The aim is to find characteristics of this texture which correlate with CGR. Pre-processing of images is necessary to obtain a homogeneous texture. Three methods of textural analysis have been developed and realized as computational programs: a method based on the spectral structure of the image, a method based on a Gibbs random field (GRF) model, and a method based on the idealization of light objects into a fibre process. In order to extract and analyze the fibre process, special methods - tracing fibres and a database-oriented analysis of a fibre process - have been developed.

  9. A new approach to pre-processing digital image for wavelet-based watermark

    Science.gov (United States)

    Agreste, Santa; Andaloro, Guido

    2008-11-01

    The growth of the Internet has increased the phenomenon of digital piracy of multimedia objects such as software, images, video, audio, and text. It is therefore strategic to develop methods and numerical algorithms, stable and with low computational cost, that can address these problems. We describe a digital watermarking algorithm for color image protection and authenticity: robust, non-blind, and wavelet-based. The use of the Discrete Wavelet Transform is motivated by its good time-frequency features and good match with Human Visual System directives. These two combined elements are important for building an invisible and robust watermark. Moreover, our algorithm can work with any image, thanks to a pre-processing step that includes resize techniques adapting the size of the original image for the wavelet transform. The watermark signal is calculated in correlation with the image features and statistical properties. In the detection step we apply a re-synchronization between the original and watermarked image according to the Neyman-Pearson statistical criterion. Experiments on a large set of different images show resistance against geometric, filtering, and StirMark attacks with a low rate of false alarms.
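    As an illustration of wavelet-domain embedding in general (not the specific correlation-based, Neyman-Pearson scheme of this paper), a minimal additive DWT watermark sketch with PyWavelets could look like this; the wavelet, subband choice and strength alpha are assumptions:

        import numpy as np
        import pywt

        def embed_watermark(image, watermark, alpha=0.05, wavelet="haar"):
            # image: 2-D grayscale array; watermark: 2-D array at least as large as a
            # level-1 subband (cropped below); alpha trades invisibility vs. robustness
            cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), wavelet)
            wm = watermark[: cH.shape[0], : cH.shape[1]]
            cH_w = cH + alpha * np.abs(cH) * wm        # strength scaled by local energy
            cV_w = cV + alpha * np.abs(cV) * wm
            marked = pywt.idwt2((cA, (cH_w, cV_w, cD)), wavelet)
            return np.clip(marked, 0, 255)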

  10. A wavelet-based PWTD algorithm-accelerated time domain surface integral equation solver

    KAUST Repository

    Liu, Yang

    2015-10-26

    The multilevel plane-wave time-domain (PWTD) algorithm allows for fast and accurate analysis of transient scattering from, and radiation by, electrically large and complex structures. When used in tandem with marching-on-in-time (MOT)-based surface integral equation (SIE) solvers, it reduces the computational and memory costs of transient analysis from equation and equation to equation and equation, respectively, where Nt and Ns denote the number of temporal and spatial unknowns (Ergin et al., IEEE Trans. Antennas Mag., 41, 39-52, 1999). In the past, PWTD-accelerated MOT-SIE solvers have been applied to transient problems involving half a million spatial unknowns (Shanker et al., IEEE Trans. Antennas Propag., 51, 628-641, 2003). Recently, a scalable parallel PWTD-accelerated MOT-SIE solver that leverages a hierarchical parallelization strategy has been developed and successfully applied to transient problems involving ten million spatial unknowns (Liu et al., in URSI Digest, 2013). We further enhanced the capabilities of this solver by implementing a compression scheme based on local cosine wavelet bases (LCBs) that exploits the sparsity in the temporal dimension (Liu et al., in URSI Digest, 2014). Specifically, the LCB compression scheme was used to reduce the memory requirement of the PWTD ray data and the computational cost of operations in the PWTD translation stage.

  11. Wavelet-based linear-response time-dependent density-functional theory

    Science.gov (United States)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y.

    2012-06-01

    Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However, the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

  12. Wavelet-based blind identification of the UCLA Factor building using ambient and earthquake responses

    International Nuclear Information System (INIS)

    Hazra, B; Narasimhan, S

    2010-01-01

    Blind source separation using second-order blind identification (SOBI) has been successfully applied to the problem of output-only identification, popularly known as ambient system identification. In this paper, the basic principles of SOBI for the static mixtures case is extended using the stationary wavelet transform (SWT) in order to improve the separability of sources, thereby improving the quality of identification. Whereas SOBI operates on the covariance matrices constructed directly from measurements, the method presented in this paper, known as the wavelet-based modified cross-correlation method, operates on multiple covariance matrices constructed from the correlation of the responses. The SWT is selected because of its time-invariance property, which means that the transform of a time-shifted signal can be obtained as a shifted version of the transform of the original signal. This important property is exploited in the construction of several time-lagged covariance matrices. The issue of non-stationary sources is addressed through the formation of several time-shifted, windowed covariance matrices. Modal identification results are presented for the UCLA Factor building using ambient vibration data and for recorded responses from the Parkfield earthquake, and compared with published results for this building. Additionally, the effect of sensor density on the identification results is also investigated
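    A simplified sketch of the two ingredients named above, a stationary wavelet transform of the responses followed by several time-lagged covariance matrices, is shown below with PyWavelets and NumPy. Function names and parameters are hypothetical, and the joint-diagonalization step of SOBI is omitted:

        import numpy as np
        import pywt

        def lagged_covariances(responses, lags, wavelet="db4", level=3):
            # responses: (n_channels, n_samples) array; n_samples must be a multiple
            # of 2**level for pywt.swt. Coarsest-level SWT detail coefficients are
            # stacked, then several time-lagged covariance matrices are formed.
            details = np.vstack([pywt.swt(ch, wavelet, level=level)[0][1]
                                 for ch in responses])
            details = details - details.mean(axis=1, keepdims=True)
            n = details.shape[1]
            covs = [details[:, : n - tau] @ details[:, tau:].T / (n - tau)
                    for tau in lags]
            return covs    # SOBI-style identification would jointly diagonalize these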

  13. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    Science.gov (United States)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to the difference of crude oil prices. The understanding of the pattern of the crude oil prices' mutual propagation is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, we used wavelet analysis to find different subseries based on an optimal decomposing scale to describe the periodical feature of the original oil price time series. Second, a complex network model was constructed based on an optimal threshold selection to describe the structural feature of multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find the probability causal relationship based on periodical structural features to describe the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.

  14. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    Science.gov (United States)

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods.
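    A highly simplified sketch of the general idea (factor a non-negative wavelet-based time-frequency representation into a small number of source patterns) is given below using PyWavelets and scikit-learn's NMF. It omits the paper's sparsity-oriented wavelet basis choice and robust initialization strategy, and the wavelet and scale range are assumptions:

        import numpy as np
        import pywt
        from sklearn.decomposition import NMF

        def separate_components(signal, scales=np.arange(1, 64), n_sources=2):
            # non-negative wavelet scalogram magnitude, factored into n_sources patterns
            coeffs, _ = pywt.cwt(signal, scales, "mexh")
            intensity = np.abs(coeffs)
            model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
            W = model.fit_transform(intensity)     # scale profile of each source
            H = model.components_                  # time activation of each source
            return W, H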

  15. Wavelet Based Hilbert Transform with Digital Design and Application to QCM-SS Watermarking

    Directory of Open Access Journals (Sweden)

    S. P. Maity

    2008-04-01

    In recent times, wavelet transforms have been used extensively for efficient storage, transmission and representation of multimedia signals. Hilbert transform pairs of wavelets are the basic unit of many wavelet theories such as complex filter banks, complex wavelets and phaselets. Moreover, the Hilbert transform finds various applications in communications and signal processing, such as generation of single sideband (SSB) modulation, quadrature carrier multiplexing (QCM) and bandpass representation of a signal. Thus wavelet-based discrete Hilbert transform design has drawn much attention from researchers in recent years. This paper proposes (i) an algorithm for generation of low-computation-cost Hilbert transform pairs of symmetric filter coefficients using biorthogonal wavelets, (ii) an approximation to its rational-coefficient form for efficient hardware realization without much loss in signal representation, and finally (iii) the development of a QCM-SS (spread spectrum) image watermarking scheme for doubling the payload capacity. Simulation results show the novelty of the proposed Hilbert transform design and its application to watermarking compared to existing algorithms.

  16. Online Epileptic Seizure Prediction Using Wavelet-Based Bi-Phase Correlation of Electrical Signals Tomography.

    Science.gov (United States)

    Vahabi, Zahra; Amirfattahi, Rasoul; Shayegh, Farzaneh; Ghassemi, Fahimeh

    2015-09-01

    Considerable efforts have been made to predict seizures. Among these, methods that quantify synchronization between brain areas are the most important. However, to date, a practically acceptable result has not been reported. In this paper, we use a synchronization measurement method derived from the ability of the bi-spectrum to determine the nonlinear properties of a system. In this method, first, the temporal variations of the bi-spectrum of different channels of electrocorticography (ECoG) signals are obtained via an extended wavelet-based time-frequency analysis method; then, to compare different channels, the bi-phase correlation measure is introduced. Since the temporal variation of the amount of nonlinear coupling between brain regions, which had not been considered before, is thereby taken into account, the results are more reliable than conventional phase-synchronization measures. It is shown that, for 21 patients of the FSPEEG database, bi-phase correlation can discriminate the pre-ictal and ictal states, with very low false positive rates (FPRs) (average: 0.078/h) and high sensitivity (100%). However, the proposed seizure predictor still cannot significantly outperform a random predictor for all patients.

  17. Wavelet-Based Visible and Infrared Image Fusion: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Angel D. Sappa

    2016-06-01

    This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from the combination of different setups in the wavelet image decomposition stage together with different fusion strategies for the final merging stage that generates the resulting representation. Most existing approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
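    One common setup among those compared, averaging the approximation bands and taking the larger-magnitude detail coefficient, can be sketched with PyWavelets as follows; the wavelet and level are assumptions, and this is not presented as the best-performing configuration of the study:

        import numpy as np
        import pywt

        def fuse_vis_ir(visible, infrared, wavelet="db2", level=3):
            # both inputs: registered, same-size grayscale arrays
            cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
            ci = pywt.wavedec2(infrared.astype(float), wavelet, level=level)
            fused = [(cv[0] + ci[0]) / 2.0]                       # approximations: average
            for dv, di in zip(cv[1:], ci[1:]):                    # details: max-abs rule
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(dv, di)))
            return pywt.waverec2(fused, wavelet)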

  18. A Wavelet-based method for processing signal of fog in strap-down inertial systems

    Energy Technology Data Exchange (ETDEWEB)

    Han, D.; Xiong, C.; Liu, H. [Huazhong University of Science & Technology, Wuhan (China)]

    2009-07-01

    Fibre optic gyroscopes (FOGs) have been applied widely in many fields, in contrast with counterparts such as mechanical gyroscopes and ring laser gyroscopes. The precision of a FOG is affected significantly by bias drift, angle random walk, temperature effects and noise. In particular, uncertain disturbances resulting from road irregularities often affect the accuracy of a strap-down inertial system (SINS). Hence, eliminating uncertain disturbances from the outputs of a FOG plays a crucial role in improving the accuracy of a SINS. This paper presents a wavelet-based method for denoising the signals of FOGs in SINSs used for exploration and rescue robots in coal mines. The properties of road irregularities in mines are taken into account as a key factor producing uncertain disturbances in this research. Both the frequency band and the amplitude of the uncertain disturbances are used to choose the filtering thresholds. Experimental results have demonstrated that the proposed method can efficiently eliminate uncertain disturbances due to road irregularities from the outputs of FOGs and improve the accuracy of the surrogate data. This indicates that the proposed method has significant potential in FOG-related applications.
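    A hypothetical sketch of band-selective wavelet thresholding keyed to an assumed disturbance band is shown below using PyWavelets; the paper's actual threshold selection rule is not reproduced, and all parameter values are placeholders:

        import numpy as np
        import pywt

        def filter_fog_output(signal, fs, wavelet="db6", level=6,
                              disturbance_band=(5.0, 40.0), amp_factor=3.0):
            # soft-threshold only the detail levels whose dyadic frequency band
            # overlaps the assumed road-disturbance band; other levels are untouched
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            for j in range(1, len(coeffs)):
                lvl = level - j + 1                     # detail level of coeffs[j]
                f_hi = fs / 2.0 ** lvl                  # approximate band [f_hi/2, f_hi]
                f_lo = f_hi / 2.0
                if f_hi >= disturbance_band[0] and f_lo <= disturbance_band[1]:
                    sigma = np.median(np.abs(coeffs[j])) / 0.6745
                    coeffs[j] = pywt.threshold(coeffs[j], amp_factor * sigma, mode="soft")
            return pywt.waverec(coeffs, wavelet)[: len(signal)]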

  19. Wavelet-based adaptation methodology combined with finite difference WENO to solve ideal magnetohydrodynamics

    Science.gov (United States)

    Do, Seongju; Li, Haojun; Kang, Myungjoo

    2017-06-01

    In this paper, we present an accurate and efficient wavelet-based adaptive weighted essentially non-oscillatory (WENO) scheme for hydrodynamics and ideal magnetohydrodynamics (MHD) equations arising from hyperbolic conservation systems. The proposed method works with the finite difference weighted essentially non-oscillatory (FD-WENO) method in space and the third-order total variation diminishing (TVD) Runge-Kutta (RK) method in time. The philosophy of this work is to use lifted interpolating wavelets not only as a detector for singularities but also as an interpolator. In particular, flexible interpolations can be performed by an inverse wavelet transformation. When the divergence cleaning method introducing an auxiliary scalar field ψ is applied to the base numerical schemes for imposing the divergence-free condition on the magnetic field in the MHD equations, the approximations to derivatives of ψ require the neighboring points. Moreover, the fifth-order WENO interpolation requires a large stencil to reconstruct a high-order polynomial. In such cases, an efficient interpolation method is necessary. The adaptive spatial differentiation method is considered, as well as the adaptation of grid resolutions. In order to avoid the heavy computation of FD-WENO, a fixed-stencil approximation without computing the non-linear WENO weights is used in smooth regions, and the characteristic decomposition method is replaced by a component-wise approach. Numerical results demonstrate that with the adaptive method we are able to resolve solutions that agree well with the solution on the corresponding fine grid.

  20. Application of wavelet based MFDFA on Mueller matrix images for cervical pre-cancer detection

    Science.gov (United States)

    Zaffar, Mohammad; Pradhan, Asima

    2018-02-01

    A systematic study has been conducted on the application of wavelet-based multifractal de-trended fluctuation analysis (MFDFA) to Mueller matrix (MM) images of cervical tissue sections for early cancer detection. Changes in multiple scattering and in the orientation of fibers are observed by utilizing a discrete wavelet transform (Daubechies), which identifies fluctuations over polynomial trends. Fluctuation profiles after 9th-level decomposition for all elements of the MM qualitatively establish a demarcation of different grades of cancer from normal tissue. Moreover, when MFDFA is applied to MM images, the Hurst exponent profiles for the MM images qualitatively display differences. In addition, the value of the Hurst exponent increases for the diagonal elements of the MM with increasing grade of cervical cancer, while the value for the elements corresponding to linear polarizance decreases; for circular polarizance, the value increases with increasing grade. These fluctuation profiles reveal the trend of local variation of refractive indices and, along with the Hurst exponent profile, may serve as a useful biological metric in the early detection of cervical cancer. The quantitative measurements of the Hurst exponent for the diagonal and first-column (polarizance-governing) elements, which reflect changes in multiple scattering and structural anisotropy in the stroma, may be sensitive indicators of pre-cancer.

  1. Use of wavelet based iterative filtering to improve denoising of spectral information for in-vivo gamma spectrometry

    International Nuclear Information System (INIS)

    Paul, Sabyasachi; Sarkar, P.K.

    2012-05-01

    The characterization of radionuclides in in-vivo monitoring analysis using gamma spectrometry poses difficulty due to the very low activity levels in biological systems. The large statistical fluctuations often make identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks while analyzing noisy spectrometric data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and the inverse transform after soft thresholding of the generated coefficients. Analyses of in-vivo monitoring data of 235U and 238U have been carried out using this method without disturbing the peak position and amplitude, while achieving a threefold improvement in the signal-to-noise ratio compared to the original measured spectrum. When compared with other data filtering techniques, the wavelet-based method shows better results. (author)

  2. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from an abnormal macula against normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from a Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.
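    The sketch below keeps only the flavor of the two-stage pipeline: it replaces the paper's wavelet-based CNN codes with simple hand-crafted wavelet subband energies and scores volumes with a random forest (the classifier family named in the title). All names and parameters are hypothetical:

        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier

        def bscan_features(bscan, wavelet="db2", level=3):
            # subband log-energies of one B-scan (a crude stand-in for the CNN codes)
            coeffs = pywt.wavedec2(bscan.astype(float), wavelet, level=level)
            feats = [np.log1p(np.mean(coeffs[0] ** 2))]
            for detail_level in coeffs[1:]:
                feats += [np.log1p(np.mean(d ** 2)) for d in detail_level]
            return np.array(feats)

        def volume_features(volume):
            # cumulative (here: averaged) features over all B-scans of a 3-D volume
            return np.mean([bscan_features(b) for b in volume], axis=0)

        # usage sketch: X is a list of OCT volumes (arrays of B-scans), y the labels
        # clf = RandomForestClassifier(n_estimators=200).fit(
        #           [volume_features(v) for v in X], y)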

  3. Automatic diagnosis of abnormal macula in retinal optical coherence tomography images using wavelet-based convolutional neural network features and random forests classifier.

    Science.gov (United States)

    Rasti, Reza; Mehridehnavi, Alireza; Rabbani, Hossein; Hajizadeh, Fedra

    2018-03-01

    The present research proposes a fully automatic algorithm for the classification of three-dimensional (3-D) optical coherence tomography (OCT) scans of patients suffering from an abnormal macula against normal candidates. The proposed method does not require any denoising, segmentation, or retinal alignment processes to assess the intraretinal layers, abnormalities, or lesion structures. To classify abnormal cases from the control group, a two-stage scheme was utilized, which consists of automatic subsystems for adaptive feature learning and diagnostic scoring. In the first stage, a wavelet-based convolutional neural network (CNN) model was introduced and exploited to generate B-scan representative CNN codes in the spatial-frequency domain, and the cumulative features of 3-D volumes were extracted. In the second stage, the presence of abnormalities in 3-D OCTs was scored over the extracted features. Two different retinal SD-OCT datasets were used for evaluation of the algorithm based on the unbiased fivefold cross-validation (CV) approach. The first set constitutes 3-D OCT images of 30 normal subjects and 30 diabetic macular edema (DME) patients captured with a Topcon device. The second, publicly available set consists of 45 subjects with a distribution of 15 patients in age-related macular degeneration, DME, and normal classes from a Heidelberg device. With the application of the algorithm on overall OCT volumes and 10 repetitions of the fivefold CV, the proposed scheme obtained an average precision of 99.33% on dataset1 as a two-class classification problem and 98.67% on dataset2 as a three-class classification task.

  4. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    …the Fisher linear discriminant, and the iris region of interest is extracted. Four texture description methods are compared for segmenting iris texture using a region-based pattern classification approach: Grey Level Co-occurrence Matrix (GLCM), Discrete...
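    A brief sketch of the GLCM descriptor mentioned first, using scikit-image (the graycomatrix/graycoprops names assume a recent scikit-image release), could look as follows; the patch quantization, distances, angles, and property set are assumptions:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(patch, distances=(1,), angles=(0.0, np.pi / 2), levels=64):
            # quantize grey levels, build a symmetric normalized GLCM, and return a
            # small vector of standard Haralick-style properties for the patch
            q = np.uint8(np.floor(patch.astype(float) / 256.0 * levels))
            glcm = graycomatrix(q, distances=distances, angles=angles,
                                levels=levels, symmetric=True, normed=True)
            props = ("contrast", "correlation", "energy", "homogeneity")
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])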

  5. A wavelet-based intermittency detection technique from PIV investigations in transitional boundary layers

    Science.gov (United States)

    Simoni, Daniele; Lengani, Davide; Guida, Roberto

    2016-09-01

    The transition process of the boundary layer growing over a flat plate, with a pressure gradient simulating the suction side of a low-pressure turbine blade and an elevated free-stream turbulence intensity level, has been analyzed by means of PIV and hot-wire measurements. A detailed view of the instantaneous flow field in the wall-normal plane highlights the physics characterizing the complex process leading to the formation of large-scale coherent structures during the breakdown of the ordered motion of the flow, thus generating randomized oscillations (i.e., turbulent spots). This analysis provides the basis for the development of a new procedure aimed at determining the intermittency function describing (statistically) the transition process. To this end, a wavelet-based method has been employed for the identification of the large-scale structures created during the transition process. Subsequently, a probability density function of these events is defined, from which an intermittency function is deduced. The latter corresponds closely to the intermittency function of the transitional flow computed through a classic procedure based on hot-wire data. The agreement between the two procedures in the intermittency shape and spot production rate proves the capability of the method to provide a statistical representation of the transition process. The main advantages of the proposed procedure are that it is applicable to PIV data; it does not require a threshold level to discriminate the first- and/or second-order time-derivatives of hot-wire time traces (which makes the method independent of the operator); and it provides clear evidence of the connection between the flow physics and the statistical representation of transition based on the theory of turbulent spot propagation.

  6. WaVPeak: Picking NMR peaks through wavelet-based smoothing and volume-based filtering

    KAUST Repository

    Liu, Zhi

    2012-02-10

    Motivation: Nuclear magnetic resonance (NMR) has been widely used as a powerful tool to determine the 3D structures of proteins in vivo. However, the post-spectra processing stage of NMR structure determination usually involves a tremendous amount of time and expert knowledge, which includes peak picking, chemical shift assignment and structure calculation steps. Detecting accurate peaks from the NMR spectra is a prerequisite for all following steps, and thus remains a key problem in automatic NMR structure determination. Results: We introduce WaVPeak, a fully automatic peak detection method. WaVPeak first smooths the given NMR spectrum by wavelets. The peaks are then identified as the local maxima. The false positive peaks are filtered out efficiently by considering the volume of the peaks. WaVPeak has two major advantages over the state-of-the-art peak-picking methods. First, through wavelet-based smoothing, WaVPeak does not eliminate any data point in the spectra. Therefore, WaVPeak is able to detect weak peaks that are embedded in the noise level. NMR spectroscopists need the most help isolating these weak peaks. Second, WaVPeak estimates the volume of the peaks to filter the false positives. This is more reliable than the intensity-based filters that are widely used in existing methods. We evaluate the performance of WaVPeak on the benchmark set proposed by PICKY (Alipanahi et al., 2009), one of the most accurate methods in the literature. The dataset comprises 32 2D and 3D spectra from eight different proteins. Experimental results demonstrate that WaVPeak achieves an average of 96%, 91%, 88%, 76% and 85% recall on 15N-HSQC, HNCO, HNCA, HNCACB and CBCA(CO)NH, respectively. When the same number of peaks are considered, WaVPeak significantly outperforms PICKY.

  7. Wavelet-based multiscale window transform and energy and vorticity analysis

    Science.gov (United States)

    Liang, Xiang San

    A new methodology, Multiscale Energy and Vorticity Analysis (MS-EVA), is developed to investigate sub-mesoscale, meso-scale, and large-scale dynamical interactions in geophysical fluid flows which are intermittent in space and time. The development begins with the construction of a wavelet-based functional analysis tool, the multiscale window transform (MWT), which is local, orthonormal, self-similar, and windowed on scale. The MWT is first built over the real line and then modified onto a finite domain. Properties are explored, the most important one being the property of marginalization, which brings together a quadratic quantity in physical space with its phase space representation. Based on the MWT, the MS-EVA is developed. Energy and enstrophy equations for the large-, meso-, and sub-meso-scale windows are derived and their terms interpreted. The processes thus represented are classified into four categories: transport, transfer, conversion, and dissipation/diffusion. The separation of transport from transfer is made possible with the introduction of the concept of perfect transfer. By the property of marginalization, the classical energetic analysis proves to be a particular case of the MS-EVA. The MS-EVA developed is validated with classical instability problems. The validation is carried out in two steps. First, it is established that the barotropic and baroclinic instabilities are indicated by the spatial averages of certain transfer term interaction analyses. Then calculations of these indicators are made with an Eady model and a Kuo model. The results agree precisely with what is expected from their analytical solutions, and the energetics reproduced reveal a consistent and important aspect of the unknown dynamic structures of instability processes. As an application, the MS-EVA is used to investigate the Iceland-Faeroe frontal (IFF) variability. A MS-EVA-ready dataset is first generated through a forecasting study with the Harvard Ocean Prediction System

  8. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    Science.gov (United States)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties, associated with simulating a wide range of time and spatial scales. The described difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions typically are used for chemical kinetic mechanism description. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution that introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts the pollutant mixing and transport dynamics for typically used grid resolutions. The described numerical difficulties have to be systematically addressed considering that the demand for fast, high-resolution chemical transport models will be exacerbated over the next decade by the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution in the locations of fine scale development and removing them in the locations of smooth solution behavior. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that makes it possible to minimize the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested for a variety of benchmark problems.

  9. Wavelet-based multi-resolution analysis and artificial neural networks for forecasting temperature and thermal power consumption

    OpenAIRE

    Eynard , Julien; Grieu , Stéphane; Polit , Monique

    2011-01-01

    15 pages; International audience; As part of the OptiEnR research project, the present paper deals with outdoor temperature and thermal power consumption forecasting. This project focuses on optimizing the functioning of a multi-energy district boiler (La Rochelle, west coast of France), adding to the plant a thermal storage unit and implementing a model-based predictive controller. The proposed short-term forecast method is based on the concept of time series and uses both a wavelet-based mu...

  10. Real-time wavelet-based inline banknote-in-bundle counting for cut-and-bundle machines

    Science.gov (United States)

    Petker, Denis; Lohweg, Volker; Gillich, Eugen; Türke, Thomas; Willeke, Harald; Lochmüller, Jens; Schaede, Johannes

    2011-03-01

    Automatic banknote sheet cut-and-bundle machines are widely used within the scope of banknote production. Besides the cutting-and-bundling, which is a mature technology, image-processing-based quality inspection for this type of machine is attractive. We present in this work a new real-time touchless counting and perspective cutting-blade quality assurance system, based on a color CCD camera and a dual-core computer, for cut-and-bundle applications in banknote production. The system, which applies wavelet-based multi-scale filtering, is able to count banknotes inside a 100-bundle within 200-300 ms, depending on the window size.

  11. Effect of Interleaved FEC Code on Wavelet Based MC-CDMA System with Alamouti STBC in Different Modulation Schemes

    OpenAIRE

    Shams, Rifat Ara; Kabir, M. Hasnat; Ullah, Sheikh Enayet

    2012-01-01

    In this paper, the impact of Forward Error Correction (FEC) code namely Trellis code with interleaver on the performance of wavelet based MC-CDMA wireless communication system with the implementation of Alamouti antenna diversity scheme has been investigated in terms of Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) per bit. Simulation of the system under proposed study has been done in M-ary modulation schemes (MPSK, MQAM and DPSK) over AWGN and Rayleigh fading channel inc...

  12. Wavelet-based multiscale analysis of minimum toe clearance variability in the young and elderly during walking.

    Science.gov (United States)

    Khandoker, Ahsan H; Karmakar, Chandan K; Begg, Rezaul K; Palaniswami, Marimuthu

    2007-01-01

    As humans age or are influenced by pathology of the neuromuscular system, gait patterns are known to adjust, accommodating for reduced function in the balance control system. The aim of this study was to investigate the effectiveness of a wavelet based multiscale analysis of a gait variable [minimum toe clearance (MTC)] in deriving indexes for understanding age-related declines in gait performance and screening of balance impairments in the elderly. MTC during walking on a treadmill for 30 healthy young, 27 healthy elderly and 10 falls risk elderly subjects with a history of tripping falls were analyzed. The MTC signal from each subject was decomposed to eight detailed signals at different wavelet scales by using the discrete wavelet transform. The variances of detailed signals at scales 8 to 1 were calculated. The multiscale exponent (beta) was then estimated from the slope of the variance progression at successive scales. The variance at scale 5 was significantly (ppathological conditions. Early detection of gait pattern changes due to ageing and balance impairments using wavelet-based multiscale analysis might provide the opportunity to initiate preemptive measures to be undertaken to avoid injurious falls.
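
    A minimal sketch of the described analysis, under stated assumptions (the 'db4' wavelet, base-2 logarithms and a synthetic MTC series), is given below: the series is decomposed to eight detail scales, the variance at each scale is computed, and the multiscale exponent is estimated from the slope of the variance progression.

      # Sketch of the multiscale-variance analysis (assumed 'db4' wavelet, synthetic data):
      # decompose the MTC series to eight detail scales, compute the variance per scale,
      # and estimate the multiscale exponent from the slope of the variance progression.
      import numpy as np
      import pywt

      def multiscale_exponent(mtc_series, wavelet="db4", levels=8):
          coeffs = pywt.wavedec(mtc_series, wavelet, level=levels)
          details = coeffs[1:]                      # coeffs[0] is the approximation
          # wavedec lists details from coarsest to finest; reverse to order scales 1..8
          variances = np.array([np.var(d) for d in reversed(details)])
          scales = np.arange(1, levels + 1)
          beta = np.polyfit(scales, np.log2(variances), 1)[0]
          return variances, beta

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          mtc = 1.5 + 0.2 * rng.standard_normal(4096)     # stand-in MTC values (cm)
          var_by_scale, beta = multiscale_exponent(mtc)
          print("variance at scale 5: %.4f, beta: %.3f" % (var_by_scale[4], beta))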

  13. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and the multiresolution image analysis technique. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull-stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.

  14. Description of textures by a structural analysis.

    Science.gov (United States)

    Tomita, F; Shirai, Y; Tsuji, S

    1982-02-01

    A structural analysis system for describing natural textures is introduced. The analyzer automatically extracts the texture elements in an input image, measures their properties, classifies them into some distinctive classes (one "ground" class and some "figure" classes), and computes the distributions of the gray level, the shape, and the placement of the texture elements in each class. These descriptions are used for classification of texture images. An analysis-by-synthesis method for evaluating texture analyzers is also presented. We propose a synthesizer which generates a texture image based on the descriptions. By comparing the reconstructed image with the original one, we can see what information is preserved and what is lost in the descriptions.

  15. Some Numerical Characteristics of Image Texture

    Directory of Open Access Journals (Sweden)

    O. Samarina

    2012-05-01

    Full Text Available Texture classification is one of the basic image processing tasks. In this paper we present some numerical characteristics for image analysis and processing. They can be used in solving image classification and recognition problems, in remote sensing, in biomedical image analysis, and in geological research.

  16. A wavelet-based evaluation of time-varying long memory of equity markets: A paradigm in crisis

    Science.gov (United States)

    Tan, Pei P.; Chin, Cheong W.; Galagedera, Don U. A.

    2014-09-01

    This study, using wavelet-based method investigates the dynamics of long memory in the returns and volatility of equity markets. In the sample of five developed and five emerging markets we find that the daily return series from January 1988 to June 2013 may be considered as a mix of weak long memory and mean-reverting processes. In the case of volatility in the returns, there is evidence of long memory, which is stronger in emerging markets than in developed markets. We find that although the long memory parameter may vary during crisis periods (1997 Asian financial crisis, 2001 US recession and 2008 subprime crisis) the direction of change may not be consistent across all equity markets. The degree of return predictability is likely to diminish during crisis periods. Robustness of the results is checked with de-trended fluctuation analysis approach.

  17. A Sequential, Implicit, Wavelet-Based Solver for Multi-Scale Time-Dependent Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Donald A. McLaren

    2013-04-01

    Full Text Available This paper describes and tests a wavelet-based implicit numerical method for solving partial differential equations. Intended for problems with localized small-scale interactions, the method exploits the form of the wavelet decomposition to divide the implicit system created by the time-discretization into multiple smaller systems that can be solved sequentially. Included is a test on a basic non-linear problem, with both the results of the test, and the time required to calculate them, compared with control results based on a single system with fine resolution. The method is then tested on a non-trivial problem, its computational time and accuracy checked against control results. In both tests, it was found that the method requires less computational expense than the control. Furthermore, the method showed convergence towards the fine resolution control results.

  18. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    Science.gov (United States)

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/ universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices activating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.

  19. Wavelet based edge detection algorithm for web surface inspection of coated board web

    Energy Technology Data Exchange (ETDEWEB)

    Barjaktarovic, M; Petricevic, S, E-mail: slobodan@etf.bg.ac.r [School of Electrical Engineering, Bulevar Kralja Aleksandra 73, 11000 Belgrade (Serbia)

    2010-07-15

    This paper presents a significant improvement of an already installed vision system. The system was designed for real-time coated board inspection. The improvement is achieved with the development of a new algorithm for edge detection. The algorithm is based on the redundant (undecimated) wavelet transform. Compared to the existing algorithm, better delineation of edges is achieved. This yields a better defect detection probability and more accurate geometrical classification, which will provide an additional reduction of waste. The algorithm will also provide more detailed classification and more reliable tracking of defects. This improvement requires minimal changes in the processing hardware; only a replacement of the graphics card would be needed, adding only negligibly to the system cost. Other changes are accomplished entirely in the image processing software.
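
    A minimal sketch of edge detection with an undecimated (stationary) wavelet transform is shown below; the 'haar' wavelet, single decomposition level and percentile threshold are assumptions and do not reflect the installed system's parameters.

      # Sketch (assumed 'haar' wavelet, one level, percentile threshold) of edge
      # detection with the undecimated (stationary) wavelet transform: the edge map
      # comes from the magnitude of the detail coefficients.
      import numpy as np
      import pywt

      def swt_edge_map(image, wavelet="haar", threshold_pct=95):
          # pywt.swt2 needs each image dimension to be a multiple of 2**level
          coeffs = pywt.swt2(image.astype(float), wavelet, level=1)
          _, (ch, cv, cd) = coeffs[0]
          magnitude = np.sqrt(ch ** 2 + cv ** 2 + cd ** 2)
          return magnitude > np.percentile(magnitude, threshold_pct)

      if __name__ == "__main__":
          img = np.zeros((128, 128))
          img[:, 64:] = 1.0                        # a vertical step edge
          edges = swt_edge_map(img)
          print("edge pixels flagged:", int(edges.sum()))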

  20. Texture collapse

    International Nuclear Information System (INIS)

    Prokopec, T.; Sornborger, A.; Brandenberger, R.H.

    1992-01-01

    We study single-texture collapse using a leapfrog discretization method on a 30x30x30 spatial lattice. We investigate the influence of boundary conditions, physical size of the lattice, type of space-time background (flat, i.e., nonexpanding, vs radiation-dominated and matter-dominated universes), and spatial distribution of the initial texture configuration on collapse time and critical winding. For a spherically symmetric initial configuration of size equal to the horizon size on a lattice containing 12 (30) horizon volumes, the critical winding is found to be 0.621±0.001 (0.602±0.003) (flat case), 0.624±0.002 (0.604±0.005) (radiation era), 0.628±0.002 (0.612±0.003) (matter era). The larger the physical size of the lattice (in units of the horizon size), the smaller is the critical winding, and in the limit of an infinite lattice, we argue that the critical winding approaches 0.5. For radially asymmetric cases, contraction of one axis (pancake case) slightly reduces collapse time and critical winding, and contraction of two axes (cigar case) reduces collapse time and critical winding significantly.

  1. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder

    Science.gov (United States)

    Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant’s treatment outcome may help during antidepressant’s selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitor (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of EEG data based on wavelet transform (WT) analysis, termed as EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., receiver operating characteristic (ROC). As a result, the most significant features were identified and further be utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis, and empirical mode decompositions (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as the STFT and EMD, the WT analysis has shown highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving delta and theta frequency bands may predict antidepressant’s treatment outcome for the MDD patients. PMID:28152063

  2. A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.

    Science.gov (United States)

    Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed

    2017-01-01

    Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. Consequently, a feature matrix was constructed involving time-frequency decomposition of EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. However, the resultant EEG data matrix had high dimensionality. Therefore, dimension reduction was performed based on a rank-based feature selection method according to a criterion, i.e., the receiver operating characteristic (ROC). As a result, the most significant features were identified and further utilized during the training and testing of a classification model, i.e., the logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with other time-frequency approaches such as the STFT and EMD, the WT analysis showed the highest classification accuracy, i.e., accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
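
    The pipeline described above (wavelet features, ROC-based feature ranking, logistic regression with 10-fold cross-validation) can be sketched compactly as follows; the data are synthetic, and the 'db4' wavelet, log-energy features and number of retained features are assumptions rather than the authors' settings.

      # Compact sketch of the pipeline on synthetic data (assumed 'db4' wavelet,
      # log-energy sub-band features, top-3 features kept): ROC-AUC ranks individual
      # features, logistic regression is scored with 10-fold cross-validation.
      import numpy as np
      import pywt
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import StratifiedKFold, cross_val_score

      def wavelet_features(epoch, wavelet="db4", level=5):
          coeffs = pywt.wavedec(epoch, wavelet, level=level)
          return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

      rng = np.random.default_rng(0)
      epochs = rng.standard_normal((64, 1024))     # stand-in pre-treatment EEG epochs
      labels = rng.integers(0, 2, size=64)         # responder / non-responder
      X = np.vstack([wavelet_features(e) for e in epochs])

      # rank features by how well each one alone separates the classes
      auc = np.array([roc_auc_score(labels, X[:, j]) for j in range(X.shape[1])])
      top = np.argsort(np.abs(auc - 0.5))[::-1][:3]

      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      scores = cross_val_score(LogisticRegression(max_iter=1000), X[:, top], labels, cv=cv)
      print("10-fold CV accuracy: %.3f" % scores.mean())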

  3. Texture analysis using Gabor wavelets

    Science.gov (United States)

    Naghdy, Golshah A.; Wang, Jian; Ogunbona, Philip O.

    1996-04-01

    Receptive field profiles of simple cells in the visual cortex have been shown to resemble even-symmetric or odd-symmetric Gabor filters. Computational models employed in the analysis of textures have been motivated by two-dimensional Gabor functions arranged in a multi-channel architecture. More recently, wavelets have emerged as a powerful tool for non-stationary signal analysis capable of encoding scale-space information efficiently. A multi-resolution implementation in the form of a dyadic decomposition of the signal of interest has been popularized by many researchers. In this paper, a Gabor wavelet configured in a 'rosette' fashion is used as a multi-channel filter-bank feature extractor for texture classification. The 'rosette' spans 360 degrees of orientation and covers frequencies from dc. In the proposed algorithm, the texture images are decomposed by the Gabor wavelet configuration and the feature vectors, corresponding to the mean of the outputs of the multi-channel filters, are extracted. A minimum distance classifier is used in the classification procedure. As a comparison, the Gabor filter has been used to classify the same texture images from the Brodatz album, and the results indicate the superior discriminatory characteristics of the Gabor wavelet. With the test images used, it can be concluded that the Gabor wavelet model is a better approximation of the cortical cell receptive field profiles.
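
    A small sketch of the same idea follows, with an assumed bank of Gabor frequencies and orientations rather than the paper's 'rosette' configuration: the mean magnitude of each channel output forms the feature vector, and a minimum-distance (nearest class mean) rule classifies a probe image.

      # Sketch with an assumed Gabor bank (three frequencies, four orientations), not
      # the paper's rosette: mean channel magnitudes form the feature vector and a
      # minimum-distance (nearest class mean) rule does the classification.
      import numpy as np
      from skimage.filters import gabor

      FREQS = (0.1, 0.2, 0.4)
      THETAS = tuple(np.pi * k / 4 for k in range(4))      # 0, 45, 90, 135 degrees

      def gabor_features(image):
          feats = []
          for f in FREQS:
              for t in THETAS:
                  real, imag = gabor(image, frequency=f, theta=t)
                  feats.append(np.mean(np.hypot(real, imag)))
          return np.array(feats)

      def minimum_distance_classify(feature_vec, class_means):
          names = list(class_means)
          dists = [np.linalg.norm(feature_vec - class_means[n]) for n in names]
          return names[int(np.argmin(dists))]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          stripes = np.sin(np.linspace(0, 40 * np.pi, 64))[None, :] * np.ones((64, 1))
          noise = rng.standard_normal((64, 64))
          means = {"stripes": gabor_features(stripes), "noise": gabor_features(noise)}
          probe = stripes + 0.1 * rng.standard_normal((64, 64))
          print("probe classified as:", minimum_distance_classify(gabor_features(probe), means))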

  4. Fuzzy-Wavelet Based Double Line Transmission System Protection Scheme in the Presence of SVC

    Science.gov (United States)

    Goli, Ravikumar; Shaik, Abdul Gafoor; Tulasi Ram, Sankara S.

    2015-06-01

    Increasing the power transfer capability and efficient utilization of available transmission lines, improving power system controllability and stability, power oscillation damping and voltage compensation have motivated the development of Flexible AC Transmission System (FACTS) devices in recent decades. Shunt FACTS devices can have adverse effects on distance protection in both steady-state and transient periods. Severe under-reaching, caused by current injection at the point of connection to the system, is the most important relay problem. Current absorption by the compensator leads to relay overreach. This work presents an efficient wavelet-transform-based method for fault detection, classification and location using a fuzzy logic technique, which is almost independent of fault impedance, fault distance and fault inception angle. The proposed protection scheme is found to be fast, reliable and accurate for various types of faults on transmission lines with and without a Static Var Compensator at different locations and with various inception angles.

  5. Wavelet-based information filtering for fault diagnosis of electric drive systems in electric ships.

    Science.gov (United States)

    Silva, Andre A; Gupta, Shalabh; Bazzi, Ali M; Ulatowski, Arthur

    2017-09-22

    Electric machines and drives have enjoyed extensive applications in the field of electric vehicles (e.g., electric ships, boats, cars, and underwater vessels) due to their ease of scalability and wide range of operating conditions. This stems from their ability to generate the desired torque and power levels for propulsion under various external load conditions. However, as with most electrical systems, electric drives are prone to component failures that can degrade their performance, reduce their efficiency, and require expensive maintenance. Therefore, for safe and reliable operation of electric vehicles, there is a need for automated early diagnostics of critical failures such as broken rotor bars and electrical phase failures. In this regard, this paper presents a fault diagnosis methodology for electric drives in electric ships. This methodology utilizes the two-dimensional, i.e., scale-shift, wavelet transform of the sensor data to filter optimal information-rich regions which can enhance the diagnosis accuracy as well as reduce the computational complexity of the classifier. The methodology was tested on sensor data generated from an experimentally validated simulation model of electric drives under various cruising speed conditions. The results in comparison with other existing techniques show a high correct classification rate with low false alarm and miss detection rates. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Multilayer densities using a wavelet-based gravity method and their tectonic implications beneath the Tibetan Plateau

    Science.gov (United States)

    Xu, Chuang; Luo, Zhicai; Sun, Rong; Zhou, Hao; Wu, Yihao

    2018-06-01

    Determining the density structure of the Tibetan Plateau is helpful for a better understanding of its tectonic structure and development. The seismic method, the traditional approach that has produced a large number of density-structure results for the Tibetan Plateau except in the centre and west, is primarily limited by the poor seismic station coverage. With the implementation of satellite gravity missions, the gravity method has become more competitive because of its globally homogeneous gravity coverage. In this paper, a novel wavelet-based gravity method with high computational efficiency and excellent local identification capability is developed to determine multilayer densities beneath the Tibetan Plateau. The inverted six-layer densities from 0 to 150 km depth reveal the rich tectonic structure and development of the study area: (1) the densities present a clockwise pattern, a nearly east-west high-low alternating pattern in the west and a nearly south-north high-low alternating pattern in the east, which is almost perpendicular to the surface movement direction relative to stable Eurasia from the Global Positioning System velocity field; (2) an apparent fold structure approximately from 10 to 110 km depth can be inferred from the multilayer densities, the deformational direction of which is nearly south-north in the west and east-west in the east; (3) possible channel flows approximately from 30 to 110 km depth can also be observed clearly in the multilayer densities. Moreover, the inverted multilayer densities are in agreement with previous studies, which verifies the correctness and effectiveness of our method.

  7. Wavelet-based study of valence-arousal model of emotions on EEG signals with LabVIEW.

    Science.gov (United States)

    Guzel Aydin, Seda; Kaya, Turgay; Guler, Hasan

    2016-06-01

    This paper illustrates wavelet-based feature extraction for emotion assessment using electroencephalogram (EEG) signals through a graphical coding design. A two-dimensional (valence-arousal) emotion model was studied. Different emotions (happiness, joy, melancholy, and disgust) were studied for assessment. These emotions were stimulated by video clips. EEG signals obtained from four subjects were decomposed into five frequency bands (gamma, beta, alpha, theta, and delta) using the "db5" wavelet function. Relative features were calculated to obtain further information. The impact of the emotions according to valence value was observed most clearly on the power spectral density of the gamma band. The main objective of this work is not only to investigate the influence of the emotions on different frequency bands but also to overcome the difficulties of text-based programming. This work offers an alternative approach for emotion evaluation through EEG processing. There are a number of methods for emotion recognition, such as wavelet transform-based, Fourier transform-based, and Hilbert-Huang transform-based methods. However, the majority of these methods have been applied with text-based programming languages. In this study, we proposed and implemented an experimental feature extraction with a graphics-based language, which provides great convenience in bioelectrical signal processing.

  8. Multilayer Densities Using a Wavelet-based Gravity Method and Their Tectonic Implications beneath the Tibetan Plateau

    Science.gov (United States)

    Xu, Chuang; Luo, Zhicai; Sun, Rong; Zhou, Hao; Wu, Yihao

    2018-03-01

    Determining density structure of the Tibetan Plateau is helpful in better understanding tectonic structure and development. Seismic method, as traditional approach obtaining a large number of achievements of density structure in the Tibetan Plateau except in the center and west, is primarily inhibited by the poor seismic station coverage. As the implementation of satellite gravity missions, gravity method is more competitive because of global homogeneous gravity coverage. In this paper, a novel wavelet-based gravity method with high computation efficiency and excellent local identification capability is developed to determine multilayer densities beneath the Tibetan Plateau. The inverted 6-layer densities from 0 km to 150 km depth can reveal rich tectonic structure and development of study area: (1) The densities present a clockwise pattern, nearly east-west high-low alternating pattern in the west and nearly south-north high-low alternating pattern in the east, which is almost perpendicular to surface movement direction relative to the stable Eurasia from the Global Positioning System velocity field; (2) Apparent fold structure approximately from 10 km to 110 km depth can be inferred from the multilayer densities, the deformational direction of which is nearly south-north in the west and east-west in the east; (3) Possible channel flows approximately from 30 km to 110 km depth can be also observed clearly during the multilayer densities. Moreover, the inverted multilayer densities are in agreement with previous studies, which verify the correctness and effectiveness of our method.

  9. Proposing Wavelet-Based Low-Pass Filter and Input Filter to Improve Transient Response of Grid-Connected Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Bijan Rahmani

    2016-08-01

    Full Text Available Available photovoltaic (PV) systems show a prolonged transient response when integrated into the power grid via active filters. On one hand, the conventional low-pass filter employed within the integrated PV system works with a large delay, particularly in the presence of the system's low-order harmonics. On the other hand, the switching of the DC (direct current)-DC converters within PV units also prolongs the transient response of an integrated system, injecting harmonics and distortion through the PV-end current. This paper initially develops a wavelet-based low-pass filter to improve the transient response of PV systems interconnected to grid lines. Further, a damped input filter is proposed within the PV system to address the aforementioned converter switching issue. Finally, Matlab/Simulink simulations validate the effectiveness of the proposed wavelet-based low-pass filter and damped input filter within an integrated PV system.

  10. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful even on a wide range of other data reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.
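
    Since the CARLOS implementation details are not given here, the following is a generic lossy wavelet-compression sketch: decompose, keep only the largest coefficients, and report the achievable compression coefficient and reconstruction error. The 'bior4.4' wavelet, the retained fraction and the synthetic frame are assumptions.

      # Generic lossy wavelet-compression sketch (assumed 'bior4.4' wavelet, top 5% of
      # coefficients kept, synthetic frame), not the CARLOS/ALICE implementation.
      import numpy as np
      import pywt

      def compress(data, wavelet="bior4.4", level=3, keep=0.05):
          coeffs = pywt.wavedec2(data, wavelet, level=level)
          arr, slices = pywt.coeffs_to_array(coeffs)
          thresh = np.quantile(np.abs(arr), 1.0 - keep)          # keep the largest 5%
          arr_kept = np.where(np.abs(arr) >= thresh, arr, 0.0)
          rec = pywt.waverec2(pywt.array_to_coeffs(arr_kept, slices,
                                                   output_format="wavedec2"), wavelet)
          rec = rec[: data.shape[0], : data.shape[1]]
          ratio = arr.size / max(np.count_nonzero(arr_kept), 1)  # crude compression coefficient
          rmse = np.sqrt(np.mean((data - rec) ** 2))
          return rec, ratio, rmse

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          yy, xx = np.mgrid[0:256, 0:256]
          frame = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / 2000.0)   # smooth signal stand-in
          frame += 0.02 * rng.standard_normal(frame.shape)
          _, ratio, rmse = compress(frame)
          print("compression coefficient ~%.1f, RMSE %.4f" % (ratio, rmse))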

  11. A novel approach for detection and classification of mammographic microcalcifications using wavelet analysis and extreme learning machine.

    Science.gov (United States)

    Malar, E; Kandaswamy, A; Chakravarthy, D; Giri Dharan, A

    2012-09-01

    The objective of this paper is to reveal the effectiveness of wavelet-based tissue texture analysis for microcalcification detection in digitized mammograms using an Extreme Learning Machine (ELM). Microcalcifications are tiny deposits of calcium in the breast tissue which are potential indicators for early detection of breast cancer. The dense nature of the breast tissue and the poor contrast of the mammogram image limit the effectiveness of identifying microcalcifications. Hence, a new approach to discriminate the microcalcifications from the normal tissue is developed using wavelet features and is compared with different feature vectors extracted using Gray Level Spatial Dependence Matrix (GLSDM) and Gabor filter based techniques. A total of 120 Regions of Interest (ROIs) extracted from 55 mammogram images of the mini-MIAS database, including normal and microcalcification images, are used in the current research. The network is trained with the above-mentioned features and the results denote that ELM produces relatively better classification accuracy (94%) with a significant reduction in training time than the other artificial neural networks like the Bayesnet classifier, Naive Bayes classifier, and Support Vector Machine. ELM also avoids problems like local minima, improper learning rate, and overfitting. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Modélisation de texture basée sur les ondelettes pour la détection de parcelles viticoles à partir d'images Pleiades panchromatiques

    OpenAIRE

    Regniers , Olivier; Bombrun , Lionel; Germain , Christian

    2014-01-01

    National audience; This study evaluates the potential of wavelet-based SIRV texture modeling for the detection of vineyards in very high resolution Pléiades data and compares the performances of these models with reference methods such as grey level co-occurrence matrices and a segmentation approach based on Gabor filter. The obtained results show that SIRV models enable to reach high detection rates while reducing the false alarm rate in comparison to the other approaches. These models also ...

  13. Ion beam texturing

    Science.gov (United States)

    Hudson, W. R.

    1977-01-01

    A microscopic surface texture was created by sputter-etching a surface while simultaneously sputter-depositing a lower sputter yield material onto the surface. A xenon ion-beam source was used to perform the texturing process on samples as large as 3-cm diameter. Textured surfaces have been characterized with SEM photomicrographs for a large number of materials including Cu, Al, Si, Ti, Ni, Fe, stainless steel, Au, and Ag. A number of texturing parameters are studied, including the variation of texture with ion-beam power, surface temperature, and the rate of texture growth with sputter-etching time.

  14. Online Semiparametric Identification of Lithium-Ion Batteries Using the Wavelet-Based Partially Linear Battery Model

    Directory of Open Access Journals (Sweden)

    Caiping Zhang

    2013-05-01

    Full Text Available Battery model identification is very important for reliable battery management as well as for battery system design process. The common problem in identifying battery models is how to determine the most appropriate mathematical model structure and parameterized coefficients based on the measured terminal voltage and current. This paper proposes a novel semiparametric approach using the wavelet-based partially linear battery model (PLBM) and a recursive penalized wavelet estimator for online battery model identification. Three main contributions are presented. First, the semiparametric PLBM is proposed to simulate the battery dynamics. Compared with conventional electrical models of a battery, the proposed PLBM is equipped with a semiparametric partially linear structure, which includes a parametric part (involving the linear equivalent circuit parameters) and a nonparametric part [involving the open-circuit voltage (OCV)]. Thus, even with little prior knowledge about the OCV, the PLBM can be identified using a semiparametric identification framework. Second, we model the nonparametric part of the PLBM using the truncated wavelet multiresolution analysis (MRA) expansion, which leads to a parsimonious model structure that is highly desirable for model identification; using this model, the PLBM could be represented in a linear-in-parameter manner. Finally, to exploit the sparsity of the wavelet MRA representation and allow for online implementation, a penalized wavelet estimator that uses a modified online cyclic coordinate descent algorithm is proposed to identify the PLBM in a recursive fashion. The simulation and experimental results demonstrate that the proposed PLBM with the corresponding identification algorithm can accurately simulate the dynamic behavior of a lithium-ion battery in the Federal Urban Driving Schedule tests.

  15. Transformations in destination texture

    DEFF Research Database (Denmark)

    Gyimóthy, Szilvia

    2018-01-01

    This article takes heterogeographical approaches to understand Bollywood-induced destination transformations in Switzerland. Positioned within the theoretical field of mediatized mobility, the study contextualizes Bollywood-induced tourism in Europe through the concept of texture. Textural analysis (base...

  16. Bayesian exploration for intelligent identification of textures

    Directory of Open Access Journals (Sweden)

    Jeremy A. Fishel

    2012-06-01

    Full Text Available In order to endow robots with humanlike abilities to characterize and identify objects, they must be provided with tactile sensors and intelligent algorithms to select, control and interpret data from useful exploratory movements. Humans make informed decisions on the sequence of exploratory movements that would yield the most information for the task, depending on what the object may be and prior knowledge of what to expect from possible exploratory movements. This study is focused on texture discrimination, a subset of a much larger group of exploratory movements and percepts that humans use to discriminate, characterize, and identify objects. Using a testbed equipped with a biologically inspired tactile sensor (the BioTac®), we produced sliding movements similar to those that humans make when exploring textures. Measurements of tactile vibrations and reaction forces when exploring textures were used to extract measures of textural properties inspired from the psychophysical literature (traction, roughness, and fineness). Different combinations of normal force and velocity were identified to be useful for each of these three properties. A total of 117 textures were explored with these three movements to create a database of prior experience to use for identifying these same textures in future encounters. When exploring a texture, the discrimination algorithm adaptively selects the optimal movement to make and property to measure based on previous experience to differentiate the texture from a set of plausible candidates, a process we call Bayesian exploration. Performance of 99.6% in correctly discriminating pairs of similar textures was found to exceed human capabilities. Absolute classification from the entire set of 117 textures generally required a small number of well-chosen exploratory movements (median = 5) and yielded a 95.4% success rate. The method of Bayesian exploration developed and tested in this paper may generalize well to other
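
    A toy sketch of the Bayesian-update idea follows; the Gaussian property models, the two movements and the crude spread heuristic (standing in for the paper's information-based movement selection) are all made-up illustrations, not the BioTac models.

      # Toy sketch of Bayesian exploration with made-up Gaussian models: the posterior
      # over candidate textures is updated after each measurement, and a crude spread
      # heuristic (not the paper's information measure) picks the next movement.
      import numpy as np

      # models[texture][movement] = (expected property value, standard deviation)
      models = {
          "denim": {"light_slide": (0.8, 0.1), "fast_slide": (2.0, 0.3)},
          "silk":  {"light_slide": (0.3, 0.1), "fast_slide": (0.9, 0.3)},
          "paper": {"light_slide": (0.5, 0.1), "fast_slide": (1.8, 0.3)},
      }
      movements = ["light_slide", "fast_slide"]

      def gaussian_pdf(x, mu, sigma):
          return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

      def update_posterior(prior, movement, measurement):
          post = {t: p * gaussian_pdf(measurement, *models[t][movement])
                  for t, p in prior.items()}
          total = sum(post.values())
          return {t: v / total for t, v in post.items()}

      def next_movement(prior):
          # prefer the movement whose expected values are most spread across the
          # candidates that still carry appreciable probability
          def spread(m):
              mus = [models[t][m][0] for t, p in prior.items() if p > 0.05]
              return max(mus) - min(mus) if len(mus) > 1 else 0.0
          return max(movements, key=spread)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          belief = {t: 1.0 / len(models) for t in models}        # uniform prior
          for _ in range(3):
              move = next_movement(belief)
              measurement = rng.normal(*models["denim"][move])   # ground truth: denim
              belief = update_posterior(belief, move, measurement)
          print("best guess:", max(belief, key=belief.get), belief)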

  17. Texture Classification with Change Point Statistics.

    Science.gov (United States)

    1981-07-01

    it is necessary to let T approach the value of n for an nxn image. This is motivated by the fact that the computation of Ut, T is so costly, and if T... [the remainder of this record is an unrecoverable table of per-texture (LP/ML/PS) classification outcomes]

  18. An assessment study of the wavelet-based index of magnetic storm activity (WISA) and its comparison to the Dst index

    Science.gov (United States)

    Xu, Zhonghua; Zhu, Lie; Sojka, Jan; Kokoszka, Piotr; Jach, Agnieszka

    2008-08-01

    A wavelet-based index of storm activity (WISA) has been recently developed [Jach, A., Kokoszka, P., Sojka, L., Zhu, L., 2006. Wavelet-based index of magnetic storm activity. Journal of Geophysical Research 111, A09215, doi:10.1029/2006JA011635] to complement the traditional Dst index. The new index can be computed automatically by using the wavelet-based statistical procedure without human intervention on the selection of quiet days and the removal of secular variations. In addition, the WISA is flexible on data stretch and has a higher temporal resolution (1 min), which can provide a better description of the dynamical variations of magnetic storms. In this work, we perform a systematic assessment study on the WISA index. First, we statistically compare the WISA to the Dst for various quiet and disturbed periods and analyze the differences of their spectral features. Then we quantitatively assess the flexibility of the WISA on data stretch and study the effects of varying number of stations on the index. In addition, the ability of the WISA for handling the missing data is also quantitatively assessed. The assessment results show that the hourly averaged WISA index can describe storm activities equally well as the Dst index, but its full automation, high flexibility on data stretch, easiness of using the data from varying number of stations, high temporal resolution, and high tolerance to missing data from individual station can be very valuable and essential for real-time monitoring of the dynamical variations of magnetic storm activities and space weather applications, thus significantly complementing the existing Dst index.

  19. Evaluating the Performance of Wavelet-based Data-driven Models for Multistep-ahead Flood Forecasting in an Urbanized Watershed

    Science.gov (United States)

    Kasaee Roodsari, B.; Chandler, D. G.

    2015-12-01

    A real-time flood forecast system is presented to provide emergency management authorities sufficient lead time to execute plans for evacuation and asset protection in urban watersheds. This study investigates the performance of two hybrid models for real-time flood forecasting at different subcatchments of Ley Creek watershed, a heavily urbanized watershed in the vicinity of Syracuse, New York. Hybrid models include Wavelet-Based Artificial Neural Network (WANN) and Wavelet-Based Adaptive Neuro-Fuzzy Inference System (WANFIS). Both models are developed on the basis of real time stream network sensing. The wavelet approach is applied to decompose the collected water depth timeseries to Approximation and Detail components. The Approximation component is then used as an input to ANN and ANFIS models to forecast water level at lead times of 1 to 10 hours. The performance of WANN and WANFIS models are compared to ANN and ANFIS models for different lead times. Initial results demonstrated greater predictive power of hybrid models.
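
    A sketch of the WANN half of the approach, under stated assumptions (the 'db4' wavelet, six lags, a 4-step lead time and a small scikit-learn network on synthetic data): the water-depth series is decomposed, only the approximation component is kept, and its lagged values feed a neural network that predicts the level several steps ahead.

      # Sketch of the WANN idea on synthetic data (assumed 'db4' wavelet, 6 lags,
      # 4-step lead time, small scikit-learn network): keep only the wavelet
      # approximation of the water-depth series and learn to predict it ahead.
      import numpy as np
      import pywt
      from sklearn.neural_network import MLPRegressor

      def wavelet_approximation(series, wavelet="db4", level=3):
          coeffs = pywt.wavedec(series, wavelet, level=level)
          coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # drop details
          return pywt.waverec(coeffs, wavelet)[: len(series)]

      def make_supervised(series, n_lags=6, lead=4):
          X, y = [], []
          for t in range(n_lags, len(series) - lead):
              X.append(series[t - n_lags:t])
              y.append(series[t + lead])
          return np.array(X), np.array(y)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          t = np.arange(2000)
          depth = 1.0 + 0.3 * np.sin(2 * np.pi * t / 240) + 0.05 * rng.standard_normal(t.size)
          X, y = make_supervised(wavelet_approximation(depth))
          split = int(0.8 * len(X))
          model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
          model.fit(X[:split], y[:split])
          print("test R^2: %.3f" % model.score(X[split:], y[split:]))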

  20. Dilatometry study of textures

    International Nuclear Information System (INIS)

    Sofrenovic, R.; Lazarevic, Dj.

    1965-01-01

    The presence of textures in metallic uranium fuel is harmful because of the anisotropic properties of uranium during thermal treatment, and especially during irradiation. Anisotropic radiation swelling of uranium can cause deformation of the fuel element due to the existence of textures. The objective of this work was to study the influence of phase transformations on textures in uranium which has undergone plastic deformation due to rotational casting. The dilatometry method was adopted for testing the textures. This report describes the device for dilatometry testing, and the preliminary measurement results are shown.

  1. Multi Angle Imaging With Spectral Remote Sensing for Scene Classification

    National Research Council Canada - National Science Library

    Prasert, Sunyaruk

    2005-01-01

    .... This study analyses the BRDF (Bidirectional Reflectance Distribution Function) impact and effectiveness of texture analysis on terrain classification within Fresno County area in state of California...

  2. Textured perovskite cells

    NARCIS (Netherlands)

    Deelen, J. van; Tezsevin, Y.; Barink, M.

    2017-01-01

    Most research on texturization of solar cells has been devoted to Si-based cells. For perovskites, it was assumed that texturization would not have much of an impact because their relatively low refractive indexes lead to relatively low reflection as compared to Si-based cells. However, our

  3. Wavelet-based regularization and edge preservation for submillimetre 3D list-mode reconstruction data from a high resolution small animal PET system

    Energy Technology Data Exchange (ETDEWEB)

    Jesus Ochoa Dominguez, Humberto de, E-mail: hochoa@uacj.mx [Departamento de Ingenieria Eectrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico); Ortega Maynez, Leticia; Osiris Vergara Villegas, Osslan; Gordillo Castillo, Nelly; Guadalupe Cruz Sanchez, Vianey; Gutierrez Casas, Efren David [Departamento de Ingenieria Eectrica y Computacion, Universidad Autonoma de Ciudad Juarez, Avenida del Charro 450 Norte, C.P. 32310 Ciudad Juarez, Chihuahua (Mexico)

    2011-10-01

    The data obtained from a PET system tend to be noisy because of the limitations of the current instrumentation and the detector efficiency. This problem is particularly severe in images of small animals as the noise contaminates areas of interest within small organs. Therefore, denoising becomes a challenging task. In this paper, a novel wavelet-based regularization and edge preservation method is proposed to reduce such noise. To demonstrate this method, image reconstruction using a small mouse ¹⁸F NEMA phantom and an ¹⁸F mouse was performed. The effects on image quality were investigated for each reconstruction case. Results show that the proposed method drastically reduces the noise and preserves the image details.
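
    The authors' exact regularizer is not reproduced here; as a generic stand-in, the sketch below applies soft thresholding to the 2D detail coefficients with a universal threshold estimated from the finest sub-band, which suppresses noise while leaving large edge-related coefficients largely intact. The 'sym4' wavelet and three levels are assumptions.

      # Generic 2D wavelet-shrinkage sketch (assumed 'sym4' wavelet, 3 levels), not the
      # authors' regularizer: soft-threshold the detail sub-bands with a universal
      # threshold estimated from the finest diagonal sub-band.
      import numpy as np
      import pywt

      def wavelet_denoise(image, wavelet="sym4", level=3):
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate (MAD)
          thresh = sigma * np.sqrt(2.0 * np.log(image.size))        # universal threshold
          new_coeffs = [coeffs[0]] + [
              tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
              for detail in coeffs[1:]
          ]
          rec = pywt.waverec2(new_coeffs, wavelet)
          return rec[: image.shape[0], : image.shape[1]]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          yy, xx = np.mgrid[0:128, 0:128]
          phantom = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)   # hot disk
          noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
          denoised = wavelet_denoise(noisy)
          rmse = lambda a: float(np.sqrt(np.mean((a - phantom) ** 2)))
          print("RMSE before %.3f / after %.3f" % (rmse(noisy), rmse(denoised)))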

  4. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to the classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.

  5. Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery.

    Directory of Open Access Journals (Sweden)

    Ahmad Chaddad

    Full Text Available This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images.
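
    The GLCM part of such a feature set can be sketched as follows with scikit-image (current graycomatrix/graycoprops names); the distances, angles and properties are assumptions, and LoG and wavelet descriptors would be concatenated to the same vector before classification.

      # Sketch of GLCM texture descriptors for a region of interest (assumed distances,
      # angles and properties); other feature families would be appended the same way.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(roi_uint8, distances=(1, 2),
                        angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          glcm = graycomatrix(roi_uint8, distances=distances, angles=angles,
                              levels=256, symmetric=True, normed=True)
          props = ("contrast", "homogeneity", "energy", "correlation")
          return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          roi = (rng.random((64, 64)) * 255).astype(np.uint8)     # stand-in tissue patch
          feats = glcm_features(roi)
          print("feature vector length:", feats.size)             # 4 props x 2 dists x 4 angles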

  6. A Wavelet-Based Unified Power Quality Conditioner to Eliminate Wind Turbine Non-Ideality Consequences on Grid-Connected Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Bijan Rahmani

    2016-05-01

    Full Text Available The integration of renewable power sources with power grids presents many challenges, such as synchronization with the grid, power quality problems and so on. The shunt active power filter (SAPF) can be a solution to address the issue while suppressing the grid-end current harmonics and distortions. Nonetheless, available SAPFs work somewhat unpredictably in practice. This is attributed to the dependency of the SAPF controller on nonlinear complicated equations and two distorted variables, such as load current and voltage, to produce the current reference. This condition will worsen when the plant includes wind turbines which inherently produce 3rd, 5th, 7th and 11th voltage harmonics. Moreover, the inability of the typical phase locked loop (PLL) used to synchronize the SAPF reference with the power grid also disrupts SAPF operation. This paper proposes an improved synchronous reference frame (SRF), which is equipped with a wavelet-based PLL to control the SAPF, using one variable such as load current. Firstly, the fundamental positive sequence of the source voltage, obtained using a wavelet, is used as the input signal of the PLL through an orthogonal signal generator process. Then, the generated orthogonal signals are applied through the SRF-based compensation algorithm to synchronize the SAPF's reference with the power grid. To further force the remaining uncompensated grid current harmonics to pass through the SAPF, an improved series filter (SF) equipped with a current harmonic suppression loop is proposed. Concurrent operation of the improved SAPF and SF is coordinated through a unified power quality conditioner (UPQC). The DC-link capacitor of the proposed UPQC, used to interconnect a photovoltaic (PV) system to the power grid, is regulated by an adaptive controller. Matlab/Simulink results confirm that the proposed wavelet-based UPQC results in purely sinusoidal grid-end currents with total harmonic distortion (THD) = 1.29%, which leads to high

  7. Methods of making textured catalysts

    Science.gov (United States)

    Werpy, Todd [West Richland, WA; Frye, Jr., John G.; Wang, Yong [Richland, WA; Zacher, Alan H [Kennewick, WA

    2010-08-17

    A textured catalyst having a hydrothermally-stable support, a metal oxide and a catalyst component is described. Methods of conducting aqueous phase reactions that are catalyzed by a textured catalyst are also described. The invention also provides methods of making textured catalysts and methods of making chemical products using a textured catalyst.

  8. Mobile Healthcare for Automatic Driving Sleep-Onset Detection Using Wavelet-Based EEG and Respiration Signals

    Directory of Open Access Journals (Sweden)

    Boon-Giin Lee

    2014-09-01

    Full Text Available Driving drowsiness is a major cause of traffic accidents worldwide and has drawn the attention of researchers in recent decades. This paper presents an application for in-vehicle non-intrusive mobile-device-based automatic detection of driver sleep-onset in real time. The proposed application classifies the driving mental fatigue condition by analyzing the electroencephalogram (EEG) and respiration signals of a driver in the time and frequency domains. Our concept is heavily reliant on mobile technology, particularly remote physiological monitoring using Bluetooth. Respiratory events are gathered, and eight-channel EEG readings are captured from the frontal, central, and parietal (Fpz-Cz, Pz-Oz) regions. EEGs are preprocessed with a Butterworth bandpass filter, and features are subsequently extracted from the filtered EEG signals by employing the wavelet-packet-transform (WPT) method to categorize the signals into four frequency bands: α, β, θ, and δ. A mutual information (MI) technique selects the most descriptive features for further classification. The reduction in the number of prominent features improves the sleep-onset classification speed in the support vector machine (SVM) and results in a high sleep-onset recognition rate. Test results reveal that the combined use of the EEG and respiration signals results in 98.6% recognition accuracy. Our proposed application explores the possibility of processing long-term multi-channel signals.
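
    A compact sketch of the described chain on synthetic data: wavelet-packet decomposition of an epoch, per-node energy features, mutual-information feature selection and an SVM. The 'db4' wavelet, four packet levels and the number of selected features are assumptions.

      # Sketch of the described chain (assumed 'db4' wavelet, 4 packet levels, 6 kept
      # features, synthetic epochs): WPT energies -> mutual-information selection -> SVM.
      import numpy as np
      import pywt
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      def wpt_energies(epoch, wavelet="db4", level=4):
          wp = pywt.WaveletPacket(data=epoch, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="freq")      # low-to-high frequency order
          return np.array([np.sum(n.data ** 2) for n in nodes])

      rng = np.random.default_rng(0)
      epochs = rng.standard_normal((80, 512))            # stand-in EEG epochs
      labels = rng.integers(0, 2, size=80)               # awake / sleep-onset
      X = np.vstack([wpt_energies(e) for e in epochs])

      model = make_pipeline(SelectKBest(mutual_info_classif, k=6), SVC(kernel="rbf"))
      print("5-fold CV accuracy: %.3f" % cross_val_score(model, X, labels, cv=5).mean())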

  9. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area the best suited for automated analysis. Here, we present a novel approach, using support vector machines (SVM) to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers are trained and applied to different subsets of the entire training images dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage in the classifier training. Specifically, color and bag-of-word representations of local dense scale invariant feature transformation features are descriptors for ruling out irrelevant regions, and color and wavelet-based features are descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for a smartphone-based image analysis.
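
    On synthetic feature vectors, the two-stage cascade might be sketched as below: k first-stage SVMs are trained on different subsets, the training instances they misclassify are pooled, and a second-stage SVM is trained on that pool; at prediction time this sketch defers to the second stage when the first-stage classifiers disagree, which is an interpretation rather than the authors' exact rule. Superpixel feature extraction and the CRF refinement are omitted.

      # Illustration on synthetic features of the two-stage cascade (feature extraction
      # and CRF refinement omitted; the deferral rule at prediction time is an
      # interpretation): k first-stage SVMs, then a second SVM on pooled mistakes.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.standard_normal((600, 10))
      y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(600) > 0).astype(int)

      k = 4
      subsets = np.array_split(rng.permutation(len(X)), k)

      # stage 1: one SVM per training subset; pool the instances each one gets wrong
      stage1, wrong = [], []
      for idx in subsets:
          clf = SVC(kernel="rbf").fit(X[idx], y[idx])
          stage1.append(clf)
          wrong.extend(idx[clf.predict(X[idx]) != y[idx]].tolist())

      # stage 2: an SVM specialised on the previously misclassified instances
      stage2 = SVC(kernel="rbf").fit(X[wrong], y[wrong]) if len(set(y[wrong])) > 1 else None

      def cascade_predict(x):
          votes = [int(clf.predict(x[None, :])[0]) for clf in stage1]
          if stage2 is None or len(set(votes)) == 1:
              return int(round(np.mean(votes)))          # stage-1 agreement (or no stage 2)
          return int(stage2.predict(x[None, :])[0])      # disagreement: defer to stage 2

      acc = np.mean([cascade_predict(x) == t for x, t in zip(X, y)])
      print("cascade training accuracy: %.3f" % acc)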

  10. Computer Texture Mapping for Laser Texturing of Injection Mold

    Directory of Open Access Journals (Sweden)

    Yongquan Zhou

    2014-04-01

    Full Text Available Laser texturing is a relatively new multiprocess technique that has been used for machining 3D curved surfaces; it is more flexible and efficient for creating decorative textures on the 3D curved surfaces of injection molds so as to improve the surface quality and achieve cosmetic surfaces of molded plastic parts. In this paper, a novel method of laser texturing 3D curved surfaces based on a 3-axis galvanometer scanning unit is presented to prevent the texturing of injection mold surfaces from the large distortion that is often caused by traditional texturing processes. The novel method is based on computer texture mapping technology, which has been developed and presented here. The developed texture mapping algorithm includes surface triangulation, notations, distortion measurement, control, and a numerical method. An interface for computer texture mapping has been built to implement the texture mapping algorithm with a controlled distortion rate of the 3D texture model derived from the original 2D texture applied to the curved surface. Through a case study of laser texturing of a high-curvature surface of the injection mold of a mouse top case, it is shown that the novel method of laser texturing meets the quality standard of laser texturing of injection molds.

  11. Ion-beam texturing of uniaxially textured Ni films

    International Nuclear Information System (INIS)

    Park, S.J.; Norton, D.P.; Selvamanickam, Venkat

    2005-01-01

    The formation of biaxial texture in uniaxially textured Ni thin films via Ar-ion irradiation is reported. The ion-beam irradiation was not simultaneous with deposition. Instead, the ion beam irradiates the uniaxially textured film surface with no impinging deposition flux, which differs from conventional ion-beam-assisted deposition. The uniaxial texture is established via a nonion beam process, with the in-plane texture imposed on the uniaxial film via ion beam bombardment. Within this sequential ion beam texturing method, grain alignment is driven by selective etching and grain overgrowth

  12. Inline inspection of textured plastics surfaces

    Science.gov (United States)

    Michaeli, Walter; Berdel, Klaus

    2011-02-01

    This article focuses on the inspection of plastics web materials exhibiting irregular textures such as imitation wood or leather. They are produced in a continuous process at high speed, in which various defects occur sporadically. However, current inspection systems for plastics surfaces can only inspect unstructured products or products with regular, i.e., highly periodic, textures. The proposed inspection algorithm uses the local binary pattern operator for texture feature extraction. For classification, semisupervised as well as supervised approaches are used. A simple concept for semisupervised classification is presented and applied for defect detection. The resulting defect maps are presented to the operator, who assigns class labels that are used to train the supervised classifier in order to distinguish between different defect types. A concept for parallelization is presented, allowing the efficient use of standard multicore processor PC hardware. Experiments with images of a typical product acquired in an industrial setting show a detection rate of 97% while achieving a false alarm rate below 1%. Real-time tests show that defects can be reliably detected even at haul-off speeds of 30 m/min. Further applications of the presented concept can be found in the inspection of other materials.
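
    As an illustration of the texture-feature step, the sketch below computes uniform local binary pattern histograms for image patches with scikit-image; the semisupervised defect detection is stood in for by a one-class SVM trained on defect-free patches only, which is a simplification of the article's scheme, and all parameter values are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import OneClassSVM

def lbp_histogram(patch, P=8, R=1):
    """Uniform LBP histogram of one grayscale image patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fit_defect_detector(defect_free_patches):
    """Stand-in for the semisupervised stage: learn the texture of good
    product only, so anything dissimilar is flagged as a potential defect."""
    X = np.array([lbp_histogram(p) for p in defect_free_patches])
    return OneClassSVM(nu=0.01, gamma="scale").fit(X)

def defect_map(detector, patches):
    X = np.array([lbp_histogram(p) for p in patches])
    return detector.predict(X) == -1          # True where a patch looks anomalous
```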

  13. Semantic attributes based texture generation

    Science.gov (United States)

    Chi, Huifang; Gan, Yanhai; Qi, Lin; Dong, Junyu; Madessa, Amanuel Hirpa

    2018-04-01

    Semantic attributes are commonly used for texture description. They can be used to describe the information of a texture, such as patterns, textons, distributions, brightness, and so on. Generally speaking, semantic attributes are more concrete descriptors than perceptual features. Therefore, it is practical to generate texture images from semantic attributes. In this paper, we propose to generate high-quality texture images from semantic attributes. Over the last two decades, several works have been done on texture synthesis and generation, most of them focusing on example-based texture synthesis and procedural texture generation; semantic-attribute-based texture generation still deserves more attention. Gan et al. proposed a useful joint model for perception-driven texture generation. However, perceptual features are nonobjective spatial statistics used by humans to distinguish different textures in pre-attentive situations. To convey more descriptive information about texture appearance, semantic attributes, which are more in line with human description habits, are desired. In this paper, we use a sigmoid cross entropy loss in an auxiliary model to provide enough information for the generator. Consequently, the discriminator is released from the relatively intractable mission of figuring out the joint distribution of condition vectors and samples. To demonstrate the validity of our method, we compare it with Gan et al.'s method by designing texture-generation experiments on the PTD and DTD datasets. All experimental results show that our model can generate textures from semantic attributes.
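
    The role of the sigmoid cross entropy loss can be sketched as follows in PyTorch: an auxiliary head predicts the semantic attribute vector of a generated texture, and BCEWithLogitsLoss (sigmoid cross entropy) supervises the generator with it, so the discriminator no longer has to model the joint distribution of attributes and samples. The layer sizes, attribute count, and module names below are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class AuxAttributeHead(nn.Module):
    """Auxiliary head: predicts one logit per semantic attribute from
    generator (or discriminator) features. Sizes are placeholders."""
    def __init__(self, feat_dim=512, n_attributes=40):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_attributes)

    def forward(self, features):
        return self.fc(features)              # raw logits, one per attribute

bce = nn.BCEWithLogitsLoss()                  # sigmoid cross entropy

def generator_attribute_loss(aux_head, fake_features, target_attributes):
    """target_attributes: float tensor of 0/1 attribute labels requested
    for each generated texture."""
    return bce(aux_head(fake_features), target_attributes)
```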

  14. Chameleons: Reptilian Texture

    Science.gov (United States)

    Petersen, Hugh

    2009-01-01

    This article presents an art project inspired by a drawing of a chameleon the author saw in an art-supply catalog. Chameleons prove to be a good subject to highlight shape, color and texture with eighth-graders. In this project, middle- and high-school students draw a chameleon, learn how to use shapes to add to their chameleon drawing, learn how…

  15. Strings, texture, and inflation

    International Nuclear Information System (INIS)

    Hodges, H.M.; Primack, J.R.

    1991-01-01

    We examine mechanisms, several of which are proposed here, to generate structure formation, or to just add large-scale features, through either gauged or global cosmic strings or global texture, within the framework of inflation. We first explore the possibility that strings or texture form if there is no coupling between the topological theory and the inflaton or spacetime curvature, via (1) quantum creation, and (2) a sufficiently high reheat temperature. In addition, we examine the prospects for the inflaton field itself to generate strings or texture. Then, models with the string/texture field coupled to the curvature, and an equivalent model with coupling to the inflaton field, are considered in detail. The requirement that inflationary density fluctuations are not so large as to conflict with observations leads to a number of constraints on model parameters. We find that strings of relevance for structure formation can form in the absence of coupling to the inflaton or curvature through the process of quantum creation, but only if the strings are strongly type I, or if they are global strings. If formed after reheating, naturalness suggests that gauged cosmic strings correspond to a type-I superconductor. Similarly, gauged strings formed during inflation via conformal coupling ξ=1/6 to the spacetime curvature (in a model suggested by Yokoyama in order to evade the millisecond pulsar constraint on cosmic strings) are expected to be strongly type I

  16. Texture analysis of

    NARCIS (Netherlands)

    Lubsch, A.; Timmermans, K.

    2017-01-01

    Texture analysis is a method to test the physical properties of a material by tension and compression. The growing interest in commercialisation of seaweeds for human food has stimulated research into the physical properties of seaweed tissue. These are important parameters for the survival of

  17. A neural network detection model of spilled oil based on the texture analysis of SAR image

    Science.gov (United States)

    An, Jubai; Zhu, Lisong

    2006-01-01

    A Radial Basis Function Neural Network (RBFNN) model is investigated for the detection of spilled oil based on texture analysis of SAR imagery. In this paper, to take advantage of the abundant texture information in SAR imagery, texture features are extracted by both the wavelet transform and the gray-level co-occurrence matrix. The RBFNN model is fed with a vector of these texture features, and is trained and tested on the sample data set of feature vectors. Finally, a SAR image is classified by this model. The classification results for a spilled-oil SAR image show that the classification accuracy for oil spill is 86.2% when the RBFNN model uses both wavelet texture and gray-level texture, whereas it is 78.0% when the same RBFNN model uses only wavelet texture as input. The model using both the wavelet transform and the gray-level co-occurrence matrix is therefore more effective than the one using wavelet texture alone; furthermore, it handles the complicated proximity between classes well and achieves good classification performance.
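
    A rough sketch of the ingredients described above: wavelet sub-band energies and gray-level co-occurrence statistics are combined into one feature vector, and a small radial basis function network (k-means centres, Gaussian hidden units, least-squares readout) classifies each SAR patch as oil or background. Library calls are from PyWavelets, scikit-image and scikit-learn; all sizes and the 0/1 label convention are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def texture_features(patch_uint8):
    """Wavelet sub-band energies plus a few GLCM statistics for one patch."""
    coeffs = pywt.wavedec2(patch_uint8.astype(float), "db2", level=2)
    bands = [coeffs[0]] + [d for level in coeffs[1:] for d in level]
    wavelet_energy = [float(np.mean(np.square(b))) for b in bands]
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    glcm_stats = [float(graycoprops(glcm, p)[0, 0])
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(wavelet_energy + glcm_stats)

class RBFNet:
    """Minimal RBF network: Gaussian hidden units around k-means centres,
    linear least-squares readout, 0/1 labels assumed."""
    def fit(self, X, y, n_centers=20, sigma=1.0):
        self.centers = KMeans(n_clusters=n_centers, n_init=10,
                              random_state=0).fit(X).cluster_centers_
        self.sigma = sigma
        self.w, *_ = np.linalg.lstsq(self._hidden(X), y.astype(float), rcond=None)
        return self

    def _hidden(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def predict(self, X):
        return (self._hidden(X) @ self.w > 0.5).astype(int)
```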

  18. Modelling and short-term forecasting of daily peak power demand in Victoria using two-dimensional wavelet based SDP models

    International Nuclear Information System (INIS)

    Truong, Nguyen-Vu; Wang, Liuping; Wong, Peter K.C.

    2008-01-01

    Power demand forecasting is of vital importance to the management and planning of power system operations, which include generation, transmission, and distribution, as well as system security analysis and economic pricing processes. This paper concerns the modeling and short-term forecasting of daily peak power demand in the state of Victoria, Australia. In this study, a two-dimensional wavelet based state dependent parameter (SDP) modelling approach is used to produce a compact mathematical model for this complex nonlinear dynamic system. In this approach, a nonlinear system is expressed by a set of linear regressive input and output terms (state variables) multiplied by the respective state dependent parameters that carry the nonlinearities in the form of 2-D wavelet series expansions. The model is identified from historical data and descriptively represents the relationships and interactions between the various components that affect the peak power demand of a given day. The identified model has been used to forecast daily peak power demand in the state of Victoria, Australia, in the time period from the 9th of August 2007 to the 24th of August 2007. With a MAPE (mean absolute prediction error) of 1.9%, the results clearly demonstrate the effectiveness of the identified model. (author)
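
    The state dependent parameter idea can be illustrated in one dimension (the paper uses 2-D wavelet expansions over two state variables): a coefficient p(x) is expanded in a small Haar dictionary, which makes the model linear in the wavelet coefficients and identifiable by ordinary least squares. The sketch below assumes the state variable has been normalised to [0, 1) and is not the authors' implementation.

```python
import numpy as np

def haar_basis(x, levels=3):
    """Evaluate a small Haar wavelet dictionary on state values x in [0, 1)."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x)]                          # constant (scaling) term
    for j in range(levels):
        for k in range(2 ** j):
            lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
            psi = np.where((x >= lo) & (x < mid), 1.0,
                           np.where((x >= mid) & (x < hi), -1.0, 0.0))
            cols.append(2 ** (j / 2) * psi)
    return np.column_stack(cols)

def fit_sdp(y, x):
    """Fit y[t] = p(x[t]) * y[t-1], with p(.) expanded in the Haar dictionary,
    so the model is linear in the wavelet coefficients."""
    Psi = haar_basis(x[1:])                           # basis at the current state
    A = Psi * y[:-1, None]                            # regressors psi_k(x_t)*y_{t-1}
    coef, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
    return coef

def forecast_one_step(coef, y_prev, x_t):
    p = haar_basis([x_t]) @ coef                      # state dependent parameter
    return float(p[0] * y_prev)
```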

  19. Nondestructive Damage Assessment of Composite Structures Based on Wavelet Analysis of Modal Curvatures: State-of-the-Art Review and Description of Wavelet-Based Damage Assessment Benchmark

    Directory of Open Access Journals (Sweden)

    Andrzej Katunin

    2015-01-01

    Full Text Available The application of composite structures as elements of machines and vehicles working under various operational conditions causes degradation and the occurrence of damage. Considering that composites are often used for critical elements, for example, parts of aircraft and other vehicles, it is extremely important to maintain them properly and to detect, localize, and identify damage occurring during operation at the earliest possible stage of its development. Among the great variety of nondestructive testing methods developed to date, the vibration-based methods are among the least expensive while remaining effective, given appropriate processing of the measurement data. Over the last decades, wavelet analysis has gained great popularity in vibration-based structural testing due to its high sensitivity to damage. This paper presents an overview of the results of numerous researchers working in the area of vibration-based damage assessment supported by wavelet analysis, and a detailed description of the Wavelet-based Structural Damage Assessment (WavStructDamAs) Benchmark, which summarizes the author's 5-year research in this area. The benchmark covers example problems of damage identification in various composite structures with various damage types using numerous wavelet transforms and supporting tools. The benchmark is openly available and allows performing the analysis on the example problems as well as on one's own problems using the available analysis tools.
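
    A minimal sketch of the wavelet-based idea the benchmark is built around: the curvature of a measured mode shape is analysed with a continuous wavelet transform, and positions where the coefficients are locally large across scales are flagged as damage candidates. The Mexican-hat wavelet, the scale range and the max-over-scales indicator are illustrative assumptions.

```python
import numpy as np
import pywt

def damage_indicator(mode_shape, scales=None, wavelet="mexh"):
    """Max absolute CWT coefficient of the modal curvature at each position;
    a local stiffness loss tends to show up as a localized peak."""
    scales = np.arange(1, 33) if scales is None else scales
    curvature = np.gradient(np.gradient(np.asarray(mode_shape, dtype=float)))
    coeffs, _ = pywt.cwt(curvature, scales, wavelet)
    return np.max(np.abs(coeffs), axis=0)
```

    The index of the largest value of the returned indicator would then serve as the damage location candidate; comparison against an undamaged baseline, as done in the benchmark problems, is omitted here.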

  20. Wavelet-based peak detection and a new charge inference procedure for MS/MS implemented in ProteoWizard's msConvert.

    Science.gov (United States)

    French, William R; Zimmerman, Lisa J; Schilling, Birgit; Gibson, Bradford W; Miller, Christine A; Townsend, R Reid; Sherrod, Stacy D; Goodwin, Cody R; McLean, John A; Tabb, David L

    2015-02-06

    We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1-100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets.
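
    CantWaiT itself ships inside msConvert; purely to illustrate the general idea of wavelet-based peak picking, SciPy's generic find_peaks_cwt can be applied to a 1-D spectrum, with the width range acting as the set of wavelet scales. The spectrum below is a random placeholder, not real MS/MS data, and this is not the ProteoWizard implementation.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

intensities = np.random.default_rng(0).random(2000)      # placeholder spectrum
peak_indices = find_peaks_cwt(intensities, widths=np.arange(1, 10))
print(len(peak_indices), "candidate peaks")
```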

  1. Wavelet-Based Peak Detection and a New Charge Inference Procedure for MS/MS Implemented in ProteoWizard’s msConvert

    Science.gov (United States)

    2015-01-01

    We report the implementation of high-quality signal processing algorithms into ProteoWizard, an efficient, open-source software package designed for analyzing proteomics tandem mass spectrometry data. Specifically, a new wavelet-based peak-picker (CantWaiT) and a precursor charge determination algorithm (Turbocharger) have been implemented. These additions into ProteoWizard provide universal tools that are independent of vendor platform for tandem mass spectrometry analyses and have particular utility for intralaboratory studies requiring the advantages of different platforms convergent on a particular workflow or for interlaboratory investigations spanning multiple platforms. We compared results from these tools to those obtained using vendor and commercial software, finding that in all cases our algorithms resulted in a comparable number of identified peptides for simple and complex samples measured on Waters, Agilent, and AB SCIEX quadrupole time-of-flight and Thermo Q-Exactive mass spectrometers. The mass accuracy of matched precursor ions also compared favorably with vendor and commercial tools. Additionally, typical analysis runtimes (∼1–100 ms per MS/MS spectrum) were short enough to enable the practical use of these high-quality signal processing tools for large clinical and research data sets. PMID:25411686

  2. The evolution of spillover effects between oil and stock markets across multi-scales using a wavelet-based GARCH-BEKK model

    Science.gov (United States)

    Liu, Xueyong; An, Haizhong; Huang, Shupei; Wen, Shaobo

    2017-01-01

    Aiming to investigate the evolution of mean and volatility spillovers between oil and stock markets in the time and frequency dimensions, we employed WTI crude oil prices, the S&P 500 (USA) index and the MICEX index (Russia) for the period Jan. 2003-Dec. 2014 as sample data. We first applied a wavelet-based GARCH-BEKK method to examine the spillover features in the frequency dimension. To consider the evolution of spillover effects in the time dimension at multiple scales, we then divided the full sample period into three sub-periods: a pre-crisis period, a crisis period, and a post-crisis period. The results indicate that spillover effects vary across wavelet scales in terms of strength and direction. By analyzing the time-varying linkage, we found different evolution features of the spillover effects between the oil-US stock market pair and the oil-Russia stock market pair. The spillover relationship between oil and the US stock market is shifting to the short term, while the spillover relationship between oil and the Russian stock market is extending to all time scales. This implies that the linkage between oil and the US stock market is weakening in the long term, whereas the linkage between oil and the Russian stock market is tightening across all time scales. This may explain why the US stock index and the Russian stock index showed opposite trends as the oil price fell in the post-crisis period.
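
    Estimating a full GARCH-BEKK model is beyond a short snippet, so the sketch below only shows the multi-scale decomposition step with PyWavelets and reports a per-scale correlation between two return series as a crude stand-in for the scale-wise spillover analysis; the wavelet choice and decomposition level are assumptions.

```python
import numpy as np
import pywt

def per_scale_correlation(x, y, wavelet="db4", level=4):
    """Per-scale correlation of two return series after an SWT decomposition.

    Only a simplified proxy: the paper fits a GARCH-BEKK model at each scale,
    which is not reproduced here. Series length must exceed 2**level."""
    n = 2 ** level * (min(len(x), len(y)) // 2 ** level)   # SWT length constraint
    cx = pywt.swt(np.asarray(x[:n], dtype=float), wavelet, level=level)
    cy = pywt.swt(np.asarray(y[:n], dtype=float), wavelet, level=level)
    return [float(np.corrcoef(dx, dy)[0, 1])               # detail bands per level
            for (_, dx), (_, dy) in zip(cx, cy)]
```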

  3. Structure and texture of uranium ores in exogenous deposits

    International Nuclear Information System (INIS)

    Danchev, V.I.

    1977-01-01

    Structure and texture features of exogenous uranium ore deposits have been systematized for the first time, taking into account the staging of the ore-formation process connected with the formation and alteration of the host sedimentary rocks, starting with the sedimentogenesis stage and early sediment diagenesis and continuing through their subsequent transformation in katagenesis and metamorphism. The main features of uranium geochemistry in the exogenous process are considered. A genetic classification of exogenous uranium deposits in rocks of the sedimentary cover is suggested, made with respect to the conjugation and varying ore-forming productivity of the lithogenesis stages. The main combinations of rock texture and structure properties characteristic of deposits of the genetic classes and groups of the above classification are described. Eight of the most frequently occurring textures (lamellar, concretion, oolitic, coagulate, crack, mixed and impregnated) and their types are described and illustrated. Materials of Soviet and foreign authors have been used to compile the atlas

  4. Perceptual asymmetry in texture perception.

    OpenAIRE

    Williams, D; Julesz, B

    1992-01-01

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for...

  5. Artificial intelligence systems based on texture descriptors for vaccine development.

    Science.gov (United States)

    Nanni, Loris; Brahnam, Sheryl; Lumini, Alessandra

    2011-02-01

    The aim of this work is to analyze and compare several feature extraction methods for peptide classification that are based on the calculation of texture descriptors starting from a matrix representation of the peptide. This texture-based representation of the peptide is then used to train a support vector machine classifier. In our experiments, the best results are obtained using local binary pattern variants and the discrete cosine transform with selected coefficients. These results are better than those previously reported that employed texture descriptors for peptide representation. In addition, we perform experiments that combine our descriptors with standard approaches based on the amino acid sequence. The experimental section reports several tests performed on a vaccine dataset for the prediction of peptides that bind human leukocyte antigens and on a human immunodeficiency virus (HIV-1) dataset. Experimental results confirm the usefulness of our novel descriptors. The MATLAB implementation of our approaches is available at http://bias.csr.unibo.it/nanni/TexturePeptide.zip.

  6. Parallel-Sequential Texture Analysis

    NARCIS (Netherlands)

    van den Broek, Egon; Singh, Sameer; Singh, Maneesha; van Rikxoort, Eva M.; Apte, Chid; Perner, Petra

    2005-01-01

    Color induced texture analysis is explored, using two texture analysis techniques: the co-occurrence matrix and the color correlogram as well as color histograms. Several quantization schemes for six color spaces and the human-based 11 color quantization scheme have been applied. The VisTex texture

  7. Improvement of Secret Image Invisibility in Circulation Image with Dyadic Wavelet Based Data Hiding with Run-Length Coded Secret Images of Which Location of Codes are Determined with Random Number

    OpenAIRE

    Kohei Arai; Yuji Yamada

    2011-01-01

    An attempt is made to improve the invisibility of secret images in circulation images using dyadic wavelet based data hiding with run-length coded secret images, in which the locations of the codes are determined by random numbers. Through experiments, it is confirmed that the secret images are almost invisible in the circulation images. The robustness of the proposed data hiding method against data compression of the circulation images is also discussed. Data hiding performance in terms of invisibility of secret images...

  8. Depth image enhancement using perceptual texture priors

    Science.gov (United States)

    Bang, Duhyeon; Shim, Hyunjung

    2015-03-01

    A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, depth cameras exhibit severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perception-based depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Using the classification results, we match the noisy input normals to high-quality normals in the database and substitute them. As a result, our method provides a high-quality depth image that preserves surface details. We expect this work to be effective for enhancing the details of depth images from 3D sensors and for providing a high-fidelity virtual reality experience.

  9. Local Wavelet-Based Filtering of Electromyographic Signals to Eliminate the Electrocardiographic-Induced Artifacts in Patients with Spinal Cord Injury.

    Science.gov (United States)

    Nitzken, Matthew; Bajaj, Nihit; Aslan, Sevda; Gimel'farb, Georgy; El-Baz, Ayman; Ovechkin, Alexander

    2013-07-18

    Surface Electromyography (EMG) is a standard method used in clinical practice and research to assess motor function in order to help with the diagnosis of neuromuscular pathology in human and animal models. EMG recorded from trunk muscles involved in the activity of breathing can be used as a direct measure of respiratory motor function in patients with spinal cord injury (SCI) or other disorders associated with motor control deficits. However, EMG potentials recorded from these muscles are often contaminated with heart-induced electrocardiographic (ECG) signals. Elimination of these artifacts plays a critical role in the precise measurement of respiratory muscle electrical activity. This study was undertaken to find an optimal approach to eliminate the ECG artifacts from EMG recordings. Conventional global filtering can be used to decrease the ECG-induced artifact. However, this method can alter the EMG signal and change physiologically relevant information. We hypothesize that, unlike global filtering, localized removal of ECG artifacts will not change the original EMG signals. We develop an approach to remove the ECG artifacts without altering the amplitude and frequency components of the EMG signal by using an externally recorded ECG signal as a mask to locate areas of the ECG spikes within the EMG data. The segments containing ECG spikes were decomposed into 128 sub-wavelets by a custom-scaled Morlet Wavelet Transform. The ECG-related sub-wavelets at the ECG spike locations were removed and a de-noised EMG signal was reconstructed. Validity of the proposed method was proven using mathematically simulated synthetic signals and EMG obtained from SCI patients. We compare the root-mean-square error and the relative change in variance between this method and global, notch, and adaptive filters. The results show that the localized wavelet-based filtering has the benefit of not introducing error into the native EMG signal and accurately removes ECG artifacts from EMG signals.
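
    A simplified sketch of the localized idea (the original method decomposes each spike window into 128 sub-wavelets with a custom-scaled Morlet transform; the stand-in below uses a discrete wavelet decomposition of each window instead): QRS spikes are located from the externally recorded ECG, and only the low-frequency sub-bands inside each spike window are zeroed before reconstruction, leaving the rest of the EMG untouched. The wavelet, level, window length and peak-detection thresholds are assumptions.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def remove_ecg_artifacts(emg, ecg, fs, window_s=0.12):
    """Localized wavelet-domain suppression of ECG spikes in an EMG trace."""
    emg = np.asarray(emg, dtype=float).copy()
    half = int(window_s * fs / 2)
    # The externally recorded ECG is used only as a mask locating the QRS spikes.
    peaks, _ = find_peaks(np.abs(ecg), height=3 * np.std(ecg),
                          distance=int(0.3 * fs))
    for p in peaks:
        lo, hi = max(0, p - half), min(len(emg), p + half)
        seg = emg[lo:hi]
        level = min(3, pywt.dwt_max_level(len(seg), pywt.Wavelet("sym4").dec_len))
        if level < 1:
            continue
        coeffs = pywt.wavedec(seg, "sym4", level=level)
        coeffs[0] = np.zeros_like(coeffs[0])       # approximation band (QRS energy)
        if level >= 2:
            coeffs[1] = np.zeros_like(coeffs[1])   # coarsest detail band
        emg[lo:hi] = pywt.waverec(coeffs, "sym4")[: len(seg)]
    return emg
```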

  10. Wavelet-based resolution recovery using an anatomical prior provides quantitative recovery for human population phantom PET [11C]raclopride data

    International Nuclear Information System (INIS)

    Shidahara, M; Tamura, H; Tsoumpas, C; McGinnity, C J; Hammers, A; Turkheimer, F E; Kato, T; Watabe, H

    2012-01-01

    The objective of this study was to evaluate a resolution recovery (RR) method using a variety of simulated human brain [11C]raclopride positron emission tomography (PET) images. Simulated datasets of 15 numerical human phantoms were processed by a wavelet-based RR method using an anatomical prior. The anatomical prior was in the form of a hybrid segmented atlas, which combined an atlas for anatomical labelling and a PET image for functional labelling of each anatomical structure. We applied RR to both 60 min static and dynamic PET images. Recovery was quantified in 84 regions, comparing the typical ‘true’ value for the simulation, as obtained in normal subjects, simulated and RR PET images. The radioactivity concentration in the white matter, striatum and other cortical regions was successfully recovered for the 60 min static image of all 15 human phantoms; the dependence of the solution on accurate anatomical information was demonstrated by the difficulty of the technique to retrieve the subthalamic nuclei due to mismatch between the two atlases used for data simulation and recovery. Structural and functional synergy for resolution recovery (SFS-RR) improved quantification in the caudate and putamen, the main regions of interest, from −30.1% and −26.2% to −17.6% and −15.1%, respectively, for the 60 min static image and from −51.4% and −38.3% to −27.6% and −20.3% for the binding potential (BP_ND) image, respectively. The proposed methodology proved effective in the RR of small structures from brain [11C]raclopride PET images. The improvement is consistent across the anatomical variability of a simulated population as long as accurate anatomical segmentations are provided. (paper)

  11. Breast density pattern characterization by histogram features and texture descriptors

    Directory of Open Access Journals (Sweden)

    Pedro Cunha Carneiro

    2017-04-01

    Full Text Available Abstract Introduction Breast cancer is the leading cause of death for women in Brazil as well as in most countries in the world. Due to the relation between breast density and the risk of breast cancer, in medical practice the breast density classification is merely visual and dependent on professional experience, making this task very subjective. The purpose of this paper is to investigate image features based on histograms and Haralick texture descriptors so as to separate mammographic images into categories of breast density using an Artificial Neural Network. Methods We used 307 mammographic images from the INbreast digital database, extracting histogram features and texture descriptors from all mammograms and selecting them with the K-means technique. These groups of selected features were then used as inputs to an Artificial Neural Network to classify the images automatically into the four categories reported by radiologists. Results An average accuracy of 92.9% was obtained in a few tests using only some of the Haralick texture descriptors. The accuracy rate increased to 98.95% when the texture descriptors were combined with some histogram-based features. Conclusion Texture descriptors have proven to be better than gray-level features at differentiating breast densities in mammographic images. This work shows that it is possible to automate feature selection and classification with acceptable error rates, since the extracted features are suited to the characteristics of the images involved in the problem.
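
    A compact sketch of the feature-plus-ANN pipeline using scikit-image and scikit-learn: grey-level histogram features and a few Haralick (GLCM) descriptors per mammogram feed a small multilayer perceptron that predicts the radiologist-reported density category. The K-means feature selection step is omitted, and the feature set, network size and parameters are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

HARALICK_PROPS = ("contrast", "correlation", "energy", "homogeneity")

def density_features(mammogram_uint8, n_bins=16):
    """Grey-level histogram plus a few Haralick descriptors for one image."""
    hist, _ = np.histogram(mammogram_uint8, bins=n_bins, range=(0, 256),
                           density=True)
    glcm = graycomatrix(mammogram_uint8, distances=[1, 3],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    haralick = [graycoprops(glcm, p).mean() for p in HARALICK_PROPS]
    return np.concatenate([hist, haralick])

def train_density_classifier(images, birads_labels):
    X = np.array([density_features(im) for im in images])
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(X, birads_labels)
```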

  12. [Visual Texture Agnosia in Humans].

    Science.gov (United States)

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  13. Texturized dairy proteins.

    Science.gov (United States)

    Onwulata, Charles I; Phillips, John G; Tunick, Michael H; Qi, Phoebi X; Cooke, Peter H

    2010-03-01

    Dairy proteins are amenable to structural modifications induced by high temperature, shear, and moisture; in particular, whey proteins can change conformation to new unfolded states. The change in protein state is a basis for creating new foods. The dairy products nonfat dried milk (NDM), whey protein concentrate (WPC), and whey protein isolate (WPI) were modified using a twin-screw extruder at melt temperatures of 50, 75, and 100 degrees C, and moistures ranging from 20 to 70 wt%. Viscoelasticity and solubility measurements showed that extrusion temperature was the more significant factor; the peak force of extruded dairy protein ranged from rigid (2500 N) to soft (2.7 N). Extruding at or above 75 degrees C resulted in increased peak force for WPC (138 to 2500 N) and WPI (2.7 to 147.1 N). NDM was marginally texturized; the presence of lactose interfered with its texturization. WPI products extruded at 50 degrees C were not texturized; their solubility values ranged from 71.8% to 92.6%. A wide possibility exists for creating new foods with texturized dairy proteins due to the extensive range of states achievable. Dairy proteins can be used to boost the protein content in puffed snacks made from corn meal, but unmodified, they bind water and form doughy pastes with starch. To minimize the water binding property of dairy proteins, WPI, WPC, or NDM were modified by extrusion processing. Extrusion temperature conditions were adjusted to 50, 75, or 100 degrees C, sufficient to change the structure of the dairy proteins, but not destroy them. Extrusion modified the structures of these dairy proteins for ease of use in starchy foods to boost nutrient levels. Dairy proteins can be used to boost the protein content in puffed snacks made from corn meal, but unmodified, they bind water and form doughy pastes with starch. To minimize the water binding property of dairy proteins, whey protein isolate, whey protein concentrate, or nonfat dried milk were modified by extrusion processing. Extrusion

  14. Filter and Filter Bank Design for Image Texture Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Randen, Trygve

    1997-12-31

    The relevance of this thesis to energy and the environment lies in its application to remote sensing, for instance sea-floor mapping and seismic pattern recognition. The focus is on the design of two-dimensional filters for feature extraction, segmentation, and classification of digital images with textural content. The features are extracted by filtering with a linear filter and estimating the local energy in the filter response. The thesis gives a broad review of most previous approaches to texture feature extraction and continues by proposing some new techniques. 143 refs., 59 figs., 7 tabs.
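
    The feature model reviewed in the thesis (linear filtering followed by local energy estimation) can be sketched with a small Gabor filter bank: each pixel receives one feature per filter, namely the locally averaged squared filter response. The frequencies, orientation count and smoothing window below are assumptions, not the thesis' filter designs.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter
from skimage.filters import gabor_kernel

def texture_energy_features(image, frequencies=(0.1, 0.2, 0.3),
                            n_orient=4, smooth=15):
    """Linear filtering followed by local energy: one feature per filter,
    i.e. the locally averaged squared response at every pixel."""
    image = np.asarray(image, dtype=float)
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            kern = np.real(gabor_kernel(f, theta=k * np.pi / n_orient))
            resp = convolve(image, kern, mode="reflect")
            feats.append(uniform_filter(resp ** 2, size=smooth))
    return np.stack(feats, axis=-1)            # H x W x n_filters feature image
```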

  15. Hierarchical Multiple Markov Chain Model for Unsupervised Texture Segmentation

    Czech Academy of Sciences Publication Activity Database

    Scarpa, G.; Gaetano, R.; Haindl, Michal; Zerubia, J.

    2009-01-01

    Roč. 18, č. 8 (2009), s. 1830-1843 ISSN 1057-7149 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : Classification * texture analysis * segmentation * hierarchical image models * Markov process Subject RIV: BD - Theory of Information Impact factor: 2.848, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/haindl-hierarchical multiple markov chain model for unsupervised texture segmentation.pdf

  16. UAV Remote Sensing for Urban Vegetation Mapping Using Random Forest and Texture Analysis

    Directory of Open Access Journals (Sweden)

    Quanlong Feng

    2015-01-01

    Full Text Available Unmanned aerial vehicle (UAV) remote sensing has great potential for vegetation mapping in complex urban landscapes due to the ultra-high resolution imagery acquired at low altitudes. Because of payload capacity restrictions, off-the-shelf digital cameras are widely used on medium- and small-sized UAVs. The limitation of the low spectral resolution of digital cameras for vegetation mapping can be reduced by incorporating texture features and robust classifiers. Random Forest has been widely used in satellite remote sensing applications, but its usage in UAV image classification has not been well documented. The objectives of this paper were to propose a hybrid method using Random Forest and texture analysis to accurately differentiate land covers of urban vegetated areas, and to analyze how classification accuracy changes with texture window size. Six least-correlated second-order texture measures were calculated at nine different window sizes and added to the original Red-Green-Blue (RGB) images as ancillary data. A Random Forest classifier consisting of 200 decision trees was used for classification in the spectral-textural feature space. Results indicated the following: (1) Random Forest outperformed the traditional Maximum Likelihood classifier and showed performance similar to object-based image analysis in urban vegetation classification; (2) the inclusion of texture features improved classification accuracy significantly; (3) classification accuracy followed an inverted-U relationship with texture window size. The results demonstrate that the UAV provides an efficient and ideal platform for urban vegetation mapping. The hybrid method proposed in this paper shows good performance in differentiating urban vegetation classes, and the drawbacks of off-the-shelf digital cameras can be reduced by adopting Random Forest and texture analysis at the same time.
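
    A condensed sketch of the hybrid scheme, assuming the RGB image has already been cut into labelled windows: six second-order (GLCM) texture measures plus the mean RGB values form the feature vector for each window, and a 200-tree Random Forest performs the classification. Window handling, GLCM distances/angles and the other parameters are illustrative, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

TEXTURE_PROPS = ("contrast", "dissimilarity", "homogeneity",
                 "energy", "correlation", "ASM")

def window_features(rgb_window):
    """Mean RGB values plus six second-order texture measures per window."""
    gray = rgb_window.mean(axis=2).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p).mean() for p in TEXTURE_PROPS]
    return np.concatenate([rgb_window.reshape(-1, 3).mean(axis=0), texture])

def train_vegetation_classifier(windows, labels, n_trees=200):
    X = np.array([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X, labels)
```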

  17. Quantitative characterization of texture used for identification of eggs of bovine parasitic nematodes

    DEFF Research Database (Denmark)

    Sommer, C.

    1998-01-01

    This study investigates the use of texture, i.e. the grey level variation in digital images, as a basis for identification of strongylid eggs. Texture features were defined by algorithms applied to digital images of eggs from the bovine parasitic nematodes, Ostertagia ostertagi, Cooperia oncophora...... criterion based on these ten texture features, an average of 91.2% of eggs from the three species were correctly classified. All O. radiatum eggs were correctly classified, 11.8% of O. ostertagi and C. oncophora were reciprocally misclassified, and 2.9% of O. ostertagi were identified as O. radiatum. When...... the ten texture features were used singly an average of 51.2 to 37.9% of the species could be classified correctly. When texture was used together with the shape and size features, a higher percentage of eggs were correctly classified compared with the classification based on either texture, or shape...

  18. Gravitational effects of global textures

    International Nuclear Information System (INIS)

    Noetzold, D.

    1990-03-01

    A solution for the dynamics of global textures is obtained. Their gravitational field during the collapse and the subsequent evolution is found to be given solely by a space-time dependent ''deficit solid angle.'' The frequency shift of photons traversing this gravitational field is calculated. The space-time dependent texture metric locally contracts the volume of three-space and thereby induces overdensities in homogeneous matter distributions. There are no gravitational forces unless matter has a nonzero angular momentum with respect to the texture origin which would be the case for moving textures

  19. Complex Wavelet Based Modulation Analysis

    DEFF Research Database (Denmark)

    Luneau, Jean-Marc; Lebrun, Jérôme; Jensen, Søren Holdt

    2008-01-01

    Low-frequency modulation of sound carries important information for speech and music. The modulation spectrum is commonly obtained by spectral analysis of only the temporal envelopes of the sub-bands obtained from a time-frequency analysis. Processing in this domain usually creates undesirable distortions... polynomial trends. Moreover, an analytic Hilbert-like transform is possible with complex wavelets implemented as an orthogonal filter bank. By working in an alternative transform domain coined as “Modulation Subbands”, this transform shows very promising denoising capabilities and suggests new approaches for joint...

  20. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classifications (e.g. Murray 1952), through game studies classifications (e.g. Elverdam & Aarseth 2007), to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. The abstraction of classification criteria, especially, can be used in future endeavors... into the topic of game classifications...

  1. LOCAL TEXTURE DESCRIPTION FRAMEWORK FOR TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    R. Reena Rose

    2014-02-01

    Full Text Available Texture descriptors have an important role in recognizing face images. However, almost all existing local texture descriptors use the nearest neighbors to encode a texture pattern around a pixel. In face images, most pixels have characteristics similar to those of their nearest neighbors, because skin covers a large area of the face and the skin tone of neighboring regions is the same. Therefore, this paper presents a general framework, called the Local Texture Description Framework, that uses only eight pixels lying at a certain distance from the reference pixel, sampled along either a circular or an elliptical neighborhood. Local texture description can be built on the foundation of any existing local texture descriptor. In this paper, the performance of the proposed framework is verified with three existing local texture descriptors, Local Binary Pattern (LBP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs), for five issues, viz. facial expression, partial occlusion, illumination variation, pose variation and general recognition. Five benchmark databases, JAFFE, Essex, Indian faces, AT&T and Georgia Tech, are used for the experiments. Experimental results demonstrate that, even with a smaller number of patterns, the proposed framework achieves higher recognition accuracy than the base models.
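
    A sketch of the framework's sampling idea with LBP as the base descriptor: the eight sampled pixels lie on a circle or ellipse of radii (rx, ry) around the reference pixel instead of being its immediate neighbours, and the resulting 8-bit codes are pooled into a histogram. The radii and the thresholding rule are assumptions rather than the authors' exact formulation.

```python
import numpy as np

def distant_neighbor_code(image, y, x, rx=4, ry=4):
    """LBP-style 8-bit code from eight pixels on an ellipse of radii (rx, ry)
    around (y, x) instead of the immediate neighbours."""
    angles = np.arange(8) * np.pi / 4
    ys = np.round(y + ry * np.sin(angles)).astype(int)
    xs = np.round(x + rx * np.cos(angles)).astype(int)
    bits = (image[ys, xs] >= image[y, x]).astype(int)
    return int((bits * 2 ** np.arange(8)).sum())

def descriptor_histogram(image, rx=4, ry=4):
    """Normalized histogram of codes over the valid interior of a face image."""
    h, w = image.shape
    codes = [distant_neighbor_code(image, y, x, rx, ry)
             for y in range(ry, h - ry) for x in range(rx, w - rx)]
    return np.bincount(codes, minlength=256) / max(len(codes), 1)
```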

  2. Dark texture in artworks

    Science.gov (United States)

    Parraman, Carinna

    2012-01-01

    This presentation highlights issues relating to the digital capture and printing of 2D and 3D artefacts and the accurate colour reproduction of 3D objects. There is a range of opportunities and technologies for the scanning and printing of two-dimensional and three-dimensional artefacts [1]. The Polynomial Texture Mapping (PTM) technique, used to create a Reflectance Transformation Image (RTI) [2-4], has been applied successfully to the conservation of artworks and heritage objects, as these methods are non-invasive and non-destructive to fragile artefacts. This approach captures the surface detail of two-dimensional artworks using a multidimensional approach: a hemispherical dome comprising 64 lamps is used to build up an entire surface topography. The benefit of this approach is a highly detailed visualization of the surface of materials and objects.

  3. Emotional effects of dynamic textures

    NARCIS (Netherlands)

    Toet, A.; Henselmans, M.; Lucassen, M.P.; Gevers, T.

    2011-01-01

    This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (eg, music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely

  4. Quantitative Characterisation of Surface Texture

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture...

  5. Human versus artificial texture perception

    NARCIS (Netherlands)

    Petiet, Peter J.; van Erp, J.; Drullman, R.; van den Broek, Egon; Beintema, J.; van Wijngaarden, S.

    2006-01-01

    The performances of current texture analysis algorithms are still poor, especially when applied to a large, diffuse texture domain. Most of these purely computationally driven techniques are created to function within a highly restricted domain. When applied as computer vision techniques, frequently

  6. Perceptual asymmetry in texture perception.

    Science.gov (United States)

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.

  7. Texture recognition of medical images with the ICM method

    International Nuclear Information System (INIS)

    Kinser, Jason M.; Wang Guisong

    2004-01-01

    The Integrated Cortical Model (ICM) is based upon several models of the mammalian visual cortex and produces pulse images over several iterations. These pulse images tend to isolate segments, edges, and textures that are inherent in the input image. To create a texture recognition engine, the pulse spectra of individual pixels are collected and used to develop a recognition library. Recognition is performed by comparing the pulse spectra of unclassified regions of images with those of the known regions. Because signatures are smaller than images, signature-based computation is quite efficient, and parasites can be recognized quickly. The precision of this method depends on how representative the signatures are and on the classification. Our experimental results support the theoretical findings and show prospects for practical applications of the ICM-based method. The advantage of the ICM method is the use of signatures to represent objects: ICM can extract the internal features of objects and represent them with signatures. Signature classification is critical for the precision of recognition

  8. Micro-Texture Synthesis by Phase Randomization

    Directory of Open Access Journals (Sweden)

    Bruno Galerne

    2011-09-01

    Full Text Available This contribution is concerned with texture synthesis by example, the process of generating new texture images from a given sample. The Random Phase Noise algorithm presented here synthesizes a texture from an original image by simply randomizing its Fourier phase. It is able to reproduce textures which are characterized by their Fourier modulus, namely the random phase textures (or micro-textures).
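
    The core of the phase-randomization idea fits in a few lines of NumPy for a grayscale image: keep the Fourier modulus of the sample and replace its phase with the (Hermitian-symmetric) phase of a white noise image, so the synthesized texture stays real-valued. This is only a sketch of the basic mechanism, assuming a single-channel image; the published algorithm's handling of colour and boundary effects is not reproduced.

```python
import numpy as np

def random_phase_noise(texture, seed=0):
    """Grayscale phase randomization: keep the Fourier modulus of the sample,
    replace its phase by the phase of a white noise image (Hermitian
    symmetric, so the result stays real)."""
    texture = np.asarray(texture, dtype=float)
    rng = np.random.default_rng(seed)
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(texture.shape)))
    modulus = np.abs(np.fft.fft2(texture - texture.mean()))
    synthesized = np.fft.ifft2(modulus * np.exp(1j * noise_phase)).real
    return synthesized + texture.mean()
```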

  9. Characterisation of radiotherapy planning volumes using textural analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nailon, William H.; Redpath, Anthony T.; McLaren, Duncan B. (Dept. of Oncology Physics, Edinburgh Cancer Centre, Western General Hospital, Edinburgh (United Kingdom))

    2008-08-15

    Computer-based artificial intelligence methods for classification and delineation of the gross tumour volume (GTV) on computerised tomography (CT) and magnetic resonance (MR) images do not, at present, provide the accuracy required for radiotherapy applications. This paper describes an image analysis method for classification of distinct regions within the GTV, and other clinically relevant regions, on CT images acquired on eight bladder cancer patients at the radiotherapy planning stage and thereafter at regular intervals during treatment. Statistical and fractal textural features (N=27) were calculated on the bladder, rectum and a control region identified on axial, coronal and sagittal CT images. Unsupervised classification results demonstrate that with a reduced feature set (N=3) the approach offers significant classification accuracy on axial, coronal and sagittal CT image planes and has the potential to be developed further for radiotherapy applications, particularly towards an automatic outlining approach

  10. Characterisation of radiotherapy planning volumes using textural analysis

    International Nuclear Information System (INIS)

    Nailon, William H.; Redpath, Anthony T.; McLaren, Duncan B.

    2008-01-01

    Computer-based artificial intelligence methods for classification and delineation of the gross tumour volume (GTV) on computerised tomography (CT) and magnetic resonance (MR) images do not, at present, provide the accuracy required for radiotherapy applications. This paper describes an image analysis method for classification of distinct regions within the GTV, and other clinically relevant regions, on CT images acquired on eight bladder cancer patients at the radiotherapy planning stage and thereafter at regular intervals during treatment. Statistical and fractal textural features (N=27) were calculated on the bladder, rectum and a control region identified on axial, coronal and sagittal CT images. Unsupervised classification results demonstrate that with a reduced feature set (N=3) the approach offers significant classification accuracy on axial, coronal and sagittal CT image planes and has the potential to be developed further for radiotherapy applications, particularly towards an automatic outlining approach

  11. Breast density pattern characterization by histogram features and texture descriptors

    OpenAIRE

    Carneiro,Pedro Cunha; Franco,Marcelo Lemos Nunes; Thomaz,Ricardo de Lima; Patrocinio,Ana Claudia

    2017-01-01

    Abstract Introduction Breast cancer is the first leading cause of death for women in Brazil as well as in most countries in the world. Due to the relation between the breast density and the risk of breast cancer, in medical practice, the breast density classification is merely visual and dependent on professional experience, making this task very subjective. The purpose of this paper is to investigate image features based on histograms and Haralick texture descriptors so as to separate mammo...

  12. Textures in Utopia Planitia

    Science.gov (United States)

    2002-01-01

    [figure removed for brevity, see original site] Bizarre textures cover the surface of eastern Utopia Planitia where there is a high probability that ground ice has played a role in the formation of this unusual landscape.Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  13. Fourier and Wavelet Based Characterisation of the Ionospheric Response to the Solar Eclipse of August, the 11th, 1999, Measured Through 1-minute Vertical Ionospheric Sounding

    Science.gov (United States)

    Sauli, P.; Abry, P.; Boska, J.

    2004-05-01

    The aim of the present work is to study the ionospheric response induced by the solar eclipse of August 11th, 1999. We provide Fourier and wavelet based characterisations of the propagation of the acoustic-gravity waves induced by the solar eclipse. The analysed data consist of profiles of electron concentration, derived from 1-minute vertical-incidence ionospheric sounding measurements performed at the Pruhonice observatory (Czech Republic, 49.9N, 14.5E). The 1-minute sampling rate was chosen specifically to enable us to see modes below the acoustic cut-off period. The August period was characterized by a solar flux F10.7 = 128, steady solar wind, quiet magnetospheric conditions, and low geomagnetic activity (the Dst index varied from -10 nT to -20 nT; the Σ Kp index reached a value of 12+). The eclipse was notable for its exceptionally uniform solar disk. These conditions, and the fact that the culmination of the solar eclipse over central Europe occurred at local noon, mean that the observed ionospheric response is mainly that of the solar eclipse. We provide a full characterization of the propagation of the waves in terms of times of occurrence, group and phase velocities, propagation direction, characteristic period, and lifetime of the particular wave structure. Note, however, that the vertical ionospheric sounding technique gives access only to the vertical component of each characteristic. Parameters are estimated by combining Fourier and wavelet analysis. Our conclusions confirm earlier theoretical and experimental findings regarding the generation and propagation of gravity waves, reported in [Altadill et al., 2001; Farges et al., 2001; Muller-Wodarg et al., 1998], and provide complementary characterisation using wavelet approaches. We also report new evidence for the generation and propagation of acoustic waves induced by the solar eclipse through the ionospheric F region. To our knowledge, this is the first time that acoustic waves can be demonstrated based on ionospheric

  14. Symmetry realization of texture zeros

    International Nuclear Information System (INIS)

    Grimus, W.; Joshipura, A.S.; Lavoura, L.; Tanimoto, M.

    2004-01-01

    We show that it is possible to enforce texture zeros in arbitrary entries of the fermion mass matrices by means of Abelian symmetries; in this way, many popular mass-matrix textures find a symmetry justification. We propose two alternative methods which allow one to place zeros in any number of elements of the mass matrices that one wants. They are applicable simultaneously in the quark and lepton sectors. They are also applicable in grand unified theories. The number of scalar fields required by our methods may be large; still, in many interesting cases this number can be reduced considerably. The larger the desired number of texture zeros is, the simpler are the models which reproduce the texture. (orig.)

  15. CRUMB TEXTURE OF SPELT BREAD

    Directory of Open Access Journals (Sweden)

    Joanna Korczyk-Szabó

    2013-12-01

    Full Text Available Abstract Bread quality is considerably dependent on the texture characteristics of the bread crumb. Crumb texture is an important quality indicator, as consumers prefer different bread tastes. Texture analysis is primarily concerned with the evaluation of mechanical characteristics, where a material is subjected to a controlled force and a deformation curve of its response is generated. It is an objective physical examination of baked products and gives direct information on product quality, in contrast to dough rheology tests, which inform on the baking suitability of the flour as a raw material. This is why texture analysis is one of the most helpful analytical methods in product development. In the framework of our research during the years 2008-2009, selected indicators of bread texture quality were analyzed for five Triticum spelta L. varieties – Altgold, Oberkulmer Rotkorn, Ostro, Rubiota and Franckenkorn – grown in an ecological system. The bread texture quality was evaluated on a TA.XT Plus texture analyzer (Stable Micro Systems, Surrey, UK), following the AACC (74-09) standard method, and expressed as crumb firmness (N), stiffness (N.mm-1) and relative elasticity (%). Our research proved that all selected indicators were significantly influenced by the year of growing and by variety. The softest bread was achieved with Rubiota, whereas bread crumb samples from Franckenkorn and Altgold were the most firm and stiff. Correlation analysis showed a strong negative correlation between relative elasticity and bread crumb firmness as well as bread stiffness (-0.81++, -0.78++). Spelt grain can be a good source of bread flour, but this depends closely on the choice of spelt variety. The spelt wheat bread crumb texture needs further investigation, as it can be a reliable quality parameter.

  16. Food Texture Preferences in Infants Versus Toddlers.

    Science.gov (United States)

    Lundy, Brenda; And Others

    1998-01-01

    Compared food texture preferences during infancy and toddlerhood. Found that infants displayed more negative expressions and head and body movements in response to complex textures than to simple textures. Toddlers displayed more positive head and body movements and more eagerness in response to complex than to simple textures. Experience with…

  17. Parameter set for computer-assisted texture analysis of fetal brain.

    Science.gov (United States)

    Gentillon, Hugues; Stefańczyk, Ludomir; Strzelecki, Michał; Respondek-Liberska, Maria

    2016-11-25

    Magnetic resonance data were collected from a diverse population of gravid women to objectively compare the quality of 1.5-tesla (1.5 T) versus 3-T magnetic resonance imaging of the developing human brain. MaZda and B11 computational-visual cognition tools were used to process the 2D images. We proposed a wavelet-based parameter and two novel histogram-based parameters for Fisher texture analysis in three-dimensional space. Wavenhl, focus index, and dispersion index revealed better quality for 3 T. Though both 1.5 and 3 T images were 16-bit DICOM encoded, nearly 16 and 12 usable bits were measured in the 3 and 1.5 T images, respectively. The four-bit padding observed in 1.5 T k-space encoding mimics noise by adding illusory details that are not really part of the image. In contrast, the zero-bit padding in 3 T provides space for storing more detail and increases the likelihood of capturing noise but also edges, which are crucial for the differentiation of closely related anatomical structures. Both encoding modes are possible with both units, but the higher 3 T resolution is the main difference; it contributes to a higher perceived and available dynamic range. Apart from a surprisingly larger Fisher coefficient, no significant difference was observed when testing was conducted with down-converted 8-bit BMP images.

  18. Texture Classification in Lung CT Using Local Binary Patterns

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs; Shaker, Saher B.; de Bruijne, Marleen

    2008-01-01

    the k nearest neighbor classifier with histogram similarity as distance measure. The proposed method is evaluated on a set of 168 regions of interest comprising normal tissue and different emphysema patterns, and compared to a filter bank based on Gaussian derivatives. The joint LBP and intensity...

  19. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection.

  20. BREAD CRUMBS TEXTURE OF SPELT

    Directory of Open Access Journals (Sweden)

    Joanna Korczyk – Szabó

    2014-02-01

    Full Text Available Texture analysis is an objective physical examination of baked products and gives direct information on product quality, as opposed to dough rheology tests, which inform on the baking suitability of the flour as a raw material. Evaluation of the mechanical properties of bread crumb is important not only for quality assurance in bakeries, but also for assessing the effects of changes in dough ingredients and processing conditions, and for describing the changes in bread crumb during storage. Crumb cellular structure is an important quality criterion used in commercial baking and research laboratories to judge bread quality alongside taste, crumb colour and crumb physical texture. In the framework of our research, selected indicators of bread crumb texture quality were analyzed during the years 2010 – 2011 for three Triticum spelta L. cultivars – Altgold, Rubiota and Ostro – grown in an ecological system. The bread texture quality was evaluated on a TA.XT Plus texture analyzer (Stable Micro Systems, Surrey, UK), following the AACC (74-09) standard, and expressed as crumb firmness (N), stiffness (N.mm-1) and relative elasticity (%). Our research showed that all selected indicators were significantly influenced by the year of growing and by variety. The softest bread was achieved with Rubiota, whereas bread crumb samples from Altgold and Ostro were the firmest and stiffest. Correlation analysis showed strong negative correlations between relative elasticity and both bread crumb firmness and bread stiffness (-0.65++, -0.66++). The texture of spelt wheat bread crumb needs further investigation, as it can be a reliable quality parameter.

  1. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    International Nuclear Information System (INIS)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.; Kurzynski, M.

    2010-01-01

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with significance level of 0.01 were used. A comparison study between the performances of a SDM based classification system and other two systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) were conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (x1.00-x1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64x64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM based system produced comparable results to the LBP system. For the detection of knee OA, the SDM based system achieved 78.8% classification accuracy and outperformed the WND

  2. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    Energy Technology Data Exchange (ETDEWEB)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.; Kurzynski, M. [Tribology Laboratory, School of Mechanical Engineering, University of Western Australia, Crawley, Western Australia 6009 (Australia); Chair of Computer Systems and Networks, Faculty of Electronics, Wroclaw University of Technology, Wybrzeze Wyspianskiego 27, 50-370 Wroclaw (Poland)

    2010-05-15

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with significance level of 0.01 were used. A comparison study between the performances of a SDM based classification system and other two systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) were conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (x1.00-x1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64x64 pixels). However, the measure is sensitive to changes in projection angle (>5 deg.), image anisotropy (>30 deg.), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM based system produced comparable results to the LBP system. For the detection of knee OA, the SDM based system
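
    The core of the dissimilarity measure described above is a sum of earth mover's distances over two 1D signatures. A minimal sketch of that combination step is given below, assuming the roughness and orientation signatures have already been extracted as histograms; the signature extraction and scale-space steps are not reproduced.

```python
# Minimal sketch: SDM-style dissimilarity as a sum of 1D earth mover's distances.
# The roughness/orientation signatures are assumed to be precomputed histograms.
import numpy as np
from scipy.stats import wasserstein_distance

def emd_1d(hist_a, hist_b):
    """Earth mover's distance between two histograms defined on the same bins."""
    support = np.arange(len(hist_a))
    return wasserstein_distance(support, support, u_weights=hist_a, v_weights=hist_b)

def signature_dissimilarity(rough_a, orient_a, rough_b, orient_b):
    """Dissimilarity of two textures: EMD(roughness) + EMD(orientation)."""
    return emd_1d(rough_a, rough_b) + emd_1d(orient_a, orient_b)
```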

  3. Laser surface texturing of tool steel: textured surfaces quality evaluation

    Science.gov (United States)

    Šugár, Peter; Šugárová, Jana; Frnčík, Martin

    2016-05-01

    In this experimental investigation the laser surface texturing of tool steel of type 90MnCrV8 has been conducted. The 5-axis highly dynamic laser precision machining centre Lasertec 80 Shape equipped with the nano-second pulsed ytterbium fibre laser and CNC system Siemens 840 D was used. The planar and spherical surfaces first prepared by turning have been textured. The regular array of spherical and ellipsoidal dimples with a different dimensions and different surface density has been created. Laser surface texturing has been realized under different combinations of process parameters: pulse frequency, pulse energy and laser beam scanning speed. The morphological characterization of ablated surfaces has been performed using scanning electron microscopy (SEM) technique. The results show limited possibility of ns pulse fibre laser application to generate different surface structures for tribological modification of metallic materials. These structures were obtained by varying the processing conditions between surface ablation, to surface remelting. In all cases the areas of molten material and re-cast layers were observed on the bottom and walls of the dimples. Beside the influence of laser beam parameters on the machined surface quality during laser machining of regular hemispherical and elipsoidal dimple texture on parabolic and hemispherical surfaces has been studied.

  4. Computer-aided diagnosis with textural features for breast lesions in sonograms.

    Science.gov (United States)

    Chen, Dar-Ren; Huang, Yu-Len; Lin, Sheng-Hsiung

    2011-04-01

    Computer-aided diagnosis (CAD) systems provide a beneficial second reference and enhance diagnostic accuracy. This paper aimed to develop and evaluate a CAD system with texture analysis for the classification of breast tumors in ultrasound images. The ultrasound (US) dataset evaluated in this study was composed of 1020 sonograms of region of interest (ROI) subimages from 255 patients. Two-view sonograms (longitudinal and transverse views) and four different rectangular regions were utilized to analyze each tumor. Six practical textural features from the US images were used to classify breast tumors as benign or malignant. However, the textural features form a high-dimensional vector, which is unfavorable for differentiating breast tumors in practice. Principal component analysis (PCA) was used to reduce the dimension of the textural feature vector, and an image retrieval technique was then applied to differentiate between benign and malignant tumors. In the experiments, all cases were sampled with k-fold cross-validation (k=10) to evaluate the performance with the receiver operating characteristic (ROC) curve. The area (A(Z)) under the ROC curve for the proposed CAD system with the specific textural features was 0.925±0.019. The classification ability for breast tumors with textural information is satisfactory. This system differentiates benign from malignant breast tumors with good results and is therefore clinically useful for providing a second opinion. Copyright © 2010 Elsevier Ltd. All rights reserved.
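
    The evaluation pipeline described in this record (dimensionality reduction of a textural feature vector followed by k-fold cross-validated ROC analysis) can be sketched as below. The classifier, number of principal components and scaling step are illustrative assumptions; the retrieval-based differentiation used by the authors is not reproduced.

```python
# Minimal sketch: PCA-reduced texture features scored with 10-fold cross-validated AUC.
# The SVM classifier and the number of components are illustrative assumptions.
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cv_auc(X, y, n_components=5, folds=10):
    """Mean ROC AUC of a PCA + SVM pipeline under k-fold cross-validation."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          SVC(kernel="rbf"))
    return cross_val_score(model, X, y, cv=folds, scoring="roc_auc").mean()
```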

  5. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

    Full Text Available In this paper, we propose a fully automatic classification of VHR satellite images. Unlike the most widespread approaches - supervised classification, which requires prior definition of class signatures, and unsupervised classification, which must be followed by an interpretation of its results - the proposed method requires no human intervention except for the setting of the initial parameters. The presented approach is based on both spectral and textural analysis of the image and consists of 3 steps. The first step, the analysis of spectral data, relies on NDVI values. Its purpose is to distinguish between basic classes, such as water, vegetation and non-vegetation, which all differ significantly in spectral terms and can thus be easily extracted by spectral analysis. The second step relies on granulometric maps. These are the product of local granulometric analysis of an image and present information on the texture of each pixel neighbourhood, depending on the texture grain. The purpose of texture analysis is to distinguish between classes that are spectrally similar but of different texture, e.g. bare soil from a built-up area, or low vegetation from a wooded area. Due to the use of granulometric analysis, based on mathematical morphology opening and closing, the results are resistant to the border effect (qualifying borders of objects in an image as spaces of high texture), which affects other methods of texture analysis such as GLCM statistics or fractal analysis. Therefore, the effectiveness of the analysis is relatively high. Several indices based on values of different granulometric maps have been developed to simplify the extraction of classes of different texture. The third and final step of the process relies on a vegetation index, based on near infrared and blue bands. Its purpose is to correct partially misclassified pixels. All the indices used in the classification model developed relate to reflectance values, so the
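
    Two ingredients named in this record, an NDVI layer for the spectral step and a granulometric profile for the textural step, are sketched below. Band names, structuring-element sizes and the global (rather than per-pixel) granulometry are simplifying assumptions for illustration only.

```python
# Minimal sketch: NDVI from red/NIR bands, and a granulometry by grayscale openings
# as a crude texture-grain profile. Sizes and band inputs are illustrative assumptions.
import numpy as np
from scipy import ndimage

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index from NIR and red reflectance arrays."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

def granulometry(gray, sizes=(3, 5, 7, 9)):
    """Image volume removed by successive grayscale openings of increasing size."""
    gray = gray.astype(np.float64)
    return np.array([gray.sum() - ndimage.grey_opening(gray, size=(s, s)).sum()
                     for s in sizes])
```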

  6. Height and Tilt Geometric Texture

    DEFF Research Database (Denmark)

    Andersen, Vedrana; Desbrun, Mathieu; Bærentzen, Jakob Andreas

    2009-01-01

    compromise between functionality and simplicity: it can efficiently handle and process geometric texture too complex to be represented as a height field, without having recourse to full blown mesh editing algorithms. The height-and-tilt representation proposed here is fully intrinsic to the mesh, making...

  7. EUROMET SUPPLEMENTARY COMPARISON - SURFACE TEXTURE

    DEFF Research Database (Denmark)

    Koenders, L.; Andreasen, Jan Lasson; De Chiffre, Leonardo

    At the length meeting in Prague in Oct. 1999 a new comparison was suggested on surface texture. The last comparison on this field was finished in 1989. In the meantime the instrumentation, the standards and the written standards have been improved including some software filters. The pilot labora...

  8. Color Textons for Texture Recognition

    NARCIS (Netherlands)

    Burghouts, G.J.; Geusebroek, J.M.

    2006-01-01

    Texton models have proven to be very discriminative for the recognition of grayvalue images taken from rough textures. To further improve the discriminative power of the distinctive texton models of Varma and Zisserman (VZ model) (IJCV, vol. 62(1), pp. 61-81, 2005), we propose two schemes to exploit

  9. Sensory memory and food texture

    NARCIS (Netherlands)

    Mojet, J.; Köster, E.P.

    2005-01-01

    Memory for texture plays an important role in food expectations. After fasting overnight, subjects (41 women, 35 men, age 19-60 years) received a breakfast including breakfast drink, biscuits and yoghurt. Subsequently, they rated their hunger feelings every hour, and returned for a taste experiment

  10. Sensory memory and food texture

    NARCIS (Netherlands)

    Mojet, J.; Koster, E.P.

    2005-01-01

    Memory for texture plays an important role in food expectations. After fasting overnight, subjects (41 women, 35 men, age 19-60 years) received a breakfast including breakfast drink, biscuits and yoghurt. Subsequently, they rated their hunger feelings every hour, and returned for a taste experiment

  11. Colloidal aspects of texture perception

    NARCIS (Netherlands)

    Vliet, T. van; Aken, G.A. van; Jongh, H.H.J. de; Hamer, R.J.

    2009-01-01

    Recently, considerable attention has been given to the understanding of texture attributes that cannot directly be related to physical properties of food, such as creamy, crumbly and watery. The perception of these attributes is strongly related to the way the food is processed during food intake,

  12. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery.

    Science.gov (United States)

    Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott

    2011-01-01

    This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms - maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC), were explored. The results indicated that a combination of vegetation indices as extra bands into Landsat TM multispectral bands did not improve the overall classification performance, but the combination of textural images was valuable for improving vegetation classification accuracy. In particular, the combination of both vegetation indices and textural images into TM multispectral bands improved overall classification accuracy by 5.6% and kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN have poor classification performance in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that use of textural images or use of OBC are especially valuable for improving the vegetation classes such as upland and liana forest classes having complex stand structures and having relatively large patch sizes.

  13. Feature-aware natural texture synthesis

    KAUST Repository

    Wu, Fuzhang

    2014-12-04

    This article presents a framework for natural texture synthesis and processing. The framework is motivated by the observation that, for examples captured in natural scenes, texture synthesis faces a critical problem: synthesis quality can be affected adversely if the texture elements in an example display spatially varying patterns, such as perspective distortion, the composition of different sub-textures, and variations in the global color pattern as a result of complex illumination. This issue is common in natural textures and is a fundamental challenge for previously developed methods. We therefore address it from a feature point of view and propose a feature-aware approach to synthesize natural textures. The synthesis process is guided by a feature map that represents the visual characteristics of the input texture. Moreover, we present a novel adaptive initialization algorithm that can effectively avoid repetition and verbatim-copying artifacts. Our approach improves texture synthesis in many images that cannot be handled effectively with traditional techniques.

  14. Feature-aware natural texture synthesis

    KAUST Repository

    Wu, Fuzhang; Dong, Weiming; Kong, Yan; Mei, Xing; Yan, Dongming; Zhang, Xiaopeng; Paul, Jean Claude

    2014-01-01

    This article presents a framework for natural texture synthesis and processing. This framework is motivated by the observation that given examples captured in natural scene, texture synthesis addresses a critical problem, namely, that synthesis

  15. Appearance and characterization of fruit image textures for quality sorting using wavelet transform and genetic algorithms.

    Science.gov (United States)

    Khoje, Suchitra

    2018-02-01

    Images of four quality grades of mangoes and guavas are evaluated for color and textural features to characterize and classify them, and to model fruit appearance grading. The paper discusses three approaches to identifying the most discriminating texture features of both fruits. In the first approach, the fruits' color and texture features are selected using the Mahalanobis distance. A total of 20 color features and 40 textural features are extracted for analysis. Using Mahalanobis distance and feature intercorrelation analyses, one best color feature (mean of a* [L*a*b* color space]) and two textural features (energy a*, contrast of H*) are selected for guava, while two best color features (R std, H std) and one textural feature (energy b*) are selected for mangoes, with the highest discriminative power. The second approach studies some common wavelet families in search of the best classification model for fruit quality grading. Wavelet features extracted from five basic mother wavelets (db, bior, rbior, Coif, Sym) are explored to characterize fruit texture appearance. In the third approach, a genetic algorithm is used to select, from a large universe of features, only those color and wavelet texture features that are relevant to the separation of the classes. The study shows that image color and texture features identified using a genetic algorithm can distinguish between the various quality classes of fruits. The experimental results showed that a support vector machine classifier is selected for guava grading with an accuracy of 97.61% and an artificial neural network is selected for mango grading with an accuracy of 95.65%. The proposed method is a nondestructive fruit quality assessment method. The experimental results have shown that the genetic algorithm along with wavelet texture features has the potential to discriminate fruit quality. Finally, it can be concluded that the discussed method is an accurate, reliable, and objective tool to determine fruit
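
    Wavelet texture features of the kind explored in this record can be obtained as sub-band energies of a 2D discrete wavelet transform, as in the sketch below. The mother wavelet, decomposition level and single-channel input are illustrative assumptions; the genetic feature selection and SVM/ANN grading stages are not reproduced.

```python
# Minimal sketch: wavelet texture descriptor as per-sub-band energies of a 2D DWT.
# Mother wavelet and decomposition level are illustrative assumptions.
import numpy as np
import pywt

def wavelet_energy_features(channel, wavelet="db4", level=2):
    """Energy of each detail sub-band (H, V, D per level) as a flat feature vector."""
    coeffs = pywt.wavedec2(np.asarray(channel, dtype=np.float64), wavelet, level=level)
    feats = []
    for detail_level in coeffs[1:]:        # skip the approximation coefficients
        for band in detail_level:          # horizontal, vertical, diagonal details
            feats.append(float(np.sum(band ** 2)))
    return np.array(feats)
```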

  16. Military personnel recognition system using texture, colour, and SURF features

    Science.gov (United States)

    Irhebhude, Martins E.; Edirisinghe, Eran A.

    2014-06-01

    This paper presents an automatic, machine vision based, military personnel identification and classification system. Classification is done using a Support Vector Machine (SVM) on sets of Army, Air Force and Navy camouflage uniform personnel datasets. In the proposed system, the arm of service of personnel is recognised from the camouflage of a person's uniform, the type of cap and the type of badge/logo. The detailed analyses include: camouflage cap and plain cap differentiation using gray level co-occurrence matrix (GLCM) texture features; classification of Army, Air Force and Navy camouflaged uniforms using GLCM texture and colour histogram bin features; and plain cap badge classification into Army, Air Force and Navy using Speeded Up Robust Features (SURF). The proposed method recognised the camouflage personnel arm of service on sets of data retrieved from Google images and selected military websites. Correlation-based Feature Selection (CFS) was used to improve recognition and reduce dimensionality, thereby speeding up the classification process. With this method, success rates recorded during the analysis include 93.8% for the camouflage appearance category, and 100%, 90% and 100% for the plain cap and camouflage cap categories of the Army, Air Force and Navy, respectively. Accurate recognition was recorded using SURF for the plain cap badge category. Substantial analysis has been carried out and the results prove that the proposed method can correctly classify military personnel into the various arms of service. We show that the proposed method can be integrated into a face recognition system, which would recognise personnel in addition to determining the arm of service to which they belong. Such a system can be used to enhance the security of a military base or facility.

  17. A Review Paper on Camouflage Texture Evaluation

    OpenAIRE

    Amol Patil; Girraj Prasad Rathode

    2013-01-01

    The traditional method of evaluating camouflage texture effectiveness is subjective evaluation, which is tedious and inconvenient for guiding texture design. In this project, a systematic and rational method for guiding and evaluating camouflage texture design is proposed. Camouflage consists of things such as leaves, branches, or brown and green paint, which are used to make it difficult for an enemy to see military forces and equipment. A camouflage texture evaluation method based on WSSIM...

  18. Impact of vacuum cooking process on the texture degradation of selected apple cultivars.

    Science.gov (United States)

    Bourles, E; Mehinagic, E; Courthaudon, J L; Jourjon, F

    2009-01-01

    Thermal treatments are known to affect the textural properties of fruits and vegetables. This study was conducted to evaluate the influence of vacuum cooking process on the mechanical properties of various apple cultivars. A total of 10 apple cultivars were industrially processed by vacuum pasteurization at 95 degrees C for 25 min. The raw material was characterized by penetrometry, uniaxial double compression, soluble solid content, and titrable acidity. Textural properties of processed apples were analyzed by uniaxial double compression. As expected, for all cultivars, fruit resistance was lower after processing than before. Results showed that texture degradation due to vacuum pasteurization was different from one cultivar to another. Indeed, some cultivars, initially considered as the most resistant ones, such as Braeburn, were less suitable for processing, and became softer than others after thermal treatment. Consequently, it is worth noting that the texture classification of the investigated apple cultivars was changed by the vacuum-cooking process.

  19. Texture analysis using Renyi's generalized entropies

    NARCIS (Netherlands)

    Grigorescu, SE; Petkov, N

    2003-01-01

    We propose a texture analysis method based on Renyi's generalized entropies. The method aims at identifying texels in regular textures by searching for the smallest window through which the minimum number of different visual patterns is observed when moving the window over a given texture. The

  20. Evaluation of color representation for texture analysis

    NARCIS (Netherlands)

    Verbrugge, R.; van den Broek, Egon; van Rikxoort, E.M.; Taatgen, N.; Schomaker, L.

    2004-01-01

    Texture in image material has been a topic of research for more than 50 years; color, however, was mostly ignored. This study compares 70 different configurations for texture analysis, using four features. For the configurations we used: (i) a gray value texture descriptor: the co-occurrence matrix and a

  1. Modeling Human Aesthetic Perception of Visual Textures

    NARCIS (Netherlands)

    Thumfart, Stefan; Jacobs, Richard H. A. H.; Lughofer, Edwin; Eitzinger, Christian; Cornelissen, Frans W.; Groissboeck, Werner; Richter, Roland

    Texture is extensively used in areas such as product design and architecture to convey specific aesthetic information. Using the results of a psychological experiment, we model the relationship between computational texture features and aesthetic properties of visual textures. Contrary to previous

  2. Image sequence analysis using spatio-temporal texture

    International Nuclear Information System (INIS)

    Sengupta, S.K.; Clark, G.A.; Barnes, F.L.; Schaich, P.C.

    1994-01-01

    The authors have developed and coded an algorithm for motion pattern classification based on spatio-temporal texture. The algorithm has been implemented and tested for the detection of wakes in simulated data with a relatively low signal-to-noise ratio (0.7 dB). Using a 'hold one out' method, a detection probability of 100% with a 0% false alarm rate has been achieved on the limited number of samples (47 in each category) tested. The actual detection can be displayed in the form of a movie that can effectively show the submarine tracks based on the detected wake locations

  3. Evaluation of the effect of initial texture on the development of deformation texture

    DEFF Research Database (Denmark)

    Leffers, Torben; Juul Jensen, Dorte

    1986-01-01

    The authors describe a computer procedure which allows them to introduce experimental initial textures as starting conditions for texture simulation (instead of a theoretical random texture). They apply the procedure on two batches of copper with weak initial textures and on fine-grained and coarse-grained aluminium with moderately strong initial textures. In copper the initial texture turns out to be too weak to have any significant effect. In aluminium the initial texture has a very significant effect on the simulated textures - similar to the effect it has on the experimental textures. However, there are differences between the simulated and the experimental aluminium textures that can only be explained as a grain-size effect. Possible future applications of the procedure are discussed...

  4. Wavelet-based feature extraction applied to small-angle x-ray scattering patterns from breast tissue: a tool for differentiating between tissue types

    International Nuclear Information System (INIS)

    Falzon, G; Pearson, S; Murison, R; Hall, C; Siu, K; Evans, A; Rogers, K; Lewis, R

    2006-01-01

    This paper reports on the application of wavelet decomposition to small-angle x-ray scattering (SAXS) patterns from human breast tissue produced by a synchrotron source. The pixel intensities of SAXS patterns of normal, benign and malignant tissue types were transformed into wavelet coefficients. Statistical analysis found significant differences between the wavelet coefficients describing the patterns produced by different tissue types. These differences were then correlated with position in the image and have been linked to the supra-molecular structural changes that occur in breast tissue in the presence of disease. Specifically, results indicate that there are significant differences between healthy and diseased tissues in the wavelet coefficients that describe the peaks produced by the axial d-spacing of collagen. These differences suggest that a useful classification tool could be based upon the spectral information within the axial peaks
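
    The analysis pattern described here, decomposing each scattering pattern with a wavelet transform and testing whether coefficient statistics differ between tissue groups, can be sketched as follows. The wavelet, level and summary statistic are illustrative assumptions, and the position-resolved analysis of the axial collagen peaks is not reproduced.

```python
# Minimal sketch: wavelet-coefficient summary per SAXS pattern, compared across
# two tissue groups with a two-sample t-test. Wavelet and level are assumptions.
import numpy as np
import pywt
from scipy import stats

def detail_energy(pattern, wavelet="sym4", level=3):
    """Total energy of all detail coefficients of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(np.asarray(pattern, dtype=np.float64), wavelet, level=level)
    return sum(float(np.sum(band ** 2)) for lvl in coeffs[1:] for band in lvl)

def compare_tissue_groups(patterns_a, patterns_b):
    """Welch t-test on the wavelet summary statistic between two groups of patterns."""
    a = [detail_energy(p) for p in patterns_a]
    b = [detail_energy(p) for p in patterns_b]
    return stats.ttest_ind(a, b, equal_var=False)
```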

  5. A Classification Table for Achondrites

    Science.gov (United States)

    Chennaoui-Aoudjehane, H.; Larouci, N.; Jambon, A.; Mittlefehldt, D. W.

    2014-01-01

    Classifying chondrites is relatively easy and the criteria are well documented. Classification is based on mineral compositions, textural characteristics and, more recently, magnetic susceptibility. It can be more difficult to classify achondrites, especially those that are very similar to terrestrial igneous rocks, because their mineralogical, textural and compositional properties can be quite variable. Achondrites contain essentially olivine, pyroxenes, plagioclases, oxides, sulphides and accessory minerals. Their origin is attributed to differentiated parent bodies: large asteroids (Vesta); planets (Mars); a satellite (the Moon); and numerous asteroids of unknown size. In most cases, achondrites are not eyewitnessed falls and some do not have fusion crust. Because of the mineralogical and magnetic susceptibility similarity of some achondrites with terrestrial igneous rocks, it can be difficult for classifiers to confirm their extra-terrestrial origin. We, as classifiers of meteorites, are confronted with this problem with every suspected achondrite we receive for identification. We are developing a "grid" of classification to provide an easier approach to initial classification. We use simple but reproducible criteria based on mineralogical, petrological and geochemical studies. We previously presented the classes acapulcoites, lodranites, winonaites and Martian meteorites (shergottites, chassignites, nakhlites). In this work we complete the classification table by including the groups angrites, aubrites, brachinites, ureilites, HED (howardites, eucrites, and diogenites), lunar meteorites, pallasites and mesosiderites. Iron meteorites are not presented in this abstract.

  6. Watermarking textures in video games

    Science.gov (United States)

    Liu, Huajian; Berchtold, Waldemar; Schäfer, Marcel; Lieb, Patrick; Steinebach, Martin

    2014-02-01

    Digital watermarking is a promising solution to video game piracy. In this paper, based on the analysis of the special challenges and requirements of watermarking textures in video games, a novel watermarking scheme for DDS textures in video games is proposed. To meet the performance requirements of video game applications, the proposed algorithm embeds the watermark message directly in the compressed stream in DDS files and can be straightforwardly applied in the watermark container technique for real-time embedding. Furthermore, the embedding approach achieves a high watermark payload to handle collusion-secure fingerprinting codes of extreme length. Hence, the scheme is resistant to collusion attacks, which is indispensable in video game applications. The proposed scheme is evaluated in terms of transparency, robustness, security and performance. In particular, in addition to classical objective evaluation, the visual quality and playing experience of watermarked games are assessed subjectively during game playing.

  7. Texture of lipid bilayer domains

    DEFF Research Database (Denmark)

    Jensen, Uffe Bernchou; Brewer, Jonathan R.; Midtiby, Henrik Skov

    2009-01-01

    We investigate the texture of gel (g) domains in binary lipid membranes composed of the phospholipids DPPC and DOPC. Lateral organization of lipid bilayer membranes is a topic of fundamental and biological importance. Whereas questions related to the size and composition of fluid membrane domains are well studied, the possibility of texture in gel domains has so far not been examined. When using polarized light for two-photon excitation of the fluorescent lipid probe Laurdan, the emission intensity is highly sensitive to the angle between the polarization and the tilt orientation of lipid acyl chains. By imaging the intensity variations as a function of the polarization angle, we map the lateral variations of the lipid tilt within domains. Results reveal that gel domains are composed of subdomains with different lipid tilt directions. We have applied a Fourier decomposition method...

  8. Texture studies of Zr-2

    International Nuclear Information System (INIS)

    Madden, P.K.

    1976-09-01

    Basal pole figures of seven Zr-2 pressure tubes have been determined. The pole figures give texture factors but these do not correlate with the irradiation growth strains observed in SGHWR. Precautions taken in specimen preparation and in pole figure determination are described in detail. It is shown that any point on a pole figure may be unambiguously related to a defined set of coordinate axes in the pressure tube. (author)

  9. Height perception influenced by texture gradient.

    Science.gov (United States)

    Tozawa, Junko

    2012-01-01

    Three experiments were carried out to examine whether a texture gradient influences perception of relative object height. Previous research implicated texture cues in judgments of object width, but similar influences have not been demonstrated for relative height. In this study, I evaluate a hypothesis that the projective ratio of the number of texture elements covered by the objects combined with the ratio of the retinal object heights determines percepts of relative object height. Density of texture background was varied: four density conditions ranged from no-texture to very dense texture. In experiments 1 and 2, participants judged the height of comparison bar compared to the standard bar positioned on no-texture or textured backgrounds. Results showed relative height judgments differed with texture manipulations, consistent with predictions from a hypothesised combination of the number of texture elements with retinal height (experiment 1), or partially consistent with this hypothesis (experiment 2). In experiment 2, variations in the position of a comparison object showed that comparisons located far from the horizon were judged more poorly than in other positions. In experiment 3 I examined distance perception; relative distance judgments were found to be also affected by textured backgrounds. Results are discussed in terms of Gibson's relational theory and distance calibration theory.

  10. Classifying brain metastases by their primary site of origin using a radiomics approach based on texture analysis: a feasibility study.

    Science.gov (United States)

    Ortiz-Ramón, Rafael; Larroza, Andrés; Ruiz-España, Silvia; Arana, Estanislao; Moratal, David

    2018-05-14

    To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. Sixty-seven untreated brain metastases (BM) were found in 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different number of gray-levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray-levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray-levels). Classification of breast cancer and melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). Volumetric MRI texture features can be useful to differentiate brain metastases from different primary cancers after quantizing the images with the proper number of gray-levels. • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identifying the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
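
    Two steps named in this record, re-quantizing each lesion to a fixed number of gray levels before texture extraction and scoring a random-forest classifier with cross-validated AUC, are sketched below. The fold count, tree count and binary (one-versus-one) setting are illustrative assumptions; the nested feature-selection loop is omitted.

```python
# Minimal sketch: gray-level quantization of an ROI and a cross-validated
# random-forest AUC on precomputed texture features. Parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def quantize(roi, n_levels=32):
    """Map ROI intensities onto integer gray levels 0..n_levels-1."""
    roi = np.asarray(roi, dtype=np.float64)
    scaled = (roi - roi.min()) / (roi.max() - roi.min() + 1e-12)
    return np.clip((scaled * n_levels).astype(int), 0, n_levels - 1)

def rf_cv_auc(X, y, n_trees=200, folds=5):
    """Mean cross-validated ROC AUC of a random forest for a binary comparison."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    return cross_val_score(clf, X, y, cv=folds, scoring="roc_auc").mean()
```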

  11. Indoor Place Categorization based on Adaptive Partitioning of Texture Histograms

    Directory of Open Access Journals (Sweden)

    Sven Eberhardt

    2014-12-01

    Full Text Available How can we localize ourselves within a building solely using visual information, i.e., when no data about prior location or movement are available? Here, we define place categorization as a set of three distinct image classification tasks for view matching, location matching, and room matching. We present a novel image descriptor built on texture statistics and dynamic image partitioning that can be used to solve all tested place classification tasks. We benchmark the descriptor by assessing performance of regularization on our own dataset as well as the established Indoor Environment under Changing conditionS dataset, which varies lighting condition, location, and viewing angle on photos taken within an office building. We show improvement on both the datasets against a number of baseline algorithms.

  12. T2-weighted MRI-derived textural features reflect prostate cancer aggressiveness: preliminary results.

    Science.gov (United States)

    Nketiah, Gabriel; Elschot, Mattijs; Kim, Eugene; Teruel, Jose R; Scheenen, Tom W; Bathen, Tone F; Selnæs, Kirsten M

    2017-07-01

    To evaluate the diagnostic relevance of T2-weighted (T2W) MRI-derived textural features relative to quantitative physiological parameters derived from diffusion-weighted (DW) and dynamic contrast-enhanced (DCE) MRI in Gleason score (GS) 3+4 and 4+3 prostate cancers. 3T multiparametric-MRI was performed on 23 prostate cancer patients prior to prostatectomy. Textural features [angular second moment (ASM), contrast, correlation, entropy], apparent diffusion coefficient (ADC), and DCE pharmacokinetic parameters (Ktrans and Ve) were calculated from index tumours delineated on the T2W, DW, and DCE images, respectively. The association between the textural features and prostatectomy GS and the MRI-derived parameters, and the utility of the parameters in differentiating between GS 3+4 and 4+3 prostate cancers were assessed statistically. ASM and entropy correlated significantly (p < 0.05) with both GS and median ADC. Contrast correlated moderately with median ADC. The textural features correlated insignificantly with Ktrans and Ve. GS 4+3 cancers had significantly lower ASM and higher entropy than 3+4 cancers, but insignificant differences in median ADC, Ktrans, and Ve. The combined texture-MRI parameters yielded higher classification accuracy (91%) than the individual parameter sets. T2W MRI-derived textural features could serve as potential diagnostic markers, sensitive to the pathological differences in prostate cancers. • T2W MRI-derived textural features correlate significantly with Gleason score and ADC. • T2W MRI-derived textural features differentiate Gleason score 3+4 from 4+3 cancers. • T2W image textural features could augment tumour characterization.
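
    The four GLCM-based textural features listed in this record (ASM, contrast, correlation, entropy) can be computed from a delineated tumour ROI roughly as below, using scikit-image for the co-occurrence matrix. Offsets, angles and the gray-level count are illustrative assumptions, not the study's exact settings.

```python
# Minimal sketch: GLCM-derived ASM, contrast, correlation and entropy for an 8-bit ROI.
# Distances, angles and the 256-level setting are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8):
    """ASM, contrast, correlation and entropy averaged over four GLCM directions."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(roi_uint8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    p = glcm.astype(np.float64)
    logp = np.log2(p, where=p > 0, out=np.zeros_like(p))
    entropy = float(-(p * logp).sum() / (p.shape[2] * p.shape[3]))
    return {
        "ASM": float(graycoprops(glcm, "ASM").mean()),
        "contrast": float(graycoprops(glcm, "contrast").mean()),
        "correlation": float(graycoprops(glcm, "correlation").mean()),
        "entropy": entropy,
    }
```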

  13. Texture analysis in quantitative MR imaging. Tissue characterisation of normal brain and intracranial tumours at 1.5 T

    DEFF Research Database (Denmark)

    Kjaer, L; Ring, P; Thomsen, C

    1995-01-01

    The diagnostic potential of texture analysis in quantitative tissue characterisation by MR imaging at 1.5 T was evaluated in the brain of 6 healthy volunteers and in 88 patients with intracranial tumours. Texture images were computed from calculated T1 and T2 parameter images by applying groups of ... to be successful in some cases of clinical importance. However, no discrimination between benign and malignant tumour growth was possible. Much texture information seems to be contained in MR images, which may prove useful for classification and image segmentation.

  14. Impact of Soil Texture on Soil Ciliate Communities

    Science.gov (United States)

    Chau, J. F.; Brown, S.; Habtom, E.; Brinson, F.; Epps, M.; Scott, R.

    2014-12-01

    Soil water content and connectivity strongly influence microbial activities in soil, controlling access to nutrients and electron acceptors, and mediating interactions between microbes within and between trophic levels. These interactions occur at or below the pore scale, and are influenced by soil texture and structure, which determine the microscale architecture of soil pores. Soil protozoa are relatively understudied, especially given the strong control they exert on bacterial communities through predation. Here, ciliate communities in soils of contrasting textures were investigated. Two ciliate-specific primer sets targeting the 18S rRNA gene were used to amplify DNA extracted from eight soil samples collected from Sumter National Forest in western South Carolina. Primer sets 121F-384F-1147R (semi-nested) and 315F-959R were used to amplify soil ciliate DNA via polymerase chain reaction (PCR), and the resulting PCR products were analyzed by gel electrophoresis to obtain quantity and band size. Approximately two hundred ciliate 18S rRNA sequences were obtained from each of two contrasting soils. Sequences were aligned against the NCBI GenBank database for identification, and the taxonomic classification of best-matched sequences was determined. The ultimate goal of the work is to quantify changes in the ciliate community under short-timescale changes in hydrologic conditions for varying soil textures, elucidating dynamic responses to desiccation stress in major soil ciliate taxa.

  15. Feasibility of opportunistic osteoporosis screening in routine contrast-enhanced multi detector computed tomography (MDCT) using texture analysis.

    Science.gov (United States)

    Mookiah, M R K; Rohrmeier, A; Dieckmeyer, M; Mei, K; Kopp, F K; Noel, P B; Kirschke, J S; Baum, T; Subburaj, K

    2018-04-01

    This study investigated the feasibility of opportunistic osteoporosis screening in routine contrast-enhanced MDCT exams using texture analysis. The results showed an acceptable reproducibility of texture features, and these features could discriminate between a healthy and an osteoporotic-fracture cohort with an accuracy of 83%. The aim of this study is to investigate the feasibility of opportunistic osteoporosis screening in routine contrast-enhanced MDCT exams using texture analysis. We performed texture analysis at the spine in routine MDCT exams and investigated the effect of intravenous contrast medium (IVCM) (n = 7), slice thickness (n = 7), the long-term reproducibility (n = 9), and the ability to differentiate the healthy/osteoporotic fracture cohort (n = 9 age- and gender-matched pairs). Eight texture features were extracted using the gray level co-occurrence matrix (GLCM). The independent sample t test was used to rank the features of the healthy/fracture cohort and classification was performed using a support vector machine (SVM). The results revealed significant correlations between texture parameters derived from MDCT scans with and without IVCM (r up to 0.91), slice thickness of 1 mm versus 2 and 3 mm (r up to 0.96), and scan-rescan (r up to 0.59). The performance of the SVM classifier was evaluated using 10-fold cross-validation and revealed an average classification accuracy of 83%. Opportunistic osteoporosis screening at the spine using specific texture parameters (energy, entropy, and homogeneity) and SVM can be performed in routine contrast-enhanced MDCT exams.

  16. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convolutional ... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy.

  17. The brass-type texture and its deviation from the copper-type texture

    DEFF Research Database (Denmark)

    Leffers, Torben; Ray, R.K.

    2009-01-01

    Our basic aim with the present review is to address the classical problem of the “fcc rolling texture transition” – the fact that fcc materials may, depending on material parameters and rolling conditions, develop two different types of rolling textures, the copper-type texture and the brass-type texture. ... the subject and sketch our approach for dealing with it. We then recapitulate the decisive progress made during the nineteen sixties in the empirical description of the fcc rolling texture transition and in lining up a number of possible explanations. Then follows a section about experimental investigations of the brass-type texture after the nineteen sixties, covering texture measurements and microstructural investigations. The main observations are: (1) The brass-type texture deviates from the copper-type texture from an early stage of texture development. (2) Deformation twinning has a decisive effect

  18. Aesthetic Perception of Visual Textures: A Holistic Exploration using Texture Analysis, Psychological Experiment and Perception Modeling

    Directory of Open Access Journals (Sweden)

    Jianli eLiu

    2015-11-01

    Full Text Available Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect from evaluators their aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen, which are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the 8 pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and nonlinear regression models for aesthetic prediction by taking dimensionality-reduced texture features and aesthetic properties of visual textures as independent and dependent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
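
    The final modelling step described in this record, regressing aesthetic ratings on dimensionality-reduced texture features, is sketched below with an ordinary linear model. The PCA step, component count and the use of a single rating scale are illustrative assumptions; the hierarchical layer structure and nonlinear variants are not reproduced.

```python
# Minimal sketch: linear regression from reduced texture features to one
# semantic-differential aesthetic rating. Component count is an assumption.
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_aesthetic_model(texture_features, ratings, n_components=10):
    """Fit PCA + linear regression mapping texture features to a rating scale."""
    model = make_pipeline(PCA(n_components=n_components), LinearRegression())
    model.fit(texture_features, ratings)
    return model
```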

  19. Application of CT texture analysis in predicting histopathological characteristics of gastric cancers

    International Nuclear Information System (INIS)

    Liu, Shunli; Liu, Song; Ji, Changfeng; Zheng, Huanhuan; Pan, Xia; Zhang, Yujuan; He, Jian; Zhou, Zhengyang; Guan, Wenxian; Chen, Ling; Guan, Yue; Li, Weifeng; Ge, Yun

    2017-01-01

    To explore the application of computed tomography (CT) texture analysis in predicting histopathological features of gastric cancers. Preoperative contrast-enhanced CT images and postoperative histopathological features of 107 patients (82 men, 25 women) with gastric cancers were retrospectively reviewed. CT texture analysis generated: (1) mean attenuation, (2) standard deviation, (3) max frequency, (4) mode, (5) minimum attenuation, (6) maximum attenuation, (7) the fifth, 10th, 25th, 50th, 75th and 90th percentiles, and (8) entropy. Correlations between CT texture parameters and histopathological features were analysed. Mean attenuation, maximum attenuation, all percentiles and mode derived from portal venous CT images correlated significantly with differentiation degree and Lauren classification of gastric cancers (r ranging from -0.231 to -0.324 and from 0.228 to 0.321, respectively). Standard deviation and entropy derived from arterial CT images also correlated significantly with Lauren classification of gastric cancers (r = -0.265, -0.222, respectively). In arterial phase analysis, standard deviation and entropy were significantly lower in gastric cancers with vascular invasion than in those without; however, minimum attenuation was significantly higher in gastric cancers with vascular invasion than in those without. CT texture analysis held great potential in predicting differentiation degree, Lauren classification and vascular invasion status of gastric cancers. (orig.)
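
    The first-order CT texture parameters enumerated in this record can be computed directly from the attenuation values inside a tumour ROI, roughly as in the sketch below. The histogram bin count used for the mode, max frequency and entropy is an illustrative assumption.

```python
# Minimal sketch: first-order (histogram-based) CT texture parameters of an ROI.
# The 64-bin histogram is an illustrative assumption.
import numpy as np

def first_order_features(hu_values, n_bins=64):
    """Mean, std, min, max, mode, max frequency, percentiles and entropy of an ROI."""
    x = np.asarray(hu_values, dtype=np.float64).ravel()
    hist, edges = np.histogram(x, bins=n_bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    return {
        "mean": float(x.mean()), "std": float(x.std()),
        "min": float(x.min()), "max": float(x.max()),
        "mode": float(centers[hist.argmax()]), "max_frequency": int(hist.max()),
        "percentiles": np.percentile(x, [5, 10, 25, 50, 75, 90]),
        "entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
    }
```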

  20. Application of CT texture analysis in predicting histopathological characteristics of gastric cancers

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Shunli; Liu, Song; Ji, Changfeng; Zheng, Huanhuan; Pan, Xia; Zhang, Yujuan; He, Jian; Zhou, Zhengyang [The Affiliated Hospital of Nanjing University Medical School, Department of Radiology, Nanjing Drum Tower Hospital, Nanjing, Jiangsu Province (China); Guan, Wenxian [The Affiliated Hospital of Nanjing University Medical School, Department of Gastrointestinal Surgery, Nanjing Drum Tower Hospital, Nanjing (China); Chen, Ling [The Affiliated Hospital of Nanjing University Medical School, Department of Pathology, Nanjing Drum Tower Hospital, Nanjing, Jiangsu Province (China); Guan, Yue; Li, Weifeng; Ge, Yun [Nanjing University, School of Electronic Science and Engineering, Nanjing (China)

    2017-12-15

    To explore the application of computed tomography (CT) texture analysis in predicting histopathological features of gastric cancers. Preoperative contrast-enhanced CT images and postoperative histopathological features of 107 patients (82 men, 25 women) with gastric cancers were retrospectively reviewed. CT texture analysis generated: (1) mean attenuation, (2) standard deviation, (3) max frequency, (4) mode, (5) minimum attenuation, (6) maximum attenuation, (7) the fifth, 10th, 25th, 50th, 75th and 90th percentiles, and (8) entropy. Correlations between CT texture parameters and histopathological features were analysed. Mean attenuation, maximum attenuation, all percentiles and mode derived from portal venous CT images correlated significantly with differentiation degree and Lauren classification of gastric cancers (r ranging from -0.231 to -0.324 and from 0.228 to 0.321, respectively). Standard deviation and entropy derived from arterial CT images also correlated significantly with Lauren classification of gastric cancers (r = -0.265, -0.222, respectively). In arterial phase analysis, standard deviation and entropy were significantly lower in gastric cancers with vascular invasion than in those without; however, minimum attenuation was significantly higher in gastric cancers with vascular invasion than in those without. CT texture analysis held great potential in predicting differentiation degree, Lauren classification and vascular invasion status of gastric cancers. (orig.)

  1. Texturing of continuous LOD meshes with the hierarchical texture atlas

    Science.gov (United States)

    Birkholz, Hermann

    2006-02-01

    For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame-rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real-time. This enables rendering with a constant number of triangles and thus with constant frame-rates. Normally the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised. This limits the reachable simplification to the domain mesh, which has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property-mapping.

  2. Stereo vision with texture learning for fault-tolerant automatic baling

    DEFF Research Database (Denmark)

    Blas, Morten Rufus; Blanke, Mogens

    2010-01-01

    This paper presents advances in using stereo vision for automating baling. A robust classification scheme is demonstrated for learning and classifying based on texture and shape. Using a state-of-the-art texton approach, a fast classifier is obtained that can handle non-linearities in the data. The addition of shape information makes the method robust to large variations and greatly reduces false alarms by applying tight geometrical constraints. The classifier is tested on data from a stereo vision guidance system on a tractor. The system is able to classify cut plant material (called swath) by learning its appearance. A 3D classifier is used to train and supervise the texture classifier.

  3. Parenchymal texture measures weighted by breast anatomy: preliminary optimization in a case-control study

    Science.gov (United States)

    Gastounioti, Aimilia; Keller, Brad M.; Hsieh, Meng-Kang; Conant, Emily F.; Kontos, Despina

    2016-03-01

    Growing evidence suggests that quantitative descriptors of parenchymal texture patterns hold a valuable role in assessing an individual woman's risk for breast cancer. In this work, we assess the hypothesis that breast cancer risk factors are not uniformly expressed in the breast parenchymal tissue and that, therefore, breast-anatomy-weighted parenchymal texture descriptors, in which different breast ROIs have non-uniform contributions, may enhance breast cancer risk assessment. To this end, we introduce an automated breast-anatomy-driven methodology which generates a breast atlas that is then used to produce a weight map reinforcing the contributions of the central and upper-outer breast areas. We incorporate this methodology into our previously validated lattice-based strategy for parenchymal texture analysis. In the framework of a pilot case-control study, including digital mammograms from 424 women, our proposed breast-anatomy-weighted texture descriptors are optimized and evaluated against non-weighted texture features, using regression analysis with leave-one-out cross validation. The classification performance is assessed in terms of the area under the curve (AUC) of the receiver operating characteristic. The collective discriminatory capacity of the weighted texture features was maximized (AUC=0.87) when the central breast area was considered more important than the upper-outer area, with significant performance improvement (DeLong's test, p-value ...) ... women's cancer risk evaluation.

  4. T2-weighted MRI-derived textural features reflect prostate cancer aggressiveness: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Nketiah, Gabriel; Elschot, Mattijs; Kim, Eugene; Teruel, Jose R. [NTNU, Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine, Trondheim (Norway); Scheenen, Tom W. [Radboud University Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Bathen, Tone F.; Selnaes, Kirsten M. [NTNU, Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine, Trondheim (Norway); St. Olavs Hospital, Trondheim University Hospital, Trondheim (Norway)

    2017-07-15

    To evaluate the diagnostic relevance of T2-weighted (T2W) MRI-derived textural features relative to quantitative physiological parameters derived from diffusion-weighted (DW) and dynamic contrast-enhanced (DCE) MRI in Gleason score (GS) 3+4 and 4+3 prostate cancers. 3T multiparametric MRI was performed on 23 prostate cancer patients prior to prostatectomy. Textural features [angular second moment (ASM), contrast, correlation, entropy], apparent diffusion coefficient (ADC), and DCE pharmacokinetic parameters (K^trans and V_e) were calculated from index tumours delineated on the T2W, DW, and DCE images, respectively. The associations of the textural features with prostatectomy GS and with the MRI-derived parameters, and the utility of the parameters in differentiating GS 3+4 from 4+3 prostate cancers, were assessed statistically. ASM and entropy correlated significantly (p < 0.05) with both GS and median ADC. Contrast correlated moderately with median ADC. The textural features correlated insignificantly with K^trans and V_e. GS 4+3 cancers had significantly lower ASM and higher entropy than 3+4 cancers, but insignificant differences in median ADC, K^trans, and V_e. The combined texture-MRI parameters yielded higher classification accuracy (91%) than the individual parameter sets. T2W MRI-derived textural features could serve as potential diagnostic markers, sensitive to the pathological differences in prostate cancers. (orig.)
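
    As a rough, hedged illustration of the feature computation named above (not the authors' pipeline; scikit-image and the random ROI below are assumptions), the four GLCM descriptors can be obtained from a grayscale tumour region as follows:

```python
# Minimal sketch: ASM, contrast, correlation and entropy from a grey-level
# co-occurrence matrix, using scikit-image. The ROI is a random placeholder.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def t2w_texture_features(roi_u8, distances=(1,), angles=(0.0, np.pi / 2)):
    """roi_u8: 2-D uint8 array holding the delineated tumour region."""
    glcm = graycomatrix(roi_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {prop: float(graycoprops(glcm, prop).mean())
             for prop in ("ASM", "contrast", "correlation")}
    # Entropy is not provided by graycoprops; compute it per matrix and average.
    eps = np.finfo(float).eps
    feats["entropy"] = float(-np.sum(glcm * np.log2(glcm + eps), axis=(0, 1)).mean())
    return feats

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in for a real T2W ROI
print(t2w_texture_features(roi))
```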

  5. Cascaded Amplitude Modulations in Sound Texture Perception

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2017-01-01

    In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures (stimuli generated using time-averaged statistics measured from real-world textures). In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model...

  6. TEXTURAL DESCRIPTORS FOR MULTIPHASIC ORE PARTICLES

    Directory of Open Access Journals (Sweden)

    Laura Pérez-Barnuevo

    2012-11-01

    Full Text Available Monitoring of mineral processing circuits by means of particle liberation analysis through quantitative image analysis has become a routine technique within the last decades. Usually, liberation indices are computed as weight proportions, which is not informative enough when complex-texture ores are treated by flotation. In these cases, liberation has to be computed as phase surface exposed to reactants, and the textural relationships between minerals have to be characterized to determine the possibility of increasing exposure. In this paper, some indices to achieve a complete texture characterization have been developed in terms of 2D phase contact and mineral surface exposure. Indices suggested by other authors are also compared. The response of this set of parameters to textural changes has been explored on simple synthetic textures ranging from single to multiple inclusions and single to multiple veins, and their ability to discriminate between different textural features is analyzed over real mineral particles with known internal structure.

  7. Annealing texture of rolled nickel alloys

    International Nuclear Information System (INIS)

    Meshchaninov, I.V.; Khayutin, S.G.

    1976-01-01

    The texture of pure nickel and of binary nickel alloys after 95% rolling and annealing has been studied. Insoluble additives (Mg, Zr) weaken the cubic texture in nickel, with a general slackening of the texture in the case of Zr. In the case of alloying with silicon (up to 2%) the texture practically coincides with that of technical-grade nickel. The remaining soluble additives either do not change the texture of pure nickel (C, Nb) or enhance the sharpness and intensity of the cubic component (Al, Cu, Mn, Cr, Mo, W, Co, at contents of 0.5 to 2.0%). A model is proposed by which the variation of the annealing texture upon alloying is caused by the dissimilar effects of the alloying elements on the mobility of high- and low-angle grain boundaries.

  8. Gender classification system in uncontrolled environments

    Science.gov (United States)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today perform mainly on restricted image databases in terms of size, age, and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the captured face images may often be partially covered and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It mainly consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as on texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and results in a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to fully include facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) is proposed to balance inner texture and outer contour. A comparative study on the effects of different non-face filtering and texture masking methods in the context of gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images, and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.

  9. Fast Synthesis of Dynamic Colour Textures

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Haindl, Michal; Chetverikov, D.

    -, č. 66 (2006), s. 53-54 ISSN 0926-4981 R&D Projects: GA AV ČR IAA2075302; GA AV ČR 1ET400750407; GA MŠk 1M0572 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : dynamic colour texture * texture synthesis * texture modelling Subject RIV: BD - Theory of Information http://www.ercim.org/publication/Ercim_News/enw66/haindl.html

  10. Texture and anisotropy in ferroelectric lead metaniobate

    Science.gov (United States)

    Iverson, Benjamin John

    Ferroelectric lead metaniobate, PbNb2O6, is a piezoelectric ceramic typically used because of its elevated Curie temperature and anisotropic properties. However, the piezoelectric constant, d33, is relatively low in randomly oriented ceramics when compared to other ferroelectrics. Crystallographic texturing is often employed to increase the piezoelectric constant because the spontaneous polarization axes of grains are better aligned. In this research, crystallographic textures induced through tape casting are distinguished from textures induced through electrical poling. Texture is described using multiple quantitative approaches utilizing X-ray and neutron time-of-flight diffraction. Tape casting lead metaniobate with an inclusion of acicular template particles induces an orthotropic texture distribution. Templated grain growth from seed particles oriented during casting results in anisotropic grain structures. The degree of preferred orientation is directly linked to the shear behavior of the tape cast slurry. Increases in template concentration, slurry viscosity, and casting velocity lead to larger textures by inducing more particle orientation in the tape casting plane. The maximum 010 texture distributions were two and a half multiples of a random distribution. Ferroelectric texture was induced by electrical poling. Electric poling increases the volume of material oriented with the spontaneous polarization direction in the material. Samples with an initial paraelectric texture exhibit a greater change in the domain volume fraction during electrical poling than randomly oriented ceramics. In tape cast samples, the resulting piezoelectric response is proportional to the 010 texture present prior to poling. This results in property anisotropy dependent on initial texture. Piezoelectric properties measured on the most textured ceramics were similar to those obtained with a commercial standard.

  11. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  12. Textural features for radar image analysis

    Science.gov (United States)

    Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.

    1981-01-01

    Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.

  13. Feature selection for neural network based defect classification of ceramic components using high frequency ultrasound.

    Science.gov (United States)

    Kesharaju, Manasa; Nagarajah, Romesh

    2015-09-01

    The motivation for this research stems from a need to provide a non-destructive testing method capable of detecting and locating defects and microstructural variations within armour ceramic components before issuing them to the soldiers who rely on them for their survival. The development of an automated ultrasonic-inspection-based classification system would make possible the checking of each ceramic component and immediately alert the operator to the presence of defects. Generally, in many classification problems the choice of features or dimensionality reduction is significant and simultaneously very difficult, as a substantial computational effort is required to evaluate possible feature subsets. In this research, a combination of artificial neural networks and genetic algorithms is used to optimize the feature subset used in the classification of various defects in reaction-sintered silicon carbide ceramic components. Initially, wavelet-based features are extracted from the region of interest. An Artificial Neural Network classifier is employed to evaluate the performance of these features. Genetic Algorithm based feature selection is then performed. Principal Component Analysis, a popular technique for feature selection, is compared with the genetic algorithm based technique in terms of classification accuracy and selection of the optimal number of features. The experimental results confirm that features identified by Principal Component Analysis lead to better classification performance (96%) than those selected by the genetic algorithm (94%). Copyright © 2015 Elsevier B.V. All rights reserved.
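
    The comparison described above can be sketched, under heavy simplification, with scikit-learn: PCA-reduced features feed a small neural network, and a plain univariate selection stands in for the genetic algorithm search (which is not reproduced here). The feature values and labels below are synthetic placeholders.

```python
# Hedged sketch of PCA-based versus filter-based feature selection ahead of an ANN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # e.g. 40 wavelet features per ultrasonic signal (placeholder)
y = rng.integers(0, 2, size=200)      # defect / no-defect labels (placeholder)

pca_ann = make_pipeline(StandardScaler(), PCA(n_components=10),
                        MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
kbest_ann = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10),
                          MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))

print("PCA + ANN accuracy:   ", cross_val_score(pca_ann, X, y, cv=5).mean())
print("k-best + ANN accuracy:", cross_val_score(kbest_ann, X, y, cv=5).mean())
```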

  14. Dry texturing of solar cells

    Science.gov (United States)

    Sopori, Bhushan L.

    1994-01-01

    A textured backside of a semiconductor device for increasing light scattering and absorption in a semiconductor substrate is accomplished by applying infrared radiation to the front side of a semiconductor substrate that has a metal layer deposited on its backside, in a time-energy profile that first produces pits in the backside surface and then produces a thin, highly reflective, low-resistivity, epitaxial alloy layer over the entire area of the interface between the semiconductor substrate and a metal contact layer. The time-energy profile includes ramping up to a first energy level and holding for a period of time to create the desired pit size and density, and then rapidly increasing the energy to a second level at which the entire interface area is melted and alloyed quickly. After holding the second energy level for a time sufficient to develop the thin alloy layer over the entire interface area, the energy is ramped down to allow epitaxial crystal growth in the alloy layer. The result is a textured backside and an optically reflective, low-resistivity alloy interface between the semiconductor substrate and the metal electrical contact layer.

  15. Geometric Total Variation for Texture Deformation

    DEFF Research Database (Denmark)

    Bespalov, Dmitriy; Dahl, Anders Lindbjerg; Shokoufandeh, Ali

    2010-01-01

    In this work we propose a novel variational method that we intend to use for estimating non-rigid texture deformation. The method is able to capture variation in grayscale images with respect to the geometry of their features. Our experimental evaluations demonstrate that accounting for the geometry of features in texture images leads to significant improvements in the localization of these features when textures undergo geometrical transformations. Accurate localization of features in the presence of unknown deformations is a crucial property for texture characterization methods, and we intend to exploit...

  16. Cascaded Amplitude Modulations in Sound Texture Perception

    Directory of Open Access Journals (Sweden)

    Richard McWalter

    2017-09-01

    Full Text Available Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as “beating” in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, the discriminability of textures that included second-order amplitude modulations appeared to be perceived using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.

  17. FCC Rolling Textures Reviewed in the Light of Quantitative Comparisons between Simulated and Experimental Textures

    DEFF Research Database (Denmark)

    Wierzbanowski, Krzysztof; Wroński, Marcin; Leffers, Torben

    2014-01-01

    The crystallographic texture of metallic materials has a very strong effect on the properties of the materials. In the present article, we look at the rolling textures of fcc metals and alloys, where the classical problem is the existence of two different types of texture, the "copper-type texture" and the "brass-type texture." The type of texture developed is determined by the stacking fault energy of the material, the rolling temperature, and the strain rate of the rolling process. Recent texture simulations by the present authors provide the basis for a renewed discussion of the whole field of fcc rolling textures... We consider {111} slip without or with deformation twinning, but we also consider slip on other slip planes and slip by partial dislocations. We consistently make quantitative comparison of the simulation results and the experimental textures by means of a scalar correlation factor. We find that the development...

  18. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  19. Sea ice classification using dual polarization SAR data

    International Nuclear Information System (INIS)

    Huiying, Liu; Huadong, Guo; Lu, Zhang

    2014-01-01

    Sea ice is an indicator of climate change and also a threat to the navigation security of ships. Polarimetric SAR images are useful in the sea ice detection and classification. In this paper, backscattering coefficients and texture features derived from dual polarization SAR images are used for sea ice classification. Firstly, the HH image is recalculated based on the angular dependences of sea ice types. Then the effective gray level co-occurrence matrix (GLCM) texture features are selected for the support vector machine (SVM) classification. In the end, because sea ice concentration can provide a better separation of pancake ice from old ice, it is used to improve the SVM result. This method provides a good classification result, compared with the sea ice chart from CIS
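
    A much-reduced sketch of the texture-plus-SVM step follows (not the paper's processing chain; the window size, feature set, image, and labels are all assumptions):

```python
# Hedged sketch: GLCM features on sliding windows of a placeholder HH image, then an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def window_features(win):
    glcm = graycomatrix(win, distances=[1], angles=[0], levels=64,
                        symmetric=True, normed=True)
    return [float(graycoprops(glcm, p)[0, 0])
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(1)
image = (rng.random((256, 256)) * 63).astype(np.uint8)   # stand-in for a rescaled HH backscatter image
labels = rng.integers(0, 2, size=(8, 8))                 # stand-in ice-type labels per 32x32 window

X, y = [], []
for i in range(8):
    for j in range(8):
        win = image[i * 32:(i + 1) * 32, j * 32:(j + 1) * 32]
        X.append(window_features(win))
        y.append(labels[i, j])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([X[0]]))
```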

  20. Mechanical seal with textured sidewall

    Energy Technology Data Exchange (ETDEWEB)

    Khonsari, Michael M.; Xiao, Nian

    2017-02-14

    The present invention discloses a mating ring, a primary ring, and associated mechanical seal having superior heat transfer and wear characteristics. According to an exemplary embodiment of the present invention, one or more dimples are formed onto the cylindrical outer surface of a mating ring sidewall and/or a primary ring sidewall. A stationary mating ring for a mechanical seal assembly is disclosed. Such a mating ring comprises an annular body having a central axis and a sealing face, wherein a plurality of dimples are formed into the outer circumferential surface of the annular body such that the exposed circumferential surface area of the annular body is increased. The texture added to the sidewall of the mating ring yields superior heat transfer and wear characteristics.

  1. D-branes and textures

    International Nuclear Information System (INIS)

    Everett, L.; Kane, G.L.; King, S.F.

    2000-01-01

    We examine the flavor structure of the trilinear superpotential couplings which can result from embedding the Standard Model within D-brane sectors in Type IIB orientifold models, which are examples within the Type I string framework. We find in general that the allowed flavor structures of the Yukawa coupling matrices to leading order are given by basic variations on the "democratic" texture ansatz. In certain interesting cases, the Yukawa couplings have a novel structure in which a single right-handed fermion couples democratically at leading order to three left-handed fermions. We discuss the viability of such a "single right-handed democracy" in detail; remarkably, even though there are large mixing angles in the u,d sectors separately, the CKM mixing angles are small. The analysis demonstrates the ways in which the Type I superstring framework can provide a rich setting for investigating novel resolutions to the flavor puzzle. (author)

  2. Subjective figures and texture perception.

    Science.gov (United States)

    Zucker, S W; Cavanagh, P

    1985-01-01

    A texture discrimination task using the Ehrenstein illusion demonstrates that subjective brightness effects can play an essential role in early vision. The subjectively bright regions of the Ehrenstein can be organized either as discs or as stripes, depending on orientation. The accuracy of discrimination between variants of the Ehrenstein and control patterns was a direct function of the presence of the illusory brightness stripes, being high when they were present and low otherwise. It is argued that neither receptive field structure nor spatial-frequency content can adequately account for these results. We suggest that the subjective brightness illusions, rather than being a high-level, cognitive aspect of vision, are in fact the result of an early visual process.

  3. Cool Polar Bears: Dabbing on the Texture

    Science.gov (United States)

    O'Connell, Jean

    2011-01-01

    In this article, the author describes how her second-graders created their cool polar bears. The students used the elements of shape and texture to create the bears. They used Monet's technique of dabbing paint so as to give the bear some texture on his fur.

  4. Texture Repairing by Unified Low Rank Optimization

    Institute of Scientific and Technical Information of China (English)

    Xiao Liang; Xiang Ren; Zhengdong Zhang; Yi Ma

    2016-01-01

    In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low-rank property of texture, the algorithm also uses the sparsity assumption about natural images: because a natural image is piecewise smooth, it is sparse in certain transformed domains (such as the Fourier or wavelet domain). We combine the low-rank and sparsity properties of the texture image in the proposed algorithm. Our algorithm, based on convex optimization, can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. This algorithm integrates texture rectification and repairing into one optimization problem. Through extensive simulations, we show our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method demonstrates a significant advantage over local patch based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
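
    As a simplified stand-in for the low-rank ingredient only (the paper's unified low-rank plus sparse formulation is not reproduced), a singular-value-thresholding completion can fill missing pixels of a synthetic near-regular pattern:

```python
# Hedged sketch: soft-impute style iteration that shrinks singular values and
# re-imposes the observed pixels. Pattern, mask and threshold are placeholders.
import numpy as np

def complete_low_rank(X, mask, tau=1.0, n_iter=200):
    """X: observed matrix (unobserved entries arbitrary); mask: True where observed."""
    Y = np.where(mask, X, 0.0)
    Z = Y.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt     # singular-value shrinkage
        Y = np.where(mask, X, Z)                    # keep observed entries fixed
    return Z

rng = np.random.default_rng(0)
pattern = np.outer(np.sin(np.linspace(0, 6, 64)), np.cos(np.linspace(0, 6, 64)))  # low-rank "texture"
mask = rng.random(pattern.shape) > 0.4                                            # 40% of pixels missing
repaired = complete_low_rank(pattern, mask)
print("relative error:", np.linalg.norm(repaired - pattern) / np.linalg.norm(pattern))
```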

  5. On texture formation of chromium electrodeposits

    DEFF Research Database (Denmark)

    Nielsen, Christian Bergenstof; Leisner, Peter; Horsewell, Andy

    1998-01-01

    The microstructure, texture and hardness of electrodeposited hard, direct current (DC) chromium and pulsed reversed chromium have been investigated. These investigations suggest that the growth and texture of hard chromium are controlled by inhibition processes and reactions. Further, it has been...

  6. On the origin of recrystallization textures

    Indian Academy of Sciences (India)

    Unknown

    The rival theories of the evolution of recrystallization textures, i.e. oriented nucleation (ON) and oriented growth (OG), have been under dispute. In the ON model, it has been argued that grains of the special orientation occur at a higher than random frequency, thus accounting for the texture. In the OG model, it has been argued that the...

  7. Texture design for light touch perception

    NARCIS (Netherlands)

    Zhang, S.; Zeng, X.; Matthews, D.T.A.; Igartua, A.; Rodriguez Vidal, E.; Fortes, J. Contreras; Van Der Heide, E.

    This study focused on active light touch with predefined textures specially-designed for tactile perception. The counter-body material is stainless steel sheet. Three geometric structures (grid, crater and groove) were fabricated by pulsed laser surface texturing. A total number of twenty volunteers

  8. Bread crumb classification using fractal and multifractal features

    OpenAIRE

    Baravalle, Rodrigo Guillermo; Delrieux, Claudio Augusto; Gómez, Juan Carlos

    2017-01-01

    Adequate image descriptors are fundamental in image classification and object recognition. Main requirements for image features are robustness and low dimensionality which would lead to low classification errors in a variety of situations and with a reasonable computational cost. In this context, the identification of materials poses a significant challenge, since typical (geometric and/or differential) feature extraction methods are not robust enough. Texture features based on Fourier or wav...

  9. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  10. Shape-Tailored Features and their Application to Texture Segmentation

    KAUST Repository

    Khan, Naeemullah

    2014-01-01

    Texture Segmentation is one of the most challenging areas of computer vision. One reason for this difficulty is the huge variety and variability of textures occurring in real world, making it very difficult to quantitatively study textures. One

  11. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.
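
    The ensemble idea can be sketched, very loosely, with synthetic stand-ins for the CIEL*a*b* co-occurrence features and the expert tile annotations: a random forest ranks features by importance, a reduced set is kept, and per-tile cancer probabilities are averaged over several classifiers.

```python
# Hedged sketch of forest-based feature ranking followed by an averaged ensemble score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))      # 60 colour co-occurrence features per tile (placeholder)
y = rng.integers(0, 2, size=500)    # expert tile annotations (0 = benign, 1 = cancer; placeholder)

# Feature selection: keep the 15 most important features of a forest fit on training tiles.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-15:]

# Ensemble of forests trained on bootstrap samples; tile score = mean predicted probability.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(RandomForestClassifier(n_estimators=100, random_state=seed)
                    .fit(X[idx][:, keep], y[idx]))

tile_scores = np.mean([m.predict_proba(X[:, keep])[:, 1] for m in ensemble], axis=0)
print(tile_scores[:5])
```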

  12. Texture analysis of pulmonary parenchymateous changes related to pulmonary thromboembolism in dogs - a novel approach using quantitative methods

    DEFF Research Database (Denmark)

    Marschner, Clara Büchner; Kokla, Marietta; Amigo Rubio, Jose Manuel

    2017-01-01

    include dual energy computed tomography (DECT) as well as computer assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical... classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs with and without PTE confirmed on CTPA were segmented so that only tissue with CT numbers between −1024 and −250 Hounsfield Units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were... using GLCM is an effective tool for distinguishing healthy from abnormal lung. Furthermore, the texture of pulmonary parenchyma in dogs with PTE is altered when compared to the texture of pulmonary parenchyma of healthy dogs. The models' poorer performance in classifying dogs within the diseased group...

  13. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas

    Energy Technology Data Exchange (ETDEWEB)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J., E-mail: bje@mayo.edu [Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States); Coufalova, Lucie [Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States); Department of Neurosurgery of First Faculty of Medicine, Charles University in Prague, Military University Hospital, Prague 128 21 (Czech Republic); International Clinical Research Center, St. Anne’s University Hospital Brno, Brno 656 91 (Czech Republic); Lachance, Daniel H. [Department of Neurology, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States); Parney, Ian F. [Department of Neurologic Surgery, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States); Carter, Rickey E. [Department of Health Sciences Research, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States); Buckner, Jan C. [Department of Medical Oncology, Mayo Clinic, 200 1st Street SW, Rochester, Minnesota 55905 (United States)

    2016-06-15

    Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
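
    A hedged sketch of the evaluation step follows; the four features and the labels are synthetic placeholders, and the cross-validation layout is an assumption rather than the study's exact design:

```python
# Minimal sketch: SVM probabilities, ROC AUC, and sensitivity/specificity at the Youden point.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
X = rng.normal(size=(155, 4))        # correlation, energy, entropy, local intensity (placeholder)
y = rng.integers(0, 2, size=155)     # MGMT methylation status (placeholder)

scores = cross_val_predict(SVC(probability=True, random_state=0), X, y,
                           cv=5, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, scores))

# Sensitivity and specificity at the ROC-optimal (Youden) threshold.
fpr, tpr, thr = roc_curve(y, scores)
best = np.argmax(tpr - fpr)
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```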

  14. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas

    International Nuclear Information System (INIS)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.; Coufalova, Lucie; Lachance, Daniel H.; Parney, Ian F.; Carter, Rickey E.; Buckner, Jan C.

    2016-01-01

    Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.

  15. A novel approach for classification of abnormalities in digitized ...

    Indian Academy of Sciences (India)

    Feature extraction is an important process for the overall system performance in classification. The objective of this article is to reveal the effectiveness of texture feature analysis for detecting the abnormalities in digitized mammograms using Self Adaptive Resource Allocation Network (SRAN) classifier. Thus, we proposed a ...

  16. Can Laws Be a Potential PET Image Texture Analysis Approach for Evaluation of Tumor Heterogeneity and Histopathological Characteristics in NSCLC?

    Science.gov (United States)

    Karacavus, Seyhan; Yılmaz, Bülent; Tasdemir, Arzu; Kayaaltı, Ömer; Kaya, Eser; İçer, Semra; Ayyıldız, Oguzhan

    2018-04-01

    We investigated the association between the textural features obtained from 18F-FDG images, metabolic parameters (SUVmax, SUVmean, MTV, TLG), and tumor histopathological characteristics (stage and Ki-67 proliferation index) in non-small cell lung cancer (NSCLC). The FDG-PET images of 67 patients with NSCLC were evaluated. The MATLAB technical computing language was employed in the extraction of 137 features by using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run length matrix (GLRLM), and Laws' texture filters. Textural features and metabolic parameters were statistically analyzed in terms of their discrimination power between tumor stages, and selected features/parameters were used in the automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). We showed that one textural feature (gray-level nonuniformity, GLN) obtained using the GLRLM approach and nine textural features using Laws' approach were successful in discriminating all tumor stages, unlike the metabolic parameters. There were significant correlations between the Ki-67 index and some of the textural features computed using Laws' method (r = 0.6, p = 0.013). In terms of automatic classification of tumor stage, the accuracy was approximately 84% with the k-NN classifier (k = 3) and SVM, using five selected features. Texture analysis of FDG-PET images has a potential to be an objective tool to assess tumor histopathological characteristics. The textural features obtained using Laws' approach could be useful in the discrimination of tumor stage.
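
    The Laws' filter bank referred to above is conventionally built from five 1-D vectors whose outer products give 25 two-dimensional masks; the sketch below (an assumption of the standard construction, not the authors' exact settings) turns each mask response into a local texture-energy image:

```python
# Hedged sketch of Laws texture-energy images from the standard 5-tap vectors.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

LAWS_1D = {
    "L5": np.array([1, 4, 6, 4, 1], float),     # level
    "E5": np.array([-1, -2, 0, 2, 1], float),   # edge
    "S5": np.array([-1, 0, 2, 0, -1], float),   # spot
    "R5": np.array([1, -4, 6, -4, 1], float),   # ripple
    "W5": np.array([-1, 2, 0, -2, 1], float),   # wave
}

def laws_energy_images(img, window=15):
    """Return a dict of texture-energy images, one per 2-D Laws mask."""
    img = img - img.mean()                        # crude removal of the mean intensity
    out = {}
    for a, va in LAWS_1D.items():
        for b, vb in LAWS_1D.items():
            mask = np.outer(va, vb)
            filtered = convolve(img, mask, mode="reflect")
            out[a + b] = uniform_filter(np.abs(filtered), size=window)
    return out

energies = laws_energy_images(np.random.rand(64, 64))   # placeholder image
print(len(energies), "texture-energy images")            # 25 masks
```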

  17. Fast segmentation of industrial quality pavement images using Laws texture energy measures and k-means clustering

    Science.gov (United States)

    Mathavan, Senthan; Kumar, Akash; Kamal, Khurram; Nieminen, Michael; Shah, Hitesh; Rahman, Mujib

    2016-09-01

    Thousands of pavement images are collected by road authorities daily for condition monitoring surveys. These images typically have intensity variations and texture nonuniformities that make their segmentation challenging. The automated segmentation of such pavement images is crucial for accurate, thorough, and expedited health monitoring of roads. In the pavement monitoring area, well-known texture descriptors, such as gray-level co-occurrence matrices and local binary patterns, are often used for surface segmentation and identification. These, despite being the established methods for texture discrimination, are inherently slow. This work evaluates Laws texture energy measures as a viable alternative for pavement images for the first time. k-means clustering is used to partition the feature space, limiting the human subjectivity in the process. Data classification, hence image segmentation, is performed by the k-nearest neighbor method. Laws texture energy masks are shown to perform well with resulting accuracy and precision values of more than 80%. The implementations of the algorithm, in both MATLAB® and OpenCV/C++, are extensively compared against the state of the art for execution speed, clearly showing the advantages of the proposed method. Furthermore, the OpenCV-based segmentation shows a 100% increase in processing speed when compared to the fastest algorithm available in literature.
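
    The clustering step can be sketched as follows, assuming texture-energy images (for example from a Laws filter bank) stacked into per-pixel feature vectors; the image, feature count, and number of clusters are placeholders:

```python
# Hedged sketch: k-means partitioning of a per-pixel texture feature space.
import numpy as np
from sklearn.cluster import KMeans

def segment_by_texture(feature_images, n_clusters=3):
    """feature_images: list of 2-D arrays of identical shape (one per texture feature)."""
    h, w = feature_images[0].shape
    X = np.stack([f.ravel() for f in feature_images], axis=1)   # (pixels, features)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return labels.reshape(h, w)

features = [np.random.rand(64, 64) for _ in range(5)]           # stand-in energy images
segmentation = segment_by_texture(features)
print(np.unique(segmentation))
```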

  18. Please Don't Move-Evaluating Motion Artifact From Peripheral Quantitative Computed Tomography Scans Using Textural Features.

    Science.gov (United States)

    Rantalainen, Timo; Chivers, Paola; Beck, Belinda R; Robertson, Sam; Hart, Nicolas H; Nimphius, Sophia; Weeks, Benjamin K; McIntyre, Fleur; Hands, Beth; Siafarikas, Aris

    Most imaging methods, including peripheral quantitative computed tomography (pQCT), are susceptible to motion artifacts particularly in fidgety pediatric populations. Methods currently used to address motion artifact include manual screening (visual inspection) and objective assessments of the scans. However, previously reported objective methods either cannot be applied on the reconstructed image or have not been tested for distal bone sites. Therefore, the purpose of the present study was to develop and validate motion artifact classifiers to quantify motion artifact in pQCT scans. Whether textural features could provide adequate motion artifact classification performance in 2 adolescent datasets with pQCT scans from tibial and radial diaphyses and epiphyses was tested. The first dataset was split into training (66% of sample) and validation (33% of sample) datasets. Visual classification was used as the ground truth. Moderate to substantial classification performance (J48 classifier, kappa coefficients from 0.57 to 0.80) was observed in the validation dataset with the novel texture-based classifier. In applying the same classifier to the second cross-sectional dataset, a slight-to-fair (κ = 0.01-0.39) classification performance was observed. Overall, this novel textural analysis-based classifier provided a moderate-to-substantial classification of motion artifact when the classifier was specifically trained for the measurement device and population. Classification based on textural features may be used to prescreen obviously acceptable and unacceptable scans, with a subsequent human-operated visual classification of any remaining scans. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
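
    The agreement figures quoted above are Cohen's kappa values; a minimal illustration (with invented labels) of computing kappa between the visual ground truth and the texture-based classifier output is:

```python
# Tiny sketch of Cohen's kappa between two sets of scan labels (placeholder data).
from sklearn.metrics import cohen_kappa_score

visual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # human visual classification (ground truth)
predicted = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]   # texture-feature classifier output
print("kappa:", cohen_kappa_score(visual, predicted))
```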

  19. Texture and wettability of metallic lotus leaves

    Science.gov (United States)

    Frankiewicz, C.; Attinger, D.

    2016-02-01

    Superhydrophobic surfaces with the self-cleaning behavior of lotus leaves are sought for drag reduction and phase change heat transfer applications. These superrepellent surfaces have traditionally been fabricated by random or deterministic texturing of a hydrophobic material. Recently, superrepellent surfaces have also been made from hydrophilic materials, by deterministic texturing using photolithography, without low-surface energy coating. Here, we show that hydrophilic materials can also be made superrepellent to water by chemical texturing, a stochastic rather than deterministic process. These metallic surfaces are the first analog of lotus leaves, in terms of wettability, texture and repellency. A mechanistic model is also proposed to describe the influence of multiple tiers of roughness on wettability and repellency. This demonstrated ability to make hydrophilic materials superrepellent without deterministic structuring or additional coatings opens the way to large scale and robust manufacturing of superrepellent surfaces.

  20. HEp-2 Cell Classification Using Shape Index Histograms With Donut-Shaped Spatial Pooling

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo; Vestergaard, Jacob Schack; Larsen, Rasmus

    2014-01-01

    We present a new method for automatic classification of indirect immunoflourescence images of HEp-2 cells into different staining pattern classes. Our method is based on a new texture measure called shape index histograms that captures second-order image structure at multiple scales. Moreover, we...... datasets. Our results show that shape index histograms are superior to other popular texture descriptors for HEp-2 cell classification. Moreover, when comparing to other automated systems for HEp-2 cell classification we show that shape index histograms are very competitive; especially considering...
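
    The underlying measure can be illustrated with scikit-image's shape index, which maps local second-order structure (Hessian eigenvalues) to a value in [-1, 1]; histogramming it over several scales gives a descriptor in the spirit of the method, though the scales and bin count below are assumptions, not the paper's settings:

```python
# Hedged sketch: multi-scale shape index histograms as a texture descriptor.
import numpy as np
from skimage.feature import shape_index

def shape_index_histogram(img, sigmas=(1, 2, 4), bins=16):
    hists = []
    for s in sigmas:
        si = shape_index(img, sigma=s)
        si = si[np.isfinite(si)]                    # flat regions yield NaN; drop them
        h, _ = np.histogram(si, bins=bins, range=(-1, 1), density=True)
        hists.append(h)
    return np.concatenate(hists)

descriptor = shape_index_histogram(np.random.rand(128, 128))   # placeholder cell image
print(descriptor.shape)                                          # (len(sigmas) * bins,)
```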

  1. On Texture and Geometry in Image Analysis

    DEFF Research Database (Denmark)

    Gustafsson, David Karl John

    2009-01-01

    The Filters, Random fields And Maximum Entropy (FRAME) model [213, 214] is used for inpainting texture. We argue that many 'textures' contain details that must be inpainted exactly. Simultaneous reconstruction of geometric structure and texture is a difficult problem; therefore, a two-phase reconstruction procedure... is proposed. An inverse temperature is added to the FRAME model. In the first phase, the geometric structure is reconstructed by cooling the distribution, and in the second phase, the texture is added by heating the distribution. Empirically, we show that the long range geometric structure is inpainted...

  2. Doping profile measurement on textured silicon surface

    Science.gov (United States)

    Essa, Zahi; Taleb, Nadjib; Sermage, Bernard; Broussillou, Cédric; Bazer-Bachi, Barbara; Quillec, Maurice

    2018-04-01

    In crystalline silicon solar cells, the front surface is textured in order to lower the reflection of the incident light and increase the efficiency of the cell. This texturing, whose features are a few micrometers wide and high, often makes doping profile measurement difficult. We have measured, by secondary ion mass spectrometry (SIMS) and by electrochemical capacitance-voltage profiling, the doping profile of implanted phosphorus in alkaline-textured and in polished monocrystalline silicon wafers. The paper shows that SIMS gives accurate results provided the primary ion impact angle is small enough. Moreover, the comparison between these two techniques gives an estimate of the concentration of electrically inactive phosphorus atoms.

  3. Texturized pinto bean protein fortification in straight dough bread formulation.

    Science.gov (United States)

    Simons, Courtney W; Hunt-Schmidt, Emily; Simsek, Senay; Hall, Clifford; Biswas, Atanu

    2014-09-01

    Pinto beans were milled and then air-classified to obtain a raw high protein fraction (RHPF), followed by extrusion to texturize the protein fraction. The texturized high protein fraction (THPF) was then milled to obtain flour, and combined with wheat flour at 5, 10, and 15% levels to make bread. The air-classification process produced flour with a high concentration of lipids and phytic acid in the protein-rich fraction. Extrusion significantly reduced hexane-extractable lipid and phytic acid; however, the reduction observed may simply indicate a reduction in recovery due to binding with other components. Total protein and lysine contents in composite flours increased significantly as THPF levels increased. Bread made with 5% THPF had 48% more lysine than the 100% wheat flour (control). The THPF helped to maintain dough strength by reducing the mixing tolerance index (MTI), maintaining dough stability and increasing departure time on the Farinograph. Bread loaf volume was significantly reduced above 5% THPF addition. THPF increased water absorption, causing an increase in bread weights by up to 6%. Overall, loaf quality deteriorated at the 10 and 15% THPF levels, while bread with 5% THPF was not significantly different from the control. These results support the addition of 5% THPF as a means to enhance the lysine content of white pan bread.

  4. A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer

    Directory of Open Access Journals (Sweden)

    Pattichis Marios S

    2007-11-01

    Full Text Available Abstract Background In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet, significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) differences in viewing angles, and (iii) color correction. Methods We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus using 720 × 576 pixels and 24-bit color for: (i) a variety of testing targets from a color palette with a known color distribution, (ii) different viewing angles, and (iii) two different distances from a calf endometrium and from a chicken cavity. Also, human images from the endometrium were captured and analysed. For texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected and the extracted texture feature values were compared against the texture feature values extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. Results For the proposed acquisition procedure, results indicate that there is no significant difference in texture features between the panoramic and close up views and between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image. In turn, this implies that the texture features extracted from the corrected images provided for better
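
    The gamma-correction step mentioned in the protocol can be sketched as a simple power-law mapping (the gamma value and the frame below are placeholders, not the protocol's calibrated values):

```python
# Tiny sketch: inverse-gamma power-law correction before texture feature extraction.
import numpy as np

def gamma_correct(img_u8, gamma=2.2):
    x = img_u8.astype(float) / 255.0                      # normalise intensities to [0, 1]
    return np.clip(255.0 * np.power(x, 1.0 / gamma), 0, 255).astype(np.uint8)

frame = (np.random.rand(576, 720) * 255).astype(np.uint8)  # stand-in endoscopy frame
corrected = gamma_correct(frame)
print(corrected.dtype, corrected.shape)
```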

  5. Wavelet-Based Quantum Field Theory

    Directory of Open Access Journals (Sweden)

    Mikhail V. Altaisky

    2007-11-01

    Full Text Available The Euclidean quantum field theory for the fields $\phi_{\Delta x}(x)$, which depend on both the position $x$ and the resolution $\Delta x$, constructed in SIGMA 2 (2006), 046, on the basis of the continuous wavelet transform, is considered. The Feynman diagrams in such a theory become finite under the assumption that there should be no scales in internal lines smaller than the minimal scale among the external lines. This regularisation agrees with the existing calculations of radiative corrections to the electron magnetic moment. The transition from the newly constructed theory to a standard Euclidean field theory is achieved by integration over the scale arguments.
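
    The scale-dependent fields in this record are built on the continuous wavelet transform; for a single dimension, a standard form of the decomposition and its inverse (normalisation conventions vary between papers, so this is indicative rather than the record's exact definition) is:

```latex
\phi_{a}(b) = \frac{1}{\sqrt{a}} \int \overline{\psi\!\left(\frac{x-b}{a}\right)} \, \phi(x)\, dx ,
\qquad
\phi(x) = \frac{1}{C_{\psi}} \int_{0}^{\infty}\!\!\int \phi_{a}(b)\,
          \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{x-b}{a}\right) \frac{db\, da}{a^{2}} ,
\qquad
C_{\psi} = \int_{0}^{\infty} \frac{|\hat{\psi}(\omega)|^{2}}{\omega}\, d\omega .
```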

  6. Wavelet based multicarrier code division multiple access ...

    African Journals Online (AJOL)

    This paper presents a study of a wavelet transform based Multicarrier Code Division Multiple Access (MC-CDMA) system for a downlink wireless channel. The performance of the system is studied for the Additive White Gaussian Noise (AWGN) channel and for slowly varying multipath channels. The bit error rate (BER) versus ...

  7. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
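
    A rough numerical illustration of the underlying observation (not the paper's algorithm) uses PyWavelets: the dense inverse of a small 1-D Laplacian has piecewise-smooth entries, so hard-thresholding its 2-D wavelet coefficients leaves a sparse representation that still approximates the inverse reasonably well:

```python
# Hedged sketch: wavelet compression of a dense inverse for a model PDE matrix.
import numpy as np
import pywt

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # 1-D Laplacian (model elliptic PDE matrix)
Ainv = np.linalg.inv(A)                                    # dense, but entries vary piecewise smoothly

coeffs = pywt.wavedec2(Ainv, "db2", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = 0.05 * np.max(np.abs(arr))
arr_sparse = pywt.threshold(arr, thresh, mode="hard")      # drop small wavelet coefficients
M = pywt.waverec2(pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2"), "db2")
M = M[:n, :n]                                              # guard against reconstruction padding

kept = np.count_nonzero(arr_sparse) / arr.size
err = np.linalg.norm(np.eye(n) - M @ A) / np.linalg.norm(np.eye(n))
print(f"kept {kept:.1%} of coefficients, ||I - M A|| / ||I|| = {err:.3f}")
```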

  8. Evolution of solidification texture during additive manufacturing

    Science.gov (United States)

    Wei, H. L.; Mazumder, J.; DebRoy, T.

    2015-01-01

    Striking differences in the solidification textures of a nickel based alloy owing to changes in laser scanning pattern during additive manufacturing are examined based on theory and experimental data. Understanding and controlling texture are important because it affects mechanical and chemical properties. Solidification texture depends on the local heat flow directions and competitive grain growth in one of the six preferred growth directions in face centered cubic alloys. Therefore, the heat flow directions are examined for various laser beam scanning patterns based on numerical modeling of heat transfer and fluid flow in three dimensions. Here we show that numerical modeling can not only provide a deeper understanding of the solidification growth patterns during the additive manufacturing, it also serves as a basis for customizing solidification textures which are important for properties and performance of components. PMID:26553246

  9. Decameter-Scale Regolith Textures on Mercury

    Science.gov (United States)

    Kreslavsky, M. A.; Zharkova, A. Yu.; Head, J. W.

    2018-05-01

    Like on the Moon, regolith gardening smooths the surface of Mercury. Small craters are in equilibrium. The "elephant hide" texture typical of lunar slopes is infrequent on Mercury. Finely Textured Slope Patches have no analog on the Moon.

  10. Extrusion Cooking Systems and Textured Vegetable Proteins

    Directory of Open Access Journals (Sweden)

    2015-02-01

    Full Text Available Many fabricated foods are cooked industrially and are given desired textures, shapes, density and rehydration characteristics by an extrusion cooking process. This relatively new process is used in the preparation of “engineered” convenience foods: textured vegetable proteins, breakfast cereals, snacks, infant foods, dry soup mixes, breading, poultry stuffing, croutons, pasta products, beverage powders, hot breakfast gruels, and in the gelatinization of starch or the starchy component of foods.

  11. Determination of textures by neutron diffraction

    International Nuclear Information System (INIS)

    Dervin, P.; Penelle, R.

    1989-01-01

    By virtue of the low absorption coefficient of most materials with regard to neutrons, neutron diffraction is particularly well adapted to high-precision characterization of the gross texture of massive fine-grained or coarse-grained specimens of the order of a cubic centimeter. The first part of this paper is devoted to a description of the distribution of crystalline orientations, and the second part to the experimental identification of textures [fr

  12. Texture and inflation in a closed universe

    International Nuclear Information System (INIS)

    Hacyan, S.; Sarmiento, A.

    1993-01-01

    We present a cosmological model with a global homogeneous texture and inflation, but without an initial singularity. The Universe starts from an equilibrium configuration in a symmetric vacuum; the dynamic stability of this configuration is studied. We obtain numerical solutions which show that the Universe expands exponentially and the texture field decays in a finite time; this corresponds to a period of inflation followed naturally by a Friedmann expansion

  13. Filtering Color Mapped Textures and Surfaces

    OpenAIRE

    Heitz , Eric; Nowrouzezahrai , Derek; Poulin , Pierre; Neyret , Fabrice

    2013-01-01

    Color map textures applied directly to surfaces, to geometric microsurface details, or to procedural functions (such as noise), are commonly used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient color map filt...

  14. A New Insight into Land Use Classification Based on Aggregated Mobile Phone Data

    OpenAIRE

    Pei, Tao; Sobolevsky, Stanislav; Ratti, Carlo; Shaw, Shih-Lung; Zhou, Chenghu

    2013-01-01

    Land use classification is essential for urban planning. Urban land use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land use classification, most ...

  15. Texture memory and strain-texture mapping in a NiTi shape memory alloy

    International Nuclear Information System (INIS)

    Ye, B.; Majumdar, B. S.; Dutta, I.

    2007-01-01

    The authors report on the near-reversible strain hysteresis during thermal cycling of a polycrystalline NiTi shape memory alloy at a constant stress that is below the yield strength of the martensite. In situ neutron diffraction experiments are used to demonstrate that the strain hysteresis occurs due to a texture memory effect, where the martensite develops a texture when it is cooled under load from the austenite phase and is thereafter "remembered." Further, the authors quantitatively relate the texture to the strain by developing a calculated strain-texture map or pole figure for the martensite phase, and indicate its applicability in other martensitic transformations

  16. Texture and Elastic Anisotropy of Mantle Olivine

    Science.gov (United States)

    Nikitin, A. N.; Ivankina, T. I.; Bourilitchev, D. E.; Klima, K.; Locajicek, T.; Pros, Z.

    Eight olivine rock samples from different European regions were collected for neutron texture analyses and for P-wave velocity measurements by means of ultrasonic sounding at various confining pressures. The orientation distribution functions (ODFs) of olivine were determined and pole figures of the main crystallographic planes were calculated. The spatial P-wave velocity distributions were determined at confining pressures from 0.1 to 400 MPa and modelled from the olivine textures. Depending on the type of rock (xenolith or dunite), different behavior of both the P-wave velocity distributions and the anisotropy coefficients with various confining pressures was observed. In order to explain the interdependence of elastic anisotropy and hydrostatic pressure, a model for polycrystalline olivine rocks was suggested, which considers the influence of the crystallographic and mechanical textures on the elastic behaviour of the polycrystal. Since the olivine texture depends upon the active slip systems and the deformation temperature, neutron texture analyses enable us to estimate depth and thermodynamical conditions during texture formation.

  17. TEXTURE ANALYSIS OF SPELT WHEAT BREAD

    Directory of Open Access Journals (Sweden)

    Magdaléna Lacko - Bartošová

    2013-02-01

    Full Text Available Bread quality depends considerably on the texture characteristics of the bread crumb. Texture analysis is primarily concerned with the evaluation of mechanical characteristics in which a material is subjected to a controlled force and a deformation curve of its response is generated. It is an objective physical examination of baked products and gives direct information on product quality, in contrast to dough rheology tests, which inform on the baking suitability of the flour as a raw material. This is why texture analysis is one of the most helpful analytical methods in product development. In our research during the years 2008–2009, selected indicators of bread crumb texture quality were analyzed for three Triticum spelta L. cultivars – Oberkulmer Rotkorn, Rubiota and Franckenkorn – grown in an ecological system at the locality of Dolna Malanta near Nitra. Bread texture quality was evaluated on a TA.XT Plus texture analyzer and expressed as crumb firmness (N), stiffness (N.mm-1) and relative elasticity (%). Our research proved that all selected indicators were significantly influenced by the year of growing and the variety. The softest bread was measured in Rubiota, whereas bread crumb samples from Franckenkorn were the firmest and stiffest. Relative elasticity confirmed that the lowest firmness and stiffness were found in Rubiota bread. Spelt grain can be a good source of bread flour, but quality depends closely on the choice of spelt variety.

  18. Neutronographic Texture Analysis of Zirconium Based Alloys

    International Nuclear Information System (INIS)

    Kruz'elová, M; Vratislav, S; Kalvoda, L; Dlouhá, M

    2012-01-01

    Neutron diffraction is a very powerful tool in the texture analysis of zirconium-based alloys used in nuclear technology. Textures of five samples (two rolled sheets and three tubes) were investigated using basal pole figures, inverse pole figures, and the ODF distribution function. The texture measurement was performed at the diffractometer KSN2 in the Laboratory of Neutron Diffraction, Department of Solid State Engineering, Faculty of Nuclear Sciences and Physical Engineering, CTU in Prague. Procedures for studying textures with thermal neutrons and procedures for obtaining texture parameters (direct and inverse pole figures, three-dimensional orientation distribution function) are also described. Observed data were processed by the software packages HEXAL and GSAS. Our results can be summarized as follows: i) All samples of zirconium alloys show a splitting of the middle area into two maxima in the basal pole figures. This is caused by alloying elements. A characteristic split of the basal pole maxima, tilted from the normal direction toward the transverse direction, can be observed for all samples. ii) Sheet samples prefer orientation of (100) and (110) planes perpendicular to the rolling direction and orientation of (002) planes perpendicular to the normal direction. iii) Basal planes of tubes are oriented parallel to the tube axis, while (100) planes are oriented perpendicular to the tube axis. The level of the resulting texture and the positions of the maxima differ between tubes and sheets. The obtained results are characteristic for zirconium-based alloys.

  19. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery

    Directory of Open Access Journals (Sweden)

    Ming-Der Yang

    2017-06-01

    Full Text Available Rice lodging identification relies on manual in situ assessment and often leads to a compensation dispute in agricultural disaster assessment. Therefore, this study proposes a comprehensive and efficient classification technique for agricultural lands that entails using unmanned aerial vehicle (UAV imagery. In addition to spectral information, digital surface model (DSM and texture information of the images was obtained through image-based modeling and texture analysis. Moreover, single feature probability (SFP values were computed to evaluate the contribution of spectral and spatial hybrid image information to classification accuracy. The SFP results revealed that texture information was beneficial for the classification of rice and water, DSM information was valuable for lodging and tree classification, and the combination of texture and DSM information was helpful in distinguishing between artificial surface and bare land. Furthermore, a decision tree classification model incorporating SFP values yielded optimal results, with an accuracy of 96.17% and a Kappa value of 0.941, compared with that of a maximum likelihood classification model (90.76%. The rice lodging ratio in paddies at the study site was successfully identified, with three paddies being eligible for disaster relief. The study demonstrated that the proposed spatial and spectral hybrid image classification technology is a promising tool for rice lodging assessment.

  20. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features at a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
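
    As a rough illustration of the band-selection-plus-LBP idea above, the hedged sketch below stacks per-pixel LBP codes with the raw spectra using scikit-image and scikit-learn; it assumes a recent scikit-image (local_binary_pattern), substitutes a plain SVM for PCANet, and uses per-pixel codes rather than the patch-wise transformation described in the abstract, so it is only a simplified stand-in, not the authors' pipeline.

```python
# Hedged sketch: per-band LBP texture codes stacked with raw spectra for
# per-pixel classification. `cube` is assumed to be a (rows, cols, bands)
# array of already-selected bands; SVC stands in for PCANet.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_spectral_features(cube, radius=1, n_points=8):
    """Return one feature row per pixel: spectrum followed by LBP codes."""
    rows, cols, bands = cube.shape
    lbp_maps = []
    for b in range(bands):
        band = cube[:, :, b]
        band8 = np.uint8(255 * (band - band.min()) / (np.ptp(band) + 1e-9))
        lbp_maps.append(local_binary_pattern(band8, n_points, radius, method="uniform"))
    lbp = np.stack(lbp_maps, axis=-1)
    return np.concatenate([cube.reshape(-1, bands), lbp.reshape(-1, bands)], axis=1)

# X = lbp_spectral_features(cube)
# clf = SVC(kernel="rbf").fit(X[train_idx], labels[train_idx])
```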

  1. [Application of optical flow dynamic texture in land use/cover change detection].

    Science.gov (United States)

    Yan, Li; Gong, Yi-Long; Zhang, Yi; Duan, Wei

    2014-11-01

    In the present study, a novel change detection approach for high resolution remote sensing images is proposed based on optical flow dynamic texture (OFDT), which automatically derives land use and land cover change information from a dynamic description of ground-object changes. Using optical flow theory, the paper describes the gradual change of ground objects in principle, departing from the abrupt-change assumption underlying earlier remote sensing change detection methods. As the steps of the method are simple, it can be integrated into systems and software, such as land resource management and urban planning software, that need to find ground-object changes. The method takes the temporal dimension between remote sensing images into account, which provides a richer set of information for change detection and improves on the situation in which most change detection methods depend mainly on spatial information. In this article, optical flow dynamic texture is the basic indicator of change and is used, combined with spectral information, in support vector machine post-classification change detection on high resolution remote sensing images. The texture in the temporal dimension considered here involves a smaller amount of data than most textures in the spatial dimensions. The texture computation is highly automated, with only one parameter to set, which relieves the burden of manual evaluation. The effectiveness of the proposed approach is evaluated with 2011 and 2012 QuickBird datasets covering Duerbert Mongolian Autonomous County of Daqing City, China. The effects of different optical flow smoothness coefficients and their impact on the description of ground-object changes are then analyzed in depth: the experimental result is satisfactory, with an 87.29% overall accuracy and a 0.8507 Kappa index, and the method achieves better
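
    The optical-flow component of the approach can be approximated with OpenCV's dense Farneback flow, as in the hedged sketch below; the parameter values, the thresholding rule and the function name flow_change_mask are illustrative assumptions and do not reproduce the OFDT/SVM post-classification pipeline of the study.

```python
# Hedged sketch: dense Farneback optical flow between two co-registered image
# dates, with the flow magnitude used as a simple change indicator. Parameter
# values and the threshold rule are illustrative only.
import cv2
import numpy as np

def flow_change_mask(t1, t2):
    """t1, t2: co-registered gray-scale uint8 arrays of the two acquisition dates."""
    # Arguments: prev, next, flow, pyr_scale, levels, winsize, iterations,
    #            poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(t1, t2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.hypot(flow[..., 0], flow[..., 1])   # per-pixel motion strength
    return magnitude > magnitude.mean() + 2 * magnitude.std()

# mask = flow_change_mask(cv2.imread("t1.tif", 0), cv2.imread("t2.tif", 0))
```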

  2. Some distinguishing characteristics of contour and texture phenomena in images

    Science.gov (United States)

    Jobson, Daniel J.

    1992-01-01

    The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. The visual perception of texture defined fine texture as a subclass which is interpreted as shading and is distinct from coarse figural similarity textures. Also, perception defined the smallest scale for contour/texture discrimination as eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.

  3. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focuses on contextual information as the guide for the design and construction of classification schemes.

  4. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges that call for inquiries into the theoretical foundation of bibliographic classification theory.

  5. Prognostic Value and Reproducibility of Pretreatment CT Texture Features in Stage III Non-Small Cell Lung Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Fried, David V. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas (United States); Tucker, Susan L. [Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhou, Shouhao [Division of Quantitative Sciences, Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Liao, Zhongxing [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Mawlawi, Osama [Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas (United States); Ibbott, Geoffrey [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas (United States); Court, Laurence E., E-mail: LECourt@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas (United States)

    2014-11-15

    Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 33 image types and CPFs were compared to those with models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively. Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78

  6. Prognostic Value and Reproducibility of Pretreatment CT Texture Features in Stage III Non-Small Cell Lung Cancer

    International Nuclear Information System (INIS)

    Fried, David V.; Tucker, Susan L.; Zhou, Shouhao; Liao, Zhongxing; Mawlawi, Osama; Ibbott, Geoffrey; Court, Laurence E.

    2014-01-01

    Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 33 image types and CPFs were compared to those with models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively. Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78
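
    For reference, the test-retest reproducibility metric used in the two records above, the concordance correlation coefficient (CCC), can be computed directly from paired feature values; the sketch below is a generic implementation of Lin's CCC, not code from the study.

```python
# Hedged sketch: Lin's concordance correlation coefficient for paired
# test-retest feature values (generic implementation, not the study's code).
import numpy as np

def concordance_cc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()        # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# ccc = concordance_cc(feature_scan1, feature_scan2)   # value in [-1, 1]
```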

  7. An improved high order texture features extraction method with application to pathological diagnosis of colon lesions for CT colonography

    Science.gov (United States)

    Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Wang, Huafeng; Han, Fangfang; Zhu, Wei; Liang, Zhengrong

    2014-03-01

    Differentiation of colon lesions according to underlying pathology, e.g., neoplastic and non-neoplastic, is of fundamental importance for patient management. Image intensity based textural features have been recognized as a useful biomarker for the differentiation task. In this paper, we introduce high order texture features, beyond the intensity, such as gradient and curvature, for that task. Based on the Haralick texture analysis method, we introduce a virtual pathological method to explore the utility of texture features from high order differentiations, i.e., gradient and curvature, of the image intensity distribution. The texture features were validated on a database consisting of 148 colon lesions, of which 35 are non-neoplastic lesions, using the random forest classifier and the area under the receiver operating characteristic curve (AUC) as the figure of merit. The results show that after applying the high order features, the AUC was improved from 0.8069 to 0.8544 in differentiating non-neoplastic lesions from neoplastic ones, e.g., hyperplastic polyps from tubular adenomas, tubulovillous adenomas and adenocarcinomas. The experimental results demonstrated that texture features from the higher order images can significantly improve the classification accuracy in pathological differentiation of colorectal lesions. The gain in differentiation capability shall increase the potential of computed tomography (CT) colonography for colorectal cancer screening by not only detecting polyps but also classifying them, for optimal polyp management and the best outcome in personalized medicine.
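
    A hedged sketch of the general recipe described above (Haralick-style GLCM features computed on both the intensity image and a derived gradient image, classified with a random forest and scored by AUC) is given below; it assumes a recent scikit-image (graycomatrix/graycoprops) and SciPy, and the quantization, distances and feature list are illustrative choices rather than the authors' exact settings.

```python
# Hedged sketch: GLCM (Haralick-style) features on the intensity image and on
# its gradient magnitude, classified with a random forest and scored by AUC.
# Quantization, distances and the feature list are illustrative assumptions.
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def glcm_features(img, levels=32):
    edges = np.linspace(img.min(), img.max() + 1e-9, levels)
    q = np.digitize(img, edges) - 1                       # values in 0 .. levels-1
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def lesion_features(roi):
    grad = ndimage.gaussian_gradient_magnitude(roi, sigma=1.0)   # "higher order" image
    return np.hstack([glcm_features(roi), glcm_features(grad)])

# X = np.array([lesion_features(r) for r in lesion_rois])
# rf = RandomForestClassifier(n_estimators=300).fit(X[train], y[train])
# auc = roc_auc_score(y[test], rf.predict_proba(X[test])[:, 1])
```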

  8. Haralick texture features from apparent diffusion coefficient (ADC) MRI images depend on imaging and pre-processing parameters.

    Science.gov (United States)

    Brynolfsson, Patrik; Nilsson, David; Torheim, Turid; Asklund, Thomas; Karlsson, Camilla Thellenberg; Trygg, Johan; Nyholm, Tufve; Garpebring, Anders

    2017-06-22

    In recent years, texture analysis of medical images has become increasingly popular in studies investigating diagnosis, classification and treatment response assessment of cancerous disease. Despite numerous applications in oncology and medical imaging in general, there is no consensus regarding texture analysis workflow, or reporting of parameter settings crucial for replication of results. The aim of this study was to assess how sensitive Haralick texture features of apparent diffusion coefficient (ADC) MR images are to changes in five parameters related to image acquisition and pre-processing: noise, resolution, how the ADC map is constructed, the choice of quantization method, and the number of gray levels in the quantized image. We found that noise, resolution, choice of quantization method and the number of gray levels in the quantized images had a significant influence on most texture features, and that the effect size varied between different features. Different methods for constructing the ADC maps did not have an impact on any texture feature. Based on our results, we recommend using images with similar resolutions and noise levels, using one quantization method, and the same number of gray levels in all quantized images, to make meaningful comparisons of texture feature results between different subjects.
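
    The dependence on quantization that the study reports can be demonstrated in a few lines: the same map quantized to different numbers of gray levels yields different Haralick values. The sketch below assumes scikit-image's graycomatrix/graycoprops and a simple min-max quantizer, which may differ from the quantization methods compared in the paper.

```python
# Hedged sketch: the same ADC map quantized with different numbers of gray
# levels gives different Haralick values, illustrating why quantization
# settings must be reported; a simple min-max quantizer is assumed here.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_contrast(adc_map, n_levels):
    lo, hi = float(adc_map.min()), float(adc_map.max())
    q = np.round((adc_map - lo) / (hi - lo + 1e-12) * (n_levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=n_levels,
                        symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

# for n in (8, 32, 128):
#     print(n, haralick_contrast(adc_map, n))   # contrast changes with n_levels
```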

  9. Hydrophobicity classification of polymeric materials based on fractal dimension

    Directory of Open Access Journals (Sweden)

    Daniel Thomazini

    2008-12-01

    Full Text Available This study proposes a new method to obtain the hydrophobicity classification (HC) of high voltage polymer insulators. In the proposed method, the HC was analyzed by fractal dimension (fd) and its processing time was evaluated with a view to application in mobile devices. Texture images were created by spraying solutions produced from mixtures of isopropyl alcohol and distilled water in proportions ranging from 0 to 100% volume of alcohol (%AIA). Based on these solutions, the contact angles of the drops were measured and the textures were used as patterns for fractal dimension calculations.
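
    Since the abstract does not specify the fd algorithm, the following is a generic box-counting sketch of how a fractal dimension could be estimated from a thresholded texture image; the box sizes and the thresholding step are illustrative assumptions.

```python
# Hedged sketch: box-counting estimate of the fractal dimension of a
# thresholded texture image; the paper does not specify its fd algorithm,
# so this is a generic stand-in with illustrative box sizes.
import numpy as np

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        h = binary_img.shape[0] // s * s
        w = binary_img.shape[1] // s * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())      # boxes touching the pattern
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# fd = box_counting_dimension(spray_texture > threshold)
```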

  10. Exploiting High Resolution Multi-Seasonal Textural Measures and Spectral Information for Reedbed Mapping

    Directory of Open Access Journals (Sweden)

    Alex Okiemute Onojeghuo

    2016-02-01

    Full Text Available Reedbeds across the UK are amongst the most important habitats for rare and endangered birds, wildlife and organisms. However, over the past century, this valued wetland habitat has experienced a drastic reduction in quality and spatial coverage due to pressures from human-related activities. To this end, conservation organisations across the UK have been charged with the task of conserving and expanding this threatened habitat. With this backdrop, the study aimed to develop a methodology for accurate reedbed mapping through the combined use of multi-seasonal texture measures and spectral information contained in high resolution QuickBird satellite imagery. The key objectives were to determine the most effective single-date (autumn or summer) and multi-seasonal QuickBird imagery suitable for reedbed mapping over the study area; to evaluate the effectiveness of combining multi-seasonal texture measures and spectral information for reedbed mapping using a variety of combinations; and to evaluate the most suitable classification technique for reedbed mapping from three selected classification techniques, namely maximum likelihood classifier, spectral angular mapper and artificial neural network. Using two selected grey-level co-occurrence textural measures (entropy and angular second moment), a series of experiments were conducted using varied combinations of single-date and multi-seasonal QuickBird imagery. Overall, the results indicate the multi-seasonal pansharpened multispectral bands (eight layers) combined with all eight grey-level co-occurrence matrix texture measures (entropy and angular second moment computed using 3 × 3 and 7 × 7 windows) produced the optimal reedbed (76.5%) and overall classification (78.1%) accuracies using the maximum likelihood classifier technique. Using the optimal 16 layer multi-seasonal pansharpened multispectral and texture combined image dataset, a total reedbed area of 9.8 hectares was successfully mapped over the

  11. TEXTURE OF COOKED SPELT WHEAT NOODLES

    Directory of Open Access Journals (Sweden)

    Magdaléna Lacko - Bartošová

    2013-02-01

    Full Text Available At present, there are limited and incomplete data on the ability of spelt to produce alimentary pasta of suitable quality. Noodles are a traditional cereal-based food that is becoming increasingly popular worldwide because of its convenience, nutritional qualities, and palatability. It is generally accepted that texture is the main criterion for assessing the overall quality of cooked noodles. We present selected indicators of noodle texture for three spelt cultivars – Oberkulmer Rotkorn, Rubiota and Franckenkorn – grown in an ecological system at the locality of Dolna Malanta near Nitra. A TA.XT PLUS texture analyzer was used to determine cooked spelt wheat noodle firmness (N) (AACC 66-50). The texture of cooked spelt wheat noodles was also expressed as elasticity (N) and extensibility (mm). Statistical analysis showed a significant influence of the variety and year of growing on the firmness, elasticity and extensibility of cooked noodles. The wholemeal spelt wheat noodles were characterized by lower cutting firmness than the flour noodles. Flour noodles were more tensile than wholemeal noodles. The best elasticity and extensibility among flour noodles was found in those prepared from Rubiota, whereas among wholemeal noodles it was Oberkulmer Rotkorn. Spelt wheat is suitable for noodle production, but here too it is necessary to differentiate between varieties. According to the achieved results, wholemeal noodles prepared from Oberkulmer Rotkorn can be recommended for the noodle industry due to their consistent structure and better texture quality after cooking.

  12. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  13. An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.

    Science.gov (United States)

    Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V

    2018-04-01

    Despite significant advances in computational algorithms and development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than the human tactile perception. Inspired by efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of development of efficient sparse codes for encoding sensory data to improve the energy efficiency. These results have a significance for a wide range of wearable, robotic
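
    A software analogue of the extreme learning machine readout mentioned above is sketched below: a fixed random hidden layer with a least-squares output layer. The spike encoding and the hardware ELM chip are not modelled, and the class and parameter names are illustrative.

```python
# Hedged sketch: a software analogue of an extreme learning machine, i.e. a
# fixed random hidden layer with a least-squares readout; the spike encoding
# and the hardware ELM chip of the study are not modelled.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        H = np.tanh(X @ self.W)                      # random, untrained projection
        self.beta = np.linalg.pinv(H) @ y_onehot     # only the readout is learned
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W) @ self.beta).argmax(axis=1)

# elm = SimpleELM().fit(train_feats, np.eye(10)[train_labels])   # ten graded textures
# accuracy = (elm.predict(test_feats) == test_labels).mean()
```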

  14. Statistical analysis of texture in trunk images for biometric identification of tree species.

    Science.gov (United States)

    Bressane, Adriano; Roveda, José A F; Martins, Antônio C G

    2015-04-01

    The identification of tree species is a key step for sustainable management plans of forest resources, as well as for several other applications that are based on such surveys. However, the presently available techniques are dependent on the presence of tree structures, such as flowers, fruits, and leaves, limiting the identification process to certain periods of the year. Therefore, this article introduces a study on the application of statistical parameters for texture classification of tree trunk images. For that, 540 samples from five Brazilian native deciduous species were acquired and measures of entropy, uniformity, smoothness, asymmetry (third moment), mean, and standard deviation were obtained from the presented textures. Using a decision tree, a biometric species identification system was constructed, resulting in a 0.84 average precision rate for species classification with 0.83 accuracy and 0.79 agreement. Thus, it can be considered that the use of the texture presented in trunk images can represent an important advance in tree identification, since the limitations of the current techniques can be overcome.
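
    The six histogram-based descriptors named in the abstract are standard first-order texture statistics; a hedged sketch of how they could be computed for a gray-scale trunk patch and fed to a decision tree follows. The normalisation constants and tree settings are assumptions, not the authors' exact choices.

```python
# Hedged sketch: the six histogram-based descriptors named above (mean,
# standard deviation, smoothness, third moment, uniformity, entropy) for an
# 8-bit gray-scale trunk patch, followed by a decision tree; the
# normalisation constants and tree depth are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def first_order_stats(patch, bins=256):
    p, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = p / p.sum()
    levels = np.arange(bins)
    mean = (levels * p).sum()
    var = ((levels - mean) ** 2 * p).sum()
    smoothness = 1 - 1 / (1 + var / (bins - 1) ** 2)          # normalised contrast
    third_moment = ((levels - mean) ** 3 * p).sum() / (bins - 1) ** 2
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return [mean, np.sqrt(var), smoothness, third_moment, uniformity, entropy]

# X = np.array([first_order_stats(s) for s in trunk_patches])
# tree = DecisionTreeClassifier(max_depth=5).fit(X[train], species[train])
```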

  15. Texture analysis of speckle in optical coherence tomography images of tissue phantoms

    International Nuclear Information System (INIS)

    Gossage, Kirk W; Smith, Cynthia M; Kanter, Elizabeth M; Hariri, Lida P; Stone, Alice L; Rodriguez, Jeffrey J; Williams, Stuart K; Barton, Jennifer K

    2006-01-01

    Optical coherence tomography (OCT) is an imaging modality capable of acquiring cross-sectional images of tissue using back-reflected light. Conventional OCT images have a resolution of 10-15 μm, and are thus best suited for visualizing tissue layers and structures. OCT images of collagen (with and without endothelial cells) have no resolvable features and may appear to simply show an exponential decrease in intensity with depth. However, examination of these images reveals that they display a characteristic repetitive structure due to speckle. The purpose of this study is to evaluate the application of statistical and spectral texture analysis techniques for differentiating living and non-living tissue phantoms containing various sizes and distributions of scatterers based on speckle content in OCT images. Statistically significant differences between texture parameters and excellent classification rates were obtained when comparing various endothelial cell concentrations ranging from 0 cells/ml to 25 million cells/ml. Statistically significant results and excellent classification rates were also obtained using various sizes of microspheres with concentrations ranging from 0 microspheres/ml to 500 million microspheres/ml. This study has shown that texture analysis of OCT images may be capable of differentiating tissue phantoms containing various sizes and distributions of scatterers

  16. Electrochemically grown rough-textured nanowires

    International Nuclear Information System (INIS)

    Tyagi, Pawan; Postetter, David; Saragnese, Daniel; Papadakis, Stergios J.; Gracias, David H.

    2010-01-01

    Nanowires with a rough surface texture show unusual electronic, optical, and chemical properties; however, there are only a few existing methods for producing these nanowires. Here, we describe two methods for growing both free standing and lithographically patterned gold (Au) nanowires with a rough surface texture. The first strategy is based on the deposition of nanowires from a silver (Ag)-Au plating solution mixture that precipitates an Ag-Au cyanide complex during electrodeposition at low current densities. This complex disperses in the plating solution, thereby altering the nanowire growth to yield a rough surface texture. These nanowires are mass produced in alumina membranes. The second strategy produces long and rough Au nanowires on lithographically patternable nickel edge templates with corrugations formed by partial etching. These rough nanowires can be easily arrayed and integrated with microscale devices.

  17. Texture and deformation mechanism of yttrium

    International Nuclear Information System (INIS)

    Adamesku, R.A.; Grebenkin, S.V.; Stepanenko, A.V.

    1992-01-01

    X-ray pole figure analysis was applied to study the texture and deformation mechanism of pure and commercial polycrystalline yttrium on cold working. It was found that in cast yttrium the texture manifested itself only weakly for both pure and commercial metal. Analysis of the data obtained made it possible to assert that cold deformation of pure yttrium in the initial stage occurred mainly by slip, the role of which decreased at strains higher than 36%. The texture of heavily deformed commercial yttrium contained two components: an 'ideal' basic orientation and an axial one with an angle of inclination of about 20 deg. A twinning mechanism was also revealed to be possible in commercial yttrium

  18. Parametrization of textural patterns in 123I-ioflupane imaging for the automatic detection of Parkinsonism

    International Nuclear Information System (INIS)

    Martinez-Murcia, F. J.; Górriz, J. M.; Ramírez, J.; Moreno-Caballero, M.; Gómez-Río, M.

    2014-01-01

    Purpose: A novel approach to a computer aided diagnosis system for the Parkinson's disease is proposed. This tool is intended as a supporting tool for physicians, based on fully automated methods that lead to the classification of 123I-ioflupane SPECT images. Methods: 123I-ioflupane images from three different databases are used to train the system. The images are intensity and spatially normalized, then subimages are extracted and a 3D gray-level co-occurrence matrix is computed over these subimages, allowing the characterization of the texture using Haralick texture features. Finally, different discrimination estimation methods are used to select a feature vector that can be used to train and test the classifier. Results: Using the leave-one-out cross-validation technique over these three databases, the system achieves results of up to 97.4% accuracy and 99.1% sensitivity, with positive likelihood ratios over 27. Conclusions: The system presents a robust feature extraction method that helps physicians in the diagnosis task by providing objective, operator-independent textural information about 123I-ioflupane images, commonly used in the diagnosis of the Parkinson's disease. Textural feature computation has been optimized by using a subimage selection algorithm, and the discrimination estimation methods used here make the system feature-independent, allowing us to extend it to other databases and diseases

  19. Deformation texture and microtexture development in zircaloy-2

    International Nuclear Information System (INIS)

    Vanitha, C.; Kiran Kumar, M.; Samajdar, I.; Vishvanathan, N.N.; Dey, G.K.; Tewari, R.; Srivastava, D.; Banerjee, S.

    2002-01-01

    In the present study, two starting materials were used: as-cast Zircaloy-2 with a random texture and a finished tube with a relatively stronger starting texture. Specimens of the alloys were hot rolled to various strains at different temperatures. Texture measurements were carried out and represented in the form of the orientation distribution function, which showed sluggish texture development on high-temperature deformation. In the case of the as-cast alloy, texture development with increasing strain at a constant deformation temperature was significant. Upon increasing the working temperature, the rate of overall texture development was found to reduce. This could be due to reduced slip-twin activity, recovery or recrystallization. Microstructural and relative hardening studies were carried out to understand the mechanisms of deformation texture development at the warm and hot working stages. The finished tube, having an initially strong texture, exhibited slower texture development on warm and hot rolling. (author)

  20. Speech-Language and Nutritional Sciences in hospital environment: analysis of terminology of food consistencies classification.

    Science.gov (United States)

    Amaral, Ana Cláudia Fernandes; Rodrigues, Lívia Azevedo; Furlan, Renata Maria Moreira Moraes; Vicente, Laélia Cristina Caseiro; Motta, Andréa Rodrigues

    2015-01-01

    To verify if there is an agreement between speech-language pathologists and nutritionists about the classification of food textures used in hospitals and their opinions about the possible consequences of differences in this classification. This is a descriptive, cross-sectional study with 30 speech-language pathologists and 30 nutritionists who worked in 14 hospitals of the public and/or private network in Belo Horizonte, Brazil. The professionals answered a questionnaire, prepared by the researchers, and classified five different foods, with and without theoretical direction. The data were analyzed using Fisher's exact and Z-tests to compare ratios with a 5% significance level. Both speech-language therapists (100%) and nutritionists (90%) perceive divergence in the classification and, 86.2% and 100% of them, respectively, believe that this difference may affect the patients' recovery. Aspiration risk was the most mentioned problem. For the general classification of food textures, most of the professionals (88.5%) suggested four to six terms. As to the terminology used in the classification of food presented without theoretical direction, the professionals cited 49 terms and agreed only in the solid and liquid classifications. With theoretical direction, the professionals also agreed in the classification of thick and thin paste. Both the professionals recognized divergences in the classification of food textures and the consequent risk of damage to patient's recovery. The use of theoretical direction increased the agreement between these professionals.

  1. A Noise Robust Statistical Texture Model

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Stegmann, Mikkel Bille; Larsen, Rasmus

    2002-01-01

    This paper presents a novel approach to the problem of obtaining a low-dimensional representation of texture (pixel intensity) variation present in a training set after alignment using a Generalised Procrustes analysis. We extend the conventional analysis of training textures in the Active Appearance Models segmentation framework. This is accomplished by augmenting the model with an estimate of the covariance of the noise present in the training data. This results in a more compact model maximising the signal-to-noise ratio, thus favouring subspaces rich on signal, but low on noise...

  2. The structure of surface texture knowledge

    International Nuclear Information System (INIS)

    Yan Wang; Scott, Paul J; Jiang Xiangqian

    2005-01-01

    This research aims to create an intelligent knowledge-based system for engineering and bio-medical engineering surface texture, which will provide expert knowledge of surface texture to link surface function, specification of micro- and nano-geometry through manufacture, and verification. The intelligent knowledge base should be capable of incorporating knowledge from multiple sources (standards, books, experts, etc), adding new knowledge from these sources and still remain a coherent reliable system. A new data model based on category theory will be adopted to construct this system

  3. Biaxially textured articles formed by powder metallurgy

    Science.gov (United States)

    Goyal, Amit; Williams, Robert K.; Kroeger, Donald M.

    2003-08-05

    A biaxially textured alloy article having a magnetism less than pure Ni includes a rolled and annealed compacted and sintered powder-metallurgy preform article, the preform article having been formed from a powder mixture selected from the group of ternary mixtures consisting of: Ni powder, Cu powder, and Al powder, Ni powder, Cr powder, and Al powder; Ni powder, W powder and Al powder; Ni powder, V powder, and Al powder; Ni powder, Mo powder, and Al powder; the article having a fine and homogeneous grain structure; and having a dominant cube oriented {100} orientation texture; and further having a Curie temperature less than that of pure Ni.

  4. Biaxially textured articles formed by powder metallurgy

    Science.gov (United States)

    Goyal, Amit; Williams, Robert K.; Kroeger, Donald M.

    2003-07-29

    A biaxially textured alloy article having a magnetism less than pure Ni includes a rolled and annealed compacted and sintered powder-metallurgy preform article, the preform article having been formed from a powder mixture selected from the group of mixtures consisting of: at least 60 at % Ni powder and at least one of Cr powder, W powder, V powder, Mo powder, Cu powder, Al powder, Ce powder, YSZ powder, Y powder, Mg powder, and RE powder; the article having a fine and homogeneous grain structure; and having a dominant cube oriented {100} orientation texture; and further having a Curie temperature less than that of pure Ni.

  5. Biaxially textured articles formed by powder metallurgy

    Science.gov (United States)

    Goyal, Amit; Williams, Robert K.; Kroeger, Donald M.

    2003-08-26

    A biaxially textured alloy article having a magnetism less than pure Ni includes a rolled and annealed compacted and sintered powder-metallurgy preform article, the preform article having been formed from a powder mixture selected from the group of mixtures consisting of: at least 60 at % Ni powder and at least one of Cr powder, W powder, V powder, Mo powder, Cu powder, Al powder, Ce powder, YSZ powder, Y powder, Mg powder, and RE powder; the article having a fine and homogeneous grain structure; and having a dominant cube oriented {100} orientation texture; and further having a Curie temperature less than that of pure Ni.

  6. Martensitic textures: Multiscale consequences of elastic compatibility

    International Nuclear Information System (INIS)

    Shenoy, S.R.; Lookman, T.; Saxena, A.; Bishop, A.R.

    2001-03-01

    We show that a free energy entirely in the order-parameter strain variable(s), rather than the displacement field, provides a unified understanding of martensitic textures. We use compatibility equations, linking the strain tensor components in the bulk and at interfaces, that induce anisotropic order-parameter strain interactions. These two long-range bulk/interface potentials, together with local compositional fluctuations, drive the formation of global elastic textures. Relaxational simulations show the spontaneous formation (and evolution under stress/temperature quenches) of equal width parallel twins, branched twins, and tweed, including characteristic scaling of twin width with twin length. (author)

  7. Global crystallographic textures obtained by neutron and synchrotron radiation

    International Nuclear Information System (INIS)

    Brokmeier, Heinz-Guenter

    2006-01-01

    Global crystallographic textures belong to the main characteristic parameters of engineering materials. The global crystallographic texture is always the average texture of a well-defined sample volume which is representative for solving practical engineering problems. Thus a beam with high penetration power is needed, available as neutron or high-energy X-ray radiation. Texture type and texture sharpness are of great importance for materials properties such as the deep drawing behaviour, one of the basic techniques in many industries. The advantages and disadvantages of both radiations make them complementary for measuring crystallographic textures in a wide range of materials

  8. Identification of natural images and computer-generated graphics based on statistical and textural features.

    Science.gov (United States)

    Peng, Fei; Li, Jiao-ting; Long, Min

    2015-03-01

    To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is acquired for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the scheme can achieve an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with some existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
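
    Because scikit-learn's SVC wraps LIBSVM, an equivalent classification stage can be sketched as below once the 31 statistical/textural features have been extracted; the kernel and hyperparameters shown are assumptions, not the settings reported in the paper.

```python
# Hedged sketch: scikit-learn's SVC wraps LIBSVM, so the classification stage
# can be reproduced roughly as below once the 31 features are extracted;
# kernel and hyperparameters are assumptions, not the paper's settings.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_images, 31) feature matrix, y: 1 = natural image, 0 = computer-generated
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```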

  9. MULTI-TEMPORAL CLASSIFICATION AND CHANGE DETECTION USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Makuti

    2018-05-01

    Full Text Available In this paper different methodologies for the classification and change detection of UAV image blocks are explored. A UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate for repeated data collections over a changing area, such as a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features have been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method. This was analysed using the overall accuracy, whereby the post-classification approach achieved an accuracy of up to 62.6% and the pre-classification change detection an accuracy of 46.5%. These results represent a first useful indication for future works and developments.
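
    The post-classification strategy described above reduces to classifying each epoch independently and flagging label disagreements; the hedged sketch below uses a plain random forest in place of the RF-unary/fully-connected-CRF combination, so it illustrates the change-detection step only.

```python
# Hedged sketch: post-classification change detection with a plain random
# forest standing in for the RF-unary / fully connected CRF pipeline; only the
# change-detection step is illustrated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_epoch(pixel_features, train_features, train_labels):
    rf = RandomForestClassifier(n_estimators=200).fit(train_features, train_labels)
    return rf.predict(pixel_features)

# labels_t1 = classify_epoch(feats_t1, train_feats_t1, train_labels_t1)
# labels_t2 = classify_epoch(feats_t2, train_feats_t2, train_labels_t2)
# change_map = (labels_t1 != labels_t2).reshape(rows, cols)   # per-pixel change flag
```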

  10. Radiative stability of neutrino-mass textures

    Indian Academy of Sciences (India)

    physics, pp. 647-650. Radiative stability of neutrino-mass textures. M K PARIDA ... A major challenge to particle physics at present is the theoretical understanding of ... A possible origin of two large neutrino mixings, for νe-νμ and νμ-ντ, but small ...

  11. Texture mapping in a distributed environment

    NARCIS (Netherlands)

    Nicolae, Goga; Racovita, Zoea; Telea, Alexandru

    2003-01-01

    This paper presents a tool for texture mapping in a distributed environment. A parallelization method based on the master-slave model is described. The purpose of this work is to lower the image generation time in the complex 3D scenes synthesis process. The experimental results concerning the

  12. Functionality of extrusion--texturized whey proteins.

    Science.gov (United States)

    Onwulata, C I; Konstance, R P; Cooke, P H; Farrell, H M

    2003-11-01

    Whey, a byproduct of the cheesemaking process, is concentrated by processors to make whey protein concentrates (WPC) and isolates (WPI). Only 50% of whey proteins are used in foods. In order to increase their usage, texturizing WPC, WPI, and whey albumin is proposed to create ingredients with new functionality. Extrusion processing texturizes globular proteins by shearing and stretching them into aligned or entangled fibrous bundles. In this study, WPC, WPI, and whey albumin were extruded in a twin screw extruder at approximately 38% moisture content (15.2 ml/min, feed rate 25 g/min) and, at different extrusion cook temperatures, at the same temperature for the last four zones before the die (35, 50, 75, and 100 degrees C, respectively). Protein solubility, gelation, foaming, and digestibility were determined in extrudates. Degree of extrusion-induced insolubility (denaturation) or texturization, determined by lack of solubility at pH 7 for WPI, increased from 30 to 60, 85, and 95% for the four temperature conditions 35, 50, 75, and 100 degrees C, respectively. Gel strength of extruded isolates increased initially 115% (35 degrees C) and 145% (50 degrees C), but gel strength was lost at 75 and 100 degrees C. Denaturation at these melt temperatures had minimal effect on foaming and digestibility. Varying extrusion cook temperature allowed a new controlled rate of denaturation, indicating that a texturized ingredient with a predetermined functionality based on degree of denaturation can be created.

  13. AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    K. Meena

    2013-11-01

    Full Text Available Automatic face recognition remains an interesting but challenging computer vision open problem. Poor illumination is considered one of the major issues, since illumination changes cause large variations in facial features. To resolve this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), normalization chain and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition. But these features are severely affected by lighting changes. Hence the texture-based models Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs) are evaluated under different lighting conditions. In this paper, an illumination-invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the YALE B and CMU-PIE databases containing more than 1500 images. The results demonstrate that MHF-based normalization gives a significant improvement in recognition rate for face images under large illumination variations.

  14. Spin Transport in Ferromagnetic and Antiferromagnetic Textures

    KAUST Repository

    Akosa, Collins Ashu

    2016-01-01

    In this thesis, the current-driven velocity of magnetic textures is related to the ratio between the so-called non-adiabatic torque and magnetic damping. Uncovering the physics underlying these phenomena can lead to the optimal design of magnetic systems

  15. Factors Affecting the Textural Properties of Pork

    Science.gov (United States)

    Holmer, Sean Frederick

    2009-01-01

    Research concerning rate and extent of tenderization has focused on beef or lamb. However, it is critical to understand these processes in pork, especially as retailers move towards minimally processed or non-enhanced product. The objectives of this experiment were to evaluate the textural properties of pork (firmness and tenderness) by examining…

  16. Prague texture segmentation data generator and benchmark

    Czech Academy of Sciences Publication Activity Database

    Mikeš, Stanislav; Haindl, Michal

    2006-01-01

    Roč. 2006, č. 64 (2006), s. 67-68 ISSN 0926-4981 R&D Projects: GA MŠk(CZ) 1M0572; GA AV ČR(CZ) 1ET400750407; GA AV ČR IAA2075302 Institutional research plan: CEZ:AV0Z10750506 Keywords : image segmentation * texture * benchmark * web Subject RIV: BD - Theory of Information

  17. Phosphorus leaching in a soil textural gradient

    DEFF Research Database (Denmark)

    Glæsner, Nadia; Kjærgaard, Charlotte; Rubæk, Gitte Holton

    2009-01-01

    Texture is a major factor influencing mobilization and transport of P in soil owing partly to differences in adsorptive properties, and partly to differences in pore-size distribution and pore organization. Slurry application strategies may be important mitigation measures for reducing agricultur...

  18. On the origin of recrystallization textures

    Indian Academy of Sciences (India)

    In the ON model, it has been argued that a higher frequency of the special ... In FCC metals and alloys like aluminium, cube orientation [(001)⟨100⟩] is the ... in deformation textures of aluminium and hence the classic OG model remains ...

  19. Using a Combination of Spectral and Textural Data to Measure Water-Holding Capacity in Fresh Chicken Breast Fillets

    Directory of Open Access Journals (Sweden)

    Beibei Jia

    2018-02-01

    Full Text Available The aim here was to explore the potential of visible and near-infrared (Vis/NIR) hyperspectral imaging (400–1000 nm) to classify fresh chicken breast fillets into different water-holding capacity (WHC) groups. Initially, the extracted spectra and image textural features, as well as the mixed data of the two, were used to develop partial least square-discriminant analysis (PLS-DA) classification models. Smoothing, a first derivative process, and principal component analysis (PCA) were carried out sequentially on the mean spectra of all samples to deal with baseline offsets and identify outlier data. Six samples located outside the 95% confidence ellipses in the score plot were defined as outliers. A PLS-DA model based on the outlier-free spectra provided a correct classification rate (CCR) of 78% in the prediction set. Then, seven optimal wavelengths selected using a successive projections algorithm (SPA) were used to develop a simplified PLS-DA model that obtained a slightly reduced CCR of 73%. Moreover, the gray-level co-occurrence matrix (GLCM) was implemented on the first principal component image (carrying 98.13% of the variance of the hyperspectral image) to extract textural features (contrast, correlation, energy, and homogeneity). The CCR of the model developed using textural variables was less optimistic, with a value of 59%. Compared to the results of models based on spectral or textural data individually, the performance of the model based on the mixed data of optimal spectral and textural features was the best, with an improved CCR of 86%. The results showed that the spectral and textural data of hyperspectral images can be integrated in order to measure and classify the WHC of fresh chicken breast fillets.
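
    PLS-DA as used above is commonly implemented as PLS regression against one-hot class indicators; a hedged sketch with scikit-learn follows, where the number of latent components and the argmax decision rule are generic assumptions rather than the calibration reported in the paper.

```python
# Hedged sketch: PLS-DA implemented as PLS regression against one-hot class
# indicators, with argmax as the decision rule; the number of components is a
# generic assumption, not the calibration reported in the study.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_fit_predict(X_train, y_train, X_test, n_components=10):
    classes = np.unique(y_train)
    Y = (y_train[:, None] == classes[None, :]).astype(float)   # one-hot targets
    pls = PLSRegression(n_components=n_components).fit(X_train, Y)
    return classes[np.argmax(pls.predict(X_test), axis=1)]

# y_pred = plsda_fit_predict(features_train, whc_train, features_test)
# ccr = (y_pred == whc_test).mean()   # correct classification rate
```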

  20. Evaluation of Liver Fibrosis Using Texture Analysis on Combined-Contrast-Enhanced Magnetic Resonance Images at 3.0T

    Directory of Open Access Journals (Sweden)

    Takeshi Yokoo

    2015-01-01

    Full Text Available Purpose. To noninvasively assess liver fibrosis using combined-contrast-enhanced (CCE) magnetic resonance imaging (MRI) and texture analysis. Materials and Methods. In this IRB-approved, HIPAA-compliant prospective study, 46 adults with newly diagnosed HCV infection and recent liver biopsy underwent CCE liver MRI following intravenous administration of superparamagnetic iron oxides (ferumoxides) and gadolinium DTPA (gadopentetate dimeglumine). The image texture of the liver was quantified in regions-of-interest by calculating 165 texture features. Liver biopsy specimens were stained with Masson trichrome and assessed qualitatively (METAVIR fibrosis score) and quantitatively (% collagen stained area). Using the L1 regularization path algorithm, two texture-based multivariate linear models were constructed, one for qualitative and the other for quantitative histology prediction. The prediction performance of each model was assessed using receiver operating characteristic (ROC) and correlation analyses. Results. The texture-based predicted fibrosis score significantly correlated with qualitative (r=0.698, P<0.001) and quantitative (r=0.757, P<0.001) histology. The prediction model for qualitative histology had 0.814–0.976 area under the curve (AUC), 0.659–1.000 sensitivity, 0.778–0.930 specificity, and 0.674–0.935 accuracy, depending on the binary classification threshold. The prediction model for quantitative histology had 0.742–0.950 AUC, 0.688–1.000 sensitivity, 0.679–0.857 specificity, and 0.696–0.848 accuracy, depending on the binary classification threshold. Conclusion. CCE MRI and texture analysis may permit noninvasive assessment of liver fibrosis.
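
    The L1 regularization path modelling described above can be approximated with a cross-validated lasso, which both selects a sparse subset of the 165 texture features and yields a continuous fibrosis prediction for ROC analysis; LassoCV and the binarisation threshold in the sketch below are stand-in assumptions, not the authors' implementation.

```python
# Hedged sketch: a cross-validated lasso as a stand-in for the L1
# regularization path algorithm, selecting a sparse subset of the 165 texture
# features and producing a continuous fibrosis prediction for ROC analysis.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score

def sparse_texture_model(texture_features, histology_score):
    """Fit an L1-penalised linear model; return the model and the kept features."""
    model = LassoCV(cv=5).fit(texture_features, histology_score)
    return model, np.flatnonzero(model.coef_)

# model, kept = sparse_texture_model(X, collagen_percent)
# auc = roc_auc_score(collagen_percent > threshold, model.predict(X))
```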

  1. Land-Use and Land-Cover Mapping Using a Gradable Classification Method

    Directory of Open Access Journals (Sweden)

    Keigo Kitada

    2012-05-01

    Full Text Available Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that the land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs the original spectral data and the lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as the “gradable classification method” is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from the original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
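    For readers unfamiliar with the lacunarity index mentioned above, the sketch below implements the standard gliding-box formulation, Λ(r) = E[M²]/E[M]², where M is the pixel "mass" inside an r×r box slid over the image. The random binary patch and the box sizes are placeholders; the paper's own window sizes and image data are not reproduced.

```python
# Gliding-box lacunarity sketch for a binary (or gray-level) image patch.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lacunarity(image, box_size):
    boxes = sliding_window_view(image, (box_size, box_size))
    mass = boxes.sum(axis=(-2, -1)).ravel()
    return mass.var() / mass.mean() ** 2 + 1.0   # equivalent to E[M^2] / E[M]^2

# Illustrative use on a random binary patch, over a few box sizes.
rng = np.random.default_rng(2)
patch = (rng.random((64, 64)) > 0.7).astype(float)
for r in (2, 4, 8, 16):
    print(r, round(lacunarity(patch, r), 3))
```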

  2. Spin Transport in Ferromagnetic and Antiferromagnetic Textures

    KAUST Repository

    Akosa, Collins A.

    2016-12-07

    In this dissertation, we provide an accurate description of spin transport in magnetic textures and, in particular, we investigate in detail the nature of spin torque and magnetic damping in such systems. Indeed, as will be further discussed in this thesis, the current-driven velocity of magnetic textures is related to the ratio between the so-called non-adiabatic torque and magnetic damping. Uncovering the physics underlying these phenomena can lead to the optimal design of magnetic systems with improved efficiency. We identified three interesting classes of systems which have attracted enormous research interest. (i) Magnetic textures in systems with broken inversion symmetry: We investigate the nature of magnetic damping in non-centrosymmetric ferromagnets. Based on phenomenological and microscopic derivations, we show that the magnetic damping becomes chiral, i.e. depends on the chirality of the magnetic texture. (ii) Ferromagnetic domain walls, skyrmions and vortices: We address the physics of spin transport in sharp disordered magnetic domain walls and vortex cores. We demonstrate that upon spin-independent scattering, the non-adiabatic torque can be significantly enhanced. Such an enhancement is larger for vortex cores than for transverse domain walls. We also show that the topological spin currents flowing in these structures dramatically enhance the non-adiabaticity, an effect unique to non-trivial topological textures. (iii) Antiferromagnetic skyrmions: We extend this study to antiferromagnetic skyrmions and show that such an enhanced topological torque also exists in these systems. Even more interestingly, while this non-adiabatic torque influences the undesirable transverse velocity of ferromagnetic skyrmions, in antiferromagnetic skyrmions the topological non-adiabatic torque directly determines the longitudinal velocity. As a consequence, scaling down the antiferromagnetic skyrmion results in a much more efficient spin torque.

  3. Texture of semi-solids : sensory flavor-texture interactions for custard desserts

    NARCIS (Netherlands)

    Wijk, de R.A.; Rasing, F.; Wilkinson, C.L.

    2003-01-01

    Possible interactions between flavor and oral texture sensations were investigated for four flavorants, diacetyl, benzaldehyde, vanillin, and caffeine, added in two concentrations to model vanilla custard desserts. The flavorants affected viscosities and resulted in corresponding changes in

  4. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for building three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained by prior processing, the geometric projection between object space and image space is computed through the collinearity equations to obtain the three-dimensional structure and its texture information. An optimization algorithm then selects the optimal texture for each object surface, enabling automatic extraction of building facade textures and occlusion handling for densely built-up areas. Reconstruction results on real image textures show that the method offers a high degree of automation, vivid visual effect, and low cost, and provides an effective means for rapid and widespread reconstruction of real textures for 3D city models.

  5. Application des ondelettes à l'analyse de texture et à l'inspection de surface industrielle

    Science.gov (United States)

    Wolf, D.; Husson, R.

    1993-11-01

    This paper presents a method of texture analysis based on multiresolution wavelet analysis. We discuss the problem of the theoretical and experimental choice of the wavelet. Statistical modelling of the wavelet images is treated, and it results in considering the statistical distribution to be a generalized Gaussian law. An algorithm for texture classification is developed based on the variances of the different wavelet images. An industrial application of this algorithm illustrates its quality and demonstrates its aptitude for the automation of certain tasks in industrial control.
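    This record is close to the core topic of the listing, so a small sketch may help: a texture patch is decomposed with a 2-D discrete wavelet transform and the variance of each detail sub-band is used as the feature vector, in the spirit of the algorithm described above. The choice of 'db4', three levels, and the patch size are assumptions, not the wavelet selected in the paper, and the generalized-Gaussian modelling step is not shown.

```python
# Wavelet sub-band variance features for texture classification (sketch).
import numpy as np
import pywt

def wavelet_variance_features(patch, wavelet="db4", levels=3):
    coeffs = pywt.wavedec2(patch, wavelet, level=levels)
    features = []
    for cH, cV, cD in coeffs[1:]:            # skip the approximation band
        features.extend([cH.var(), cV.var(), cD.var()])
    return np.array(features)

rng = np.random.default_rng(3)
patch = rng.normal(size=(128, 128))          # stand-in texture patch
print(wavelet_variance_features(patch).shape)   # 3 levels x 3 detail bands = 9 features
```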

  6. Texture Retrieval from VHR Optical Remote Sensed Images Using the Local Extrema Descriptor with Application to Vineyard Parcel Detection

    Directory of Open Access Journals (Sweden)

    Minh-Tan Pham

    2016-04-01

    Full Text Available In this article, we develop a novel method for the detection of vineyard parcels in agricultural landscapes based on very high resolution (VHR) optical remote sensing images. Our objective is to perform texture-based image retrieval and supervised classification. To do that, the local textural and structural features inside each image are taken into account to measure its similarity to other images. In fact, VHR images usually involve a variety of local textures and structures that may verify a weak stationarity hypothesis. Hence, an approach based only on characteristic points, not on all pixels of the image, is supposed to be relevant. This work proposes to construct the local extrema-based descriptor (LED) by using the local maximum and local minimum pixels extracted from the image. The LED descriptor is formed based on the radiometric, geometric and gradient features from these local extrema. We first exploit the proposed LED descriptor for the retrieval task to evaluate its performance on texture discrimination. Then, it is embedded into a supervised classification framework to detect vine parcels using VHR satellite images. Experiments performed on VHR panchromatic PLEIADES image data prove the effectiveness of the proposed strategy. Compared to state-of-the-art methods, an enhancement of about 7% in retrieval rate is achieved. For the detection task, about 90% of vineyards are correctly detected.

  7. Anisotropic 3D texture synthesis with application to volume rendering

    DEFF Research Database (Denmark)

    Laursen, Lasse Farnung; Ersbøll, Bjarne Kjær; Bærentzen, Jakob Andreas

    2011-01-01

    … images using a 12.1 megapixel camera. Next, we extend the volume rendering pipeline by creating a transfer function which yields not only color and opacity from the input intensity, but also texture coordinates for our synthesized 3D texture. Thus, we add texture to the volume-rendered images. This method is applied to a high-quality visualization of a pig carcass, where samples of meat, bone, and fat have been used to produce the anisotropic 3D textures.

  8. Quantifying the ability of environmental parameters to predict soil texture fractions using regression-tree model with GIS and LIDAR data

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Bou Kheir, Rania; Greve, Mette Balslev

    2012-01-01

    Soil texture is an important soil characteristic that drives crop production and field management, and is the basis for environmental monitoring (including soil quality and sustainability, hydrological and ecological processes, and climate change simulations). The combination of coarse sand, fine sand, silt, and clay in soil determines its textural classification. This study used Geographic Information Systems (GIS) and regression-tree modeling to precisely quantify the relationships between the soil texture fractions and different environmental parameters on a national scale, and to detect … precipitation, seasonal precipitation to statistically explain soil texture fraction field/laboratory measurements (45,224 sampling sites) in the area of interest (Denmark). The strongest relationships developed were associated with clay and silt, with explained variance equal to 60%, followed by coarse sand (54 …

  9. Texture analysis for mapping Tamarix parviflora using aerial photographs along the Cache Creek, California.

    Science.gov (United States)

    Ge, Shaokui; Carruthers, Raymond; Gong, Peng; Herrera, Angelica

    2006-03-01

    Natural color photographs were used to detect the coverage of saltcedar, Tamarix parviflora, along a 40 km portion of Cache Creek near Woodland, California. Historical aerial photographs from 2001 were retrospectively evaluated and compared with actual ground-based information to assess accuracy of the assessment process. The color aerial photos were sequentially digitized, georeferenced, classified using color and texture methods, and mosaiced into maps for field use. Eight types of ground cover (Tamarix, agricultural crops, roads, rocks, water bodies, evergreen trees, non-evergreen trees and shrubs (excluding Tamarix)) were selected from the digitized photos for separability analysis and supervised classification. Due to color similarities among the eight cover types, the average separability, based originally only on color, was very low. The separability was improved significantly through the inclusion of texture analysis. Six types of texture measures with various window sizes were evaluated. The best texture was used as an additional feature along with the color, for identifying Tamarix. A total of 29 color photographs were processed to detect Tamarix infestations using a combination of the original digital images and optimal texture features. It was found that the saltcedar covered a total of 3.96 km(2) (396 hectares) within the study area. For the accuracy assessment, 95 classified samples from the resulting map were checked in the field with a global position system (GPS) unit to verify Tamarix presence. The producer's accuracy was 77.89%. In addition, 157 independently located ground sites containing saltcedar were compared with the classified maps, producing a user's accuracy of 71.33%.

  10. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  11. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing volume of text data places higher demands on the performance of text categorization that existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special spatial effect of Chinese text, where words...

  12. Computer-assisted liver graft steatosis assessment via learning-based texture analysis.

    Science.gov (United States)

    Moccia, Sara; Mattos, Leonardo S; Patrini, Ilaria; Ruperti, Michela; Poté, Nicolas; Dondero, Federica; Cauchy, François; Sepulveda, Ailton; Soubrane, Olivier; De Momi, Elena; Diaspro, Alberto; Cesaretti, Manuela

    2018-05-23

    Fast and accurate graft hepatic steatosis (HS) assessment is of primary importance for lowering liver dysfunction risks after transplantation. Histopathological analysis of biopsied liver is the gold standard for assessing HS, despite being invasive and time consuming. Due to the short time available between liver procurement and transplantation, surgeons perform HS assessment through clinical evaluation (medical history, blood tests) and visual analysis of the liver texture. Despite visual analysis being recognized as challenging in the clinical literature, few efforts have been invested to develop computer-assisted solutions for HS assessment. The objective of this paper is to investigate the automatic analysis of liver texture with machine learning algorithms to automate the HS assessment process and offer support for the surgeon's decision process. Forty RGB images of forty different donors were analyzed. The images were captured with an RGB smartphone camera in the operating room (OR). Twenty images refer to livers that were accepted and 20 to discarded livers. Fifteen randomly selected liver patches were extracted from each image. Patch size was [Formula: see text]. This way, a balanced dataset of 600 patches was obtained. Intensity-based features (INT), histogram of local binary pattern (LBP), and gray-level co-occurrence matrix (GLCM) features were investigated. Blood-sample features (Blo) were included in the analysis, too. Supervised and semisupervised learning approaches were investigated for feature classification. Leave-one-patient-out cross-validation was performed to estimate the classification performance. With the best-performing feature set ([Formula: see text]) and semisupervised learning, the achieved classification sensitivity, specificity, and accuracy were 95, 81, and 88%, respectively. This research represents the first attempt to use machine learning and automatic texture analysis of RGB images from ubiquitous smartphone
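    A sketch of the patch-level pipeline just described: uniform LBP histograms per patch, a supervised classifier, and leave-one-patient-out cross-validation. Patch loading from images, the intensity and blood-sample features, and the semi-supervised variant are omitted; all arrays, the SVM choice, and the LBP radius/points are placeholders rather than the study's settings.

```python
# LBP-histogram features with leave-one-patient-out cross-validation (sketch).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def lbp_histogram(gray_patch, P=8, R=1):
    codes = local_binary_pattern(gray_patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(4)
patches = rng.integers(0, 256, size=(600, 100, 100), dtype=np.uint8)  # 40 donors x 15 patches
X = np.array([lbp_histogram(p) for p in patches])
y = np.repeat([0, 1], 300)                       # accepted vs. discarded livers (dummy)
donor_id = np.repeat(np.arange(40), 15)          # grouping for the CV split

scores = cross_val_score(SVC(), X, y, cv=LeaveOneGroupOut(), groups=donor_id)
print("leave-one-patient-out accuracy:", scores.mean())
```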

  13. Screening Mississippi River Levees Using Texture-Based and Polarimetric-Based Features from Synthetic Aperture Radar Data

    Directory of Open Access Journals (Sweden)

    Lalitha Dabbiru

    2017-03-01

    Full Text Available This article reviews the use of synthetic aperture radar remote sensing data for earthen levee mapping with an emphasis on finding slump slides on the levees. Earthen levees built on the natural levees parallel to the river channel are designed to protect large areas of populated and cultivated land in the United States from flooding. One of the signs of potential impending levee failure is the appearance of slump slides. On-site inspection of levees is expensive and time-consuming; therefore, a need to develop efficient techniques based on remote sensing technologies is mandatory to prevent failures under flood loading. Analysis of multi-polarized radar data is one of the viable tools for detecting the problem areas on the levees. In this study, we develop methods to detect anomalies on the levee, such as slump slides, and give levee managers new tools to prioritize their tasks. This paper presents results of applying the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) quad-polarized L-band data to detect slump slides on earthen levees. The study area encompasses a portion of the levees of the lower Mississippi River in the United States. In this paper, we investigate the performance of polarimetric and texture features for efficient levee classification. Texture features derived from the gray-level co-occurrence matrix (GLCM) and the discrete wavelet transform were computed and analyzed for efficient levee classification. The pixel-based polarimetric decomposition features, such as entropy, anisotropy, and scattering angle, were also computed and applied to the support vector machine classifier to characterize the radar imagery, and the results were compared with texture-based classification. Our experimental results showed that inclusion of textural features derived from the SAR data using the discrete wavelet transform (DWT) features and GLCM features provided
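    A rough sketch of the feature-level combination mentioned above: GLCM statistics plus DWT sub-band energies computed per image patch and fed to an SVM. The distances, angles, wavelet, patch size, and labels are illustrative assumptions, not the settings used in the study, and the polarimetric decomposition features are not included.

```python
# GLCM + DWT texture features feeding an SVM classifier (sketch).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops   # 'greycomatrix' in older skimage
from sklearn.svm import SVC

def texture_features(patch_u8):
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "correlation", "energy", "homogeneity")]
    cA, (cH, cV, cD) = pywt.dwt2(patch_u8.astype(float), "haar")
    dwt_feats = [np.mean(c ** 2) for c in (cH, cV, cD)]      # sub-band energies
    return np.array(glcm_feats + dwt_feats)

rng = np.random.default_rng(5)
patches = rng.integers(0, 256, size=(200, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=200)               # slide vs. healthy levee (dummy)
X = np.array([texture_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```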

  14. A dynamical system approach to texel identification in regular textures

    NARCIS (Netherlands)

    Grigorescu, S.E.; Petkov, N.; Loncaric, S; Neri, A; Babic, H

    2003-01-01

    We propose a texture analysis method based on Rényi’s entropies. The method aims at identifying texels in regular textures by searching for the smallest window through which the minimum number of different visual patterns is observed when moving the window over a given texture. The experimental

  15. Texture Analysis Using Rényi’s Generalized Entropies

    NARCIS (Netherlands)

    Grigorescu, S.E.; Petkov, N.

    2003-01-01

    We propose a texture analysis method based on Rényi’s generalized entropies. The method aims at identifying texels in regular textures by searching for the smallest window through which the minimum number of different visual patterns is observed when moving the window over a given texture. The

  16. TEXTURE ANALYSIS OF EXTRUDED APPLE POMACE - WHEAT SEMOLINA BLENDS

    Directory of Open Access Journals (Sweden)

    Ivan Bakalov

    2016-03-01

    Full Text Available Apple pomace - wheat semolina blends were extruded in a laboratory single-screw extruder (Brabender 20 DN, Germany). The effects of apple pomace content, moisture content, screw speed, and temperature of the final cooking zone on the texture of the extrudates were studied applying response surface methodology. The texture characteristics of the extrudates were measured using a TA.XT Plus Texture Analyser, Stable Micro Systems.

  17. Texture Control During the Manufacturing of Nonoriented Electrical Steels

    NARCIS (Netherlands)

    Kestens, L.; Jacobs, S.

    2008-01-01

    Methods of modern quantitative texture analysis are applied in order to characterize the crystallographic texture of various non-oriented electrical steel grades in view of their relation with the magnetic properties of the steel sheet. A texture parameter is defined which quantifies the density of

  18. Evaluation of texture differences among varieties of cooked quinoa

    Science.gov (United States)

    Texture is one of the most significant factors for consumers’ experience of foods. Texture difference of cooked quinoa among thirteen different varieties was studied. Correlations between the texture and seed composition, seed characteristics, cooking qualities, flour pasting properties and flour th...

  19. Line Laser and Triple Laser Quantification of the Difference in International Roughness Index between Textured and Non-Textured Strips

    Science.gov (United States)

    2017-07-01

    Practitioners have often wondered whether, during ride measurement with inertial devices, the motion of the laser through pavement texture introduces non-representative values of international roughness index (IRI), particularly in certain textures. ...

  20. Cardiac arrhythmia beat classification using DOST and PSO tuned SVM.

    Science.gov (United States)

    Raj, Sandeep; Ray, Kailash Chandra; Shankar, Om

    2016-11-01

    The increase in the number of deaths due to cardiovascular diseases (CVDs) has gained significant attention from the study of electrocardiogram (ECG) signals. These ECG signals are studied by experienced cardiologists for accurate and proper diagnosis, but this becomes difficult and time-consuming for long-term recordings. Various signal processing techniques have been studied to analyze the ECG signal, but they have limitations due to the non-stationary behavior of ECG signals. Hence, this study aims to improve the classification accuracy rate and provide an automated diagnostic solution for the detection of cardiac arrhythmias. The proposed methodology consists of four stages, i.e. filtering, R-peak detection, feature extraction and classification. In this study, a wavelet-based approach is used to filter the raw ECG signal, whereas the Pan-Tompkins algorithm is used for detecting the R-peak inside the ECG signal. In the feature extraction stage, the discrete orthogonal Stockwell transform (DOST) is presented for an efficient time-frequency representation (i.e. morphological descriptors) of a time-domain signal, retaining the absolute phase information to distinguish ECG signals with various non-stationary behaviors. Moreover, these morphological descriptors are further reduced to a lower-dimensional space using principal component analysis and combined with the dynamic features (i.e. based on the RR-intervals of the ECG signal) of the input signal. This combination of two different kinds of descriptors represents the feature set of an input signal that is utilized for classification into subsequent categories by employing PSO-tuned support vector machines (SVM). The proposed methodology is validated on the benchmark MIT-BIH arrhythmia database and evaluated under two assessment schemes, yielding an improved overall accuracy of 99.18% for sixteen classes in the category-based scheme and 89.10% for five classes (mapped according to the AAMI standard) in the patient
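    A rough sketch of the PSO-tuned SVM idea referenced above: particles explore (log10 C, log10 gamma) and the fitness is cross-validated accuracy on the feature vectors. The swarm settings, search ranges, and the stand-in feature matrix are illustrative only; the DOST feature extraction and the paper's exact PSO variant are not reproduced.

```python
# Particle swarm optimization of SVM hyper-parameters (sketch).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 20))            # stand-in for DOST + RR-interval features
y = rng.integers(0, 5, size=300)          # stand-in beat classes

def fitness(pos):
    C, gamma = 10.0 ** pos                # decode particle position
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter, w, c1, c2 = 10, 15, 0.7, 1.5, 1.5
pos = rng.uniform([-1, -4], [3, 0], size=(n_particles, 2))   # log10(C), log10(gamma)
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -4], [3, 0])
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_fit.max())
```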

  1. Zonal velocity and texture in the jovian atmosphere inferred from Voyager images

    International Nuclear Information System (INIS)

    Ingersoll, A.P.; Beebe, R.F.; Collins, S.A.; Hunt, G.E.; Mitchell, J.L.; Muller, P.; Smith, B.A.; Terrile, R.J.

    1979-01-01

    The first report (Smith et al., Science 204: 951 (1979)) of the Voyager imaging science team following the 5 March 1979 encounter described Jupiter's changing appearance at resolutions down to 10 km, over intervals as small as 1 h. Examples of small-scale convection, rapid variations of features, and complex interactions of closed vortices were presented. This article extends these results in two ways. First, measurements of the latitudinal profile of zonal (eastward) velocity are presented, from which the absolute vorticity gradient is estimated. Second, a classification scheme based on texture, i.e. the patterns of small features visible at resolutions of 100 km or better, is presented. (UK)

  2. A new texture descriptor based on local micro-pattern for detection of architectural distortion in mammographic images

    Science.gov (United States)

    de Oliveira, Helder C. R.; Moraes, Diego R.; Reche, Gustavo A.; Borges, Lucas R.; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.

    2017-03-01

    This paper presents a new local micro-pattern texture descriptor for the detection of Architectural Distortion (AD) in digital mammography images. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect compared to microcalcifications and masses, and is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automatic detection of AD, but their performance is still unsatisfactory. The proposed descriptor, Local Mapped Pattern (LMP), is a generalization of the Local Binary Pattern (LBP), which is considered one of the most powerful feature descriptors for texture classification in digital images. Compared to LBP, the LMP descriptor captures more effectively the minor differences between the local image pixels. Moreover, LMP is a parametric model which can be optimized for the desired application. In our work, the LMP performance was compared to the LBP and four Haralick's texture descriptors for the classification of 400 regions of interest (ROIs) extracted from clinical mammograms. ROIs were selected and divided into four classes: AD, normal tissue, microcalcifications and masses. Feature vectors were used as input to a multilayer perceptron neural network, with a single hidden layer. Results showed that LMP is a good descriptor to distinguish AD from other anomalies in digital mammography. LMP performance was slightly better than the LBP and comparable to Haralick's descriptors (mean classification accuracy = 83%).

  3. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed-tomography (CT) portal phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis, the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.
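    As a quick reference for the segmentation metric quoted above, the sketch below computes the Dice overlap between a predicted mask and a reference mask; the volumes here are dummy arrays, not data from the study.

```python
# Minimal Dice-overlap check between a predicted and a reference binary mask.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

rng = np.random.default_rng(7)
reference = rng.random((40, 128, 128)) > 0.5
predicted = reference.copy()
predicted[:, :4, :] = ~predicted[:, :4, :]        # perturb a few rows of every slice
print("Dice index:", round(dice(predicted, reference), 3))
```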

  4. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2015-01-01

    Full Text Available The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the texture properties of the multi-resolution spectrogram of emotional speech should form a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied in the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotional recognition in speech.

  5. Efficient rolling texture predictions and texture-sensitive thermomechanical properties of α-uranium foils

    Science.gov (United States)

    Steiner, Matthew A.; Klein, Robert W.; Calhoun, Christopher A.; Knezevic, Marko; Garlea, Elena; Agnew, Sean R.

    2017-11-01

    Finite element (FE) analysis was used to simulate the strain history of an α-uranium foil during cold straight-rolling, with the sheet modeled as an isotropic elastoplastic continuum. The resulting strain history was then used as input for a viscoplastic self-consistent (VPSC) polycrystal plasticity model to simulate crystallographic texture evolution. Mid-plane textures predicted via the combined FE→VPSC approach show alignment of the (010) poles along the rolling direction (RD), and the (001) poles along the normal direction (ND) with a symmetric splitting along RD. The surface texture is similar to that of the mid-plane, but with a shear-induced asymmetry that favors one of the RD split features of the (001) pole figure. Both the mid-plane and surface textures predicted by the FE→VPSC approach agree with published experimental results for cold straight-rolled α-uranium plates, as well as predictions made by a more computationally intensive full-field crystal plasticity based finite element model. α-uranium foils produced by cold-rolling must typically undergo a recrystallization anneal to restore ductility prior to their final application, resulting in significant texture evolution from the cold-rolled plate deformation texture. Using the texture measured from a foil in the final recrystallized state, coefficients of thermal expansion and the elastic stiffness tensors were calculated using a thermo-elastic self-consistent model, and the anisotropic yield loci and flow curves along the RD, TD, and ND were predicted using the VPSC code.

  6. SURFACE TEXTURE ANALYSIS FOR FUNCTIONALITY CONTROL

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Andreasen, Jan Lasson; Tosello, Guido

    This document is used in connection with three exercises of 3 hours duration as a part of the course VISION ONLINE – One week course on Precision & Nanometrology. The exercises concern surface texture analysis for functionality control, in connection with three different case stories. This document contains a short description of each case story, 3-D roughness parameter analysis and the relation with the product’s functionality.

  7. Composite biaxially textured substrates using ultrasonic consolidation

    Science.gov (United States)

    Blue, Craig A; Goyal, Amit

    2013-04-23

    A method of forming a composite sheet includes disposing an untextured metal or alloy first sheet in contact with a second sheet in an aligned opposing position; bonding the first sheet to the second sheet by applying an oscillating ultrasonic force to at least one of the first sheet and the second sheet to form an untextured intermediate composite sheet; and annealing the untextured intermediate composite sheet at a temperature lower than a primary re-crystallization temperature of the second sheet and higher than a primary re-crystallization temperature of the first sheet to convert the untextured first sheet into a cube textured sheet, wherein the cube texture is characterized by a φ-scan having a FWHM of no more than 15° in all directions, the second sheet remaining untextured, to form a composite sheet.

  8. Dropwise condensation on inclined textured surfaces

    CERN Document Server

    Khandekar, Sameer

    2014-01-01

    Dropwise Condensation on Textured Surfaces presents a holistic framework for understanding dropwise condensation through mathematical modeling and meaningful experiments. The book presents a review of the subject required to build up models as well as to design experiments. Emphasis is placed on the effect of physical and chemical texturing and their effect on the bulk transport phenomena. Application of the model to metal vapor condensation is of special interest. The unique behavior of liquid metals, with their low Prandtl number and high surface tension, is also discussed. The model predicts instantaneous drop size distribution for a given level of substrate subcooling and derives local as well as spatio-temporally averaged heat transfer rates and wall shear stress.

  9. Texture-based analysis of COPD

    DEFF Research Database (Denmark)

    Sørensen, Lauge; Nielsen, Mads; Lo, Pechin Chien Pau

    2012-01-01

    This study presents a fully automatic, data-driven approach for texture-based quantitative analysis of chronic obstructive pulmonary disease (COPD) in pulmonary computed tomography (CT) images. The approach uses supervised learning where the class labels are, in contrast to previous work, based on measured lung function instead of on manually annotated regions of interest (ROIs). A quantitative measure of COPD is obtained by fusing COPD probabilities computed in ROIs within the lung fields, where the individual ROI probabilities are computed using a k nearest neighbor (kNN) classifier. The distance … and subsequently applied to classify 200 independent images from the same screening trial. The texture-based measure was significantly better at discriminating between subjects with and without COPD than were the two most common quantitative measures of COPD in the literature, which are based on density

  10. Texture of fermion mass matrices in partially unified theories

    International Nuclear Information System (INIS)

    Dutta, B.; Texas Univ., Austin, TX; Nandi, S.; Texas Univ., Austin, TX

    1996-01-01

    We investigate the texture of fermion mass matrices in theories with partial unification (for example, SU(2)_L x SU(2)_R x SU(4)_c) at a scale of ~10^12 GeV. Starting with the low energy values of the masses and the mixing angles, we find only two viable textures with at most four texture zeros. One of these corresponds to a somewhat modified Fritzsch texture. A theoretical derivation of these textures leads to new interesting relations among the masses and the mixing angles. 13 refs

  11. Texture analyses of Sauropod dinosaur bones from Tendaguru

    International Nuclear Information System (INIS)

    Pyzalla, A.R.; Sander, P.M.; Hansen, A.; Ferreyro, R.; Yi, S.-B.; Stempniewicz, M.; Brokmeier, H.-G.

    2006-01-01

    The apatite texture of fossil Brachiosaurus brancai and Barosaurus africanus sauropod bones from the excavation site at Tendaguru, Tanzania, was characterized by neutron diffraction pole figures. The results obtained reveal predominantly <0 0 0 1>-fibre textures of the apatite; the fibre direction coincides with the longitudinal direction of the long bones of the skeletons. Neutron pole figures further indicate that other texture types may also be present. Texture strength is similar to dinosaur tendons and contemporary turkey tendon studied by others. Variations of texture strength across the bone wall cross-sections are not significantly large

  12. Texture analyses of Sauropod dinosaur bones from Tendaguru

    Energy Technology Data Exchange (ETDEWEB)

    Pyzalla, A.R. [TU Wien, Institute of Material Science and Technology, Karlsplatz 13-308, A-1040 Vienna (Austria) and MPI fuer Eisenforschung GmbH, Max-Planck-Str. 1, D-40237 Duesseldorf (Germany)]. E-mail: pyzalla@mpie.de; Sander, P.M. [University of Bonn, Institute of Palaeontology, Nusseallee, D-53115 Bonn (Germany); Hansen, A. [TU Clausthal, Institute of Materials Science and Engineering. A, Structural Materials: Properties, Microstructure and Processing and GKSS Research Centre Geesthacht GmbH, Geesthacht, Max-Planck-Str.1, D-21502 Geesthacht (Germany); Ferreyro, R. [TU Wien, Institute of Material Science and Technology, Karlsplatz 13-308, A-1040 Vienna (Austria); Yi, S.-B. [TU Clausthal, Institute of Materials Science and Engineering. A, Structural Materials: Properties, Microstructure and Processing and GKSS Research Centre Geesthacht GmbH, Geesthacht, Max-Planck-Str.1, D-21502 Geesthacht (Germany); MPI fuer Eisenforschung GmbH, Max-Planck-Str. 1, D-40237 Duesseldorf (Germany); Stempniewicz, M. [TU Wien, Institute of Material Science and Technology, Karlsplatz 13-308, A-1040 Vienna (Austria); Brokmeier, H.-G. [TU Clausthal, Institute of Materials Science and Engineering. A, Structural Materials: Properties, Microstructure and Processing and GKSS Research Centre Geesthacht GmbH, Geesthacht, Max-Planck-Str.1, D-21502 Geesthacht (Germany)

    2006-11-10

    The apatite texture of fossil Brachiosaurus brancai and Barosaurus africanus sauropod bones from the excavation site at Tendaguru, Tanzania, was characterized by neutron diffraction pole figures. The results obtained reveal predominantly <0 0 0 1>-fibre textures of the apatite; the fibre direction coincides with the longitudinal direction of the long bones of the skeletons. Neutron pole figures further indicate that other texture types may also be present. Texture strength is similar to dinosaur tendons and contemporary turkey tendon studied by others. Variations of texture strength across the bone wall cross-sections are not significantly large.

  13. Texture Mapped Paper Pop-Ups

    OpenAIRE

    Darmadji, Armandarius; --, Liliana

    2013-01-01

    Origamic architecture (OA) is a papercraft that can replicate architectural structures, geometric patterns, and other three-dimensional (3D) objects in pop-up form using only the folding and cutting of a single sheet of paper. A 2-dimensional image design that can be realized as an OA is called an OA plan. Applying texture to an OA plan can be used to display visual detail on the resulting OA. However, OA plan designs tend to have geometric shapes that differ from the original object...

  14. Dynamic texture as foreground and background

    Czech Academy of Sciences Publication Activity Database

    Chetverikov, D.; Fazekas, S.; Haindl, Michal

    2011-01-01

    Vol. 22, No. 5 (2011), pp. 741-750 ISSN 0932-8092 R&D Projects: GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords: Dynamic texture * Optical flow * SVD Subject RIV: BD - Theory of Information Impact factor: 1.009, year: 2011 http://library.utia.cas.cz/separaty/2011/RO/haindl-0345450.pdf

  15. Texture zeros in neutrino mass matrix

    Energy Technology Data Exchange (ETDEWEB)

    Dziewit, B., E-mail: bartosz.dziewit@us.edu.pl; Holeczek, J., E-mail: jacek.holeczek@us.edu.pl; Richter, M., E-mail: monikarichter18@gmail.com [University of Silesia, Institute of Physics (Poland); Zajac, S., E-mail: s.zajac@uksw.edu.pl [Cardinal Stefan Wyszyński University in Warsaw, Faculty of Mathematics and Natural Studies (Poland); Zralek, M., E-mail: marek.zralek@us.edu.pl [University of Silesia, Institute of Physics (Poland)

    2017-03-15

    The Standard Model does not explain the hierarchy problem. Before the discovery of the nonzero lepton mixing angle θ_13, high hopes for explaining the shape of the lepton mixing matrix were tied to non-Abelian symmetries. Nowadays, assuming one Higgs doublet, it is unlikely that this is still valid. Texture zeros, which are connected with Abelian symmetries, are intensively studied. The neutrino mass matrix is a natural way to study such symmetries.

  16. Wavelet and Blend maps for texture synthesis

    OpenAIRE

    Du Jin-Lian; Wang Song; Meng Xianhai

    2011-01-01

    Blending is now a popular technology for large real-time texture synthesis. Nevertheless, creating blend maps during rendering is time- and computation-consuming work. In this paper, we exploit a method to create a kind of blend tile which can be tiled together seamlessly. Noting that a blend map is in fact a kind of image, which is a Markov Random Field containing multiresolution signals, and that the wavelet is a powerful way to process multiresolution signals, we use wavelets to process the traditional ble...

  17. Topological patterns of mesh textures in serpentinites

    Science.gov (United States)

    Miyazawa, M.; Suzuki, A.; Shimizu, H.; Okamoto, A.; Hiraoka, Y.; Obayashi, I.; Tsuji, T.; Ito, T.

    2017-12-01

    Serpentinization is a hydration process that forms serpentine minerals and magnetite within the oceanic lithosphere. Microfractures crosscut these minerals during the reactions, and the structures look like mesh textures. It is known that the patterns of microfractures and the system evolution are affected by the hydration reaction and fluid transport in fractures and within matrices. This study aims at quantifying the topological patterns of the mesh textures and understanding possible conditions of fluid transport and reaction during serpentinization in the oceanic lithosphere. Two-dimensional simulation by the distinct element method (DEM) generates fracture patterns due to serpentinization. The microfracture patterns are evaluated by persistent homology, which measures features of connected components of a topological space and encodes multi-scale topological features in persistence diagrams. The persistence diagrams of the different mesh textures are then evaluated by principal component analysis to bring out their dominant patterns. This approach helps extract feature values of fracture patterns from high-dimensional and complex datasets.

  18. Calculation of skid resistance from texture measurements

    Directory of Open Access Journals (Sweden)

    Andreas Ueckermann

    2015-02-01

    Full Text Available There is a wide range of routine skid resistance measurement devices on the market. All of them measure the friction force between a rubber wheel and the wetted road surface. Common to all of them is that they are relatively complex and costly because generally a truck carrying a large water tank is needed to wet the surface with a defined water layer. Because of the limited amount of water they can carry, they are limited in range. Besides that, the measurement depends on factors like water film thickness, temperature, measurement speed, rubber aging, rubber wear and even road evenness and curviness. All of these factors affect the skid resistance and are difficult to control. We present a concept of contactless skid resistance measurement which is based on optical texture measurement and consists of two components: measurement of the pavement texture by means of an optical measuring system and calculation of the skid resistance based on the measured texture by means of a rubber friction model. The basic assumptions underlying the theoretical approach and the model itself, based on the theory of Persson, are presented. The concept is applied to a laboratory device called the Wehner/Schulze (W/S) machine to prove the theoretical approach. The results are very promising. A strong indication could be provided that skid resistance could be measured without contact in the future.

  19. Model for understanding consumer textural food choice.

    Science.gov (United States)

    Jeltema, Melissa; Beckley, Jacqueline; Vahalik, Jennifer

    2015-05-01

    The current paradigm for developing products that will match the marketing messaging is flawed because the drivers of product choice and satisfaction based on texture are misunderstood. Qualitative research across 10 years has led to the thesis explored in this research that individuals have a preferred way to manipulate food in their mouths (i.e., mouth behavior) and that this behavior is a major driver of food choice, satisfaction, and the desire to repurchase. Texture, which is currently thought to be a major driver of product choice, is a secondary factor, and is important only in that it supports the primary driver, mouth behavior. A model for mouth behavior is proposed and the qualitative research supporting the identification of different mouth behaviors is presented. The development of a trademarked typing tool for characterizing mouth behavior is described along with quantitative substantiation of the tool's ability to group individuals by mouth behavior. The use of these four groups to understand textural preferences and the implications for a variety of areas including product design and weight management are explored.

  20. Automatic Texture Optimization for 3D Urban Reconstruction

    Directory of Open Access Journals (Sweden)

    LI Ming

    2017-03-01

    Full Text Available In order to solve the problem of texture optimization in 3D city reconstruction using multi-lens oblique images, this paper presents a method for seamless texture model reconstruction. First, it corrects the radiometric information of the images using camera response functions and the image dark channel. Then, according to the correspondence between the terrain triangular mesh surface model and the images, it performs occlusion detection by a sparse triangulation method and establishes the list of visible texture triangles. Finally, combining the topological relationships of the triangles in the 3D triangular mesh surface model with the means and variances of the images, it constructs a graph-cuts-based texture optimization algorithm under the MRF (Markov random field) framework to solve the discrete labeling problem of texture selection and clustering, ensuring the consistency of adjacent triangles in texture mapping and achieving seamless texture reconstruction of the city. The experimental results verify the validity and superiority of the proposed method.

  1. Natural texture retrieval based on perceptual similarity measurement

    Science.gov (United States)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

    A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key of the proposed scheme is that prediction of perceptual similarity is performed by learning a non-linear mapping from image feature space to perceptual texture space by using Random Forest. We test the method on a natural texture dataset and apply it to a new wallpaper dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves the retrieval performance over traditional image features.

  2. Advecting Procedural Textures for 2D Flow Animation

    Science.gov (United States)

    Kao, David; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    This paper proposes the use of specially generated 3D procedural textures for visualizing steady-state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking patterns from the boundary of the flow field, (c) crowding of advected textures near an attracting spiral or sink, and (d) absence or lack of textures in some regions of the flow. This paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.

  3. Texture analysis of pulmonary parenchymateous changes related to pulmonary thromboembolism in dogs - a novel approach using quantitative methods.

    Science.gov (United States)

    Marschner, C B; Kokla, M; Amigo, J M; Rozanski, E A; Wiinberg, B; McEvoy, F J

    2017-07-11

    Diagnosis of pulmonary thromboembolism (PTE) in dogs relies on computed tomography pulmonary angiography (CTPA), but detailed interpretation of CTPA images is demanding for the radiologist and only large vessels may be evaluated. New approaches for better detection of smaller thrombi include dual energy computed tomography (DECT) as well as computer assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs with and without PTE confirmed on CTPA were segmented so that only tissue with CT numbers between -1024 and -250 Hounsfield units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were performed on texture parameters extracted from these images. Leave-one-dog-out cross-validation and receiver operating characteristic (ROC) analysis showed that the models generated from the texture analysis were able to predict healthy dogs with optimal levels of performance. Partial Least Square Discriminant Analysis (PLS-DA) obtained a sensitivity of 94% and a specificity of 96%, while Support Vector Machines (SVM) yielded a sensitivity of 99% and a specificity of 100%. The models, however, performed worse in classifying the type of disease in the diseased dog group: In diseased dogs with PTE sensitivities were 30% (PLS-DA) and 38% (SVM), and specificities were 80% (PLS-DA) and 89% (SVM). In diseased dogs without PTE the sensitivities of the models were 59% (PLS-DA) and 79% (SVM) and specificities were 79% (PLS-DA) and 82% (SVM). The results indicate that texture analysis of CTPA images using GLCM is an effective tool for distinguishing healthy from abnormal lung. Furthermore the texture of pulmonary parenchyma in dogs with PTE is altered, when compared to the texture of pulmonary parenchyma
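    A small sketch of the segmentation step described above: only voxels whose CT number falls between -1024 and -250 HU are retained before any texture statistics are computed. The CT volume here is synthetic; DICOM loading and the GLCM stage are omitted.

```python
# HU-range masking of lung tissue prior to texture analysis (sketch).
import numpy as np

HU_MIN, HU_MAX = -1024, -250

rng = np.random.default_rng(8)
ct_volume = rng.integers(-1024, 1500, size=(50, 256, 256))   # fake CT numbers in HU

lung_mask = (ct_volume >= HU_MIN) & (ct_volume <= HU_MAX)
lung_voxels = ct_volume[lung_mask]
print("fraction of voxels kept:", round(lung_mask.mean(), 3))
print("mean HU of retained tissue:", round(lung_voxels.mean(), 1))
```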

  4. Asteroid taxonomic classifications

    International Nuclear Information System (INIS)

    Tholen, D.J.

    1989-01-01

    This paper reports on three taxonomic classification schemes developed and applied to the body of available color and albedo data. Asteroid taxonomic classifications according to two of these schemes are reproduced

  5. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    … of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system … A classification system for hand eczema is proposed. Conclusions: It is suggested that this classification be used in clinical work and in clinical trials.

  6. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach.

    Science.gov (United States)

    Irshad, Humayun; Jalali, Sepehr; Roux, Ludovic; Racoceanu, Daniel; Hwee, Lim Joo; Naour, Gilles Le; Capron, Frédérique

    2013-01-01

    According to Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. The aim is to investigate the various texture features and Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. Blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and Scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: Decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. The proposed method has been tested on Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and classification rate.

  7. Automated mitosis detection using texture, SIFT features and HMAX biologically inspired approach

    Directory of Open Access Journals (Sweden)

    Humayun Irshad

    2013-01-01

    Full Text Available Context: According to the Nottingham grading system, mitosis count in breast cancer histopathology is one of three components required for cancer grading and prognosis. Manual counting of mitosis is tedious and subject to considerable inter- and intra-reader variations. Aims: The aim is to investigate the various texture features and the Hierarchical Model and X (HMAX) biologically inspired approach for mitosis detection using machine-learning techniques. Materials and Methods: We propose an approach that assists pathologists in automated mitosis detection and counting. The proposed method, which is based on the most favorable texture features combination, examines the separability between different channels of color space. The blue-ratio channel provides more discriminative information for mitosis detection in histopathological images. Co-occurrence features, run-length features, and scale-invariant feature transform (SIFT) features were extracted and used in the classification of mitosis. Finally, a classification is performed to put the candidate patch either in the mitosis class or in the non-mitosis class. Three different classifiers have been evaluated: decision tree, linear kernel Support Vector Machine (SVM), and non-linear kernel SVM. We also evaluate the performance of the proposed framework using the modified biologically inspired model of HMAX and compare the results with other feature extraction methods such as dense SIFT. Results: The proposed method has been tested on the Mitosis detection in breast cancer histological images (MITOS) dataset provided for an International Conference on Pattern Recognition (ICPR) 2012 contest. The proposed framework achieved 76% recall, 75% precision and 76% F-measure. Conclusions: Different frameworks for classification have been evaluated for mitosis detection. In future work, instead of regions, we intend to compute features on the results of mitosis contour segmentation and use them to improve detection and

  8. Classification with support hyperplanes

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); J.C. Bioch (Cor); P.J.F. Groenen (Patrick)

    2006-01-01

    A new classification method is proposed, called Support Hyperplanes (SHs). To solve the binary classification task, SHs consider the set of all hyperplanes that do not make classification mistakes, referred to as semi-consistent hyperplanes. A test object is classified using

  9. Standard classification: Physics

    International Nuclear Information System (INIS)

    1977-01-01

    This is a draft standard classification of physics. The conception is based on the physics part of the systematic catalogue of the Bayerische Staatsbibliothek and on the classification given in standard textbooks. The ICSU-AB classification now used worldwide by physics information services was not taken into account. (BJ) [de]

  10. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from time-frequency representations of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws’ masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond conveying pitch and formant tracks. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
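
    As an illustration of the Laws’-mask texture step described in this record, the sketch below computes classical Laws’ texture energy measures on a gray-scale spectrogram image; the mask set, window size, and normalization are standard textbook choices assumed here, not necessarily the authors' exact configuration.

        # Illustrative sketch: Laws' texture energy measures on a spectrogram image.
        import numpy as np
        from scipy.signal import convolve2d
        from scipy.ndimage import uniform_filter

        # Classical 1-D Laws' vectors: Level, Edge, Spot, Ripple.
        L5 = np.array([1, 4, 6, 4, 1], dtype=float)
        E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
        S5 = np.array([-1, 0, 2, 0, -1], dtype=float)
        R5 = np.array([1, -4, 6, -4, 1], dtype=float)

        def laws_energy_features(spectrogram_image, window=15):
            # Remove slowly varying illumination, then pool energies of the 16 mask responses.
            img = spectrogram_image - uniform_filter(spectrogram_image, size=window)
            feats = []
            for v1 in (L5, E5, S5, R5):
                for v2 in (L5, E5, S5, R5):
                    mask = np.outer(v1, v2)                            # 2-D Laws' mask
                    resp = convolve2d(img, mask, mode="same", boundary="symm")
                    feats.append(uniform_filter(np.abs(resp), size=window).mean())
            return np.array(feats)                                     # 16-D texture descriptor

    The resulting descriptor would then be fed to an SVM, as in the evaluation described above.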

  11. MRI textures as outcome predictor for Gamma Knife radiosurgery on vestibular schwannoma

    Science.gov (United States)

    Langenhuizen, P. P. J. H.; Legters, M. J. W.; Zinger, S.; Verheul, H. B.; Leenstra, S.; de With, P. H. N.

    2018-02-01

    Vestibular schwannomas (VS) are benign brain tumors that can be treated with high-precision focused radiation with the Gamma Knife in order to stop tumor growth. Outcome prediction of Gamma Knife radiosurgery (GKRS) treatment can help in determining whether GKRS will be effective on an individual patient basis. However, at present, prognostic factors of tumor control after GKRS for VS are largely unknown, and only clinical factors, such as size of the tumor at treatment and pre-treatment growth rate of the tumor, have been considered thus far. This research aims at outcome prediction of GKRS by means of quantitative texture feature analysis on conventional MRI scans. We compute first-order statistics and features based on gray-level co-occurrence (GLCM) and run-length matrices (RLM), and employ support vector machines and decision trees for classification. In a clinical dataset, consisting of 20 tumors showing treatment failure and 20 tumors exhibiting treatment success, we have discovered that the second-order statistical metrics distilled from GLCM and RLM are suitable for describing texture, but are slightly outperformed by simple first-order statistics, like mean, standard deviation and median. The obtained prediction accuracy is about 85%, but a final choice of the best feature can only be made after performing more extensive analyses on larger datasets. In any case, this work provides suitable texture measures for successful prediction of GKRS treatment outcome for VS.
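
    A minimal sketch of the first-order statistics that this study found competitive with the GLCM/RLM features, assuming a per-tumor region of interest and a cross-validated linear SVM; the exact feature list and classifier settings are assumptions for illustration only.

        # Hypothetical sketch: first-order intensity statistics inside a tumor ROI,
        # evaluated with a linear SVM, mirroring the comparison described above.
        import numpy as np
        from scipy.stats import skew, kurtosis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def first_order_features(mri_volume, roi_mask):
            # Pool intensity statistics over the voxels inside the tumor mask.
            voxels = mri_volume[roi_mask > 0].astype(float)
            return np.array([voxels.mean(), voxels.std(), np.median(voxels),
                             skew(voxels), kurtosis(voxels),
                             np.percentile(voxels, 10), np.percentile(voxels, 90)])

        def outcome_prediction_accuracy(feature_matrix, outcomes, folds=5):
            # outcomes: 1 = treatment failure, 0 = treatment success after GKRS.
            clf = SVC(kernel="linear", C=1.0)
            return cross_val_score(clf, feature_matrix, outcomes, cv=folds).mean()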

  12. Assessment of chronic kidney disease using skin texture as a key parameter: for South Indian population.

    Science.gov (United States)

    Udhayarasu, Madhanlal; Ramakrishnan, Kalpana; Periasamy, Soundararajan

    2017-12-01

    Periodic monitoring of renal function, specifically for subjects with a history of diabetes or hypertension, would prevent them from progressing to a chronic kidney disease (CKD) condition. The recent increase in CKD incidence, possibly due to food habits or lack of physical exercise, necessitates a rapid kidney-function monitoring system. At present, renal function is determined by estimating the glomerular filtration rate (GFR), which depends mainly on the serum creatinine value, demographic parameters, and an ethnicity factor. The work here attempts to develop an ethnicity-like parameter based on skin texture for every individual. When this value is used in the GFR computation, the results agree well with the GFR obtained through the standard modification of diet in renal disease and CKD epidemiology collaboration equations. Once the correlation between CKD and skin texture is established, a classification tool using an artificial neural network is built to categorise the CKD level based on demographic values and the parameter obtained through skin texture (without using creatinine). When tested, this network gives results almost on par with a network trained on demographic and creatinine values. The results of this Letter demonstrate the possibility of non-invasively determining kidney function and hence of making a device that would readily assess kidney function even at home.
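
    A toy sketch of the kind of neural-network classifier described in this record, mapping demographic values plus a skin-texture parameter to a CKD stage without serum creatinine; the feature layout, network size, and training settings are assumptions, not the authors' implementation.

        # Illustrative sketch only: an MLP that categorises CKD level from demographic
        # and skin-texture inputs. Column layout and hyperparameters are assumed.
        from sklearn.neural_network import MLPClassifier
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        def train_ckd_stage_classifier(X, stages):
            # X columns (assumed): age, sex, body-mass index, skin-texture parameter.
            # stages: CKD stage labels (e.g. 1-5).
            model = make_pipeline(StandardScaler(),
                                  MLPClassifier(hidden_layer_sizes=(16, 8),
                                                max_iter=2000, random_state=0))
            return model.fit(X, stages)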

  13. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    Science.gov (United States)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution remotely sensed imagery (10 meters) is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee, using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
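
    The transformed divergence used above to rank class separability has a standard remote-sensing form; a small sketch, assuming each land cover class is modeled as a multivariate Gaussian over its texture features, is given below.

        # Sketch of divergence and transformed divergence between two classes modeled as
        # multivariate Gaussians; 2000 * (1 - exp(-D/8)) is the conventional scaling.
        import numpy as np

        def divergence(mean_i, cov_i, mean_j, cov_j):
            inv_i, inv_j = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
            dm = (mean_i - mean_j).reshape(-1, 1)
            term1 = 0.5 * np.trace((cov_i - cov_j) @ (inv_j - inv_i))
            term2 = 0.5 * np.trace((inv_i + inv_j) @ (dm @ dm.T))
            return term1 + term2

        def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
            d = divergence(mean_i, cov_i, mean_j, cov_j)
            return 2000.0 * (1.0 - np.exp(-d / 8.0))   # saturates at 2000 for well-separated classes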

  14. Diabetic peripheral neuropathy assessment through texture based analysis of corneal nerve images

    Science.gov (United States)

    Silva, Susana F.; Gouveia, Sofia; Gomes, Leonor; Negrão, Luís; João Quadrado, Maria; Domingues, José Paulo; Morgado, António Miguel

    2015-05-01

    Diabetic peripheral neuropathy (DPN) is a common complication of diabetes. Early diagnosis of DPN often fails due to the lack of a simple, reliable, non-invasive method. Several published studies show that corneal confocal microscopy (CCM) can identify small nerve fibre damage and quantify the severity of DPN using nerve morphometric parameters. Here, we used image texture features, extracted from corneal sub-basal nerve plexus images obtained in vivo by CCM, to identify DPN patients using classification techniques. An SVM classifier using image texture features was used to discriminate DPN patients from subjects without DPN. The accuracies were 80.6% when excluding diabetic patients without neuropathy, and 73.5% when including diabetic patients without diabetic neuropathy jointly with healthy controls. The results suggest that texture analysis might be used as a complementary technique for DPN diagnosis, without requiring nerve segmentation in CCM images. The results also suggest that this technique has enough sensitivity to detect early disorders in the corneal nerves of diabetic patients.
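
    A brief sketch of the two evaluation settings described in this record (DPN patients versus healthy controls only, and DPN patients versus controls plus non-neuropathic diabetics), with the texture feature extraction abstracted away; the classifier settings and cross-validation choice are assumptions.

        # Hypothetical sketch: leave-one-out accuracy of an SVM on corneal-image
        # texture features, applied separately to each cohort definition.
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        def cohort_accuracy(texture_features, dpn_labels):
            # dpn_labels: 1 = DPN patient, 0 = control (with or without diabetes).
            clf = SVC(kernel="rbf", gamma="scale")
            return cross_val_score(clf, texture_features, dpn_labels, cv=LeaveOneOut()).mean()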

  15. Effects of alkali on protein polymerization and textural characteristics of textured wheat protein.

    Science.gov (United States)

    Li, Ting; Guo, Xiao-Na; Zhu, Ke-Xue; Zhou, Hui-Ming

    2018-01-15

    The impact of alkali addition on the degree of gluten polymerization and the textural characteristics of textured wheat protein was investigated. Results showed that the extrusion process increased the average molecular weight of gluten, as evidenced by SDS-PAGE and SDS-extractable protein. The addition of alkali not only promoted the degree of gluten polymerization, but also induced dehydroalanine-derived cross-linking. Alkali addition decreased the content of cystine and increased the contents of dehydroalanine and lanthionine. The marked decrease in free SH showed that dehydroalanine-derived cross-linking was quantitatively less crucial than disulfide cross-linking. Furthermore, the protein cross-linking induced by alkali improved the textural properties of the gluten extrudates. SEM analysis showed that extrusion under alkaline conditions conferred a more fibrous microstructure as a consequence of a compact gluten network. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification organizes, in an internationally consistent way, all the refrigerants used in the world through a codification of refrigerants according to their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)

  17. Filtering SVM frame-by-frame binary classification in a detection framework

    NARCIS (Netherlands)

    Betancourt Arango, A.; Morerio, P.; Marcenaro, L.; Rauterberg, G.W.M.; Regazzoni, C.S.

    2015-01-01

    Classifying frames, or parts of them, is a common way of carrying out detection tasks in computer vision. However, frame-by-frame classification suffers from sudden significant variations in image texture, colour and luminosity, resulting in noise in the extracted features and consequently in the

  18. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  19. Texturing and modeling a procedural approach

    CERN Document Server

    Ebert, David S

    1994-01-01

    Congratulations to Ken Perlin for his 1997 Technical Achievement Award from the Academy of Motion Picture Arts and Sciences Board of Governors, given in recognition of the development of "Turbulence", Perlin Noise, a technique discussed in this book which is used to produce natural-appearing textures on computer-generated surfaces for motion picture visual effects. Dr. Perlin joins Darwyn Peachey (co-developer of RenderMan(R), also discussed in the book) in being honored with this prestigious award. * Written at a usable level by the developers of the techniques * Serves as a source book for
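
    As a compact illustration of the turbulence idea credited to Perlin in this blurb (summing octaves of a noise field and taking absolute values), the sketch below uses smoothed value noise as a stand-in for true Perlin gradient noise; all parameters are illustrative.

        # Illustrative sketch: a turbulence-like texture field built by summing octaves
        # of smoothed random noise. This is a simplification, not Perlin's algorithm.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def turbulence(shape=(256, 256), octaves=5, seed=0):
            rng = np.random.default_rng(seed)
            out = np.zeros(shape)
            amplitude, sigma = 1.0, 32.0
            for _ in range(octaves):
                band = gaussian_filter(rng.random(shape), sigma=sigma)  # one noise octave
                out += amplitude * np.abs(band - band.mean())           # abs() gives the billowy look
                amplitude *= 0.5                                        # halve amplitude each octave
                sigma = max(1.0, sigma / 2.0)                           # double the spatial frequency
            return out / out.max()                                      # normalized texture field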

  20. Maya Studio Projects Texturing and Lighting

    CERN Document Server

    Lanier, Lee

    2011-01-01

    Learn to create realistic digital assets for film and games with this project-based guide Focused entirely on practical projects, this hands-on guide shows you how to use Maya's texturing and lighting tools in real-world situations. Whether you need to sharpen your skills or you're looking to break into the field for the first time, you'll learn top industry techniques for this important skill as you follow the instructions for several specific projects. You can even create your own version, using final Maya scene files to validate results. The companion DVD includes supplemental videos, proje