WorldWideScience

Sample records for superior classification accuracy

  1. Classification Accuracy Is Not Enough

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically diff...... classification accuracy obscures the aim of MGR: to select labels indistinguishable from those a person would choose....

  2. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
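
    The core computation behind such an expected classification table can be written down compactly. The sketch below is a minimal illustration, not the paper's exact procedure: it assumes each examinee's true ability is normally distributed around the IRT ability estimate with its reported standard error (the usual Rudner-style approximation), and the cut scores, estimates, and standard errors are hypothetical placeholders.

      import numpy as np
      from scipy.stats import norm

      def expected_classification_table(theta_hat, se, cuts):
          """Expected true-vs-observed classification table on the theta scale."""
          bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
          n_cat = len(bounds) - 1
          observed = np.searchsorted(cuts, theta_hat, side="right")   # observed category
          table = np.zeros((n_cat, n_cat))
          for t, s, obs in zip(theta_hat, se, observed):
              # P(true ability falls in each category | estimate t, standard error s)
              p_true = norm.cdf(bounds[1:], loc=t, scale=s) - norm.cdf(bounds[:-1], loc=t, scale=s)
              table[:, obs] += p_true            # rows: true category, columns: observed
          expected_accuracy = np.trace(table) / len(theta_hat)
          return table, expected_accuracy

      # Hypothetical example: 1000 examinees, two cut scores -> three categories
      rng = np.random.default_rng(0)
      theta_hat = rng.normal(0.0, 1.0, 1000)
      se = np.full(1000, 0.3)
      table, acc = expected_classification_table(theta_hat, se, cuts=np.array([-0.5, 0.8]))
      print(np.round(table, 1), round(acc, 3))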

  3. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    D. Blommesteijn (Dennis)

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and extracting features to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the

  4. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    Blommesteijn, D.

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and extracting features to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the study performed
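
    The steps named in the truncated abstract above (processing raw data, extracting features, training and validating a classifier) correspond to a standard supervised text-classification pipeline. A minimal scikit-learn sketch of that workflow follows; it is not the authors' implementation, and the 20-newsgroups corpus is only a stand-in for their data.

      from sklearn.datasets import fetch_20newsgroups
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      # Placeholder corpus standing in for the study's own data set.
      data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])

      # Feature extraction and the classifier sit in one pipeline so that every
      # implementation decision (tokenisation, weighting, classifier choice) is
      # validated together rather than in isolation.
      clf = make_pipeline(TfidfVectorizer(sublinear_tf=True, min_df=2),
                          LogisticRegression(max_iter=1000))

      scores = cross_val_score(clf, data.data, data.target, cv=5, scoring="accuracy")
      print(scores.mean(), scores.std())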

  5. Enhancing Accuracy of Plant Leaf Classification Techniques

    Directory of Open Access Journals (Sweden)

    C. S. Sumathi

    2014-03-01

    Full Text Available Plants have become an important source of energy and are a fundamental piece in the puzzle to solve the problem of global warming. Living beings also depend on plants for their food; hence, it is of great importance to know about the plants growing around us and to preserve them. Automatic plant leaf classification is widely researched. This paper investigates the efficiency of MLP learning algorithms for plant leaf classification. Incremental back propagation, Levenberg–Marquardt, and batch back propagation learning algorithms are investigated. Plant leaf images are examined using three different Multi-Layer Perceptron (MLP) modelling techniques. Back propagation done in a batch manner increases the accuracy of plant leaf classification. Results reveal that batch training is faster and more accurate than MLP with incremental training and Levenberg–Marquardt based learning for plant leaf classification. Various levels of semi-batch training, used on 9 species with 15 samples each (135 instances in total), show a roughly linear increase in classification accuracy.

  6. Expected Classification Accuracy using the Latent Distribution

    Directory of Open Access Journals (Sweden)

    Fanmin Guo

    2006-10-01

    Full Text Available Rudner (2001, 2005) proposed a method for evaluating classification accuracy in tests based on item response theory (IRT). In this paper, a latent distribution method is developed. For comparison, both methods are applied to a set of real data from a state test. While the latent distribution method relaxes several of the assumptions needed to apply Rudner's method, the two approaches yield closely comparable results. A simplified approach for applying Rudner's method and a short SPSS routine are presented.

  7. The effect of superior auditory skills on vocal accuracy

    Science.gov (United States)

    Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat

    2003-02-01

    The relationship between auditory perception and vocal production has typically been investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal-hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones and asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared with 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance in the production data; and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study we provide empirical evidence of the importance of auditory feedback for vocal production in listeners with superior auditory skills.
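
    The pitch-matching error above is reported in semitones from an autocorrelation-based fundamental-frequency measurement. The sketch below is a generic illustration of that idea, not the custom algorithm designed for the study; the sampling rate, search range, and test signal are hypothetical.

      import numpy as np

      def estimate_f0(signal, fs, fmin=80.0, fmax=400.0):
          """Crude autocorrelation-based fundamental-frequency estimate."""
          signal = signal - signal.mean()
          ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
          lag_min, lag_max = int(fs / fmax), int(fs / fmin)
          lag = lag_min + np.argmax(ac[lag_min:lag_max])
          return fs / lag

      def semitone_error(f_produced, f_target):
          """Signed pitch error in semitones (12 semitones per octave)."""
          return 12.0 * np.log2(f_produced / f_target)

      # Synthetic example: a 220 Hz target reproduced slightly sharp at 227 Hz
      fs = 16000
      t = np.arange(0, 0.5, 1.0 / fs)
      produced = np.sin(2 * np.pi * 227.0 * t)
      f0 = estimate_f0(produced, fs)
      print(round(f0, 1), round(semitone_error(f0, 220.0), 2))  # error well under one semitone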

  8. Superiority of Classification Tree versus Cluster, Fuzzy and Discriminant Models in a Heartbeat Classification System.

    Directory of Open Access Journals (Sweden)

    Vessela Krasteva

    Full Text Available This study presents a 2-stage heartbeat classifier of supraventricular (SVB) and ventricular (VB) beats. Stage 1 makes a computationally efficient classification of SVB beats, using a simple correlation threshold criterion to find a close match with a predominant normal (reference) beat template. The non-matched beats are next subjected to measurement of 20 basic features, tracking the beat and reference template morphology and RR-variability, for subsequent refined classification into the SVB or VB class by Stage 2. Four linear classifiers are compared: cluster, fuzzy, linear discriminant analysis (LDA) and classification tree (CT), all subjected to iterative training for selection of the optimal feature space among an extended 210-sized set embodying interactive second-order effects between the 20 independent features. The optimization process minimizes at equal weight the false positives in the SVB class and the false negatives in the VB class. Training with the European ST-T, AHA, and MIT-BIH Supraventricular Arrhythmia databases found the best performance settings of all classification models: Cluster (30 features), Fuzzy (72 features), LDA (142 coefficients), CT (221 decision nodes), with the top three best-scored features being: normalized current RR-interval, higher/lower frequency content ratio, and beat-to-template correlation. Unbiased test-validation with the MIT-BIH Arrhythmia database rates the classifiers in descending order of their specificity for the SVB class: CT (99.9%), LDA (99.6%), Cluster (99.5%), Fuzzy (99.4%); sensitivity for ventricular ectopic beats as part of the VB class (commonly reported in published beat-classification studies): CT (96.7%), Fuzzy (94.4%), LDA (94.2%), Cluster (92.4%); positive predictivity: CT (99.2%), Cluster (93.6%), LDA (93.0%), Fuzzy (92.4%). CT has superior accuracy by 0.3-6.8 percentage points, with the added advantage that model complexity is easy to configure by pruning a tree consisting of easily interpretable 'if-then' rules.
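
    Stage 1 of the scheme described above reduces to a single correlation test against the predominant beat template. The following sketch illustrates that idea only; the threshold, the synthetic beats, and the template are hypothetical and do not reproduce the paper's trained models.

      import numpy as np

      def stage1_match(beat, template, threshold=0.98):
          """True if the beat correlates closely with the reference template."""
          beat = (beat - beat.mean()) / beat.std()
          template = (template - template.mean()) / template.std()
          r = float(np.corrcoef(beat, template)[0, 1])   # Pearson correlation
          return r >= threshold

      # Hypothetical beats: one template-like, one widened (ventricular-like)
      t = np.linspace(-1, 1, 200)
      template = np.exp(-(t / 0.08) ** 2)            # narrow QRS-like bump
      normal_beat = np.exp(-((t - 0.01) / 0.08) ** 2)
      wide_beat = np.exp(-(t / 0.25) ** 2)           # broader complex

      for beat in (normal_beat, wide_beat):
          if stage1_match(beat, template):
              print("close template match -> SVB class (Stage 1)")
          else:
              print("no match -> measure the 20 features and refine in Stage 2")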

  9. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    Science.gov (United States)

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  10. Classification accuracy analyses using Shannon’s Entropy

    Directory of Open Access Journals (Sweden)

    Shashi Poonam Indwar

    2014-11-01

    Full Text Available There are many methods for determining classification accuracy. In this paper, the significance of the entropy of training signatures for classification is shown. The entropy of the training signatures of a raw digital image represents the heterogeneity of the brightness values of the pixels in different bands. This implies that an image comprising a homogeneous lu/lc category will be associated with nearly the same reflectance values, resulting in a very low entropy value. On the other hand, an image characterized by the occurrence of diverse lu/lc categories will consist of largely differing reflectance values, due to which the entropy of such an image will be relatively high. This concept leads to analyses of classification accuracy. Although entropy has been used many times in RS and GIS, its use in the determination of classification accuracy is a new approach.
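
    A minimal sketch of the entropy measure the abstract refers to: the Shannon entropy of the brightness-value histogram of a band (or training signature), which stays low for a homogeneous lu/lc area and approaches the 8-bit maximum for a heterogeneous one. The arrays below are synthetic placeholders for real image bands.

      import numpy as np

      def shannon_entropy(band, bins=256):
          """Shannon entropy (bits) of the brightness-value histogram of one band."""
          hist, _ = np.histogram(band, bins=bins, range=(0, 256))
          p = hist / hist.sum()
          p = p[p > 0]                       # drop empty bins; 0*log(0) is taken as 0
          return float(-np.sum(p * np.log2(p)))

      rng = np.random.default_rng(1)
      homogeneous = rng.normal(120, 2, size=(100, 100)).clip(0, 255)   # one lu/lc class
      heterogeneous = rng.integers(0, 256, size=(100, 100))            # many classes mixed

      print(shannon_entropy(homogeneous))     # low entropy
      print(shannon_entropy(heterogeneous))   # close to 8 bits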

  11. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    DEFF Research Database (Denmark)

    Stilling, M; Kold, S; de Raedt, S

    2012-01-01

    The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear.

  12. Assessing Uncertainty in LULC Classification Accuracy by Using Bootstrap Resampling

    Directory of Open Access Journals (Sweden)

    Lin-Hsuan Hsiao

    2016-08-01

    Full Text Available Supervised land-use/land-cover (LULC classifications are typically conducted using class assignment rules derived from a set of multiclass training samples. Consequently, classification accuracy varies with the training data set and is thus associated with uncertainty. In this study, we propose a bootstrap resampling and reclassification approach that can be applied for assessing not only the uncertainty in classification results of the bootstrap-training data sets, but also the classification uncertainty of individual pixels in the study area. Two measures of pixel-specific classification uncertainty, namely the maximum class probability and Shannon entropy, were derived from the class probability vector of individual pixels and used for the identification of unclassified pixels. Unclassified pixels that are identified using the traditional chi-square threshold technique represent outliers of individual LULC classes, but they are not necessarily associated with higher classification uncertainty. By contrast, unclassified pixels identified using the equal-likelihood technique are associated with higher classification uncertainty and they mostly occur on or near the borders of different land-cover.
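
    A minimal sketch of the pixel-specific uncertainty measures described above, assuming the bootstrap-and-reclassify loop has already produced a stack of class labels per pixel; the maximum class probability and Shannon entropy are then taken from each pixel's class probability vector. The label stack below is random placeholder data, not a real LULC classification.

      import numpy as np

      n_boot, n_pixels, n_classes = 200, 10_000, 5
      rng = np.random.default_rng(42)
      # Placeholder for the real bootstrap output: the label assigned to each pixel
      # in each of the n_boot reclassifications of resampled training sets.
      labels = rng.integers(0, n_classes, size=(n_boot, n_pixels))

      # Class probability vector per pixel = relative frequency over bootstrap runs
      prob = np.stack([(labels == k).mean(axis=0) for k in range(n_classes)], axis=1)

      max_prob = prob.max(axis=1)                       # maximum class probability
      with np.errstate(divide="ignore", invalid="ignore"):
          entropy = -np.sum(np.where(prob > 0, prob * np.log(prob), 0.0), axis=1)

      # Pixels with a low maximum probability or high entropy are candidates for the
      # high-uncertainty ("unclassified") category discussed in the abstract.
      uncertain = (max_prob < 0.5) | (entropy > 0.8 * np.log(n_classes))
      print(uncertain.mean())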

  13. A Visual mining based framework for classification accuracy estimation

    Science.gov (United States)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools in integration can provide an efficient approach for obtaining information about improvements in classification accuracy and can help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images.

  14. Classification Accuracy of Neural Networks with PCA in Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Novakovic Jasmina

    2011-04-01

    Full Text Available This paper presents the classification accuracy of a neural network with principal component analysis (PCA) for feature selection in emotion recognition using facial expressions. Dimensionality reduction of a feature set is a common preprocessing step used for pattern recognition and classification applications. PCA is one of the popular methods used, and can be shown to be optimal using different optimality criteria. Experimental results, in which we achieved a recognition rate of approximately 85% when testing six emotions on a benchmark image data set, show that neural networks with PCA are effective in emotion recognition using facial expressions.
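
    A minimal sketch of the PCA-plus-neural-network arrangement the abstract describes, using scikit-learn; the bundled digits data set is only a stand-in, since the facial-expression images are not distributed with this record, and the layer sizes and number of components are arbitrary choices.

      from sklearn.datasets import load_digits
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_digits(return_X_y=True)          # stand-in for facial-expression features
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # PCA reduces feature dimensionality before the network is trained,
      # mirroring the preprocessing step evaluated in the paper.
      model = make_pipeline(StandardScaler(),
                            PCA(n_components=30),
                            MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                          random_state=0))
      model.fit(X_tr, y_tr)
      print(model.score(X_te, y_te))               # classification accuracy on held-out data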

  15. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    Science.gov (United States)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. Removing noise from an image by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps, namely image denoising, resolution enhancement, and classification, for improving accuracy in a noisy image. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. A discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. Quantitative performance measures such as peak signal-to-noise ratio and classification accuracy show the significance of the proposed techniques.

  16. Consistency of accuracy assessment indices for soft classification: Simulation analysis

    Science.gov (United States)

    Chen, Jin; Zhu, Xiaolin; Imura, Hidefumi; Chen, Xuehong

    Accuracy assessment plays a crucial role in the implementation of soft classification. Even though many indices of accuracy assessment for soft classification have been proposed, the consistencies among these indices are not clear, and the impact of sample size on these consistencies has not been investigated. This paper examines two kinds of indices: map-level indices, including root mean square error (rmse), kappa, and overall accuracy (oa) from the sub-pixel confusion matrix (SCM); and category-level indices, including crmse, user accuracy (ua) and producer accuracy (pa). A careful simulation was conducted to investigate the consistency of these indices and the effect of sample size. The major findings were as follows: (1) The map-level indices are highly consistent with each other, whereas the category-level indices are not. (2) The consistency among map-level and category-level indices becomes weaker when the sample size decreases. (3) The rmse is more affected by error distribution among classes than are kappa and oa. Based on these results, we recommend that rmse can be used for map-level accuracy due to its simplicity, although kappa and oa may be better alternatives when the sample size is limited because the two indices are affected less by the error distribution among classes. We also suggest that crmse should be provided when map users are not concerned about the error source, whereas ua and pa are more useful when the complete information about different errors is required. The results of this study will be of benefit to the development and application of soft classifiers.
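
    For reference, the map-level indices named above are simple functions of the (sub-pixel) confusion matrix. A minimal sketch of those computations on a small hypothetical matrix follows; it shows the standard hard-classification formulas for overall accuracy and kappa, with rmse computed from class-proportion errors, and is not the paper's simulation code.

      import numpy as np

      def overall_accuracy(cm):
          return np.trace(cm) / cm.sum()

      def kappa(cm):
          total = cm.sum()
          po = np.trace(cm) / total                                   # observed agreement
          pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
          return (po - pe) / (1 - pe)

      def rmse(estimated_fractions, reference_fractions):
          """Root mean square error of per-pixel class proportions (soft classification)."""
          return float(np.sqrt(np.mean((estimated_fractions - reference_fractions) ** 2)))

      # Hypothetical 3-class confusion matrix: rows = reference, columns = map
      cm = np.array([[50.0, 4.0, 1.0],
                     [6.0, 40.0, 4.0],
                     [2.0, 3.0, 45.0]])
      print(round(overall_accuracy(cm), 3), round(kappa(cm), 3))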

  17. Radiologic classification of superior canal dehiscence : Implications for surgical repair

    NARCIS (Netherlands)

    Lookabaugh, Sarah; Kelly, Hillary R.; Carter, Margaret S.; Niesten, Marlien E F; McKenna, Michael J.; Curtin, Hugh; Lee, Daniel J.

    2015-01-01

    Objective: Surgical access to repair a superior canal dehiscence (SCD) is influenced by the location of the bony defect and its relationship to surrounding tegmen topography as seen on computed tomography. There are currently no agreed-upon methods of characterizing these radiologic findings. We

  18. Classification of features selected through Optimum Index Factor (OIF) for improving classification accuracy

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kaushal

    2011-01-01

    The present investigation was performed to determine whether the features selected through the Optimum Index Factor (OIF) could provide improved classification accuracy for the various categories on the satellite images of the individual years, as well as on stacked images of two different years, as compared to all the features considered together. Further, in order to determine whether the classification accuracy of the different categories increases with a corresponding increase in the OIF values of the features extracted from both the individual years' and stacked images, we performed linear regression between the producer's accuracy (PA) of the various categories and the OIF values of the different combinations of the features. The investigations demonstrated that there is significant improvement in the PA of two impervious categories, viz. moderate built-up and low density built-up, determined from the classification of the bands and principal components associated with the highest OIF value, as compared to all the bands and principal components, for both the individual years' and stacked images respectively. Regression analyses exhibited positive trends between the regression coefficients and OIF values for the various categories determined for the individual years' and stacked images respectively, signifying a direct relationship between the increase in information content and the corresponding increase in the OIF values. The research proved that features extracted through OIF from both the individual years' and stacked images are capable of providing significantly improved PA as compared to all the features pooled together.
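
    The record does not restate the index itself; in the usual formulation (Chavez et al.), the OIF of a band triplet is the sum of the three band standard deviations divided by the sum of the absolute pairwise correlations, so that high-variance, weakly correlated combinations score highest. A minimal sketch on a hypothetical image stack:

      import numpy as np
      from itertools import combinations

      def oif(bands):
          """Optimum Index Factor for a sequence of 2-D band arrays (three-band form)."""
          flat = [b.ravel() for b in bands]
          std_sum = sum(np.std(b) for b in flat)
          corr_sum = sum(abs(np.corrcoef(flat[i], flat[j])[0, 1])
                         for i, j in combinations(range(len(flat)), 2))
          return std_sum / corr_sum

      # Hypothetical 6-band image stack with differing band variances
      rng = np.random.default_rng(7)
      stack = rng.normal(size=(6, 128, 128)) * rng.uniform(5, 30, size=(6, 1, 1))

      ranked = sorted(((oif([stack[i] for i in trio]), trio)
                       for trio in combinations(range(6), 3)), reverse=True)
      print(ranked[0])   # band triplet with the highest OIF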

  19. Superior thyroid artery origin in Caucasian Greeks: A new classification proposal and review of the literature.

    Science.gov (United States)

    Natsis, Konstantinos; Raikos, Athanasios; Foundos, Ioannis; Noussios, George; Lazaridis, Nikolaos; Njau, Samouel N

    2011-09-01

    Studies on the origin of the superior thyroid artery indicate that it can originate either from the external carotid artery, at the level of the common carotid bifurcation, or from the common carotid artery. However, the classical anatomical teaching is that the superior thyroid artery is a branch of the external carotid artery. Variability in the anatomy of the superior thyroid artery was studied in 100 carotids. Moreover, a review of recent and previous cadaveric, autopsy, and angiographic studies on adults and fetuses concerning the origin of the superior thyroid artery was carried out. The superior thyroid artery originated from the external carotid artery in 39% of cases, and at the level of the carotid bifurcation or from the common carotid artery in 61% of cases. The anterior branches of the external carotid artery were separate in 76% of cases, while common trunks between the arteries were found in 24% of the specimens. A new classification proposal on the origin of the superior thyroid artery is also suggested. In this study, the origin of the superior thyroid artery is considered to be at the level of the carotid bifurcation and not from the external carotid artery as stated in many classical anatomy textbooks. This has a great impact on the terminology when referring to the anterior branches of the external carotid artery, which could be termed anterior branches of the cervical carotid artery. Head and neck surgeons must be familiar with anatomical variations of the superior thyroid artery in order to achieve a better surgical outcome.

  20. Classification Accuracy of Sequentially Administered WAIS-IV Short Forms.

    Science.gov (United States)

    Ryan, Joseph J; Kreiner, David S; Gontkovsky, Samuel T; Glass Umfleet, Laura

    2015-01-01

    A Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) short form (SF) may be effective for ruling out subnormal intelligence. To create a useful SF, subtest administration should follow the order prescribed in the manual and, depending upon individual performance, be terminated after completion of 2, 3, 4, or 5 subtests. One hundred and twenty-two patients completed the WAIS-IV. In two analyses, Full-Scale IQs (FSIQs) ≤69 and ≤79 were classified as impairment. Classification accuracy statistics indicated that all SFs using both cutoff scores exceeded the base rate (i.e., 14% and 34%) of subnormal intelligence, with hit rates ranging from 84% to 95%. The FSIQ cutoff of ≤69 had poor sensitivity for detecting impaired intellectual functioning with the 2-, 3-, 4-, and 5-subtest SFs; specificity, positive predictive value (PPV), and negative predictive value (NPV) were excellent for each SF. With the FSIQ cutoff of ≤79, sensitivity was strong to excellent for the 3-, 4-, and 5-subtest SFs as were specificity, PPV, and NPV.
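
    The statistics quoted above (hit rate, sensitivity, specificity, PPV, NPV) all follow from a 2x2 table of short-form decisions against the FSIQ criterion. A minimal sketch with hypothetical counts, shown only to make the definitions concrete:

      def classification_stats(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV, NPV and hit rate from a 2x2 table."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
              "hit_rate": (tp + tn) / (tp + fp + fn + tn),
          }

      # Hypothetical counts: short form flags impairment (positive) vs. FSIQ criterion
      print(classification_stats(tp=35, fp=4, fn=7, tn=76))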

  1. Evaluation criteria for software classification inventories, accuracies, and maps

    Science.gov (United States)

    Jayroe, R. R., Jr.

    1976-01-01

    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. The modified table contains information on the spatial complexity of the test site, on the relative location of classification errors, and on the agreement of the classification maps with the ground truth maps, and it reduces back to the original information normally found in a contingency table.

  2. Larger core size has superior technical and analytical accuracy in bladder tissue microarray.

    Science.gov (United States)

    Eskaros, Adel Rh; Egloff, Shanna A Arnold; Boyd, Kelli L; Richardson, Joyce E; Hyndman, M Eric; Zijlstra, Andries

    2017-03-01

    The construction of tissue microarrays (TMAs) with cores from a large number of paraffin-embedded tissues (donors) into a single paraffin block (recipient) is an effective method of analyzing samples from many patient specimens simultaneously. For the TMA to be successful, the cores within it must capture the correct histologic areas from the donor blocks (technical accuracy) and maintain concordance with the tissue of origin (analytical accuracy). This can be particularly challenging for tissues with small histological features such as small islands of carcinoma in situ (CIS), thin layers of normal urothelial lining of the bladder, or cancers that exhibit intratumor heterogeneity. In an effort to create a comprehensive TMA of a bladder cancer patient cohort that accurately represents the tumor heterogeneity and captures the small features of normal and CIS, we determined how core size (0.6 vs 1.0 mm) impacted the technical and analytical accuracy of the TMA. The larger 1.0 mm core exhibited better technical accuracy for all tissue types at 80.9% (normal), 94.2% (tumor), and 71.4% (CIS) compared with 58.6%, 85.9%, and 63.8% for 0.6 mm cores. Although the 1.0 mm core provided better tissue capture, increasing the number of replicates from two to three, as allowed with the 0.6 mm core, compensated for this reduced technical accuracy. However, quantitative image analysis of proliferation using both Ki67+ immunofluorescence counts and manual mitotic counts demonstrated that the 1.0 mm core size also exhibited significantly greater analytical accuracy (P = 0.004 and 0.035, respectively; r² = 0.979 and 0.669, respectively). Ultimately, our findings demonstrate that capturing two or more 1.0 mm cores for TMA construction provides superior technical and analytical accuracy over the smaller 0.6 mm cores, especially for tissues harboring small histological features or substantial heterogeneity.

  3. Optimal region growing segmentation and its effect on classification accuracy

    NARCIS (Netherlands)

    Gao, Y.; Mas, J.F.; Kerle, N.; Navarrete Pacheco, J.A.

    2011-01-01

    Image segmentation is a preliminary and critical step in object-based image classification. Its proper evaluation ensures that the best segmentation is used in image classification. In this article, image segmentations with nine different parameter settings were carried out with a multi-spectral Lan

  4. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.

  5. Does Maximizing Information at the Cut Score Always Maximize Classification Accuracy and Consistency?

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2016-01-01

    A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…

  6. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by cons

  7. Toward accountable land use mapping: Using geocomputation to improve classification accuracy and reveal uncertainty

    NARCIS (Netherlands)

    Beekhuizen, J.; Clarke, K.C.

    2010-01-01

    The classification of satellite imagery into land use/cover maps is a major challenge in the field of remote sensing. This research aimed at improving the classification accuracy while also revealing uncertain areas by employing a geocomputational approach. We computed numerous land use maps by

  8. Assessing the Accuracy of Prediction Algorithms for Classification

    DEFF Research Database (Denmark)

    Baldi, P.; Brunak, Søren; Chauvin, Y.

    2000-01-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, quadratic error measures and other distances, and correlation coefficients, to information theoretic measures such as relative entropy and mutual...

  9. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept. of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept. of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Universidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack, and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, the largest possible number of radiographic patterns was used, and statistical inference techniques based on random selection of samples with and without replacement (bootstrap) were applied in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  10. USE SATELLITE IMAGES AND IMPROVE THE ACCURACY OF HYPERSPECTRAL IMAGE WITH THE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Javadi

    2015-12-01

    Full Text Available The best technique to extract information from remotely sensed images is classification. The problem with traditional classification methods is that each pixel is assigned to a single class, presuming that all pixels within the image are pure. Mixed-pixel classification, or spectral unmixing, is a process that extracts the proportions of the pure components of each mixed pixel. Hyperspectral images have higher spectral resolution than multispectral images. In this paper, pixel-based classification methods such as the spectral angle mapper and maximum likelihood classification, and a subpixel classification method (linear spectral unmixing), were implemented on AVIRIS hyperspectral images. Then, pixel-based and subpixel-based classification algorithms were compared, and the capabilities and advantages of the linear spectral unmixing method were investigated. The spectral unmixing method implemented here is an effective technique for classifying a hyperspectral image, giving a classification accuracy of about 89%. The classification results obtained on the original images are not good, because some of the hyperspectral bands are subject to absorption and contain only little signal, so it is necessary to prepare the data at the beginning of the process. The bands can be sorted according to their variance: in bands with high variance, features can be distinguished from each other better, which increases the accuracy of classification. Also, applying the MNF transformation to the hyperspectral images increases the individual class accuracies of the pixel-based classification methods and of the unmixing method by about 20 percent and 9 percent, respectively.

  11. Effects of atmospheric correction and pansharpening on LULC classification accuracy using WorldView-2 imagery

    Directory of Open Access Journals (Sweden)

    Chinsu Lin

    2015-05-01

    Full Text Available Changes of Land Use and Land Cover (LULC) affect the atmospheric, climatic, and biological spheres of the earth. An accurate LULC map offers detailed information for resource management and intergovernmental cooperation to debate global warming and biodiversity reduction. This paper examined the effects of pansharpening and atmospheric correction on LULC classification. Object-Based Support Vector Machine (OB-SVM) and Pixel-Based Maximum Likelihood Classifier (PB-MLC) were applied for LULC classification. Results showed that atmospheric correction is not necessary for LULC classification if it is conducted on the original multispectral image. Nevertheless, pansharpening plays a much more important role in classification accuracy than atmospheric correction; it can help to increase classification accuracy by 12% on average compared to classification without pansharpening. PB-MLC and OB-SVM achieved similar classification rates. This study indicated that the LULC classification accuracy using PB-MLC and OB-SVM is 82% and 89%, respectively. A combination of atmospheric correction, pansharpening, and OB-SVM could offer promising LULC maps from WorldView-2 multispectral and panchromatic images.

  12. Impact of spatial resolution on correlation between segmentation evaluation metrics and forest classification accuracy

    Science.gov (United States)

    Švab Lenarčič, Andreja; Ritlop, Klemen; Đurić, Nataša; Čotar, Klemen; Oštir, Krištof

    2015-10-01

    Slovenia is one of the most forested countries in Europe. Its forest management authorities need information about the forest extent and state, as their responsibility lies in forest observation and preservation. Together with appropriate geographic information system mapping methods, remotely sensed data represent an essential tool for effective and sustainable forest management. Despite the large data availability, suitable mapping methods still present a big challenge in terms of their speed, which is often affected by the huge amount of data. The speed of the classification method could be maximised if each of the steps in object-based classification were automated. However, automation is hard to achieve, since segmentation requires choosing optimum parameter values for optimal classification results. This paper focuses on the analysis of segmentation and classification performance and their correlation over a range of segmentation parameter values applied in the segmentation step. In order to find out which spatial resolution is still suitable for forest classification, forest classification accuracies obtained using four images with different spatial resolutions were compared. Results of this study indicate that all high or very high spatial resolutions are suitable for optimal forest segmentation and classification, as long as appropriate scale and merge parameter combinations are used in the object-based classification. If the computation interval includes all segmentation parameter combinations, all segmentation-classification correlations are spatial resolution independent and are generally high. If the computation interval includes over- or optimal-segmentation parameter combinations, most segmentation-classification correlations are spatial resolution dependent.

  13. The effect of atmospheric and topographic correction methods on land cover classification accuracy

    Science.gov (United States)

    Vanonckelen, Steven; Lhermitte, Stefaan; Van Rompaey, Anton

    2013-10-01

    Mapping of vegetation in mountain areas based on remote sensing is obstructed by atmospheric and topographic distortions. A variety of atmospheric and topographic correction methods has been proposed to minimize atmospheric and topographic effects and should in principle lead to a better land cover classification. Only a limited number of atmospheric and topographic combinations has been tested and the effect on class accuracy and on different illumination conditions is not yet researched extensively. The purpose of this study was to evaluate the effect of coupled correction methods on land cover classification accuracy. Therefore, all combinations of three atmospheric (no atmospheric correction, dark object subtraction and correction based on transmittance functions) and five topographic corrections (no topographic correction, band ratioing, cosine correction, pixel-based Minnaert and pixel-based C-correction) were applied on two acquisitions (2009 and 2010) of a Landsat image in the Romanian Carpathian mountains. The accuracies of the fifteen resulting land cover maps were evaluated statistically based on two validation sets: a random validation set and a validation subset containing pixels present in the difference area between the uncorrected classification and one of the fourteen corrected classifications. New insights into the differences in classification accuracy were obtained. First, results showed that all corrected images resulted in higher overall classification accuracies than the uncorrected images. The highest accuracy for the full validation set was achieved after combination of an atmospheric correction based on transmittance functions and a pixel-based Minnaert topographic correction. Secondly, class accuracies of especially the coniferous and mixed forest classes were enhanced after correction. There was only a minor improvement for the other land cover classes (broadleaved forest, bare soil, grass and water). This was explained by the position

  14. The Potential Impact of Not Being Able to Create Parallel Tests on Expected Classification Accuracy

    Science.gov (United States)

    Wyse, Adam E.

    2011-01-01

    In many practical testing situations, alternate test forms from the same testing program are not strictly parallel to each other and instead the test forms exhibit small psychometric differences. This article investigates the potential practical impact that these small psychometric differences can have on expected classification accuracy. Ten…

  15. Classification Consistency and Accuracy for Complex Assessments Using Item Response Theory

    Science.gov (United States)

    Lee, Won-Chan

    2010-01-01

    In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…

  16. Klatskin tumors and the accuracy of the Bismuth-Corlette classification.

    Science.gov (United States)

    Paul, Andreas; Kaiser, Gernot M; Molmenti, Ernesto P; Schroeder, Tobias; Vernadakis, Spiridon; Oezcelik, Arzu; Baba, Hideo A; Cicinnati, Vito R; Sotiropoulos, Georgios C

    2011-12-01

    The Bismuth-Corlette (BC) classification is the current preoperative standard to assess hilar cholangiocarcinomas (HC). The aim of this study is to evaluate the accuracy, sensitivity, and prognostic value of the BC classification. Data of patients undergoing resection for HC were analyzed. Endoscopic retrograde cholangiography and standard computed tomography were undertaken in all cases. Additional 3D-CT-reconstructions, magnetic resonance imaging, and percutaneous transhepatic cholangiography were obtained in selected patients. A systematic review and meta-analysis of the literature was performed. Ninety patients underwent resection of the hilar bile duct confluence, with right or left hemihepatectomy in 68 instances. The overall accuracy of the BC classification was 48 per cent. Rates of BC under- and over-estimation were 29 per cent and 23 per cent, respectively. The addition of MRI, 3D-CT-reconstructions, or percutaneous transhepatic cholangiography improved the accuracy to 49 per cent (P = 1.0), 53 per cent (P = 0.074), and 64 per cent (P < 0.001), respectively. Lowest sensitivity rates were for BC Type IIIA/IIIB tumors. Meta-analysis of published BC data corresponding to 540 patients did not reach significance. The BC classification has low accuracy and no prognostic value in cases of HC undergoing resection.

  17. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Science.gov (United States)

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  18. Associations between psychologists' thinking styles and accuracy on a diagnostic classification task

    NARCIS (Netherlands)

    Aarts, A.A.; Witteman, C.L.M.; Souren, P.M.; Egger, J.I.M.

    2012-01-01

    The present study investigated whether individual differences between psychologists in thinking styles are associated with accuracy in diagnostic classification. We asked novice and experienced clinicians to classify two clinical cases of clients with two co-occurring psychological disorders. No sig

  19. Classification Accuracy of Nonword Repetition when Used with Preschool-Age Spanish-Speaking Children

    Science.gov (United States)

    Guiberson, Mark; Rodriguez, Barbara L.

    2013-01-01

    Purpose: The purpose of the present study was to (a) describe and compare the nonword repetition (NWR) performance of preschool-age Spanish-speaking children (3- to 5-year-olds) with and without language impairment (LI) across 2 scoring approaches and (b) to contrast the classification accuracy of a Spanish NWR task when item-level and percentage…

  20. Examining the Classification Accuracy of a Vocabulary Screening Measure with Preschool Children

    Science.gov (United States)

    Marcotte, Amanda M.; Clemens, Nathan H.; Parker, Christopher; Whitcomb, Sara A.

    2016-01-01

    This study investigated the classification accuracy of the "Dynamic Indicators of Vocabulary Skills" (DIVS) as a preschool vocabulary screening measure. With a sample of 240 preschoolers, fall and winter DIVS scores were used to predict year-end vocabulary risk using the 25th percentile on the "Peabody Picture Vocabulary Test--Third…

  1. Verbal fluency indicators of malingering in traumatic brain injury: classification accuracy in known groups.

    Science.gov (United States)

    Curtis, Kelly L; Thompson, Laura K; Greve, Kevin W; Bianchini, Kevin J

    2008-09-01

    A known-groups design was used to determine the classification accuracy of verbal fluency variables in detecting Malingered Neurocognitive Dysfunction (MND) in traumatic brain injury (TBI). Participants were 204 TBI and 488 general clinical patients. The Slick et al. (1999) criteria were used to classify the TBI patients into non-MND and MND groups. An educationally corrected FAS Total Correct word T-score proved to be the most accurate of the several verbal fluency indicators examined. Classification accuracy of this variable at specific cutoffs is presented in a cumulative frequency table. This variable accurately differentiated non-MND from MND mild TBI patients but its accuracy was unacceptable in moderate/severe TBI. The clinical application of these findings is discussed.

  2. Combining atlas based segmentation and intensity classification with nearest neighbor transform and accuracy weighted vote.

    Science.gov (United States)

    Sdika, Michaël

    2010-04-01

    In this paper, different methods to improve atlas based segmentation are presented. The first technique is a new mapping of the labels of an atlas consistent with a given intensity classification segmentation. This new mapping combines the two segmentations using the nearest neighbor transform and is especially effective for complex and folded regions like the cortex where the registration is difficult. Then, in a multi atlas context, an original weighting is introduced to combine the segmentation of several atlases using a voting procedure. This weighting is derived from statistical classification theory and is computed offline using the atlases as a training dataset. Concretely, the accuracy map of each atlas is computed and the vote is weighted by the accuracy of the atlases. Numerical experiments have been performed on publicly available in vivo datasets and show that, when used together, the two techniques provide an important improvement of the segmentation accuracy.

  3. Study on Increasing the Accuracy of Classification Based on Ant Colony algorithm

    Science.gov (United States)

    Yu, M.; Chen, D.-W.; Dai, C.-Y.; Li, Z.-L.

    2013-05-01

    The application of GIS advances the ability to analyse remote sensing images. Classification and information extraction from remote sensing images are the primary information source for GIS in LUCC applications, and how to increase classification accuracy is an important topic in remote sensing research. Adding features and researching new classification methods are the ways to improve the accuracy of classification. The ant colony algorithm, whose agents exemplify a uniform intelligent computation mode in the field of nature-inspired computation, is a new preliminary swarm-intelligence method when applied to remote sensing image classification. Studying the applicability of the ant colony algorithm with more features, and exploring its advantages and performance, is therefore of great significance. The study takes the outskirts of Fuzhou, an area with complicated land use in Fujian Province, as the study area. A multi-source database was built that integrates spectral information (TM1-5, TM7, NDVI, NDBI), topographic characteristics (DEM, Slope, Aspect), and textural information (Mean, Variance, Homogeneity, Contrast, Dissimilarity, Entropy, Second Moment, Correlation). Classification rules based on the different characteristics are discovered from the samples through the ant colony algorithm, and a classification test is performed based on these rules. At the same time, we compare the results with the traditional maximum likelihood method, the C4.5 algorithm and rough-set classification to check the accuracies. The study showed that the accuracy of classification based on the ant colony algorithm is higher than that of the other methods. In addition, the land use and cover changes in Fuzhou for the near term are studied and mapped using remote sensing technology based on the ant colony algorithm.

  4. Assessment of Classification Accuracies of SENTINEL-2 and LANDSAT-8 Data for Land Cover / Use Mapping

    Science.gov (United States)

    Hale Topaloğlu, Raziye; Sertel, Elif; Musaoğlu, Nebiye

    2016-06-01

    This study aims to compare the classification accuracies of land cover/use maps created from Sentinel-2 and Landsat-8 data. The Istanbul metropolitan city of Turkey, with a population of around 14 million and diverse landscape characteristics, was selected as the study area. Water, forest, agricultural areas, grasslands, transport network, urban, airport-industrial units, and barren land-mine land cover/use classes adapted from the CORINE nomenclature were used as the main land cover/use classes to identify. To fulfil the aims of this research, recently acquired Sentinel-2 (dated 08/02/2016) and Landsat-8 (dated 22/02/2016) images of Istanbul were obtained, and image pre-processing steps such as atmospheric and geometric correction were employed. Both Sentinel-2 and Landsat-8 images were resampled to a 30 m pixel size after geometric correction, and similar spectral bands for both satellites were selected to create a similar base for these multi-sensor data. Maximum Likelihood (MLC) and Support Vector Machine (SVM) supervised classification methods were applied to both data sets to accurately identify eight different land cover/use classes. An error matrix was created using the same reference points for the Sentinel-2 and Landsat-8 classifications. After the accuracy assessment, results were compared to find out the best approach to create a current land cover/use map of the region. The results of the MLC and SVM classification methods were compared for both images.

  5. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and used in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; p < 0.05 for both). The accuracy of the classifications that used a stratified sample in validation was smaller, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
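
    A minimal sketch of the effect discussed above: the same trained classifier is scored once on a validation set drawn at random (so class proportions follow abundance) and once on a class-balanced set, using synthetic imbalanced data; the two accuracy estimates generally differ. This is an illustration only, not the paper's experiment.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # Imbalanced two-class problem: one abundant class, one rare class
      X, y = make_classification(n_samples=20_000, n_features=10, weights=[0.9, 0.1],
                                 random_state=0)
      X_tr, X_pool, y_tr, y_pool = train_test_split(X, y, test_size=0.5, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

      rng = np.random.default_rng(0)

      # Validation set 1: simple random sample -> class proportions reflect abundance
      idx_rand = rng.choice(len(y_pool), size=1000, replace=False)

      # Validation set 2: stratified-equal sample -> balanced classes
      idx_bal = np.concatenate([rng.choice(np.flatnonzero(y_pool == k), size=500,
                                           replace=False) for k in (0, 1)])

      for name, idx in [("random", idx_rand), ("balanced", idx_bal)]:
          print(name, round(clf.score(X_pool[idx], y_pool[idx]), 3))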

  6. EVALUATION OF DECISION TREE CLASSIFICATION ACCURACY TO MAP LAND COVER IN CAPIXABA, ACRE

    Directory of Open Access Journals (Sweden)

    Symone Maria de Melo Figueiredo

    2006-03-01

    Full Text Available This study evaluated the accuracy of mapping land cover in Capixaba, state of Acre, Brazil, using decision trees. Eleven attributes were used to build the decision trees: TM Landsat data from bands 1, 2, 3, 4, 5, and 7; fraction images derived from linear spectral unmixing; and the normalized difference vegetation index (NDVI). The Kappa values were greater than 0.83, producing excellent classification results and demonstrating that the technique is promising for mapping land cover in the study area.

  7. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
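
    A minimal sketch of the permutation test recommended above: the cross-validated accuracy is recomputed many times with shuffled labels, and the p-value is the fraction of permuted runs reaching the observed score. scikit-learn's permutation_test_score wraps this procedure; the data here are random, so the observed accuracy should sit near chance and fail to reach significance.

      import numpy as np
      from sklearn.model_selection import StratifiedKFold, permutation_test_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 20))        # random features, as in the simulation above
      y = np.repeat([0, 1], 30)            # labels carry no real signal

      cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
      score, perm_scores, p_value = permutation_test_score(
          SVC(kernel="linear"), X, y, cv=cv, n_permutations=500, random_state=0)

      print(round(score, 3), round(p_value, 3))   # accuracy near 0.5, p-value not significant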

  8. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    Science.gov (United States)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  9. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.

  10. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    Full Text Available Abstract Background Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors. Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average

  11. A retrospective study to validate an intraoperative robotic classification system for assessing the accuracy of Kirschner wire (K-wire) placements with postoperative computed tomography classification system for assessing the accuracy of pedicle screw placements.

    Science.gov (United States)

    Tsai, Tai-Hsin; Wu, Dong-Syuan; Su, Yu-Feng; Wu, Chieh-Hsin; Lin, Chih-Lung

    2016-09-01

    The purpose of this retrospective study was to validate an intraoperative robotic grading classification system for assessing the accuracy of Kirschner-wire (K-wire) placements against the postoperative computed tomography (CT)-based classification system for assessing the accuracy of pedicle screw placements. We conducted a retrospective review of prospectively collected data from 35 consecutive patients who underwent instrumentation with 176 robot-assisted pedicle screws at Kaohsiung Medical University Hospital from September 2014 to November 2015. During the operation, we used a robotic grading classification system to verify the intraoperative accuracy of K-wire placements. Three months after surgery, we used the common CT-based classification system to assess the postoperative accuracy of pedicle screw placements. The distributions of accuracy between the intraoperative robot-assisted and various postoperative CT-based classification systems were compared using kappa statistics of agreement. The intraoperative accuracies of K-wire placements before and after repositioning were classified as excellent (131/176, 74.4% and 133/176, 75.6%, respectively), satisfactory (36/176, 20.5% and 41/176, 23.3%, respectively), and malpositioned (9/176, 5.1% and 2/176, 1.1%, respectively). Postoperative accuracy was evaluated with CT-based classification systems; no screw placements were evaluated as unacceptable under any of these systems. Kappa statistics revealed no significant differences between the proposed intraoperative system and the various postoperative CT-based grading systems. The robotic grading classification system is a feasible method for evaluating the accuracy of K-wire placements. Using the intraoperative robot grading system to classify the accuracy of K-wire placements enables predicting the postoperative accuracy of pedicle screw placements.

  12. Improving accuracy for cancer classification with a new algorithm for genes selection

    Directory of Open Access Journals (Sweden)

    Zhang Hongyan

    2012-11-01

    Full Text Available Abstract Background Even though the classification of cancer tissue samples based on gene expression data has advanced considerably in recent years, it faces great challenges to improve accuracy. One of the challenges is to establish an effective method that can select a parsimonious set of relevant genes. So far, most methods for gene selection in the literature focus on screening individual or pairs of genes without considering the possible interactions among genes. Here we introduce a new computational method named the Binary Matrix Shuffling Filter (BMSF). It not only overcomes the difficulty associated with the search schemes of traditional wrapper methods and the overfitting problem in large dimensional search spaces but also takes potential gene interactions into account during gene selection. This method, coupled with Support Vector Machine (SVM) for implementation, often selects a very small number of genes for easy model interpretability. Results We applied our method to 9 two-class gene expression datasets involving human cancers. During the gene selection process, the set of genes to be kept in the model was recursively refined and repeatedly updated according to the effect of a given gene on the contributions of other genes in reference to their usefulness in cancer classification. The small number of informative genes selected from each dataset leads to significantly improved leave-one-out (LOOCV) classification accuracy across all 9 datasets for multiple classifiers. Our method also exhibits broad generalization in the genes selected, since multiple commonly used classifiers achieved either equivalent or much higher LOOCV accuracy than those reported in literature. Conclusions Evaluation of a gene’s contribution to binary cancer classification is better considered after adjusting for the joint effect of a large number of other genes. A computationally efficient search scheme was provided to perform effective search in the extensive

  13. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    Science.gov (United States)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
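
    A rough sketch of the two-stage framework, assuming a toy hyperspectral cube and arbitrary kernel widths rather than the AVIRIS/ROSIS data and settings of the paper: the spectra are first smoothed with a bilateral-style filter that weights neighbours by spatial and spectral closeness, and an SVM is then trained on roughly 10% of the smoothed pixels.

      # Rough sketch of the two-stage framework described above: an edge-preserving
      # (bilateral-style) smoothing of a hyperspectral cube followed by a multiclass
      # SVM on the smoothed spectra. Cube size, kernel widths and labels are toy
      # values, not the AVIRIS/ROSIS settings of the paper.
      import numpy as np
      from sklearn.svm import SVC

      def bilateral_smooth(cube, radius=2, sigma_s=1.5, sigma_r=0.5):
          """Smooth each pixel spectrum using spatially close pixels weighted by
          both spatial distance and spectral (range) distance."""
          rows, cols, bands = cube.shape
          out = np.zeros_like(cube)
          for i in range(rows):
              for j in range(cols):
                  i0, i1 = max(0, i - radius), min(rows, i + radius + 1)
                  j0, j1 = max(0, j - radius), min(cols, j + radius + 1)
                  patch = cube[i0:i1, j0:j1, :]
                  ii, jj = np.mgrid[i0:i1, j0:j1]
                  w_spatial = np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma_s ** 2))
                  spec_dist = np.linalg.norm(patch - cube[i, j, :], axis=2)
                  w_range = np.exp(-(spec_dist ** 2) / (2 * sigma_r ** 2))
                  w = w_spatial * w_range
                  out[i, j, :] = (w[..., None] * patch).sum(axis=(0, 1)) / w.sum()
          return out

      rng = np.random.default_rng(1)
      cube = rng.normal(size=(20, 20, 30))                # toy hyperspectral cube
      labels = rng.integers(0, 3, size=(20, 20))          # toy class map

      smoothed = bilateral_smooth(cube)
      X = smoothed.reshape(-1, cube.shape[2])
      y = labels.ravel()

      # 10% of pixels as training samples, as in the record's experimental setup.
      idx = rng.permutation(len(y))
      n_train = int(0.1 * len(y))
      clf = SVC(kernel="rbf").fit(X[idx[:n_train]], y[idx[:n_train]])
      print("toy overall accuracy:", clf.score(X[idx[n_train:]], y[idx[n_train:]]))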

  14. The Accuracy of Body Mass Index and Gallagher’s Classification in Detecting Obesity among Iranians

    Directory of Open Access Journals (Sweden)

    Alireza Shahab Jahanlou

    2016-07-01

    Full Text Available Background: The study was conducted to examine the comparability of the BMI and Gallagher’s classification in diagnosing obesity based on the cutoff points of the gold standards and to estimate suitable cutoff points for detecting obesity among Iranians. Methods: The cross-sectional study was comparative in nature. The sample consisted of 20,163 adults. Bioelectrical impedance analysis (BIA) was used to measure the variables of interest. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were used to evaluate the comparability of the two classification methods in detecting obesity. Results: The BMI wrongly classified 29% of the obese persons as overweight. In both classifications, as age increased, the accuracy of detecting obesity decreased. Gallagher’s classification is better than the BMI in detecting obesity in men, with the exception of those older than 59 years. In females, the BMI was better in terms of sensitivity. In both classifications, for either females or males, an increase in age was associated with a decrease in sensitivity and NPV, with the exception of the BMI for 18 year olds. Gallagher’s classification can correctly classify males and females who are less than 40 and 19 years old, respectively. Conclusion: Gallagher’s classification is recommended for the non-obese of both sexes and for obese males younger than 40 years old. The BMI is recommended for obese females. The suitable cutoff points for the BMI to detect obesity are 27.70 kg/m2 for females and 27.30 kg/m2 for males.
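
    The agreement measures named in this record reduce to counts in a 2x2 table; the sketch below computes them for a hypothetical BMI cutoff against a simulated BIA-based gold standard (both the data and the 27.7 kg/m2 threshold are placeholders).

      # Illustrative computation of the four agreement measures used in this record,
      # taking BIA-based obesity status as the gold standard and a BMI cutoff as the
      # index classification. The data and the 27.7 kg/m2 cutoff are placeholders.
      import numpy as np

      rng = np.random.default_rng(2)
      bmi = rng.normal(27, 4, size=1000)                            # fake BMI values
      obese_bia = rng.random(1000) < 1 / (1 + np.exp(-(bmi - 29)))  # fake gold standard

      cutoff = 27.7
      obese_bmi = bmi >= cutoff

      tp = np.sum(obese_bmi & obese_bia)
      fp = np.sum(obese_bmi & ~obese_bia)
      fn = np.sum(~obese_bmi & obese_bia)
      tn = np.sum(~obese_bmi & ~obese_bia)

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      ppv = tp / (tp + fp)
      npv = tn / (tn + fn)
      print(f"sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")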

  15. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    Science.gov (United States)

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracies of machine learned MDD-vs-Control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional-face matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We use each of these six datasets with linear support vector machine (SVM) binary classifiers for identifying individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  16. Adjusting for covariate effects on classification accuracy using the covariate-adjusted receiver operating characteristic curve.

    Science.gov (United States)

    Janes, Holly; Pepe, Margaret S

    2009-06-01

    Recent scientific and technological innovations have produced an abundance of potential markers that are being investigated for their use in disease screening and diagnosis. In evaluating these markers, it is often necessary to account for covariates associated with the marker of interest. Covariates may include subject characteristics, expertise of the test operator, test procedures or aspects of specimen handling. In this paper, we propose the covariate-adjusted receiver operating characteristic curve, a measure of covariate-adjusted classification accuracy. Nonparametric and semiparametric estimators are proposed, asymptotic distribution theory is provided and finite sample performance is investigated. For illustration we characterize the age-adjusted discriminatory accuracy of prostate-specific antigen as a biomarker for prostate cancer.
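
    One way to picture the covariate-adjusted ROC is through placement values: each case's marker is ranked within the control distribution of its own covariate stratum, and the adjusted curve is the distribution of those ranks. The sketch below assumes a discrete covariate and simulated data; the paper's nonparametric and semiparametric estimators are more general.

      # Sketch of the covariate-adjusted ROC idea: each case's marker value is
      # converted to a placement value within the control distribution that shares
      # its covariate stratum, and the AROC is the CDF of those placement values.
      # Discrete age strata and simulated data are simplifying assumptions here.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 500
      age_group = rng.integers(0, 3, size=n)            # covariate: 3 age strata
      disease = rng.random(n) < 0.3
      # marker drifts with age in controls, and is shifted upward in cases
      marker = rng.normal(age_group, 1.0, size=n) + 1.2 * disease

      def aroc(marker, disease, covariate, grid):
          """Covariate-adjusted ROC evaluated on a grid of false-positive rates."""
          pv = []
          for m, z in zip(marker[disease], covariate[disease]):
              controls = marker[(~disease) & (covariate == z)]
              pv.append(np.mean(controls >= m))         # placement value of the case
          pv = np.asarray(pv)
          return np.array([np.mean(pv <= t) for t in grid])

      fpr_grid = np.linspace(0, 1, 11)
      print(np.round(aroc(marker, disease, age_group, fpr_grid), 2))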

  17. [Procedures for performing meta-analyses of the accuracy of tools for binary classification].

    Science.gov (United States)

    Botella, Juan; Huang, Huiling

    2012-02-01

    The assessment of accuracy in binary classification tools must take into account two non-independent rates: true positives and false positives. A variety of indices have been proposed. They have been estimated for tests employed for early detection or screening purposes. We summarize and review the main methods proposed for performing a meta-analysis that assesses the accuracy of this type of tool. They are applied to the results from 14 studies that report estimates of the accuracy of the AUDIT. The method of direct aggregation does not allow the use of meta-analytic procedures; the separate estimation of sensitivity and specificity does not acknowledge that they are not independent; the SROC method treats accuracy and threshold as fixed effects and has limitations to deal with the potential role of covariates. The Normal Bivariate (NB) model and the Hierarchical Summary ROC (HSROC) model are statistically rigorous and can deal with the covariates properly. They allowed analysis of the association between the gender composition of the sample and the way the AUDIT behaves in the example.

  18. Use of classification trees to apportion single echo detections to species: Application to the pelagic fish community of Lake Superior

    Science.gov (United States)

    Yule, Daniel L.; Adams, Jean V.; Hrabik, Thomas R.; Vinson, Mark R.; Woiak, Zebadiah; Ahrenstroff, Tyler D.

    2013-01-01

    Acoustic methods are used to estimate the density of pelagic fish in large lakes with results of midwater trawling used to assign species composition. Apportionment in lakes having mixed species can be challenging because only a small fraction of the water sampled acoustically is sampled with trawl gear. Here we describe a new method where single echo detections (SEDs) are assigned to species based on classification tree models developed from catch data that separate species based on fish size and the spatial habitats they occupy. During the summer of 2011, we conducted a spatially-balanced lake-wide acoustic and midwater trawl survey of Lake Superior. A total of 51 sites in four bathymetric depth strata (0–30 m, 30–100 m, 100–200 m, and >200 m) were sampled. We developed classification tree models for each stratum and found fish length was the most important variable for separating species. To apply these trees to the acoustic data, we needed to identify a target strength to length (TS-to-L) relationship appropriate for all abundant Lake Superior pelagic species. We tested performance of 7 general (i.e., multi-species) relationships derived from three published studies. The best-performing relationship was identified by comparing predicted and observed catch compositions using a second independent Lake Superior data set. Once identified, the relationship was used to predict lengths of SEDs from the lake-wide survey, and the classification tree models were used to assign each SED to a species. Exotic rainbow smelt (Osmerus mordax) were the most common species at bathymetric depths 100 m (384 million; 6.0 kt). Cisco (Coregonus artedi) were widely distributed over all strata with their population estimated at 182 million (44 kt). The apportionment method we describe should be transferable to other large lakes provided fish are not tightly aggregated, and an appropriate TS-to-L relationship for abundant pelagic fish species can be determined.
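
    The apportionment step can be illustrated with scikit-learn: a tree is trained on trawl catch (species as a function of fish length and depth), echo target strengths are converted to lengths, and the tree labels each single echo detection. The TS-to-length coefficients and catch data below are invented for illustration and are not the relationships tested in the study.

      # Sketch of the apportionment idea in this record: a classification tree is
      # trained on trawl-caught fish (species ~ length + depth), single echo
      # detections are converted to lengths with a general TS-to-length relation,
      # and the tree assigns each SED to a species. The TS-to-L coefficients and the
      # toy catch data are assumptions for illustration, not the study's values.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(4)

      # Toy midwater-trawl catch: lengths (cm), fishing depth (m), species label.
      n = 300
      species = rng.choice(["rainbow smelt", "cisco", "kiyi"], size=n)
      length = np.where(species == "cisco", rng.normal(30, 5, n),
                        np.where(species == "kiyi", rng.normal(15, 3, n),
                                 rng.normal(10, 2, n)))
      depth = np.where(species == "rainbow smelt", rng.uniform(10, 80, n),
                       rng.uniform(40, 250, n))

      tree = DecisionTreeClassifier(max_depth=3, random_state=0)
      tree.fit(np.column_stack([length, depth]), species)

      # Hypothetical general TS-to-length relation: TS = 20*log10(L_cm) - 68.
      def ts_to_length_cm(ts_db):
          return 10 ** ((ts_db + 68.0) / 20.0)

      sed_ts = np.array([-48.0, -40.5, -44.0])        # example single echo detections (dB)
      sed_depth = np.array([35.0, 120.0, 180.0])      # detection depths (m)
      sed_len = ts_to_length_cm(sed_ts)
      print(tree.predict(np.column_stack([sed_len, sed_depth])))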

  19. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    Science.gov (United States)

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  20. Improvement of User's Accuracy Through Classification of Principal Component Images and Stacked Temporal Images

    Institute of Scientific and Technical Information of China (English)

    Nilanchal Patel; Brijesh Kumar Kaushal

    2010-01-01

    The classification accuracy of the various categories on classified remotely sensed images is usually evaluated by two different measures of accuracy, namely, producer's accuracy (PA) and user's accuracy (UA). The PA of a category indicates to what extent the reference pixels of the category are correctly classified, whereas the UA of a category represents to what extent the other categories are less misclassified into the category in question. Therefore, the UA of the various categories determines the reliability of their interpretation on the classified image and is more important to the analyst than the PA. The present investigation was performed in order to determine whether there is an improvement in the UA of the various categories on the classified image of the principal components of the original bands and on the classified image of the stacked image of two different years. We performed the analyses using the IRS LISS III images of two different years, i.e., 1996 and 2009, which represent different magnitudes of urbanization, and the stacked image of these two years pertaining to the Ranchi area, Jharkhand, India, with a view to assessing the impacts of urbanization on the UA of the different categories. The results of the investigation demonstrated that there is a significant improvement in the UA of the impervious categories in the classified image of the stacked image, which is attributable to the aggregation of the spectral information from twice the number of bands from two different years. On the other hand, the classified image of the principal components did not show any improvement in the UA as compared to the original images.
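
    For reference, producer's and user's accuracy are just row- and column-normalised diagonals of the confusion matrix; the matrix below is a made-up example, not the Ranchi-area results.

      # Producer's accuracy (PA) and user's accuracy (UA) computed from a confusion
      # matrix of reference vs. classified pixels; the 3x3 matrix is a made-up example.
      import numpy as np

      # rows = reference class, columns = classified class
      cm = np.array([[50,  5,  3],
                     [ 4, 60,  6],
                     [ 2,  8, 40]])

      pa = np.diag(cm) / cm.sum(axis=1)   # fraction of reference pixels correctly classified
      ua = np.diag(cm) / cm.sum(axis=0)   # fraction of classified pixels that are correct
      overall = np.trace(cm) / cm.sum()
      print("PA:", np.round(pa, 2), "UA:", np.round(ua, 2), "overall:", round(overall, 2))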

  1. Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease.

    Science.gov (United States)

    Tsanas, Athanasios; Little, Max A; McSharry, Patrick E; Spielman, Jennifer; Ramig, Lorraine O

    2012-05-01

    There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.

  2. Reverse Classification Accuracy: Predicting Segmentation Performance in the Absence of Ground Truth.

    Science.gov (United States)

    Valindria, Vanya V; Lavdas, Ioannis; Bai, Wenjia; Kamnitsas, Konstantinos; Aboagye, Eric O; Rockall, Andrea G; Rueckert, Daniel; Glocker, Ben

    2017-08-01

    When integrating computational tools, such as automatic segmentation, into clinical practice, it is of utmost importance to be able to assess the level of accuracy on new data and, in particular, to detect when an automatic method fails. However, this is difficult to achieve due to the absence of ground truth. Segmentation accuracy on clinical data might be different from what is found through cross validation, because validation data are often used during incremental method development, which can lead to overfitting and unrealistic performance expectations. Before deployment, performance is quantified using different metrics, for which the predicted segmentation is compared with a reference segmentation, often obtained manually by an expert. But little is known about the real performance after deployment when a reference is unavailable. In this paper, we introduce the concept of reverse classification accuracy (RCA) as a framework for predicting the performance of a segmentation method on new data. In RCA, we take the predicted segmentation from a new image to train a reverse classifier, which is evaluated on a set of reference images with available ground truth. The hypothesis is that if the predicted segmentation is of good quality, then the reverse classifier will perform well on at least some of the reference images. We validate our approach on multi-organ segmentation with different classifiers and segmentation methods. Our results indicate that it is indeed possible to predict the quality of individual segmentations, in the absence of ground truth. Thus, RCA is ideal for integration into automatic processing pipelines in clinical routine and as a part of large-scale image analysis studies.
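
    A much-simplified sketch of the RCA idea, with toy images, pixel-intensity features and a logistic-regression reverse classifier standing in for the paper's segmentation pipeline.

      # Much-simplified sketch of reverse classification accuracy (RCA): the
      # predicted segmentation of a new image is used as if it were ground truth to
      # train a "reverse" classifier, which is then evaluated (here with the Dice
      # coefficient) on reference images that do have ground truth; the best score
      # is taken as a proxy for the quality of the original prediction. Features,
      # classifier and images are toy stand-ins for the paper's setup.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      rng = np.random.default_rng(5)

      def make_image(shift=0.0):
          """Toy 1-channel image whose bright blob is the structure to segment."""
          img = rng.normal(0, 0.3, size=(32, 32))
          gt = np.zeros((32, 32), dtype=int)
          gt[8:20, 8:20] = 1
          img += (1.0 + shift) * gt
          return img, gt

      new_img, new_gt = make_image()
      predicted_seg = new_gt.copy()
      predicted_seg[18:20, :] = 0               # pretend the method made a small error

      # Train the reverse classifier on the new image using the *predicted* labels.
      reverse_clf = LogisticRegression().fit(new_img.reshape(-1, 1), predicted_seg.ravel())

      # Evaluate it on reference images with known ground truth; take the best Dice.
      rca_scores = []
      for _ in range(5):
          ref_img, ref_gt = make_image(shift=rng.normal(0, 0.1))
          ref_pred = reverse_clf.predict(ref_img.reshape(-1, 1)).reshape(ref_img.shape)
          rca_scores.append(dice(ref_pred, ref_gt))

      print("RCA estimate of segmentation quality:", round(max(rca_scores), 2))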

  3. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    Science.gov (United States)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High resolution hydro-acoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain size distributions, ii) understanding complex patterns influenced by factors other than grain size variations, and iii) designing innovative repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in the year 2012 in the shallow upper-mesotidal Jade Bay inlet (German North Sea), made it possible to map 100 km2 of surficial sediment with a resolution and coverage never acquired before in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed for modelling the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel turned out to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  4. Reliability, Validity, and Classification Accuracy of the DSM-5 Diagnostic Criteria for Gambling Disorder and Comparison to DSM-IV.

    Science.gov (United States)

    Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C

    2016-09-01

    The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of the elimination of the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.

  5. Accuracy and cut-off point selection in three-class classification problems using a generalization of the Youden index.

    Science.gov (United States)

    Nakas, Christos T; Alonzo, Todd A; Yiannoutsos, Constantin T

    2010-12-10

    We study properties of the index J(3), defined as the accuracy, or the maximum correct classification, for a given three-class classification problem. Specifically, using J(3) one can assess the discrimination between the three distributions and obtain an optimal pair of cut-off points c(1) < c(2) for which the sum of the correct classification proportions is maximized. It also serves as the generalization of the Youden index to three-class problems. Parametric and non-parametric approaches for estimation and testing are considered and the methods are applied to data from an MRS study on human immunodeficiency virus (HIV) patients.
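
    The generalized Youden index can be illustrated with a brute-force search for the cut-off pair c1 < c2 that maximises the sum of the three correct classification proportions; the normal distributions below are illustrative only.

      # Toy grid search for the pair of cut-off points (c1 < c2) that maximises
      # J(3), the sum of correct classification proportions over three ordered
      # classes; the simulated marker distributions are illustrative only.
      import numpy as np

      rng = np.random.default_rng(6)
      g1 = rng.normal(0.0, 1.0, 200)    # class 1 marker values
      g2 = rng.normal(1.5, 1.0, 200)    # class 2
      g3 = rng.normal(3.0, 1.0, 200)    # class 3

      grid = np.linspace(-2, 5, 141)
      best = (None, None, -np.inf)
      for i, c1 in enumerate(grid):
          for c2 in grid[i + 1:]:                       # enforce c1 < c2
              j3 = np.mean(g1 <= c1) + np.mean((g2 > c1) & (g2 <= c2)) + np.mean(g3 > c2)
              if j3 > best[2]:
                  best = (c1, c2, j3)

      c1, c2, j3 = best
      print(f"optimal cut-offs: c1={c1:.2f}, c2={c2:.2f}, J3={j3:.2f} (max 3)")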

  6. Basic visual dysfunction allows classification of patients with schizophrenia with exceptional accuracy.

    Science.gov (United States)

    González-Hernández, J A; Pita-Alcorta, C; Padrón, A; Finalé, A; Galán, L; Martínez, E; Díaz-Comas, L; Samper-González, J A; Lencer, R; Marot, M

    2014-10-01

    Basic visual dysfunctions are commonly reported in schizophrenia; however their value as diagnostic tools remains uncertain. This study reports a novel electrophysiological approach using checkerboard visual evoked potentials (VEP). Sources of spectral resolution VEP-components C1, P1 and N1 were estimated by LORETA, and the band-effects (BSE) on these estimated sources were explored in each subject. BSEs were Z-transformed for each component and relationships with clinical variables were assessed. Clinical effects were evaluated by ROC-curves and predictive values. Forty-eight patients with schizophrenia (SZ) and 55 healthy controls participated in the study. For each of the 48 patients, the three VEP components were localized to both dorsal and ventral brain areas and also deviated from a normal distribution. P1 and N1 deviations were independent of treatment, illness chronicity or gender. Results from LORETA also suggest that deficits in thalamus, posterior cingulum, precuneus, superior parietal and medial occipitotemporal areas were associated with symptom severity. While positive symptoms were more strongly related to sensory processing deficits (P1), negative symptoms were more strongly related to perceptual processing dysfunction (N1). Clinical validation revealed positive and negative predictive values for correctly classifying SZ of 100% and 77%, respectively. Classification in an additional independent sample of 30 SZ corroborated these results. In summary, this novel approach revealed basic visual dysfunctions in all patients with schizophrenia, suggesting these visual dysfunctions represent a promising candidate as a biomarker for schizophrenia.

  7. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    Directory of Open Access Journals (Sweden)

    Malik Yousef

    2016-01-01

    Full Text Available MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently, computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized, and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data are hard to come by. Therefore, it seems preferable to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to a 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6%, thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection.
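
    The effect of the feature set on one-class performance can be sketched as follows, assuming synthetic data and a pre-chosen "informative" subset in place of the paper's feature selection procedures; negatives are used only for evaluation, never for training.

      # Sketch of how feature selection can change one-class classification
      # accuracy: a OneClassSVM is trained on positive (miRNA-like) examples only,
      # using different feature subsets, and evaluated on held-out positives and a
      # separate negative set. The data and the pre-chosen "informative" subset are
      # illustrative assumptions, not the feature selection methods of the paper.
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(7)
      n_features = 50
      pos = rng.normal(0.0, 1.0, size=(300, n_features))
      pos[:, :10] += 2.0                                  # 10 genuinely informative features
      neg = rng.normal(0.0, 1.0, size=(300, n_features))  # negatives, used only for testing

      def one_class_accuracy(feature_idx):
          train, test_pos = pos[:200][:, feature_idx], pos[200:][:, feature_idx]
          test_neg = neg[:, feature_idx]
          scaler = StandardScaler().fit(train)
          clf = OneClassSVM(nu=0.1, gamma="scale").fit(scaler.transform(train))
          acc_pos = np.mean(clf.predict(scaler.transform(test_pos)) == 1)
          acc_neg = np.mean(clf.predict(scaler.transform(test_neg)) == -1)
          return (acc_pos + acc_neg) / 2

      all_features = np.arange(n_features)
      informative = np.arange(10)                        # pretend a selector found these
      print("all features :", round(one_class_accuracy(all_features), 2))
      print("selected set :", round(one_class_accuracy(informative), 2))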

  8. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    Science.gov (United States)

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  9. Pre-Processing Effect on the Accuracy of Event-Based Activity Segmentation and Classification through Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Benish Fida

    2015-09-01

    Full Text Available Inertial sensors are increasingly being used to recognize and classify physical activities in a variety of applications. For monitoring and fitness applications, it is crucial to develop methods able to segment each activity cycle, e.g., a gait cycle, so that the successive classification step may be more accurate. To increase detection accuracy, pre-processing is often used, with a concurrent increase in computational cost. In this paper, the effect of pre-processing operations on the detection and classification of locomotion activities was investigated, to check whether the presence of pre-processing significantly contributes to an increase in accuracy. The pre-processing stages evaluated in this study were inclination correction and de-noising. Level walking, step ascending, descending and running were monitored by using a shank-mounted inertial sensor. Raw and filtered segments, obtained from a modified version of a rule-based gait detection algorithm optimized for sequential processing, were processed to extract time and frequency-based features for physical activity classification through a support vector machine classifier. The proposed method accurately detected >99% of gait cycles from raw data and produced >98% accuracy on these segmented gait cycles. Pre-processing did not substantially increase classification accuracy, thus highlighting the possibility of reducing the amount of pre-processing for real-time applications.

  11. Measurement Properties and Classification Accuracy of Two Spanish Parent Surveys of Language Development for Preschool-Age Children

    Science.gov (United States)

    Guiberson, Mark; Rodriguez, Barbara L.

    2010-01-01

    Purpose: To describe the concurrent validity and classification accuracy of 2 Spanish parent surveys of language development, the Spanish Ages and Stages Questionnaire (ASQ; Squires, Potter, & Bricker, 1999) and the Pilot Inventario-III (Pilot INV-III; Guiberson, 2008a). Method: Forty-eight Spanish-speaking parents of preschool-age children…

  13. Automated, high accuracy classification of Parkinsonian disorders: a pattern recognition approach.

    Directory of Open Access Journals (Sweden)

    Andre F Marquand

    Full Text Available Progressive supranuclear palsy (PSP), multiple system atrophy (MSA) and idiopathic Parkinson's disease (IPD) can be clinically indistinguishable, especially in the early stages, despite distinct patterns of molecular pathology. Structural neuroimaging holds promise for providing objective biomarkers for discriminating these diseases at the single subject level but all studies to date have reported incomplete separation of disease groups. In this study, we employed multi-class pattern recognition to assess the value of anatomical patterns derived from a widely available structural neuroimaging sequence for automated classification of these disorders. To achieve this, 17 patients with PSP, 14 with IPD and 19 with MSA were scanned using structural MRI along with 19 healthy controls (HCs). An advanced probabilistic pattern recognition approach was employed to evaluate the diagnostic value of several pre-defined anatomical patterns for discriminating the disorders, including: (i) a subcortical motor network; (ii) each of its component regions and (iii) the whole brain. All disease groups could be discriminated simultaneously with high accuracy using the subcortical motor network. The region providing the most accurate predictions overall was the midbrain/brainstem, which discriminated all disease groups from one another and from HCs. The subcortical network also produced more accurate predictions than the whole brain and all of its constituent regions. PSP was accurately predicted from the midbrain/brainstem, cerebellum and all basal ganglia compartments; MSA from the midbrain/brainstem and cerebellum and IPD from the midbrain/brainstem only. This study demonstrates that automated analysis of structural MRI can accurately predict diagnosis in individual patients with Parkinsonian disorders, and identifies distinct patterns of regional atrophy particularly useful for this process.

  14. A Comparative Accuracy Analysis of Classification Methods in Determination of Cultivated Lands with Spot 5 Satellite Imagery

    Science.gov (United States)

    kaya, S.; Alganci, U.; Sertel, E.; Ustundag, B.

    2013-12-01

    Determination of cultivated lands and estimation of their area are important tasks for agricultural management. The derived information is mostly used in agricultural policies and precision agriculture, specifically in yield estimation, irrigation and fertilization management, and verification of farmers' declarations. The use of satellite images in crop type identification and area estimation has been common for two decades owing to their capability of monitoring large areas, rapid data acquisition, and spectral response to crop properties. With the launch of high and very high spatial resolution optical satellites in the last decade, such analyses have gained importance as they provide information at a large scale. With increasing spatial resolution of satellite images, the image classification methods used to derive information from them have become important as the spectral heterogeneity within land objects increases. In this research, pixel-based classification with the maximum likelihood algorithm and object-based classification with the nearest neighbor algorithm were applied to 2012-dated 2.5 m resolution SPOT 5 satellite images in order to investigate the accuracy of these methods in determination of cotton and corn planted lands and their area estimation. The study area was selected in Sanliurfa Province, located in Southeastern Turkey, which contributes to Turkey's agricultural production in a major way. Classification results were compared in terms of crop type identification using

  15. Image Reconstruction Using Multi Layer Perceptron MLP And Support Vector Machine SVM Classifier And Study Of Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Shovasis Kumar Biswas

    2015-02-01

    Full Text Available Abstract Support Vector Machines (SVM) and back-propagation neural networks (BPNN) have been applied successfully in many areas, for example, rule extraction, classification and evaluation. In this paper, we studied the back-propagation algorithm for training the multilayer artificial neural network and a support vector machine for data classification and image reconstruction. A model focused on SVM with a Gaussian RBF kernel is utilized here for data classification. The back-propagation neural network is viewed as one of the most straightforward and most general methods used for supervised training of multilayered neural networks. We compared a support vector machine (SVM) with a back-propagation neural network (BPNN) for the tasks of data classification and image reconstruction, and we compared the performance of the two learning methods on multi-class classification. From this comparison we can conclude that the classification accuracy of the support vector machine is better and the algorithm is much faster than the MLP with the back-propagation algorithm.

  16. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    Directory of Open Access Journals (Sweden)

    WIDAD Elmahboub

    2005-02-01

    Full Text Available Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed pixel information. A linear mixture simulated model experiment was generated to simulate real world data with known endmember spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized to make the sum of the class proportions equal to 1.0 using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Pixels can either be correctly classified or misclassified. Results showed a great improvement in classification accuracy; for example, in image 1, the proportion of pixels misclassified due to atmospheric noise was 41%. Subsequent to the degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We can conclude that the accuracy of classification can be improved by degradation of atmospheric noise.
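
    The simulation described here is straightforward to reproduce in outline: random class cover proportions are normalised to sum to one, mixed spectra are formed from known endmembers, and noise is added before classification. Endmembers, noise level and the nearest-endmember classifier below are arbitrary stand-ins.

      # Minimal re-creation of the simulation idea in this record: class cover
      # proportions are drawn at random and normalised to sum to 1, mixed-pixel
      # spectra are formed as a linear combination of known endmember spectra, and
      # noise is added; endmembers and noise levels are arbitrary stand-ins.
      import numpy as np

      rng = np.random.default_rng(8)
      n_pixels, n_classes, n_bands = 1000, 3, 4

      endmembers = rng.uniform(0.1, 0.9, size=(n_classes, n_bands))  # known spectral sets

      # Class cover proportions (CCP): random, normalised so each pixel sums to 1.
      ccp = rng.random((n_pixels, n_classes))
      ccp /= ccp.sum(axis=1, keepdims=True)

      mixed = ccp @ endmembers                           # linear mixture model
      noisy = mixed + rng.normal(0, 0.1, mixed.shape)    # additive "atmospheric" noise

      # Classify each pixel by nearest endmember spectrum, with and without the
      # simulated noise, and compare against the dominant class of each pixel.
      true_label = ccp.argmax(axis=1)
      def nearest_endmember(pixels):
          d = np.linalg.norm(pixels[:, None, :] - endmembers[None, :, :], axis=2)
          return d.argmin(axis=1)

      print("accuracy with noise   :", np.mean(nearest_endmember(noisy) == true_label))
      print("accuracy without noise:", np.mean(nearest_endmember(mixed) == true_label))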

  18. Land cover classification accuracy from electro-optical, X, C, and L-band Synthetic Aperture Radar data fusion

    Science.gov (United States)

    Hammann, Mark Gregory

    The fusion of electro-optical (EO) multi-spectral satellite imagery with Synthetic Aperture Radar (SAR) data was explored with the working hypothesis that the addition of multi-band SAR will increase the land-cover (LC) classification accuracy compared to EO alone. Three satellite sources for SAR imagery were used: X-band from TerraSAR-X, C-band from RADARSAT-2, and L-band from PALSAR. Images from the RapidEye satellites were the source of the EO imagery. Imagery from the GeoEye-1 and WorldView-2 satellites aided the selection of ground truth. Three study areas were chosen: Wad Medani, Sudan; Campinas, Brazil; and Fresno-Kings Counties, USA. EO imagery were radiometrically calibrated, atmospherically compensated, orthorectified, co-registered, and clipped to a common area of interest (AOI). SAR imagery were radiometrically calibrated, and geometrically corrected for terrain and incidence angle by converting to ground range and Sigma Naught (σ0). The original SAR HH data were included in the fused image stack after despeckling with a 3x3 Enhanced Lee filter. The variance and Gray-Level-Co-occurrence Matrix (GLCM) texture measures of contrast, entropy, and correlation were derived from the non-despeckled SAR HH bands. Data fusion was done with layer stacking and all data were resampled to a common spatial resolution. The Support Vector Machine (SVM) decision rule was used for the supervised classifications. Similar LC classes were identified and tested for each study area. For Wad Medani, nine classes were tested: low and medium intensity urban, sparse forest, water, barren ground, and four agriculture classes (fallow, bare agricultural ground, green crops, and orchards). For Campinas, Brazil, five generic classes were tested: urban, agriculture, forest, water, and barren ground. For the Fresno-Kings Counties location 11 classes were studied: three generic classes (urban, water, barren land), and eight specific crops. In all cases the addition of SAR to EO resulted

  19. Classification algorithms to improve the accuracy of identifying patients hospitalized with community-acquired pneumonia using administrative data.

    Science.gov (United States)

    Yu, O; Nelson, J C; Bounds, L; Jackson, L A

    2011-09-01

    In epidemiological studies of community-acquired pneumonia (CAP) that utilize administrative data, cases are typically defined by the presence of a pneumonia hospital discharge diagnosis code. However, not all such hospitalizations represent true CAP cases. We identified 3991 hospitalizations during 1997-2005 in a managed care organization, and validated them as CAP or not by reviewing medical records. To improve the accuracy of CAP identification, classification algorithms that incorporated additional administrative information associated with the hospitalization were developed using the classification and regression tree analysis. We found that a pneumonia code designated as the primary discharge diagnosis and duration of hospital stay improved the classification of CAP hospitalizations. Compared to the commonly used method that is based on the presence of a primary discharge diagnosis code of pneumonia alone, these algorithms had higher sensitivity (81-98%) and positive predictive values (82-84%) with only modest decreases in specificity (48-82%) and negative predictive values (75-90%).

  20. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    Science.gov (United States)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI) and the other a method that we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by succeeding the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  1. Classification accuracy analysis of selected land use and land cover products in a portion of West-Central Lower Michigan

    Science.gov (United States)

    Ma, Kin Man

    2007-12-01

    Remote sensing satellites have been utilized to characterize and map land cover and its changes since the 1970s. However, uncertainties exist in almost all land use and land cover maps classified from remotely sensed images. In particular, it has been recognized that the spatial mis-registration of land cover maps can affect the true estimates of land use/land cover (LULC) changes. This dissertation addressed the following questions: what are the spatial patterns, magnitudes, and cover-dependencies of classification uncertainty associated with West-Central Lower Michigan's LULC products and how can the adverse effects of spatial misregistration on accuracy assessment be reduced? Two Michigan LULC products were chosen for comparison: 1998 Muskegon River Watershed (MRW) Michigan Resource Information Systems LULC map and a 2001 Integrated Forest Monitoring and Assessment Prescription Project (IFMAP). The 1m resolution 1998 MRW LULC map was derived from U.S. Geological Survey Digital Orthophoto Quarter Quadrangle (USGS DOQQs) color infrared imagery and was used as the reference map, since it has a thematic accuracy of 95%. The IFMAP LULC map was co-registered to a series of selected 1998 USGS DOQQs. The total combined root mean square error (rmse) distance of the georectified 2001 IFMAP was +/-12.20m. A spatial uncertainty buffer of at least 1.5 times the rmse was set at 20m so that polygon core areas would be unaffected by spatial misregistration noise. A new spatial misregistration buffer protocol (SPATIALM_BUFFER) was developed to limit the effect of spatial misregistration on classification accuracy assessment. Spatial uncertainty buffer zones of 20m were generated around LULC polygons of both datasets. Eight-hundred seventeen (817) stratified random accuracy assessment points (AAPs) were generated across the 1998 MRW map. Classification accuracy and kappa statistics were generated for both the 817 AAPs and 604 AAPs comparisons. For the 817 AAPs comparison, the
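
    The accuracy figures reported in such assessments are typically overall accuracy and the kappa coefficient over the accuracy assessment points; a toy computation (with fabricated labels, and without modelling the 20 m buffer itself) is shown below.

      # Toy computation of the agreement statistics used in this accuracy
      # assessment: overall classification accuracy and the kappa coefficient over
      # a set of accuracy assessment points (AAPs); the labels are fabricated and
      # the 20 m misregistration buffer itself is not modelled here.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score, accuracy_score

      rng = np.random.default_rng(9)
      classes = ["urban", "forest", "water", "agriculture"]
      reference = rng.choice(classes, size=817)                 # reference labels at AAPs
      noise = rng.random(817) < 0.15                            # 15% disagreement
      classified = np.where(noise, rng.choice(classes, size=817), reference)

      print("overall accuracy:", round(accuracy_score(reference, classified), 3))
      print("kappa           :", round(cohen_kappa_score(reference, classified), 3))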

  2. Hybrid Brain–Computer Interface Techniques for Improved Classification Accuracy and Increased Number of Commands: A Review

    Directory of Open Access Journals (Sweden)

    Keum-Shik Hong

    2017-07-01

    Full Text Available In this article, non-invasive hybrid brain–computer interface (hBCI) technologies for improving classification accuracy and increasing the number of commands are reviewed. Hybridization combining more than two modalities is a new trend in brain imaging and prosthesis control. Electroencephalography (EEG), due to its easy use and fast temporal resolution, is most widely utilized in combination with other brain/non-brain signal acquisition modalities, for instance, functional near infrared spectroscopy (fNIRS), electromyography (EMG), electrooculography (EOG), and eye tracker. Three main purposes of hybridization are to increase the number of control commands, improve classification accuracy and reduce the signal detection time. Currently, such combinations of EEG + fNIRS and EEG + EOG are most commonly employed. Four principal components (i.e., hardware, paradigm, classifiers, and features) relevant to accuracy improvement are discussed. In the case of brain signals, motor imagination/movement tasks are combined with cognitive tasks to increase active brain–computer interface (BCI) accuracy. Active and reactive tasks sometimes are combined: motor imagination with steady-state evoked visual potentials (SSVEP) and motor imagination with P300. In the case of reactive tasks, SSVEP is most widely combined with P300 to increase the number of commands. Passive BCIs, however, are rare. After discussing the hardware and strategies involved in the development of hBCI, the second part examines the approaches used to increase the number of control commands and to enhance classification accuracy. The future prospects and the extension of hBCI in real-time applications for daily life scenarios are provided.

  3. Accuracy of automated classification of major depressive disorder as a function of symptom severity

    Directory of Open Access Journals (Sweden)

    Rajamannar Ramasubbu, MD, FRCPC, MSc

    2016-01-01

    Conclusions: Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.

  4. Transcutaneous PTCCO2 measurement in combination with arterial blood gas analysis provides superior accuracy and reliability in ICU patients.

    Science.gov (United States)

    Spelten, Oliver; Fiedler, Fritz; Schier, Robert; Wetsch, Wolfgang A; Hinkelbein, Jochen

    2017-02-01

    Hyper- or hypoventilation may have serious clinical consequences in critically ill patients and should generally be avoided, especially in neurosurgical patients. Therefore, monitoring of carbon dioxide partial pressure by intermittent arterial blood gas analysis (PaCO2) has become standard in intensive care units (ICUs). However, several additional methods are available to determine PCO2, including end-tidal (PETCO2) and transcutaneous (PTCCO2) measurements. The aim of this study was to compare the accuracy and reliability of different methods to determine PCO2 in mechanically ventilated patients in the ICU. After approval of the local ethics committee, PCO2 was determined in 32 consecutive ICU patients requiring mechanical ventilation: (1) arterial PaCO2 blood gas analysis with a Radiometer ABL 625 (ABL; gold standard), (2) arterial PaCO2 analysis with the Immediate Response Mobile Analyzer (IRMA), (3) end-tidal PETCO2 with a Propaq 106 EL monitor, and (4) transcutaneous PTCCO2 determination with a Tina TCM4. The Bland-Altman method was used for statistical analysis. The analysis revealed good correlation between PaCO2 by IRMA and ABL (R(2) = 0.766). Bland-Altman analysis revealed a bias and precision of 2.0 ± 3.7 mmHg for the IRMA, 2.2 ± 5.7 mmHg for transcutaneous, and -5.5 ± 5.6 mmHg for end-tidal measurement. Arterial CO2 partial pressure by IRMA (PaCO2) and PTCCO2 provided greater accuracy compared to the reference measurement (ABL) than the end-tidal CO2 measurements in critically ill, mechanically ventilated patients.
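
    The Bland-Altman bias and precision quoted above are simply the mean and standard deviation of the paired differences against the reference; the sketch below uses synthetic paired readings, not the study data.

      # Bland-Altman style bias and precision (mean and SD of the paired
      # differences) between a candidate PCO2 measurement and the arterial blood
      # gas reference; the paired values below are synthetic, not study data.
      import numpy as np

      rng = np.random.default_rng(10)
      pco2_abl = rng.normal(40, 6, size=32)                    # reference (mmHg)
      pco2_tc = pco2_abl + rng.normal(2.2, 5.7, size=32)       # e.g. transcutaneous reading

      diff = pco2_tc - pco2_abl
      bias, precision = diff.mean(), diff.std(ddof=1)
      limits = (bias - 1.96 * precision, bias + 1.96 * precision)
      print(f"bias={bias:.1f} mmHg, precision(SD)={precision:.1f} mmHg, "
            f"95% limits of agreement=({limits[0]:.1f}, {limits[1]:.1f})")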

  5. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. mfERG records were obtained from the eyes of healthy individuals and of patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as one potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra class distance. Based on the selected feature subsets, Fisher's linear classifiers were trained, and the final classification decision for a record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
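
    The pipeline described (ranking candidate amplitude features by an inter/intra-class distance criterion, training one Fisher linear classifier per feature subset, and voting over their outputs) can be sketched as follows; the random data, subset sizes and exact distance criterion are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical mfERG feature matrix: rows = subjects, columns = amplitude samples
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 300))
    y = rng.integers(0, 2, size=60)          # 0 = healthy, 1 = diabetic (placeholder labels)

    def inter_intra_ratio(x, y):
        """Score one feature by between-class distance over within-class spread."""
        m0, m1 = x[y == 0].mean(), x[y == 1].mean()
        s0, s1 = x[y == 0].std(), x[y == 1].std()
        return abs(m0 - m1) / (s0 + s1 + 1e-12)

    scores = np.array([inter_intra_ratio(X[:, j], y) for j in range(X.shape[1])])
    subsets = np.argsort(scores)[::-1][:30].reshape(5, 6)   # five feature subsets of six features each

    # One Fisher (linear) discriminant per subset; the final label is decided by majority vote
    classifiers = [LinearDiscriminantAnalysis().fit(X[:, s], y) for s in subsets]
    votes = np.array([clf.predict(X[:, s]) for clf, s in zip(classifiers, subsets)])
    prediction = (votes.mean(axis=0) > 0.5).astype(int)
    print("training error rate:", np.mean(prediction != y))
    ```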

  6. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
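
    A minimal sketch of the evaluation strategy described, comparing several of the named classifier families by 10-fold cross-validation, is shown below; the synthetic six-class data and the specific scikit-learn estimators (bagged trees standing in for the ensemble variant) are assumptions, not the authors' MSI feature vectors or implementations.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in data with six classes, mimicking the six labelled tissue types
    X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                               n_classes=6, n_clusters_per_class=1, random_state=0)

    models = {
        "KNN": KNeighborsClassifier(),
        "DT": DecisionTreeClassifier(random_state=0),
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
        "EN-DT": BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0),
    }

    # 10-fold cross-validated accuracy for each candidate model
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        print(f"{name}: mean 10-fold accuracy = {acc.mean():.3f}")
    ```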

  7. Optimizing statistical classification accuracy of satellite remotely sensed imagery for supporting fast flood hydrological analysis

    Science.gov (United States)

    Alexakis, Dimitrios; Agapiou, Athos; Hadjimitsis, Diofantos; Retalis, Adrianos

    2012-06-01

    The aim of this study is to improve classification results of multispectral satellite imagery for supporting flood risk assessment analysis in a catchment area in Cyprus. For this purpose, precipitation and ground spectroradiometric data have been collected and analyzed with innovative statistical analysis methods. Samples of regolith and construction material were collected in situ and examined in the spectroscopy laboratory for their spectral response under successively different humidity conditions. Moreover, reflectance values were extracted from the same targets using Landsat TM/ETM+ images, for drought and humid time periods, using archived meteorological data. The comparison of the results showed that spectral responses for all the specimens were less correlated in cases of substantial humidity, in both laboratory and satellite images. These results were validated with the application of different classification algorithms (ISODATA, maximum likelihood, object based, maximum entropy) to satellite images acquired during time periods when precipitation phenomena had been recorded.

  8. Absolute Radiometric Calibration of ALS Intensity Data: Effects on Accuracy and Target Classification

    Directory of Open Access Journals (Sweden)

    Anssi Krooks

    2011-11-01

    Full Text Available Radiometric calibration of airborne laser scanning (ALS) intensity data aims at retrieving a value related to the target scattering properties that is independent of the instrument and flight parameters. The aim of a calibration procedure is also to be able to compare results from different flights and instruments, but practical applications are still sparse, and the performance of calibration methods for this purpose needs to be further assessed. We have studied radiometric calibration with data from three separate flights and two different instruments using external calibration targets. We find that the intensity data from different flights and instruments can be compared to each other only after a radiometric calibration process using separate calibration targets carefully selected for each flight. The calibration is also necessary for target classification purposes, such as separating vegetation from sand using intensity data from different flights. The classification results are meaningful only for calibrated intensity data.

  9. Classification accuracy of algorithms for blood chemistry data for three aquaculture-affected marine fish species.

    Science.gov (United States)

    Coz-Rakovac, R; Topic Popovic, N; Smuc, T; Strunjak-Perovic, I; Jadan, M

    2009-11-01

    The objective of this study was the determination and discrimination of biochemical data among three aquaculture-affected marine fish species (sea bass, Dicentrarchus labrax; sea bream, Sparus aurata L.; and mullet, Mugil spp.) based on machine-learning methods. The approach relying on machine-learning methods gives more usable classification solutions and provides better insight into the collected data. So far, these new methods have been applied to the problem of discriminating blood chemistry data with respect to season and feed of a single species. This is the first time these classification algorithms have been used as a framework for rapid differentiation among three fish species. Among the machine-learning methods used, decision trees provided the clearest model, which correctly classified 210 samples (85.71%) and incorrectly classified 35 samples (14.29%), and clearly identified the three investigated species from their biochemical traits.

  10. Improving ECG classification accuracy using an ensemble of neural network modules.

    Directory of Open Access Journals (Sweden)

    Mehrdad Javadi

    Full Text Available This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14,966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
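
    The idea can be sketched as follows: in standard stacked generalization the combiner sees only the base classifiers' out-of-fold outputs, while in the modified variant the original input pattern is appended to those outputs. The placeholder data, the logistic-regression combiner and the number of base networks are illustrative assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict, train_test_split
    from sklearn.neural_network import MLPClassifier

    # Placeholder data standing in for ECG beat feature vectors
    X, y = make_classification(n_samples=1500, n_features=20, n_classes=3,
                               n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Base-level neural network modules
    bases = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=i)
             for i in range(3)]

    # Out-of-fold base outputs on the training set (standard stacked generalization)
    base_train = np.hstack([cross_val_predict(b, X_tr, y_tr, cv=5, method="predict_proba")
                            for b in bases])
    for b in bases:
        b.fit(X_tr, y_tr)
    base_test = np.hstack([b.predict_proba(X_te) for b in bases])

    # Modified Stacked Generalization: append the input pattern to the base outputs
    meta_train = np.hstack([base_train, X_tr])
    meta_test = np.hstack([base_test, X_te])

    combiner = LogisticRegression(max_iter=1000).fit(meta_train, y_tr)
    print("test accuracy:", combiner.score(meta_test, y_te))
    ```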

  11. Overview of existing algorithms for emotion classification. Uncertainties in evaluations of accuracies.

    Science.gov (United States)

    Avetisyan, H.; Bruna, O.; Holub, J.

    2016-11-01

    Numerous techniques and algorithms are dedicated to extracting emotions from input data. Our investigation found that emotion-detection approaches can be classified into the following three types: keyword-based/lexical-based, learning-based, and hybrid. The most commonly used techniques, such as the keyword-spotting method, Support Vector Machines, the Naïve Bayes Classifier, Hidden Markov Models and hybrid algorithms, have achieved impressive results in this sphere and can reach more than 90% detection accuracy.

  12. Inter- and intraobserver variability of MR arthrography in the detection and classification of superior labral anterior posterior (SLAP) lesions: evaluation in 78 cases with arthroscopic correlation

    Energy Technology Data Exchange (ETDEWEB)

    Holzapfel, Konstantin; Waldt, Simone; Bruegel, Melanie; Rummeny, Ernst J.; Woertler, Klaus [Technische Universitaet Muenchen, Department of Radiology, Klinikum rechts der Isar, Munich (Germany); Paul, Jochen; Imhoff, Andreas B. [Technische Universitaet Muenchen, Department of Sports Orthopedics, Klinikum rechts der Isar, Munich (Germany); Heinrich, Petra [Technische Universitaet Muenchen, Institute of Medical Statistics and Epidemiology, Klinikum rechts der Isar, Munich (Germany)

    2010-03-15

    The purpose of this study was to determine inter- and intraobserver variability of MR arthrography of the shoulder in the detection and classification of superior labral anterior posterior (SLAP) lesions. MR arthrograms of 78 patients who underwent MR arthrography before arthroscopy were retrospectively analysed by three blinded readers for the presence and type of SLAP lesions. MR arthrograms were reviewed twice by each reader with a time interval of 4 months between the two readings. Inter- and intraobserver agreement for detection and classification of SLAP lesions were calculated using κ coefficients. Arthroscopy confirmed 48 SLAP lesions: type I (n = 4), type II (n = 37), type III (n = 3), type IV (n = 4). Sensitivity and specificity for detecting SLAP lesions with MR arthrography for each reader were 88.6%/93.3%, 90.9%/80.0% and 86.4%/76.7%. MR arthrographic and arthroscopic grading were concurrent for 72.7%, 68.2% and 70.5% of SLAP lesions for readers 1-3, respectively. Interobserver agreement was excellent (κ = 0.82) for detection and substantial (κ = 0.63) for classification of SLAP lesions. For each reader intraobserver agreement was excellent for detection (κ = 0.93, κ = 0.97, κ = 0.97) and classification (κ = 0.94, κ = 0.84, κ = 0.93) of SLAP lesions. MR arthrography allows reliable and accurate detection of SLAP lesions. In addition, SLAP lesions can be diagnosed and classified with substantial to excellent inter- and intraobserver agreement. (orig.)
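
    For reference, interobserver agreement of the kind reported above is conventionally quantified with Cohen's kappa; a minimal sketch with invented ratings (not the study's data) is:

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical SLAP-lesion gradings (0 = none, 1-4 = type I-IV) assigned by two readers
    reader_1 = [0, 2, 2, 0, 1, 3, 2, 0, 4, 2, 2, 0]
    reader_2 = [0, 2, 1, 0, 1, 3, 2, 0, 4, 2, 3, 0]

    # Chance-corrected agreement between the two readers
    kappa = cohen_kappa_score(reader_1, reader_2)
    print(f"interobserver kappa = {kappa:.2f}")
    ```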

  13. A classification of bioinformatics algorithms from the viewpoint of maximizing expected accuracy (MEA).

    Science.gov (United States)

    Hamada, Michiaki; Asai, Kiyoshi

    2012-05-01

    Many estimation problems in bioinformatics are formulated as point estimation problems in a high-dimensional discrete space. In general, it is difficult to design reliable estimators for this type of problem, because the number of possible solutions is immense, which leads to an extremely low probability for every solution-even for the one with the highest probability. Therefore, maximum score and maximum likelihood estimators do not work well in this situation although they are widely employed in a number of applications. Maximizing expected accuracy (MEA) estimation, in which accuracy measures of the target problem and the entire distribution of solutions are considered, is a more successful approach. In this review, we provide an extensive discussion of algorithms and software based on MEA. We describe how a number of algorithms used in previous studies can be classified from the viewpoint of MEA. We believe that this review will be useful not only for users wishing to utilize software to solve the estimation problems appearing in this article, but also for developers wishing to design algorithms on the basis of MEA.
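
    In generic notation (not taken from the review), the contrast between maximum-likelihood and MEA point estimation over a discrete solution space can be written as follows, where D is the data, p(· | D) the distribution over candidate solutions, and Acc(y, θ) the accuracy of proposing y when the true solution is θ:

    ```latex
    \hat{y}_{\mathrm{ML}}  = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \; p(y \mid D), \qquad
    \hat{y}_{\mathrm{MEA}} = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \;
        \mathbb{E}_{\theta \sim p(\theta \mid D)}\!\left[\mathrm{Acc}(y, \theta)\right]
      = \operatorname*{arg\,max}_{y \in \mathcal{Y}} \sum_{\theta \in \mathcal{Y}} \mathrm{Acc}(y, \theta)\, p(\theta \mid D)
    ```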

  14. Verdict Accuracy of Quick Reduct Algorithm using Clustering and Classification Techniques for Gene Expression Data

    Directory of Open Access Journals (Sweden)

    T. Chandrasekhar

    2012-01-01

    Full Text Available In most gene expression data, the number of training samples is very small compared to the large number of genes involved in the experiments. However, among the large number of genes, only a small fraction is effective for performing a certain task. Furthermore, a small subset of genes is desirable in developing gene expression based diagnostic tools for delivering reliable and understandable results. With the gene selection results, the cost of biological experiments and decisions can be greatly reduced by analyzing only the marker genes. An important application of gene expression data in functional genomics is to classify samples according to their gene expression profiles. Feature selection (FS) is a process which attempts to select more informative features, and it is one of the important steps in knowledge discovery. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features which are related to the decision classes of the data under consideration. This paper studies a feature selection method based on rough set theory. Further, the K-Means and Fuzzy C-Means (FCM) algorithms have been implemented for the reduced feature set without considering class labels, and the obtained results are compared with the original class labels. A Back Propagation Network (BPN) has also been used for classification. The performance of K-Means, FCM, and BPN is then analyzed through the confusion matrix. It is found that BPN performs comparatively well.

  15. Tradeoff between User Experience and BCI Classification Accuracy with Frequency Modulated Steady-State Visual Evoked Potentials.

    Science.gov (United States)

    Dreyer, Alexander M; Herrmann, Christoph S; Rieger, Jochem W

    2017-01-01

    Steady-state visual evoked potentials (SSVEPs) have been widely employed for the control of brain-computer interfaces (BCIs) because they are very robust, lead to high performance, and allow for a high number of commands. However, such flickering stimuli often also cause user discomfort and fatigue, especially when several light sources are used simultaneously. Different variations of SSVEP driving signals have been proposed to increase user comfort. Here, we investigate the suitability of frequency modulation of a high frequency carrier for SSVEP-BCIs. We compared BCI performance and user experience between frequency modulated (FM) and traditional sinusoidal (SIN) SSVEPs in an offline classification paradigm with four independently flickering light-emitting diodes which were overtly attended (fixated). While classification performance was slightly reduced with the FM stimuli, the user comfort was significantly increased. Comparing the SSVEPs for covert attention to the stimuli (without fixation) was not possible, as no reliable SSVEPs were evoked. Our results reveal that several, simultaneously flickering, light emitting diodes can be used to generate FM-SSVEPs with different frequencies and the resulting occipital electroencephalography (EEG) signals can be classified with high accuracy. While the performance we report could be further improved with adjusted stimuli and algorithms, we argue that the increased comfort is an important result and suggest the use of FM stimuli for future SSVEP-BCI applications.

  16. Impact of the accuracy of automatic segmentation of cell nuclei clusters on classification of thyroid follicular lesions.

    Science.gov (United States)

    Jung, Chanho; Kim, Changick

    2014-08-01

    Automatic segmentation of cell nuclei clusters is a key building block in systems for quantitative analysis of microscopy cell images. For that reason, it has received great attention over the last decade, and diverse automatic approaches to segmenting clustered nuclei, with varying levels of performance under different test conditions, have been proposed in the literature. To the best of our knowledge, however, so far there is no comparative study on these methods. This study is a first attempt to fill this research gap. More precisely, the purpose of this study is to present an objective performance comparison of existing state-of-the-art segmentation methods. In particular, the impact of their accuracy on classification of thyroid follicular lesions is also investigated "quantitatively" under the same experimental conditions, to evaluate the applicability of the methods. Thirteen different segmentation approaches are compared in terms of not only errors in nuclei segmentation and delineation, but also their impact on the performance of a system to classify thyroid follicular lesions, using different metrics (e.g., diagnostic accuracy, sensitivity, specificity, etc.). Extensive experiments have been conducted on a total of 204 digitized thyroid biopsy specimens. Our study demonstrates that significant diagnostic errors can be avoided using more advanced segmentation approaches. We believe that this comprehensive comparative study serves as a reference point and guide for developers and practitioners in choosing an appropriate automatic segmentation technique for building automated systems to classify thyroid follicular lesions. © 2014 International Society for Advancement of Cytometry.

  17. Effects of age, sex and arm on the accuracy of arm position sense – Left-arm superiority in healthy right-handers

    Directory of Open Access Journals (Sweden)

    Lena Schmidt

    2013-12-01

    Full Text Available Position sense is an important proprioceptive ability. Disorders of arm position sense (APS) often occur after unilateral stroke and are associated with a negative functional outcome. In the present study we assessed horizontal APS by measuring angular deviations from a visually defined target separately for each arm in a large group of healthy subjects. We analyzed the accuracy and instability of horizontal APS as a function of age, sex and arm. Subjects were required to specify verbally the position of their unseen arm on a 0-90° circuit by comparing the current position with the target position indicated by an LED lamp, while the arm was passively moved by the examiner. Eighty-seven healthy subjects participated in the study, ranging from 20 to 77 years, subdivided into three age groups. The results revealed that APS was not a function of age or sex, but was significantly better in the non-dominant (left) arm in absolute but not in constant errors across all age groups of right-handed healthy subjects. This indicates a right-hemisphere superiority for left arm position sense in right-handers and fits neatly with the more frequent and more severe left-sided body-related deficits in patients with unilateral stroke (i.e. impaired arm position sense in left spatial neglect or somatoparaphrenia) or in individuals with abnormalities of the right cerebral hemisphere. These clinical issues are also discussed.

  18. The superior analyses of igneous rocks from Roth's Tabellen, 1869 to 1884, arranged according to the quantitative system of classification

    Science.gov (United States)

    Washington, H.S.

    1904-01-01

    In Professional Paper No. 14 there were collected the chemical analyses of igneous rocks published from 1884 to 1900, inclusive, arranged according to the quantitative system of classification recently proposed by Cross, Iddings, Pirsson, and Washington. In order to supplement this work it has appeared advisable to select the more reliable and complete of the earlier analyses collected by Justus Roth and arrange them also in the same manner for publication. Petrographers would thus have available for use according to the new system almost the entire body of chemical work of real value on igneous rocks, the exceptions being a few analyses published prior to 1900 which may have been overlooked by both Roth and myself. The two collections would form a foundation as broad as possible for future research and discussion. I must express my sense of obligation to the United States Geological Survey for publishing the present collection of analyses, and my thanks to my colleagues in the new system of classification for their friendly advice and assistance. 

  19. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  20. Diagnostic performance of whole brain volume perfusion CT in intra-axial brain tumors: Preoperative classification accuracy and histopathologic correlation

    Energy Technology Data Exchange (ETDEWEB)

    Xyda, Argyro, E-mail: argyro.xyda@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Department of Radialogy, University Hospital of Heraklion, Voutes, 71110 Heraklion, Crete (Greece); Haberland, Ulrike, E-mail: ulrike.haberland@siemens.com [Siemens AG Healthcare Sector, Computed Tomography, Siemensstr. 1, 91301 Forchheim (Germany); Klotz, Ernst, E-mail: ernst.klotz@siemens.com [Siemens AG Healthcare Sector, Computed Tomography, Siemensstr. 1, 91301 Forchheim (Germany); Jung, Klaus, E-mail: kjung1@uni-goettingen.de [Department of Medical Statistics, Georg-August University, Humboldtallee 32, 37073 Goettingen (Germany); Bock, Hans Christoph, E-mail: cbock@gmx.de [Department of Neurosurgery, Johannes Gutenberg University Hospital of Mainz, Langenbeckstraße 1, 55101 Mainz (Germany); Schramm, Ramona, E-mail: ramona.schramm@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Knauth, Michael, E-mail: michael.knauth@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany); Schramm, Peter, E-mail: p.schramm@med.uni-goettingen.de [Department of Neuroradiology, Georg-August University, University Hospital of Goettingen, Robert-Koch Strasse 40, 37075 Goettingen (Germany)

    2012-12-15

    Background: To evaluate the preoperative diagnostic power and classification accuracy of perfusion parameters derived from whole brain volume perfusion CT (VPCT) in patients with cerebral tumors. Methods: Sixty-three patients (31 male, 32 female; mean age 55.6 ± 13.9 years), with MRI findings suspected of cerebral lesions, underwent VPCT. Two readers independently evaluated VPCT data. Volumes of interest (VOIs) were marked circumscript around the tumor according to maximum intensity projection volumes, and then mapped automatically onto the cerebral blood volume (CBV), flow (CBF) and permeability Ktrans perfusion datasets. A second VOI was placed in the contralateral cortex, as control. Correlations among perfusion values, tumor grade, cerebral hemisphere and VOIs were evaluated. Moreover, the diagnostic power of VPCT parameters, by means of positive and negative predictive value, was analyzed. Results: Our cohort included 32 high-grade gliomas WHO III/IV, 18 low-grade I/II, 6 primary cerebral lymphomas, 4 metastases and 3 tumor-like lesions. Ktrans demonstrated the highest sensitivity, specificity and positive predictive value, with a cut-off point of 2.21 mL/100 mL/min, for both the comparisons between high-grade versus low-grade and low-grade versus primary cerebral lymphomas. However, for the differentiation between high-grade and primary cerebral lymphomas, CBF and CBV proved to have 100% specificity and 100% positive predictive value, identifying preoperatively all the histopathologically proven high-grade gliomas. Conclusion: Volumetric perfusion data enable the hemodynamic assessment of the entire tumor extent and provide a method of preoperative differentiation among intra-axial cerebral tumors with promising diagnostic accuracy.

  1. Predictive Utility and Classification Accuracy of Oral Reading Fluency and the Measures of Academic Progress for the Wisconsin Knowledge and Concepts Exam

    Science.gov (United States)

    Ball, Carrie R.; O'Connor, Edward

    2016-01-01

    This study examined the predictive validity and classification accuracy of two commonly used universal screening measures relative to a statewide achievement test. Results indicated that second-grade performance on oral reading fluency and the Measures of Academic Progress (MAP), together with special education status, explained 68% of the…

  2. Investigation of the trade-off between time window length, classifier update rate and classification accuracy for restorative brain-computer interfaces.

    Science.gov (United States)

    Darvishi, Sam; Ridding, Michael C; Abbott, Derek; Baumert, Mathias

    2013-01-01

    Recently, the application of restorative brain-computer interfaces (BCIs) has received significant interest in many BCI labs. However, there are a number of challenges that need to be tackled to achieve efficient performance of such systems. For instance, any restorative BCI needs an optimum trade-off between time window length, classification accuracy and classifier update rate. In this study, we have investigated possible solutions to these problems by using a dataset provided by the University of Graz, Austria. We have used a continuous wavelet transform and the Student t-test for feature extraction and a support vector machine (SVM) for classification. We find that improved results, for restorative BCIs for rehabilitation, may be achieved by using a 750-millisecond time window with an average classification accuracy of 67% that updates every 32 milliseconds.

  3. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle… and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly…

  4. Systematic review and meta-analysis of persistent left superior vena cava on prenatal ultrasound: associated anomalies, diagnostic accuracy and postnatal outcome.

    Science.gov (United States)

    Gustapane, S; Leombroni, M; Khalil, A; Giacci, F; Marrone, L; Bascietto, F; Rizzo, G; Acharya, G; Liberati, M; D'Antonio, F

    2016-12-01

    To quantify the prevalence of chromosomal anomalies in fetuses with persistent left superior vena cava (PLSVC), assess the strength of the association between PLSVC and coarctation of the aorta and ascertain the diagnostic accuracy of antenatal ultrasound in correctly identifying isolated cases of PLSVC. MEDLINE, EMBASE, CINHAL and the Cochrane databases were searched from the year 2000 onwards using combinations of keywords 'left superior vena cava' and 'outcome'. Two authors reviewed all abstracts independently. Quality assessment of the included studies was performed using the Newcastle-Ottawa Scale for cohort studies. The rates of the following outcomes were analyzed: chromosomal abnormalities; associated intracardiac anomalies (ICAs) and extracardiac anomalies (ECAs) diagnosed prenatally; additional ICAs and ECAs detected only at postnatal imaging or clinical evaluation but missed at prenatal imaging; and association of PLSVC and coarctation of the aorta. Meta-analyses of proportions were used to combine data. In total, 2708 articles were identified and 13 (n = 501) were included in the systematic review. Associated ICAs and ECAs were detected at the prenatal ultrasound examination or at a follow-up assessment in 60.7% (95% CI, 44.2-75.9%) and 37.8% (95% CI, 31.0-44.8%) of cases, respectively. Chromosomal anomalies occurred in 12.5% (95% CI, 9.0-16.4%) of cases in the overall population of fetuses with PLSVC and in 7.0% (95% CI, 2.7-13.0%) of isolated cases. Additional ICAs and ECAs were detected only after birth and missed at ultrasound in 2.4% (95% CI, 0.5-5.8%) and 6.7% (95% CI, 2.2-13.2%) of cases, respectively. Coarctation of the aorta was associated with isolated PLSVC in 21.3% (95% CI, 13.6-30.3%) of cases. PLSVC is commonly associated with ICAs, ECAs and chromosomal anomalies. Fetuses with isolated PLSVC should be followed up throughout pregnancy in order to rule out coarctation of the aorta. As most of the data in this review were derived from

  5. The Effects of Point or Polygon Based Training Data on RandomForest Classification Accuracy of Wetlands

    Directory of Open Access Journals (Sweden)

    Jennifer Corcoran

    2015-04-01

    Full Text Available Wetlands are dynamic in space and time, providing varying ecosystem services. Field reference data for both training and assessment of wetland inventories in the State of Minnesota are typically collected as GPS points over wide geographical areas and at infrequent intervals. This status quo makes it difficult to keep updated maps of wetlands with adequate accuracy, efficiency, and consistency to monitor change. Furthermore, point reference data may not be representative of the prevailing land cover type for an area, due to point location or heterogeneity within the ecosystem of interest. In this research, we present techniques for training a land cover classification for two study sites in different ecoregions by implementing the RandomForest classifier in three ways: (1) field and photo-interpreted points; (2) a fixed window surrounding the points; and (3) image objects that intersect the points. Additional assessments are made to identify the key input variables. We conclude that the image object area training method is the most accurate, and the most important variables include: compound topographic index, summer season green and blue bands, and grid statistics from LiDAR point cloud data, especially those that relate to the height of the return.
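
    A minimal sketch of the classification step shared by the three training strategies (a random forest fit on stacked predictor layers, reporting out-of-bag accuracy and variable importances) is given below; the variable names and random data are placeholders, not the study's actual inputs.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder predictor stack; column names are illustrative, not the study's exact inputs
    feature_names = ["cti", "green_summer", "blue_summer", "lidar_height_p90", "ndvi_spring"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, len(feature_names)))
    y = rng.integers(0, 3, size=500)        # e.g. 0 = upland, 1 = water, 2 = wetland

    # Random forest with out-of-bag accuracy as an internal check on classification confidence
    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=1).fit(X, y)
    print("out-of-bag accuracy:", round(rf.oob_score_, 3))
    for name, imp in sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name}: importance = {imp:.3f}")
    ```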

  6. Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses

    Directory of Open Access Journals (Sweden)

    Jiang Ning

    2011-05-01

    Full Text Available Background: For high usability, myo-controlled devices require robust classification schemes during dynamic contractions. Therefore, this study investigates the impact of the training data set on the performance of several pattern recognition algorithms during dynamic contractions. Methods: A nine-class experiment was designed involving both static and dynamic situations. The performance of various feature extraction methods and classifiers was evaluated in terms of classification accuracy. Results: It is shown that, combined with a threshold to detect the onset of the contraction, current pattern recognition algorithms used in static conditions also provide relatively high classification accuracy in dynamic situations. Moreover, the performance of the pattern recognition algorithms tested improved significantly when the choice of the training set was optimized. Finally, the results also showed that rather simple approaches for classification of time domain features provide results comparable to more complex classification methods applied to wavelet features. Conclusions: Non-stationary surface EMG signals recorded during dynamic contractions can be accurately classified for the control of multi-function prostheses.

  7. Influence of multi-source and multi-temporal remotely sensed and ancillary data on the accuracy of random forest classification of wetlands in northern Minnesota

    Science.gov (United States)

    Corcoran, Jennifer M.; Knight, Joseph F.; Gallant, Alisa L.

    2013-01-01

    Wetland mapping at the landscape scale using remotely sensed data requires both affordable data and an efficient, accurate classification method. Random forest classification offers several advantages over traditional land cover classification techniques, including a bootstrapping technique to generate robust estimations of outliers in the training data, as well as the capability of measuring classification confidence. Though the random forest classifier can generate complex decision trees with a multitude of input data and still not run a high risk of overfitting, there is a great need to reduce computational and operational costs by including only key input data sets without sacrificing a significant level of accuracy. Our main questions for this study site in Northern Minnesota were: (1) how do the classification accuracy and confidence of mapping wetlands compare using different remote sensing platforms and sets of input data; (2) what are the key input variables for accurate differentiation of upland, water, and wetlands, including wetland type; and (3) which datasets and seasonal imagery yield the best accuracy for wetland classification. Our results show the key input variables include terrain (elevation and curvature) and soils descriptors (hydric), along with an assortment of remotely sensed data collected in the spring (satellite visible, near infrared, and thermal bands; satellite normalized vegetation index and Tasseled Cap greenness and wetness; and horizontal-horizontal (HH) and horizontal-vertical (HV) polarization using L-band satellite radar). We undertook this exploratory analysis to inform decisions by natural resource managers charged with monitoring wetland ecosystems and to aid in designing a system for consistent operational mapping of wetlands across landscapes similar to those found in Northern Minnesota.

  8. Cotas para negros no Ensino Superior e formas de classificação racial Quotas for blacks in higher education and forms of racial classification

    Directory of Open Access Journals (Sweden)

    André Augusto Brandão

    2007-04-01

    Full Text Available This article presents and discusses data from a questionnaire on racial classification variables and opinions about the policy of quotas for blacks, administered to a sample of 476 final-year secondary school students from the public school system of a peripheral municipality in the metropolitan region of Rio de Janeiro. We sought to understand the elements that inform classifications of color or race, as well as the positions these students took toward a quota policy that could benefit them in gaining access to a public university. It should be stressed that the students interviewed would soon face the possibility of competing for a place in higher education in an entrance examination with racial quotas at a public university that maintains a campus in the very municipality where they study and live. This problem and this type of investigation seem fundamental at present, since the quotas for blacks that have been implemented since 2003 in several higher education institutions have drawn criticism and faced legal controversies, partly because of the forms of classification proposed. The research made it possible to advance the discussion of how the racial classification options used so far in these policies relate to the forms of self-identification and identification of others commonly present in the everyday life of the schools studied, and to examine how the idea of racial quotas is evaluated by its potential beneficiaries.

  9. Classification of textures in satellite image with Gabor filters and a multi layer perceptron with back propagation algorithm obtaining high accuracy

    Directory of Open Access Journals (Sweden)

    Adriano Beluco, Paulo M. Engel, Alexandre Beluco

    2015-01-01

    Full Text Available The classification of images, in many cases, is applied to identify an alphanumeric string, a facial expression or some other characteristic. In the case of satellite images, it is necessary to classify all the pixels of the image. This article describes a supervised classification method for remote sensing images that integrates the importance of attributes in selecting features with the efficiency of artificial neural networks in the classification process, resulting in high accuracy for real images. The method consists of texture segmentation based on Gabor filtering, followed by image classification with a multilayer artificial neural network trained with a backpropagation algorithm. The method was first applied to a synthetic image, as training, and then applied to a satellite image. Some results of the experiments are presented in detail and discussed. The application of the method to the synthetic image resulted in the correct identification of 89.05% of the pixels of the image, while applying it to the satellite image resulted in the correct identification of 85.15% of the pixels. The result for the satellite image can be considered one of high accuracy.
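
    A compact sketch of the overall pipeline (a Gabor filter bank producing per-pixel texture features, followed by a multilayer perceptron trained with backpropagation) is shown below; the synthetic image, filter-bank parameters and network size are assumptions for illustration, not the authors' settings.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from sklearn.neural_network import MLPClassifier

    # Synthetic two-texture image standing in for a satellite scene
    rng = np.random.default_rng(0)
    image = np.hstack([rng.normal(0, 1, (64, 64)), rng.normal(0, 3, (64, 64))])
    labels = np.hstack([np.zeros((64, 64), int), np.ones((64, 64), int)])

    # Gabor filter bank: magnitude response per pixel at a few frequencies/orientations
    features = []
    for frequency in (0.1, 0.25):
        for theta in (0, np.pi / 4, np.pi / 2):
            real, imag = gabor(image, frequency=frequency, theta=theta)
            features.append(np.sqrt(real ** 2 + imag ** 2).ravel())
    X = np.column_stack(features)
    y = labels.ravel()

    # Multilayer perceptron trained with backpropagation on the per-pixel Gabor features
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0).fit(X, y)
    print("pixel classification accuracy:", round(clf.score(X, y), 3))
    ```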

  10. Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy.

    Science.gov (United States)

    Combrisson, Etienne; Jerbi, Karim

    2015-07-30

    Machine learning techniques are increasingly used in neuroscience to classify brain signals. Decoding performance is reflected by how much the classification results depart from the rate achieved by purely random classification. In a 2-class or 4-class classification problem, the chance levels are thus 50% or 25% respectively. However, such thresholds hold for an infinite number of data samples but not for small data sets. While this limitation is widely recognized in the machine learning field, it is unfortunately sometimes still overlooked or ignored in the emerging field of brain signal classification. Incidentally, this field is often faced with the difficulty of low sample size. In this study we demonstrate how applying signal classification to Gaussian random signals can yield decoding accuracies of up to 70% or higher in two-class decoding with small sample sets. Most importantly, we provide a thorough quantification of the severity and the parameters affecting this limitation using simulations in which we manipulate sample size, class number, cross-validation parameters (k-fold, leave-one-out and repetition number) and classifier type (Linear-Discriminant Analysis, Naïve Bayesian and Support Vector Machine). In addition to raising a red flag of caution, we illustrate the use of analytical and empirical solutions (binomial formula and permutation tests) that tackle the problem by providing statistical significance levels (p-values) for the decoding accuracy, taking sample size into account. Finally, we illustrate the relevance of our simulations and statistical tests on real brain data by assessing noise-level classifications in Magnetoencephalography (MEG) and intracranial EEG (iEEG) baseline recordings. Copyright © 2015 Elsevier B.V. All rights reserved.
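
    A minimal sketch of the binomial solution the authors advocate, i.e., the smallest decoding accuracy that exceeds theoretical chance at a given significance level for a finite sample size, assuming the standard binomial cumulative distribution (the function below is illustrative, not the paper's code):

    ```python
    from scipy.stats import binom

    def significant_accuracy(n_samples, n_classes, alpha=0.05):
        """Smallest decoding accuracy exceeding theoretical chance at level alpha,
        under a binomial model of the number of correct classifications."""
        k = binom.ppf(1 - alpha, n_samples, 1.0 / n_classes)
        return (k + 1) / n_samples

    # With small samples, the significance threshold lies well above the 50% chance level
    for n in (20, 50, 100, 1000):
        print(f"n = {n:4d}: 2-class accuracy must reach {significant_accuracy(n, 2):.2%}")
    ```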

  11. Research on the classification result and accuracy of building windows in high resolution satellite images: take the typical rural buildings in Guangxi, China, as an example

    Science.gov (United States)

    Li, Baishou; Gao, Yujiu

    2015-12-01

    The information extracted from high spatial resolution remote sensing images has become one of the important data sources for updating large-scale GIS spatial databases. Because of the large volume of regional high spatial resolution satellite image data, monitoring building information with high-resolution remote sensing, extracting small-scale building information and analyzing its quality have become important preconditions for applying high-resolution satellite image information. In this paper, a clustering segmentation classification evaluation method for high resolution satellite images of typical rural buildings is proposed, based on the traditional K-Means clustering algorithm. Separability and building density factors were used to describe the image classification characteristics of the clustering window. The sensitivity of the factors influencing the clustering result was studied from the perspective of the separability between the image target and the background spectrum. This study showed that the number of sample contents is an important factor influencing clustering accuracy and performance; the pixel ratio of the objects in the images and the separation factor can be used to determine the specific impact of cluster-window subsets on clustering accuracy; and the count of window target pixels (Nw) does not by itself affect clustering accuracy. The results can provide a useful reference for the quality assessment of the segmentation and classification of high spatial resolution remote sensing images.

  12. Towards a multimodal brain-computer interface: combining fNIRS and fTCD measurements to enable higher classification accuracy.

    Science.gov (United States)

    Faress, Ahmed; Chau, Tom

    2013-08-15

    Previous brain-computer interface (BCI) research has largely focused on single neuroimaging modalities such as near-infrared spectroscopy (NIRS) or transcranial Doppler ultrasonography (TCD). However, multimodal brain-computer interfaces, which combine signals from different brain modalities, have been suggested as a potential means of improving the accuracy of BCI systems. In this paper, we compare the classification accuracies attainable using NIRS signals alone, TCD signals alone, and a combination of NIRS and TCD signals. Nine able-bodied subjects (mean age = 25.7) were recruited and simultaneous measurements were made with NIRS and TCD instruments while participants were prompted to perform a verbal fluency task or to remain at rest, within the context of a block-stimulus paradigm. Using linear discriminant analysis, the verbal fluency task was classified at mean accuracies of 76.1 ± 9.9%, 79.4 ± 10.3%, and 86.5 ± 6.0% using the NIRS, TCD, and NIRS-TCD systems respectively. In five of nine participants, classification accuracies with the NIRS-TCD system were significantly higher than with either single modality, suggesting that multimodal measurements may improve the accuracy of future brain-computer interfaces. Copyright © 2013 Elsevier Inc. All rights reserved.
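
    The comparison described (classifying each modality alone and then the concatenated feature sets with linear discriminant analysis) can be sketched as follows; the feature dimensions and random placeholder data are assumptions, not the recorded NIRS/TCD signals.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Placeholder feature matrices from two modalities for the same trials
    rng = np.random.default_rng(0)
    n_trials = 80
    nirs_features = rng.normal(size=(n_trials, 6))   # e.g. mean HbO/HbR changes per channel
    tcd_features = rng.normal(size=(n_trials, 4))    # e.g. envelope statistics per vessel
    y = rng.integers(0, 2, size=n_trials)            # 0 = rest, 1 = verbal fluency task

    # Cross-validated LDA accuracy for each modality alone and for the concatenated features
    for name, X in [("NIRS", nirs_features),
                    ("TCD", tcd_features),
                    ("NIRS-TCD", np.hstack([nirs_features, tcd_features]))]:
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
        print(f"{name}: mean accuracy = {acc.mean():.2f}")
    ```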

  13. Computerized assessment of pedophilic sexual interest through self-report and viewing time: reliability, validity, and classification accuracy of the affinity program.

    Science.gov (United States)

    Mokros, Andreas; Gebhard, Michael; Heinz, Volker; Marschall, Roland W; Nitschke, Joachim; Glasgow, David V; Gress, Carmen L Z; Laws, D Richard

    2013-06-01

    Affinity is a computerized assessment tool that combines viewing time and self-report measures of sexual interest. The present study was designed to assess the diagnostic properties of Affinity with respect to sexual interest in prepubescent children. Reliability of both self-report and viewing time components was estimated to be high. The group profile of a sample of pedophilic adult male child molesters (n = 42, all of whom admitted their offenses) differed from the group profiles of male community controls (n = 95) and male nonsexual offenders (n = 27), respectively. More specifically, both ratings and viewing times for images showing small children or prejuvenile children were significantly higher within the child molester sample than in either of the other two groups, attesting to the validity of the measures. Overall classification accuracy, however, was mediocre: A multivariate classification routine yielded 50% sensitivity for child molester status at the cost of 13% false positives. The implications for forensic use of Affinity are discussed.

  14. Improving classification accuracy of spectrally similar tree species: a complex case study in the Kruger National Park

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2009-07-01

    Full Text Available Within-species class variability can be reduced compared to the between-species class variability. Furthermore, two classification approaches with spectral angle mapper: (i) using a spectral library composed of one spectrum (endmember) per species and (ii) a multiple...
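
    For context, the spectral angle mapper mentioned above assigns a pixel to the library endmember with which its spectrum forms the smallest angle; a minimal sketch with made-up spectra (not the study's library) is:

    ```python
    import numpy as np

    def spectral_angle(pixel, endmember):
        """Angle (radians) between a pixel spectrum and a library endmember."""
        cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # Hypothetical spectral library: one reference spectrum (endmember) per species
    library = {
        "species_A": np.array([0.05, 0.08, 0.32, 0.41, 0.38]),
        "species_B": np.array([0.06, 0.10, 0.25, 0.30, 0.28]),
    }
    pixel = np.array([0.05, 0.09, 0.30, 0.39, 0.36])

    # Assign the pixel to the endmember with the smallest spectral angle
    best = min(library, key=lambda k: spectral_angle(pixel, library[k]))
    print("classified as:", best)
    ```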

  15. Analyzing the diagnostic accuracy of the causes of spinal pain at neurology hospital in accordance with the International Classification of Diseases

    Directory of Open Access Journals (Sweden)

    I. G. Mikhailyuk

    2014-01-01

    Full Text Available Spinal pain is of great socioeconomic significance as it is widely prevalent and a common cause of disability. However, diagnosing its true causes frequently presents problems. A study was conducted to evaluate the accuracy of clinical diagnoses and their coding in conformity with the International Classification of Diseases. The diagnosis of vertebral osteochondrosis was found to be used unreasonably widely, while nonspecific and non-vertebrogenic pain syndromes were underdiagnosed. Ways to solve these problems are proposed, by applying approaches to diagnosing the causes of spinal pain that are in accordance with international practice.

  16. Research on the accuracy of TM images land-use classification based on QUEST decision tree: A case study of Lijiang in Yunnan%基于QUEST决策树的遥感影像土地利用分类——以云南省丽江市为例

    Institute of Scientific and Technical Information of China (English)

    吴健生; 潘况; 彭建; 黄秀兰

    2012-01-01

    The accuracy of research on land use/cover change (LUCC) is directly determined by the accuracy of land use classification derived from aerial and satellite images. After analysing the factors affecting the accuracy of current remote sensing image classification, methods were reviewed to identify new trends in classification approaches. Previous studies have shown that the speed and accuracy of QUEST (Quick, Unbiased and Efficient Statistical Tree) decision tree classification are superior to those of other decision tree classifications. On the basis of this approach, this research classified Landsat TM-5 images of Lijiang, Yunnan province, and compared the result with that of maximum likelihood image classification. The overall accuracy was 90.086%, higher than the overall accuracy (85.965%) of CART (Classification And Regression Tree), and the kappa coefficient was 0.849, higher than the kappa coefficient (0.760) of CART. It is therefore concluded that in areas of complex terrain, such as mountainous regions, QUEST decision tree classification of TM images can improve the accuracy of land use classification. This type of decision tree classification can precisely derive new classification rules from integrated satellite images, land use thematic maps, DEM maps and other field investigation materials. The method can also help users to find new classification rules in multidimensional information and to build decision tree classifier models. Furthermore, approaches that combine large volumes of high-resolution and hyperspectral image data, integrated multi-sensor platforms, multi-temporal remote sensing images, pattern recognition and data mining of spectral and texture features, and auxiliary geographic data are expected to become a trend.

  17. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    Science.gov (United States)

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The…

  18. Metric learning for automatic sleep stage classification.

    Science.gov (United States)

    Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung

    2013-01-01

    We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset under various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, this superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains and without the need for artifact removal as a preprocessing step.
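
    A minimal sketch of the general idea, learning a global linear metric from training data before applying kNN, is shown below, using Neighborhood Components Analysis as a stand-in for the paper's metric-learning objective and a generic dataset in place of the EEG features.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
    from sklearn.pipeline import make_pipeline

    # Stand-in features/labels (the paper used single-channel EEG features per epoch)
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Plain kNN with the default Euclidean metric
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

    # kNN applied after learning a low-dimensional global metric from the training data
    metric_knn = make_pipeline(
        NeighborhoodComponentsAnalysis(n_components=10, random_state=0),
        KNeighborsClassifier(n_neighbors=5),
    ).fit(X_tr, y_tr)

    print("Euclidean kNN accuracy:", round(knn.score(X_te, y_te), 3))
    print("learned-metric kNN accuracy:", round(metric_knn.score(X_te, y_te), 3))
    ```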

  19. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)

  1. Boosting Brain Connectome Classification Accuracy in Alzheimer’s disease using Higher-Order Singular Value Decomposition

    Directory of Open Access Journals (Sweden)

    Liang Zhan

    2015-07-01

    Full Text Available Alzheimer's disease (AD) is a progressive brain disease. Accurate detection of AD and its prodromal stage, mild cognitive impairment (MCI), is crucial. There is also a growing interest in identifying brain imaging biomarkers that help to automatically differentiate stages of Alzheimer's disease. Here, we focused on anatomical brain networks computed from diffusion MRI and proposed a new feature extraction and classification framework based on higher-order singular value decomposition and sparse logistic regression. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, our proposed framework showed promise in detecting brain network differences that help in classifying different stages of Alzheimer's disease.
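
    The sparse logistic regression component of such a framework can be illustrated in isolation (the HOSVD feature-extraction step is omitted here); the L1-penalised model below uses placeholder data and is a sketch of the general technique, not the authors' code.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder connectome-derived feature vectors (e.g. flattened decomposition factors)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 400))
    y = rng.integers(0, 2, size=120)       # e.g. 0 = control, 1 = AD (illustrative labels)

    # L1-penalised (sparse) logistic regression retains only a small subset of features
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
    n_selected = np.count_nonzero(clf.coef_)
    print(f"non-zero coefficients: {n_selected} of {clf.coef_.size}")
    ```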

  2. Retrospective assessment of interobserver agreement and accuracy in classifications and measurements in subsolid nodules with solid components less than 8mm: which window setting is better?

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Roh-Eul [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of); Hwang, Eui Jin; Yoon, Soon Ho; Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Ahn, Soyeon [Seoul National University Bundang Hospital, Medical Research Collaborating Center, Seongnam-si (Korea, Republic of)

    2017-04-15

    To compare interobserver agreements among multiple readers and accuracy for the assessment of solid components in subsolid nodules between the lung and mediastinal window settings. Seventy-seven surgically resected nodules with solid components smaller than 8 mm were included in this study. In both lung and mediastinal windows, five readers independently assessed the presence and size of solid component. Bootstrapping was used to compare the interobserver agreement between the two window settings. Imaging-pathology correlation was performed to evaluate the accuracy. There were no significant differences in the interobserver agreements between the two windows for both identification (lung windows, k = 0.51; mediastinal windows, k = 0.57) and measurements (lung windows, ICC = 0.70; mediastinal windows, ICC = 0.69) of solid components. The incidence of false negative results for the presence of invasive components and the median absolute difference between the solid component size and the invasive component size were significantly higher on mediastinal windows than on lung windows (P < 0.001 and P < 0.001, respectively). The lung window setting had a comparable reproducibility but a higher accuracy than the mediastinal window setting for nodule classifications and solid component measurements in subsolid nodules. (orig.)

  3. Accuracy of reported flash point values on material safety data sheets and the impact on product classification.

    Science.gov (United States)

    Radnoff, Diane

    2013-01-01

    Material Safety Data Sheets (MSDSs) are the foundation of worker right-to-know legislation for chemical hazards. Suppliers can use product test data to determine a product's classification. Alternatively, they may use evaluation and professional judgment based on test results for the product or a product, material, or substance with similar properties. While the criteria for classifying products under the new Globally Harmonized System of Classification and Labeling of Chemicals (GHS) are different, a similar process is followed. Neither the current Workplace Hazardous Materials Information System (WHMIS) nor GHS require suppliers to test their products to classify them. In this project 83 samples of products classified as flammable or combustible, representing a variety of industry sectors and product types, were collected. Flash points were measured and compared to the reported values on the MSDSs. The classifications of the products were then compared using the WHMIS and GHS criteria. The results of the study indicated that there were significant variations between the disclosed and measured flash point values. Overall, more than one-third of the products had flash points lower than that disclosed on the MSDS. In some cases, the measured values were more than 20°C lower than the disclosed values. This could potentially result in an underestimation regarding the flammability of the product so it is important for employers to understand the limitations in the information provided on MSDSs when developing safe work procedures and training programs in the workplace. Nearly one-fifth of the products were misclassified under the WHMIS system as combustible when the measured flash point indicated that they should be classified as flammable when laboratory measurement error was taken into account. While a similar number of products were misclassified using GHS criteria, the tendency appeared to be to "over-classify" (provide a hazard class that was more conservative

  4. Enhancing the classification accuracy of steady-state visual evoked potential-based brain-computer interfaces using phase constrained canonical correlation analysis

    Science.gov (United States)

    Pan, Jie; Gao, Xiaorong; Duan, Fang; Yan, Zheng; Gao, Shangkai

    2011-06-01

    In this study, a novel method of phase constrained canonical correlation analysis (p-CCA) is presented for classifying steady-state visual evoked potentials (SSVEPs) using multichannel electroencephalography (EEG) signals. p-CCA is employed to improve the performance of the SSVEP-based brain-computer interface (BCI) system using standard CCA. SSVEP response phases are estimated based on the physiologically meaningful apparent latency and are added as a reliable constraint into standard CCA. The results of EEG experiments involving 10 subjects demonstrate that p-CCA consistently outperforms standard CCA in classification accuracy. The improvement is up to 6.8% using 1-4 s data segments. The results indicate that the reliable measurement of phase information is of importance in SSVEP-based BCIs.
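
    A minimal sketch of the standard (unconstrained) CCA baseline that p-CCA extends, using scikit-learn. The sampling rate, stimulus frequencies, number of harmonics, and the synthetic EEG segment are illustrative assumptions, not the settings of the study.

```python
# Sketch of the standard CCA baseline for SSVEP frequency detection that the
# phase-constrained variant (p-CCA) builds on.
import numpy as np
from sklearn.cross_decomposition import CCA

fs = 250                      # sampling rate in Hz (assumption)
stim_freqs = [8.0, 10.0, 12.0, 15.0]
n_harmonics = 2
t = np.arange(2 * fs) / fs    # 2-second data segment

def reference_signals(freq):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)

def classify_ssvep(eeg):
    """eeg: (n_samples, n_channels). Returns the index of the stimulus
    frequency whose canonical correlation with the EEG is largest."""
    scores = []
    for freq in stim_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, reference_signals(freq))
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return int(np.argmax(scores))

# Toy example: EEG dominated by a 10 Hz component plus noise
rng = np.random.default_rng(0)
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t)] * 8) + 0.5 * rng.standard_normal((len(t), 8))
print(stim_freqs[classify_ssvep(eeg)])   # expected: 10.0
```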

  5. Accuracy of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) as a research tool for identification of patients with uveitis and scleritis.

    Science.gov (United States)

    Uchiyama, Eduardo; Faez, Sepideh; Nasir, Humzah; Unizony, Sebastian H; Plenge, Robert; Papaliodis, George N; Sobrin, Lucia

    2015-04-01

    To report on the accuracy of the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for identifying patients with polymyalgia rheumatica (PMR) and concurrent noninfectious inflammatory ocular conditions in a large healthcare organization database. Queries for patients with PMR and uveitis or scleritis were executed in two general teaching hospitals' databases. Patients with ocular infections or other rheumatologic conditions were excluded. Patients with PMR and ocular inflammation were identified, and medical records were reviewed to confirm accuracy. The query identified 10,697 patients with the ICD-9-CM code for PMR and 4154 patients with the codes for noninfectious inflammatory ocular conditions. The number of patients with both PMR and noninfectious uveitis or scleritis by ICD-9-CM codes was 66. On detailed review of the charts of these 66 patients, 31 (47%) had a clinical diagnosis of PMR, 43 (65%) had noninfectious uveitis or scleritis, and only 20 (30%) had PMR with concurrent noninfectious uveitis or scleritis confirmed based on clinical notes. While the use of ICD-9-CM codes has been validated for medical research of common diseases, our results suggest that ICD-9-CM codes may be of limited value for epidemiological investigations of diseases which can be more difficult to diagnose. The ICD-9-CM codes for rarer diseases (PMR, uveitis and scleritis) did not reflect the true clinical problem in a large proportion of our patients. This is particularly true when coding is performed by physicians outside the area of specialty of the diagnosis.

  6. Classification of Focal Prostatic Lesions on Transrectal Ultrasound (TRUS) and the Accuracy of TRUS to Diagnose Prostate Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Ho Yun [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Lee, Hak Jong; Byun, Seok Soo; Lee, Sang Eun; Hong, Sung Kyu [Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Kim, Seung Hyup [Seoul National University Hospital, Seoul (Korea, Republic of)

    2009-06-15

    To improve the diagnostic efficacy of transrectal ultrasound (TRUS)-guided targeted prostatic biopsies, we have suggested the use of a new scoring system for the prediction of malignancies regarding the characteristics of focal suspicious lesions as depicted on TRUS. A total of 350 consecutive patients with or without prostate cancer who underwent targeted biopsies for 358 lesions were included in the study. The data obtained from participants were randomized into two groups; the training set (n = 240) and the test set (n = 118). The characteristics of focal suspicious lesions were evaluated for the training set and the correlation between TRUS findings and the presence of a malignancy was analyzed. Multiple logistic regression analysis was used to identify variables capable of predicting prostatic cancer. A scoring system that used a 5-point scale for better malignancy prediction was determined from the training set. Positive predictive values for malignancy prediction and the diagnostic accuracy of the scored components with the use of receiver operating characteristic curve analysis were evaluated by test set analyses. Subsequent multiple logistic regression analysis determined that shape, margin irregularity, and vascularity were factors significantly and independently associated with the presence of a malignancy. Based on the use of the scoring system for malignancy prediction derived from the significant TRUS findings and the interactions of characteristics, a positive predictive value of 80% was achieved for a score of 4 when applied to the test set. The area under the receiver operating characteristic curve (AUC) for the overall lesion score was 0.81. We have demonstrated that a scoring system for malignancy prediction developed for the characteristics of focal suspicious lesions as depicted on TRUS can help predict the outcome of TRUS-guided biopsies.

  7. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    Directory of Open Access Journals (Sweden)

    Santana Isabel

    2011-08-01

    Full Text Available Abstract Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, Area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone. Conclusions When taking into account sensitivity, specificity and overall classification accuracy, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested in prediction of dementia using several neuropsychological tests. These methods may be used to improve accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing.

  8. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests.

    Science.gov (United States)

    Maroco, João; Silva, Dina; Rodrigues, Ana; Guerreiro, Manuela; Santana, Isabel; de Mendonça, Alexandre

    2011-08-17

    Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, Area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Press' Q test showed that all classifiers performed better than chance alone. Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forest ranked second in overall accuracy (Me = 0.73), with high area under the ROC (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a
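
    For readers who want to reproduce this kind of comparison, the sketch below runs a 5-fold cross-validated evaluation of scikit-learn stand-ins for several of the classifiers named above; the synthetic features merely play the role of the 10 neuropsychological test scores.

```python
# Sketch of a 5-fold cross-validated comparison of the classifier families
# named in the record, with synthetic data standing in for the test scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_redundant=0, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```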

  9. The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae: a large study in a Chinese cardiovascular center

    Directory of Open Access Journals (Sweden)

    Bai Zhigang

    2011-07-01

    Full Text Available Abstract Background The accuracy of echocardiography versus surgical and pathological classification of patients with ruptured mitral chordae tendineae (RMCT) has not yet been investigated in a large study. Methods Clinical, hemodynamic, surgical, and pathological findings were reviewed for 242 patients with a preoperative diagnosis of RMCT who required mitral valve surgery. Subjects were consecutive in-patients at Fuwai Hospital in 2002-2008. Patients were evaluated by transthoracic echocardiography (TTE) and transesophageal echocardiography (TEE). RMCT cases were classified by location as anterior or posterior, and by degree as partial or complete RMCT, according to surgical findings. RMCT cases were also classified by pathology into four groups: myxomatous degeneration, chronic rheumatic valvulitis (CRV), infective endocarditis and others. Results Echocardiography showed that most patients had a flail mitral valve, moderate to severe mitral regurgitation, a dilated heart chamber, mild to moderate pulmonary artery hypertension and good heart function. The diagnostic accuracy for RMCT was 96.7% for TTE and 100% for TEE compared with surgical findings. Preliminary experiments demonstrated that the sensitivity and specificity of diagnosing anterior, posterior and partial RMCT were high, but the sensitivity of diagnosing complete RMCT was low. The surgical procedure for RMCT depended on the location of the ruptured chordae tendineae, with no relationship between surgical procedure and complete or partial RMCT. The echocardiographic characteristics of RMCT included valvular thickening, extended subvalvular chordae, echo enhancement, abnormal echo or vegetation, combined with aortic valve damage, across the four groups classified by pathology. The incidence of extended subvalvular chordae in the myxomatous group was higher than that in the other groups, and valve thickening in combination with AV damage in the CRV group was higher than that in the other

  10. Relative significance of heat transfer processes to quantify tradeoffs between complexity and accuracy of energy simulations with a building energy use patterns classification

    Science.gov (United States)

    Heidarinejad, Mohammad

    This dissertation develops rapid and accurate building energy simulations based on a building classification that identifies and focuses modeling efforts on most significant heat transfer processes. The building classification identifies energy use patterns and their contributing parameters for a portfolio of buildings. The dissertation hypothesis is "Building classification can provide minimal required inputs for rapid and accurate energy simulations for a large number of buildings". The critical literature review indicated there is lack of studies to (1) Consider synoptic point of view rather than the case study approach, (2) Analyze influence of different granularities of energy use, (3) Identify key variables based on the heat transfer processes, and (4) Automate the procedure to quantify model complexity with accuracy. Therefore, three dissertation objectives are designed to test out the dissertation hypothesis: (1) Develop different classes of buildings based on their energy use patterns, (2) Develop different building energy simulation approaches for the identified classes of buildings to quantify tradeoffs between model accuracy and complexity, (3) Demonstrate building simulation approaches for case studies. Penn State's and Harvard's campus buildings as well as high performance LEED NC office buildings are test beds for this study to develop different classes of buildings. The campus buildings include detailed chilled water, electricity, and steam data, enabling to classify buildings into externally-load, internally-load, or mixed-load dominated. The energy use of the internally-load buildings is primarily a function of the internal loads and their schedules. Externally-load dominated buildings tend to have an energy use pattern that is a function of building construction materials and outdoor weather conditions. However, most of the commercial medium-sized office buildings have a mixed-load pattern, meaning the HVAC system and operation schedule dictate

  11. Accuracy Improvement of Spectral Classification of Crops Using Microwave Backscatter Data

    Institute of Scientific and Technical Information of China (English)

    贾坤; 李强子; 田亦陈; 吴炳方; 张飞飞; 蒙继华

    2011-01-01

    In this study, the use of VV-polarization microwave backscatter data to improve the accuracy of spectral crop classification is investigated. Classification accuracies obtained with different classifiers on fused HJ satellite multi-spectral and Envisat ASAR VV backscatter data are compared. The results indicate that the fused data take full advantage of the spectral information in the HJ multi-spectral data and the structural sensitivity of the ASAR VV-polarization data. The fusion enlarges the spectral differences among classes and improves crop classification accuracy: classification accuracy with the fused data is about 5 percentage points higher than with the HJ data alone. Furthermore, because ASAR VV-polarization data are sensitive to non-cultivated areas within planted fields, including them in the classification effectively distinguishes field borders. Combining VV-polarization data with multi-spectral data for crop classification broadens the application of satellite data and has potential for wider use in agriculture.

  12. Convolutional neural network for high-accuracy functional near-infrared spectroscopy in a brain-computer interface: three-class classification of rest, right-, and left-hand motor execution.

    Science.gov (United States)

    Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong

    2018-01-01

    The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). In order to improve the performance of the BCI system in terms of accuracy, the ability to discriminate features from input signals and proper classification are desired. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured on eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvement in classification accuracy was achieved by CNN compared with SVM and ANN, respectively.
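
    A minimal PyTorch sketch of a 1-D convolutional network for three-class classification of multichannel fNIRS windows. The channel count, window length, and layer sizes are assumptions for illustration, not the architecture evaluated in the record.

```python
# Minimal 1-D CNN for three-class fNIRS classification (rest vs. right- vs.
# left-hand motor execution). Shapes and layer sizes are illustrative only.
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 20, 100, 3   # fNIRS channels x time points (assumed)

model = nn.Sequential(
    nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                    # global average over time
    nn.Flatten(),
    nn.Linear(32, n_classes),
)

x = torch.randn(8, n_channels, n_samples)       # a batch of 8 hemodynamic windows
logits = model(x)                               # shape: (8, 3)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, n_classes, (8,)))
loss.backward()
print(logits.shape, float(loss))
```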

  13. Accuracy of combined maxillary and mandibular repositioning and of soft tissue prediction in relation to maxillary antero-superior repositioning combined with mandibular set back: A computerized cephalometric evaluation of the immediate postsurgical outcome using the TIOPS planning system

    DEFF Research Database (Denmark)

    Donatsky, Ole; Bjørn-Jørgensen, Jens; Hermund, Niels Ulrich

    2009-01-01

    surgical planning system (TIOPS). MATERIAL AND METHODS: Out of 100 prospectively and consecutively treated patients, 52 patients manifested dentofacial deformities requiring bimaxillary orthognathic surgery with maxillary antero-superior repositioning combined with mandibular set back and so were included...

  14. Diagnosis and classification of SLAP lesions with MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Yamaguchi, Takayuki; Eto, Masao; Tomonaga, Tadashi; Takahara, Kazuhiro; Kushida, Manabu; Inatomi, Kenshiro; Akase, Keisuke; Wake, Satoshi; Shindo, Hiroyuki [Nagasaki Univ. (Japan). School of Medicine

    2003-03-01

    To determine the accuracy of MR arthrography (MRA) in the classification of superior labrum anterior posterior (SLAP) lesions, we investigated 15 patients (15 men, average age 27.5 years) who underwent MRA before arthroscopic surgery from 1998 to 2001. Based on Snyder's classification, we defined the diagnostic criteria for classification of SLAP lesions on MRA: Type I shows irregularity of the labrum, without evidence of detachment from the superior glenoid rim. Type II shows complete detachment of the bicipital-labral complex. Type III shows detachment and inferior displacement of the superior labrum. Type IV shows Gd-DTPA dissecting into the biceps tendon. MRA findings were correlated with arthroscopic findings. On MRA, 4 patients were diagnosed as type I, 10 as type II, and 1 as type IV. At surgery, however, 3 of the 4 patients diagnosed as type I were type II, and 3 of the 10 patients diagnosed as type II were type I. MRA had a sensitivity of 25%, specificity of 73% and accuracy of 60% for type I, and a sensitivity of 70%, specificity of 40% and accuracy of 60% for type II. The MRA classification corresponded with that of arthroscopy in 9 of 15 patients (60%). MRA is a useful technique in the diagnosis of SLAP lesions, but classification is still difficult. (author)

  15. Accuracy of prediction of percentage lean meat and authorization of carcass measurement instruments: adverse effects of incorrect sampling of carcasses in pig classification.

    NARCIS (Netherlands)

    Engel, B.; Buist, W.G.; Walstra, P.; Olsen, E.; Daumas, G.

    2003-01-01

    Classification of pig carcasses in the European Community is based on the lean meat percentage of the carcass. The lean meat percentage is predicted from instrumental carcass measurements, such as fat and muscle depth measurements, obtained in the slaughter-line. The prediction formula employed is

  16. Diagnosing multibacillary leprosy: A comparative evaluation of diagnostic accuracy of slit-skin smear, bacterial index of granuloma and WHO operational classification

    Directory of Open Access Journals (Sweden)

    Bhushan Premanshu

    2008-01-01

    Full Text Available Background: In view of the relatively poor performance of skin smears, the WHO adopted a purely clinical operational classification; however, the poor specificity of the operational classification leads to overdiagnosis and unwarranted overtreatment, while its poor sensitivity leads to underdiagnosis of multibacillary (MB) cases and inadequate treatment. Bacilli are more frequently and abundantly demonstrated in tissue sections. Aims and Methods: We compared the WHO classification, slit-skin smears (SSS) and demonstration of bacilli in biopsies (bacterial index of granuloma, or BIG) with regard to their efficacy in correctly identifying multibacillary cases. The tests were done on 141 patients and were evaluated for their ability to diagnose true MB leprosy using detailed statistical analysis. Results: A total of 76 patients were truly MB, with either positive smears, BIG positivity or a typical histology of BB, BL or LL. Among these 76 true-MB patients, the WHO operational classification correctly identified multibacillary status in 56 (73.68%) and SSS in 43 (56.58%), while BIG correctly identified 65 (85.53%) true-MB cases. Conclusion: BIG was the most sensitive and effective of the three methods, especially in paucilesional patients. We suggest adding estimation of the bacterial index of granuloma to the diagnostic workup of paucilesional patients.

  17. Three-Class EEG-Based Motor Imagery Classification Using Phase-Space Reconstruction Technique

    Science.gov (United States)

    Djemal, Ridha; Bazyed, Ayad G.; Belwafi, Kais; Gannouni, Sofien; Kaaniche, Walid

    2016-01-01

    Over the last few decades, brain signals have been significantly exploited for brain-computer interface (BCI) applications. In this paper, we study the extraction of features using event-related desynchronization/synchronization techniques to improve the classification accuracy for three-class motor imagery (MI) BCI. The classification approach is based on combining features of the phase and amplitude of the brain signals, using fast Fourier transform (FFT) and autoregressive (AR) modeling of the reconstructed phase space, as well as on tuning of the BCI parameters (trial length, trial frequency band, classification method). Using sequential forward floating selection (SFFS) and multi-class linear discriminant analysis (LDA), our approach achieved classification accuracies of 86.06% and 93% on two BCI competition datasets, which is superior to the results reported in previous studies. PMID:27563927
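
    A simplified sketch of one variant of the feature chain described above: time-delay (phase-space) reconstruction of each channel, FFT band power on the reconstructed trajectory, and an LDA classifier. The delay, embedding dimension, band limits, and toy trials are assumptions; the AR modeling and SFFS steps of the study are omitted.

```python
# Sketch: phase-space (time-delay) reconstruction + FFT band power + LDA
# for three-class motor imagery. All parameters and data are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs, delay, emb_dim = 250, 5, 3          # sampling rate, lag (samples), embedding dim

def delay_embed(x, delay, dim):
    """Phase-space reconstruction of a 1-D signal via time-delay embedding."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])

def trial_features(trial):
    """trial: (n_channels, n_samples). Mu/beta band power of each embedded channel."""
    feats = []
    for ch in trial:
        traj = delay_embed(ch, delay, emb_dim)
        spec = np.abs(np.fft.rfft(traj, axis=0)) ** 2
        freqs = np.fft.rfftfreq(traj.shape[0], d=1 / fs)
        band = (freqs >= 8) & (freqs <= 30)            # mu + beta band
        feats.append(spec[band].mean(axis=0))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
trials = rng.standard_normal((90, 8, 2 * fs))           # 90 toy trials, 8 channels, 2 s
labels = rng.integers(0, 3, 90)                          # three MI classes (toy labels)
X = np.array([trial_features(t) for t in trials])
print(cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean())
```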

  18. Two and three-dimensional computed tomography for the classification and management of distal humeral fractures - Evaluation of reliability and diagnostic accuracy

    NARCIS (Netherlands)

    J. Doornberg; A. Lindenhovius; P. Kloen; C.N. van Dijk; D. Zurakowski; D. Ring

    2006-01-01

    Background: Complex fractures of the distal part of the humerus can be difficult to characterize on plain radiographs and two-dimensional computed tomography scans. We tested the hypothesis that three-dimensional reconstructions of computed tomography scans improve the reliability and accuracy of fr

  19. Identification of Children with Language Impairment: Investigating the Classification Accuracy of the MacArthur-Bates Communicative Development Inventories, Level III

    Science.gov (United States)

    Skarakis-Doyle, Elizabeth; Campbell, Wenonah; Dempsey, Lynn

    2009-01-01

    Purpose: This study tested the accuracy with which the MacArthur-Bates Communicative Development Inventories, Level III (CDI-III), a parent report measure of language ability, discriminated children with language impairment from those developing language typically. Method: Parents of 58 children, 49 with typically developing language (age 30 to 42…

  20. Diagnostic Accuracy of 256-Slice CT for Detecting Isolated Superior Mesenteric Artery Lesions

    Institute of Scientific and Technical Information of China (English)

    叶自青; 王珏; 范占明; 刘英峰

    2016-01-01

    Objective: To evaluate the imaging features of isolated superior mesenteric artery lesions (ISMAL) on 256-slice CT. Methods: The images and clinical data of 12 patients with ISMAL were analyzed retrospectively. Each case was evaluated on unenhanced and contrast-enhanced transverse axial images and with vascular reconstructions including volume rendering (VR), curved planar reformation (CPR) and maximum intensity projection (MIP), and the imaging characteristics were summarized. Results: The mean age of the ISMAL patients was (50.1 ± 2.7) years; 9 were male (75%) and 3 female (25%), so the prevalence was higher among men than women. Eight patients had superior mesenteric artery embolism (SMAE), 5 male and 3 female; 4 patients had superior mesenteric artery dissection (SMAD), all male. The typical signs of SMAE were an intraluminal filling defect and local widening of the lumen; the typical manifestation of SMAD was an intraluminal strip-like low-density shadow with a visible double lumen. Conclusion: Unenhanced 256-slice CT and CTA post-processing techniques can clearly display superior mesenteric artery lesions, determine their nature and extent of involvement, and provide strong imaging support for clinical diagnosis and treatment.

  1. Mineral content determination and accuracy evaluation based on classification of petrographic images

    Institute of Scientific and Technical Information of China (English)

    叶润青; 牛瑞卿; 张良培; 易顺华

    2011-01-01

    Traditional methods of mineral content determination involve considerable human error and lack accuracy evaluation. A new approach is proposed for mineral content determination and accuracy evaluation based on image classification. The method first divides a petrographic image into different mineral classes using image classification algorithms, then obtains the mineral contents by counting pixels, and finally evaluates the accuracy of the estimated contents with a confusion matrix (CM). According to the spectral and texture features of petrographic images, two approaches to content determination are proposed. For images with simple texture and large color differences between minerals, direct classification is used; an experiment on granite hand-specimen photographs shows that supervised classifiers are more accurate than unsupervised ones, with the Maximum Likelihood Classifier (MLC) giving the highest accuracy of 94.25%. For images with complex mineral texture (such as interference colors, twins, etc.), an object-oriented Multi-resolution Segmentation (MS) algorithm is applied to segment the image before mineral classification; an experiment on a muscovite monzogranite microscope image shows a content estimation accuracy of 94.85%.

  2. Classification of PolSAR image based on quotient space theory

    Science.gov (United States)

    An, Zhihui; Yu, Jie; Liu, Xiaomeng; Liu, Limin; Jiao, Shuai; Zhu, Teng; Wang, Shaohua

    2015-12-01

    In order to improve classification accuracy, quotient space theory was applied to the classification of polarimetric SAR (PolSAR) images. First, the Yamaguchi decomposition method is adopted to obtain the polarimetric characteristics of the image. At the same time, the Gray Level Co-occurrence Matrix (GLCM) and Gabor wavelets are used to extract texture features. Second, combining the texture features and polarimetric characteristics, a Support Vector Machine (SVM) classifier is used for initial classification to establish different granularity spaces. Finally, according to quotient space granularity synthesis theory, the different quotient spaces are merged and reasoned over to obtain the comprehensive classification result. The proposed method is tested on L-band AIRSAR data of San Francisco Bay. The results show that the comprehensive classification based on quotient space theory is superior to classification in a single granularity space.

  3. Classification accuracy of the Millon Clinical Multiaxial Inventory-III modifier indices in the detection of malingering in traumatic brain injury.

    Science.gov (United States)

    Aguerrevere, Luis E; Greve, Kevin W; Bianchini, Kevin J; Ord, Jonathan S

    2011-06-01

    The present study used criterion groups validation to determine the ability of the Millon Clinical Multiaxial Inventory-III (MCMI-III) modifier indices to detect malingering in traumatic brain injury (TBI). Patients with TBI who met criteria for malingered neurocognitive dysfunction (MND) were compared to those who showed no indications of malingering. Data were collected from 108 TBI patients referred for neuropsychological evaluation. Base rate (BR) scores were used for MCMI-III modifier indices: Disclosure, Desirability, and Debasement. Malingering classification was based on the Slick, Sherman, and Iverson (1999) criteria for MND. TBI patients were placed in one of three groups: MND (n = 55), not-MND (n = 26), or Indeterminate (n = 26). The not-MND group had lower modifier index scores than the MND group. At scores associated with a 4% false-positive (FP) error rate, sensitivity was 47% for Disclosure, 51% for Desirability, and 55% for Debasement. Examination of joint classification analysis demonstrated 54% sensitivity at cutoffs associated with 0% FP error rate. Results suggested that scores from all MCMI-III modifier indices are useful for identifying intentional symptom exaggeration in TBI. Debasement was the most sensitive of the three indices. Clinical implications are discussed.

  4. Texture Classification in Lung CT Using Local Binary Patterns

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs; Shaker, Saher B.; de Bruijne, Marleen

    2008-01-01

    Abstract In this paper we propose to use local binary patterns (LBP) as features in a classification framework for classifying different texture patterns in lung computed tomography. Image intensity is included by means of the joint LBP and intensity histogram, and classification is performed using the k nearest neighbor classifier with histogram similarity as distance measure. The proposed method is evaluated on a set of 168 regions of interest comprising normal tissue and different emphysema patterns, and compared to a filter bank based on Gaussian derivatives. The joint LBP and intensity histogram, achieving a classification accuracy of 95.2%, shows superior performance to using the common approach of taking moments of the filter response histograms as features, and slightly better performance than using the full filter response histograms instead. Classification results are better than...
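
    A small scikit-image/scikit-learn sketch of the joint LBP-intensity histogram with a k-nearest-neighbor classifier, as described above. The patch size, LBP parameters, number of intensity bins, and the L1 histogram distance are illustrative choices, and random patches stand in for the lung CT regions of interest.

```python
# Sketch: joint LBP / intensity histogram features + k-NN with a histogram
# distance. Random patches replace the lung CT regions of interest.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

P, R = 8, 1                       # LBP neighbours and radius
n_lbp_bins = P + 2                # 'uniform' LBP yields P + 2 codes
n_int_bins = 8

def joint_histogram(patch):
    """Normalized joint histogram of LBP codes and quantized intensity."""
    img = np.interp(patch, (patch.min(), patch.max()), (0, 255)).astype(np.uint8)
    lbp = local_binary_pattern(img, P, R, method="uniform")
    intensity = np.digitize(img, np.linspace(0, 256, n_int_bins + 1)[1:-1])
    hist, _, _ = np.histogram2d(lbp.ravel(), intensity.ravel(),
                                bins=[n_lbp_bins, n_int_bins],
                                range=[[0, n_lbp_bins], [0, n_int_bins]])
    hist = hist.ravel()
    return hist / hist.sum()

def l1_distance(h1, h2):
    """Simple histogram dissimilarity used as the k-NN distance."""
    return np.abs(h1 - h2).sum()

rng = np.random.default_rng(0)
patches = rng.random((40, 32, 32))                 # toy ROIs instead of lung CT
labels = rng.integers(0, 2, 40)                    # e.g. normal vs. emphysema
X = np.array([joint_histogram(p) for p in patches])

knn = KNeighborsClassifier(n_neighbors=3, metric=l1_distance)
knn.fit(X, labels)
print(knn.score(X, labels))
```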

  5. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Full Text Available Background: Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as the non-linearity and nonstationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. Results: The ANN was superior to the ANFIS system and returned better performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. Conclusions: The proposed method is a promising, efficient and fast tool for the intended purpose, as manifested in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in feature extraction in such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.

  6. Polarimetric Synthetic Aperture Radar Image Classification by a Hybrid Method

    Institute of Scientific and Technical Information of China (English)

    Kamran Ullah Khan; YANG Jian

    2007-01-01

    The methods proposed so far for accurate classification of land cover types in polarimetric synthetic aperture radar (SAR) images are data specific, and no general method is available. A novel hybrid framework for this classification was developed in this work. A set of effective features derived from the coherence matrix of polarimetric SAR data was proposed. The feature set consists of wavelet, texture, and nonlinear features and has strong discrimination power. A neural network was used as the classification engine in a unique way: by exploiting the speed of the conjugate gradient method and the convergence rate of the Levenberg-Marquardt method (near the optimal point), an overall speed-up of the classification procedure was achieved. Principal component analysis (PCA) was used to shrink the dimension of the feature vector without sacrificing much of the classification accuracy. The proposed approach was compared with the maximum likelihood estimator (MLE) based on the complex Wishart distribution, and the results show the superiority of the proposed method, with an average classification accuracy of 95.4% versus 93.77% for the MLE. Use of PCA to reduce the dimensionality of the feature vector helps reduce the memory requirements and computational cost, thereby enhancing the speed of the process.
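
    The PCA-plus-neural-network stage of the hybrid framework can be sketched as below with scikit-learn. A synthetic feature matrix replaces the wavelet/texture/nonlinear features, and the L-BFGS-trained MLP is only a stand-in for the conjugate-gradient/Levenberg-Marquardt training scheme used in the paper.

```python
# Sketch: PCA dimensionality reduction followed by a small neural network,
# with synthetic features standing in for the PolSAR coherence-matrix features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=60, n_informative=15,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

pipe = make_pipeline(
    PCA(n_components=15),                              # shrink the feature vector
    MLPClassifier(hidden_layer_sizes=(30,), solver="lbfgs",
                  max_iter=2000, random_state=0),      # quasi-Newton stand-in
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```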

  7. Drainage patterns of the superior petrosal vein and their relationship to surgical approaches

    Institute of Scientific and Technical Information of China (English)

    杨汉兵; 陈礼刚; 李定君; 顾应江; 肖洪文

    2010-01-01

    Objective: The purpose of this study was to dissect the structures in the petrous portion of the temporal bone and the adjacent posterior fossa, to measure the distances between the important structures around the superior petrosal vein (SPV), to characterize the drainage patterns of the SPV along the petrous ridge in relation to the Meckel cave and the internal acoustic meatus (IAM), and to delineate their effect on the surgical exposure obtained in subtemporal transtentorial and retrosigmoid suprameatal approaches. Methods: Ten adult cadaveric heads (20 hemispheres) were studied, and measurements of the SPV and the surrounding structures were obtained. The drainage patterns of the SPV along the petrous ridge were characterized according to their relation to the Meckel cave and the IAM based on an examination of the 20 hemispheres. Subtemporal transtentorial and retrosigmoid suprameatal approaches were performed in two additional cadavers to demonstrate the effect of the drainage pattern on the surgical exposure. Results: The SPV originated from the cerebellopontine angle cistern and had multiple tributaries. According to the relationship of the SPV with the Meckel cave and the IAM, its drainage patterns were classified into three groups: Type I emptied into the superior petrosal sinus (SPS) above or medial to the Meckel cave; Type II, the most common type, emptied between the lateral limit of the trigeminal nerve at the Meckel cave and the medial limit of the facial nerve at the IAM; and Type III emptied into the SPS above and lateral to the boundaries of the IAM. Conclusions: The site at which the SPV empties into the SPS is closely related to the Meckel cave and the IAM. The proposed modified classification system and its effect on the surgical exposure may aid in planning approaches directed along the petrous apex and may reduce the probability of venous complications.

  8. Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

    Institute of Scientific and Technical Information of China (English)

    Xia JING; Yan BAO

    2015-01-01

    Each fusion algorithm has its own advantages and limitations, so it is difficult to simply rank fusion algorithms as good or bad; the choice of algorithm for fusing images also depends on the sensor types and the specific research purpose. First, five fusion methods, i.e. IHS, Brovey, PCA, SFIM and Gram-Schmidt, are briefly described in the paper. Visual judgment and quantitative statistical parameters were then used to assess the five algorithms. Finally, in order to determine the most suitable fusion method for land cover classification of IKONOS imagery, maximum likelihood classification (MLC) was applied to the five fused images. The results showed that the SFIM and Gram-Schmidt transforms were better than the other three image fusion methods in improving spatial detail and preserving spectral information, and that the Gram-Schmidt technique was superior to the SFIM transform in expressing image detail. The classification accuracy of the images fused with the Gram-Schmidt and SFIM algorithms was higher than that of the other three image fusion methods, with overall accuracies greater than 98%. The IHS-fused image gave the lowest classification accuracy, with an overall accuracy of 83.14% and a kappa coefficient of 0.76. Thus the IKONOS fused images obtained with the Gram-Schmidt and SFIM methods were better for improving land cover classification accuracy.
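
    Two of the simpler fusion rules compared above, Brovey and SFIM, can be written in a few lines of numpy/scipy. This sketch assumes a single co-registered multispectral/panchromatic pair already resampled to the panchromatic grid; the random arrays are placeholders.

```python
# Sketch: Brovey and SFIM pan-sharpening rules on placeholder imagery.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
ms = rng.random((4, 256, 256)) + 0.1      # 4 multispectral bands (upsampled to pan grid)
pan = rng.random((256, 256)) + 0.1        # panchromatic band

def brovey(ms, pan):
    """Brovey transform: scale each band by pan / sum of bands."""
    return ms * pan / ms.sum(axis=0, keepdims=True)

def sfim(ms, pan, window=7):
    """Smoothing-Filter-based Intensity Modulation: modulate by pan / low-pass(pan)."""
    pan_low = uniform_filter(pan, size=window)
    return ms * pan / pan_low

fused_brovey = brovey(ms, pan)
fused_sfim = sfim(ms, pan)
print(fused_brovey.shape, fused_sfim.shape)
```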

  9. Efficient statistical classification of satellite measurements

    CERN Document Server

    Mills, Peter

    2012-01-01

    Supervised statistical classification is a vital tool for satellite image processing. It is useful not only when a discrete result, such as feature extraction or surface type, is required, but also for continuum retrievals by dividing the quantity of interest into discrete ranges. Because of the high resolution of modern satellite instruments and because of the requirement for real-time processing, any algorithm has to be fast to be useful. Here we describe an algorithm based on kernel estimation called Adaptive Gaussian Filtering that incorporates several innovations to produce superior efficiency as compared to three other popular methods: k-nearest-neighbour (KNN), Learning Vector Quantization (LVQ) and Support Vector Machines (SVM). This efficiency is gained with no compromises: accuracy is maintained, while estimates of the conditional probabilities are returned. These are useful not only to gauge the accuracy of an estimate in the absence of its true value, but also to re-calibrate a retrieved image and...

  10. Classification of papulo-squamous skin diseases using image analysis.

    Science.gov (United States)

    Mashaly, H M; Masood, N A; Mohamed, Abdalla S A

    2012-02-01

    Papulo-squamous skin diseases are varied but very close in their clinical features: they present with the same type of lesion, erythematous scaly lesions. Clinical evaluation of skin lesions relies on the common sense and experience of the dermatologist to differentiate the features of each disease. The aim was to evaluate a computer-based image analysis system as an aid for classifying commonly encountered diseases. The study included 50 selected images from each of psoriasis, lichen planus, atopic dermatitis, seborrheic dermatitis, pityriasis rosea, and pityriasis rubra pilaris, for a total of 300 images. The study comprised three main processes performed on the 300 included images: segmentation, feature extraction, and classification. Rough sets achieved the highest segmentation accuracy and sensitivity for the six groups of diseases compared with the other three techniques used (topological derivative, K-means clustering, and watershed). A rule-based classifier using the concept of rough sets achieved the best classification rate (96.7%) for the six groups of diseases compared with the other classification techniques used: K-means clustering, fuzzy c-means clustering, classification and regression trees, a rule-based classifier with discretization, and the K-nearest neighbor technique. The rough sets approach proved its superiority for both the segmentation and the classification of papulo-squamous skin diseases compared with the other segmentation and classification techniques used. © 2011 John Wiley & Sons A/S.

  11. Improving Karst Region LUCC Spectral Classification Accuracy Based on SAR

    Institute of Scientific and Technical Information of China (English)

    廖娟; 周忠发; 王昆; 黄智灵; 陈全

    2016-01-01

    The surface morphology of karst areas is complex, which makes ground-based land surveys difficult and limits their accuracy. Remote sensing is therefore the main means of effectively monitoring and studying how human activity affects land use patterns and intensity. Combining ALOS multi-spectral data with TerraSAR-X polarization data, this paper discusses how HH-polarized microwave backscatter data can be used to improve the LUCC classification accuracy of multi-spectral remote sensing data, and compares which fusion methods are better suited to distinguishing each type of ground object. The results show that combining the two kinds of data makes full use of the spectral information of the multi-spectral data and the rich texture and structure information of the HH-polarization data, enhances the spectral differences among objects, and improves the separability of ground features. Compared with using the spectral data alone, the classification accuracy using the PC and IHS fusion methods improved by 8 and 13 percentage points, respectively. The HH polarization also improved the discrimination of the fragmented ("flower-like") distribution of dry land, grassland, and woodland, owing to its sensitivity to vegetation water content. This research expands the scope of application of remote sensing data in the field of land and resources and has value for wider adoption.

  12. Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.

    Science.gov (United States)

    Subasi, Abdulhamit

    2013-06-01

    Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model has been proposed that hybridized the particle swarm optimization (PSO) and SVM to improve the EMG signal classification accuracy. This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on the basis of EMG signal to classify into normal, neurogenic or myopathic. In the proposed method the EMG signals were decomposed into the frequency sub-bands using discrete wavelet transform (DWT) and a set of statistical features were extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results obviously validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records against 96.75%, 95.17% and 94.08% for the SVM, the k-NN and the RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for diagnosis of neuromuscular disorders.
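
    A minimal sketch of the pipeline described above: discrete wavelet decomposition of each signal with PyWavelets, sub-band statistics as features, and an RBF-kernel SVM. A plain grid search over (C, gamma) is used here as a simpler stand-in for the particle swarm optimization of the kernel parameters, and the synthetic signals are placeholders for real EMG epochs.

```python
# Sketch: DWT sub-band statistics + RBF-SVM, with grid search standing in for
# the PSO of the kernel parameters. The signals below are random placeholders.
import numpy as np
import pywt
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Mean absolute value, standard deviation and energy of each DWT sub-band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)])
    return np.array(feats)

rng = np.random.default_rng(0)
signals = rng.standard_normal((120, 1024))       # toy EMG epochs
labels = rng.integers(0, 3, 120)                 # normal / myopathic / neurogenic (toy)
X = np.array([dwt_features(s) for s in signals])

search = GridSearchCV(SVC(kernel="rbf"),
                      {"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
                      cv=5)
search.fit(X, labels)
print(search.best_params_, search.best_score_)
```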

  13. Spatiotemporal representations of rapid visual target detection: a single-trial EEG classification algorithm.

    Science.gov (United States)

    Fuhrmann Alpert, Galit; Manor, Ran; Spanier, Assaf B; Deouell, Leon Y; Geva, Amir B

    2014-08-01

    Brain computer interface applications, developed for both healthy and clinical populations, critically depend on decoding brain activity in single trials. The goal of the present study was to detect distinctive spatiotemporal brain patterns within a set of event related responses. We introduce a novel classification algorithm, the spatially weighted FLD-PCA (SWFP), which is based on a two-step linear classification of event-related responses, using fisher linear discriminant (FLD) classifier and principal component analysis (PCA) for dimensionality reduction. As a benchmark algorithm, we consider the hierarchical discriminant component Analysis (HDCA), introduced by Parra, et al. 2007. We also consider a modified version of the HDCA, namely the hierarchical discriminant principal component analysis algorithm (HDPCA). We compare single-trial classification accuracies of all the three algorithms, each applied to detect target images within a rapid serial visual presentation (RSVP, 10 Hz) of images from five different object categories, based on single-trial brain responses. We find a systematic superiority of our classification algorithm in the tested paradigm. Additionally, HDPCA significantly increases classification accuracies compared to the HDCA. Finally, we show that presenting several repetitions of the same image exemplars improve accuracy, and thus may be important in cases where high accuracy is crucial.
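
    A simplified baseline in the spirit of the two-step linear scheme discussed above, combining PCA for dimensionality reduction with a Fisher linear discriminant on flattened single-trial epochs. This is not the spatial-weighting step of SWFP itself; the epoch dimensions and labels are toy assumptions.

```python
# Sketch: PCA + Fisher LDA on flattened single-trial ERP epochs, as a
# simplified baseline for the two-step linear classification discussed above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
epochs = rng.standard_normal((n_trials, n_channels, n_times))   # toy ERP epochs
labels = rng.integers(0, 2, n_trials)                            # target vs. non-target

X = epochs.reshape(n_trials, -1)          # concatenate channels x time per trial
pipe = make_pipeline(PCA(n_components=40), LinearDiscriminantAnalysis())
print(cross_val_score(pipe, X, labels, cv=5).mean())
```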

  14. China Land Cover Classification Fusion Based on Expert Decision and Accuracy Analysis

    Institute of Scientific and Technical Information of China (English)

    崔林丽; 陈昭; 尹球; 唐世浩; 刘荣高

    2014-01-01

    ...theory, is usually performed by experts according to semantic rules. Scoring is followed by the voting and decision-making procedure. Among the affinity scores of a pixel, the highest one suggests that the pixel falls into the target class linked to it. In addition, we have also exploited spatial correlation by weighting the affinity scores of the neighboring pixels. When fusion is completed, a synthetic map (SYNMAP) combining the features of all original classification products is created. The overall consistency of class between SYNMAP and each land cover product is used to evaluate the fusion method. All the datasets, including SYNMAP, are evaluated after being further categorized into a few simple classes, each of which includes several original or target legends. Note that classification accuracy, which offers an absolute index and is commonly reported, is not presented in the paper since we lack ground truth data. Nevertheless, the goal of the fuzzy-theory-based method is to produce a fused map that accommodates the advantages of the different original land cover datasets and reconciles the discrepancies caused by their differing classification systems; therefore, the index of consistency between two land covers should suffice. In our experiment, ESA, MODIS/IGBP, MODIS/UMD, and MODIS/PFT are employed as the original land covers to be fused, IGBP legends are set as the target, and nine simple classes are used during evaluation. The overall consistencies indicate improved agreement of SYNMAP with all the other land cover products, which means that the proposed fusion method has successfully combined various features of the different land cover products. The conclusions can be used for national and regional numerical modeling and ecological environment evaluation in further research and applications.

  15. Radar transmitter classification using non-stationary signal classifier

    CSIR Research Space (South Africa)

    Du Plessis, MC

    2009-07-01

    Full Text Available support vector machine which is applied to the radar pulse's time-frequency representation. The time-frequency representation is refined using particle swarm optimization to increase the classification accuracy. The classification accuracy is tested...

  16. Superior Hiking Trail

    Data.gov (United States)

    Minnesota Department of Natural Resources — Superior Hiking Trail main trail, spurs, and camp spurs for completed trail throughout Cook, Lake, St. Louis and Carlton counties. These data were collected with...

  17. Bathymetry of Lake Superior

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Bathymetry of Lake Superior has been compiled as a component of a NOAA project to rescue Great Lakes lake floor geological and geophysical data and make it more...

  18. Superior Hiking Trail Facilities

    Data.gov (United States)

    Minnesota Department of Natural Resources — Superior Hiking Trail main trail, spurs, and camp spurs for completed trail throughout Cook, Lake, St. Louis and Carlton counties. These data were collected with...

  19. Locked Superior Dislocation of the Acromioclavicular Joint

    Directory of Open Access Journals (Sweden)

    Salma Eltoum Elamin

    2013-01-01

    Full Text Available Acromioclavicular (AC) joint injuries account for approximately 3–5% of shoulder girdle injuries (Rockwood et al., 1998). Depending on the severity of injury and the direction of displacement, these are classified using the Rockwood classification system for AC joint dislocation. We present an unusual case of locked superior dislocation of the AC joint, highlighting the presentation and the subsequent successful surgical management of such a case. To our knowledge this has not been reported previously in the literature.

  20. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes (NB) models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the NB model with a mixture of factor analyzers, thereby relaxing the assumptions... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  1. Classifier in Age classification

    Directory of Open Access Journals (Sweden)

    B. Santhi

    2012-12-01

    Full Text Available The face is an important feature of human beings, and various properties of a person can be derived by analyzing it. The objective of the study is to design an age classifier using facial images. Age classification is essential in many applications such as crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This study addresses the limitations of existing classifiers by using the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification, which improves classification accuracy and outperforms existing methods.
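
    A small scikit-image/scikit-learn sketch of the GLCM-plus-SVM chain described above for a two-class (child vs. old) problem. The GLCM property list, distances/angles, and the random patches standing in for preprocessed face images are illustrative assumptions.

```python
# Sketch: GLCM texture features + RBF-SVM for a two-class age problem.
# Random patches stand in for preprocessed face crops.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

PROPS = ["contrast", "homogeneity", "energy", "correlation"]

def glcm_features(image_u8):
    """Texture features from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.concatenate([graycoprops(glcm, p).ravel() for p in PROPS])

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (80, 64, 64), dtype=np.uint8)   # toy face crops
labels = rng.integers(0, 2, 80)                               # 0 = child, 1 = old

X = np.array([glcm_features(f) for f in faces])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```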

  2. Persistent Left Superior Vena Cava

    Directory of Open Access Journals (Sweden)

    Devinder Singh

    2014-05-01

    Full Text Available A persistent left superior vena cava (PLSVC) is the most common variation of the thoracic venous system; it is a rare congenital vascular anomaly with a prevalence of 0.3% in the population. It may be associated with other cardiovascular abnormalities including atrial septal defect, bicuspid aortic valve, coarctation of the aorta, coronary sinus ostial atresia, and cor triatriatum. Incidental detection of a dilated coronary sinus on echocardiography should raise the suspicion of PLSVC, and the diagnosis should be confirmed by saline contrast echocardiography. The condition is usually asymptomatic. Here we present a rare case of persistent left superior vena cava presenting in the outpatient department with dyspnoea and palpitations.

  3. A novel Neuro-fuzzy classification technique for data mining

    Directory of Open Access Journals (Sweden)

    Soumadip Ghosh

    2014-11-01

    Full Text Available In our study, we proposed a novel Neuro-fuzzy classification technique for data mining. The inputs to the Neuro-fuzzy classification system were fuzzified by applying generalized bell-shaped membership function. The proposed method utilized a fuzzification matrix in which the input patterns were associated with a degree of membership to different classes. Based on the value of degree of membership a pattern would be attributed to a specific category or class. We applied our method to ten benchmark data sets from the UCI machine learning repository for classification. Our objective was to analyze the proposed method and, therefore compare its performance with two powerful supervised classification algorithms Radial Basis Function Neural Network (RBFNN and Adaptive Neuro-fuzzy Inference System (ANFIS. We assessed the performance of these classification methods in terms of different performance measures such as accuracy, root-mean-square error, kappa statistic, true positive rate, false positive rate, precision, recall, and f-measure. In every aspect the proposed method proved to be superior to RBFNN and ANFIS algorithms.
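
    The fuzzification step described above can be illustrated with a generalized bell-shaped membership function: each feature's membership to each class is computed and summed, and the pattern is assigned to the class with the largest total. The per-class centres, width, and slope below are illustrative, not trained parameters of the proposed system.

```python
# Sketch: generalized bell-shaped fuzzification and class assignment by the
# largest summed membership. Centres, width and slope are illustrative only.
import numpy as np

def gbell(x, a, b, c):
    """Generalized bell membership: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

# Toy problem: 2 features, 2 classes, with per-class centres for each feature
centres = np.array([[0.2, 0.8],      # class 0 centres for features 1 and 2
                    [0.7, 0.3]])     # class 1 centres
width = 0.25
slope = 2.0

def classify(pattern):
    """Fuzzification matrix: membership of each feature to each class,
    summed per class; the largest sum decides the class."""
    memberships = gbell(pattern[None, :], width, slope, centres)   # (classes, features)
    return int(np.argmax(memberships.sum(axis=1)))

print(classify(np.array([0.25, 0.75])))   # close to class 0 centres -> 0
print(classify(np.array([0.65, 0.35])))   # close to class 1 centres -> 1
```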

  4. An Improved Shape Contexts Based Ship Classification in SAR Images

    Directory of Open Access Journals (Sweden)

    Ji-Wei Zhu

    2017-02-01

    Full Text Available In synthetic aperture radar (SAR imagery, relating to maritime surveillance studies, the ship has always been the main focus of study. In this letter, a method of ship classification in SAR images is proposed to enhance classification accuracy. In the proposed method, to fully exploit the distinguishing characters of the ship targets, both topology and intensity of the scattering points of the ship are considered. The results of testing the proposed method on a data set of three types of ships, collected via a space-borne SAR sensor designed by the Institute of Electronics, Chinese Academy of Sciences (IECAS, establish that the proposed method is superior to several existing methods, including the original shape contexts method, traditional invariant moments and the recent approach.

  5. Combinatorial Approach of Associative Classification

    OpenAIRE

    P. R. Pal; R.C. Jain

    2010-01-01

    Association rule mining and classification are two important techniques of data mining in knowledge discovery process. Integration of these two has produced class association rule mining or associative classification techniques, which in many cases have shown better classification accuracy than conventional classifiers. Motivated by this study we have explored and applied the combinatorial mathematics in class association rule mining in this paper. Our algorithm is based on producing co...

  6. Test expectancy affects metacomprehension accuracy.

    Science.gov (United States)

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). Then, after reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  7. The postdiction superiority effect in metacomprehension of text.

    Science.gov (United States)

    Pierce, B H; Smith, S M

    2001-01-01

    Metacomprehension accuracy for texts was greater after, rather than before, answering test questions about the texts-a postdiction superiority effect. Although postdiction superiority was found across successive sets of test questions and across successive texts, there was no improvement in metacomprehension accuracy after participants had taken more tests. Neither prediction nor postdiction gamma correlations with test performance improved with successive tests. Although the results are consistent with retrieval hypotheses, they contradict predictions made by test knowledge hypotheses, which state that increasing knowledge of the nature of the tests should increase metacomprehension accuracy.

  8. Air Superiority Fighter Characteristics.

    Science.gov (United States)

    1998-06-05

    many a dispute could have been deflated into a single paragraph if the disputants had just dared to define their terms.7 Aristotle ...meaningful. This section will expand on some key ideology concepts. The phrase "air superiority fighter" may bring to mind visions of fighter... biographies are useful in garnering airpower advocate theories as well as identifying key characteristics. Air campaign results, starting with World

  9. Classification and retrieval on macroinvertebrate image databases.

    Science.gov (United States)

    Kiranyaz, Serkan; Ince, Turker; Pulkkinen, Jenni; Gabbouj, Moncef; Ärje, Johanna; Kärkkäinen, Salme; Tirronen, Ville; Juhola, Martti; Turpeinen, Tuomas; Meissner, Kristian

    2011-07-01

    Aquatic ecosystems are continuously threatened by a growing number of human induced changes. Macroinvertebrate biomonitoring is particularly efficient in pinpointing the cause-effect structure between slow and subtle changes and their detrimental consequences in aquatic ecosystems. The greatest obstacle to implementing efficient biomonitoring is currently the cost-intensive human expert taxonomic identification of samples. While there is evidence that automated recognition techniques can match human taxa identification accuracy at greatly reduced costs, so far the development of automated identification techniques for aquatic organisms has been minimal. In this paper, we focus on advancing classification and data retrieval that are instrumental when processing large macroinvertebrate image datasets. To accomplish this for routine biomonitoring, in this paper we shall investigate the feasibility of automated river macroinvertebrate classification and retrieval with high precision. Besides the state-of-the-art classifiers such as Support Vector Machines (SVMs) and Bayesian Classifiers (BCs), the focus is particularly drawn on feed-forward artificial neural networks (ANNs), namely multilayer perceptrons (MLPs) and radial basis function networks (RBFNs). Since both ANN types have been proclaimed superior by different investigations even for the same benchmark problems, we shall first show that the main reason for this ambiguity lies in the static and rather poor comparison methodologies applied in most earlier works. Especially the most common drawback occurs due to the limited evaluation of the ANN performances over just one or few network architecture(s). Therefore, in this study, an extensive evaluation of each classifier performance over an ANN architecture space is performed. The best classifier among all, which is trained over a dataset of river macroinvertebrate specimens, is then used in the MUVIS framework for the efficient search and retrieval of particular
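
    The paper's point about evaluating each ANN over an architecture space rather than a single fixed network can be sketched with scikit-learn's MLPClassifier; the architecture grid below is purely illustrative.

```python
# Illustrative sweep over an MLP architecture space with cross-validation,
# in the spirit of the evaluation strategy described above (not the authors' code).
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def best_mlp_architecture(X, y, hidden_grid=((16,), (32,), (64,), (32, 16), (64, 32))):
    """Return the hidden-layer layout with the highest mean CV accuracy."""
    results = {}
    for hidden in hidden_grid:
        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000, random_state=0),
        )
        results[hidden] = cross_val_score(model, X, y, cv=5).mean()
    best = max(results, key=results.get)
    return best, results
```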

  10. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Science.gov (United States)

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).

  11. Automated Classification of Periodic Variable Stars detected by the Wide-field Infrared Survey Explorer

    CERN Document Server

    Masci, Frank J; Grillmair, Carl J; Cutri, Roc M

    2014-01-01

    We describe a methodology to classify periodic variable stars identified in the Wide-field Infrared Survey Explorer (WISE) full-mission single-exposure Source Database. This will assist in the future construction of a WISE periodic-Variable Source Database that assigns variables to specific science classes as constrained by the WISE observing cadence with statistically meaningful classification probabilities. We have analyzed the WISE light curves of 8273 variable stars identified in previous optical variability surveys (MACHO, GCVS, and ASAS) and show that Fourier decomposition techniques can be extended into the mid-IR to assist with their classification. Combined with other periodic light-curve features, this sample is then used to train a machine-learned classifier based on the random forest (RF) method. Consistent with previous classification studies of variable stars in general, the RF machine-learned classifier is superior to other methods in terms of accuracy, robustness against outliers, and relative...
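
    A rough sketch of the approach, assuming the period of each star is already known: the light curve is folded on its period, a low-order Fourier series is fitted by least squares, and the harmonic amplitudes feed a random forest. The number of harmonics and the feature choice are assumptions, not the paper's exact recipe.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fourier_features(times, mags, period, n_harmonics=4):
    """Least-squares Fourier fit of a period-folded light curve; returns the
    mean magnitude plus harmonic amplitudes as classification features."""
    phase = (times % period) / period
    cols = [np.ones_like(phase)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, mags, rcond=None)
    amps = [np.hypot(coef[2 * k - 1], coef[2 * k]) for k in range(1, n_harmonics + 1)]
    return np.array([coef[0], *amps])

# light_curves is a placeholder for (times, mags, period) tuples with labels:
# X = np.vstack([fourier_features(t, m, p) for t, m, p in light_curves])
# clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)
```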

  12. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Directory of Open Access Journals (Sweden)

    Cuihong Wen

    Full Text Available Optical Music Recognition (OMR has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM. The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM, which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs and Neural Networks (NNs.

  13. Optimizing Classification in Intelligence Processing

    Science.gov (United States)

    2010-12-01

    ACC Classification Accuracy AUC Area Under the ROC Curve CI Competitive Intelligence COMINT Communications Intelligence DoD Department of...indispensible tool to support a national leader’s decision making process, competitive intelligence (CI) has emerged in recent decades as an environment meant...effectiveness for the intelligence product in competitive intelligence environment: accuracy, objectivity, usability, relevance, readiness, and timeliness

  14. Introduction to Relational Networks for Classification

    CERN Document Server

    Marivate, Vukosi

    2008-01-01

    The use of computational intelligence techniques for classification has been used in numerous applications. This paper compares the use of a Multi Layer Perceptron Neural Network and a new Relational Network on classifying the HIV status of women at ante-natal clinics. The paper discusses the architecture of the relational network and its merits compared to a neural network and most other computational intelligence classifiers. Results gathered from the study indicate comparable classification accuracies as well as revealed relationships between data features in the classification data. Much higher classification accuracies are recommended for future research in the area of HIV classification as well as missing data estimation.

  15. Contabilidad Financiera Superior

    OpenAIRE

    Ipiñazar Petralanda, Izaskun

    2013-01-01

    Duration (in hours): 31 to 40 hours. Intended for: students and teachers. This material presents the guidelines needed to implement problem-based learning in the course Contabilidad Financiera Superior (Advanced Financial Accounting) for the topics "Incorporation of S.A. and S.R.L. companies" (Topic 2), "Capital Increases" (Topic 3) and "Capital Reductions" (Topic 4). The general guides for the course are presented first, followed by the different activities...

  17. Noise-Tolerant Hyperspectral Signature Classification in Unresolved Object Detection with Adaptive Tabular Nearest Neighbor Encoding

    Science.gov (United States)

    Schmalz, M.; Key, G.

    ) and rate of false detections (Rfa). Adaptive TNE can thus achieve accurate signature classification in the presence of time-varying noise, closely spaced or interleaved signatures, and imaging system optical distortions. We analyze classification accuracy of closely spaced spectral signatures adapted from a NASA database of space material signatures. Additional analysis pertains to computational complexity and noise sensitivity, which are superior to non-adaptive TNE or Bayesian techniques based on classical neural networks.

  18. Hyperspectral Image Classification Based on the Weighted Probabilistic Fusion of Multiple Spectral-spatial Features

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-08-01

    Full Text Available A hyperspectral images classification method based on the weighted probabilistic fusion of multiple spectral-spatial features was proposed in this paper. First, the minimum noise fraction (MNF approach was employed to reduce the dimension of hyperspectral image and extract the spectral feature from the image, then combined the spectral feature with the texture feature extracted based on gray level co-occurrence matrix (GLCM, the multi-scale morphological feature extracted based on OFC operator and the end member feature extracted based on sequential maximum angle convex cone (SMACC method to form three spectral-spatial features. Afterwards, support vector machine (SVM classifier was used for the classification of each spectral-spatial feature separately. Finally, we established the weighted probabilistic fusion model and applied the model to fuse the SVM outputs for the final classification result. In order to verify the proposed method, the ROSIS and AVIRIS image were used in our experiment and the overall accuracy reached 97.65% and 96.62% separately. The results indicate that the proposed method can not only overcome the limitations of traditional single-feature based hyperspectral image classification, but also be superior to conventional VS-SVM method and probabilistic fusion method. The classification accuracy of hyperspectral images was improved effectively.
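
    The fusion step can be summarized as a weighted sum of per-feature-set class posteriors followed by an argmax. In this sketch the weights are supplied by the caller (the paper's weighting model is not reproduced) and probability-calibrated SVMs are assumed.

```python
import numpy as np
from sklearn.svm import SVC

def weighted_probabilistic_fusion(prob_maps, weights):
    """Fuse per-feature-set class-probability matrices (each n_samples x n_classes)
    with the given weights and return the fused labels and fused probabilities."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = sum(w * p for w, p in zip(weights, prob_maps))
    return fused.argmax(axis=1), fused

# One probability-calibrated SVM per spectral-spatial feature set (illustrative):
# svms = [SVC(probability=True).fit(Xf, y) for Xf in feature_sets]
# probs = [clf.predict_proba(Xt) for clf, Xt in zip(svms, test_sets)]
# labels, fused = weighted_probabilistic_fusion(probs, weights=[0.4, 0.3, 0.3])
```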

  19. Automated classification of periodic variable stars detected by the wide-field infrared survey explorer

    Energy Technology Data Exchange (ETDEWEB)

    Masci, Frank J.; Grillmair, Carl J.; Cutri, Roc M. [Infrared Processing and Analysis Center, Caltech 100-22, Pasadena, CA 91125 (United States); Hoffman, Douglas I., E-mail: fmasci@ipac.caltech.edu [NASA Ames Research Center, Moffett Field, CA 94035 (United States)

    2014-07-01

    We describe a methodology to classify periodic variable stars identified using photometric time-series measurements constructed from the Wide-field Infrared Survey Explorer (WISE) full-mission single-exposure Source Databases. This will assist in the future construction of a WISE Variable Source Database that assigns variables to specific science classes as constrained by the WISE observing cadence with statistically meaningful classification probabilities. We have analyzed the WISE light curves of 8273 variable stars identified in previous optical variability surveys (MACHO, GCVS, and ASAS) and show that Fourier decomposition techniques can be extended into the mid-IR to assist with their classification. Combined with other periodic light-curve features, this sample is then used to train a machine-learned classifier based on the random forest (RF) method. Consistent with previous classification studies of variable stars in general, the RF machine-learned classifier is superior to other methods in terms of accuracy, robustness against outliers, and relative immunity to features that carry little or redundant class information. For the three most common classes identified by WISE: Algols, RR Lyrae, and W Ursae Majoris type variables, we obtain classification efficiencies of 80.7%, 82.7%, and 84.5% respectively using cross-validation analyses, with 95% confidence intervals of approximately ±2%. These accuracies are achieved at purity (or reliability) levels of 88.5%, 96.2%, and 87.8% respectively, similar to that achieved in previous automated classification studies of periodic variable stars.

  20. Statistics of superior records

    Science.gov (United States)

    Ben-Naim, E.; Krapivsky, P. L.

    2013-08-01

    We study statistics of records in a sequence of random variables. These identically and independently distributed variables are drawn from the parent distribution ρ. The running record equals the maximum of all elements in the sequence up to a given point. We define a superior sequence as one where all running records are above the average record expected for the parent distribution ρ. We find that the fraction of superior sequences S_N decays algebraically with sequence length N, S_N ∼ N^(−β) in the limit N → ∞. Interestingly, the decay exponent β is nontrivial, being the root of an integral equation. For example, when ρ is a uniform distribution with compact support, we find β = 0.450265. In general, the tail of the parent distribution governs the exponent β. We also consider the dual problem of inferior sequences, where all records are below average, and find that the fraction of inferior sequences I_N decays algebraically, albeit with a different decay exponent, I_N ∼ N^(−α). We use the above statistical measures to analyze earthquake data.
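
    A quick Monte Carlo check of the S_N ∼ N^(−β) scaling for a uniform parent on [0, 1], taking the expected record after n draws to be n/(n+1); the fitted exponent should come out roughly near the quoted β ≈ 0.450, finite-size corrections aside. This is an illustrative reading of the definitions, not the paper's calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

def superior_fraction(N, trials=50_000):
    """Fraction of uniform[0,1] sequences whose running maximum stays above the
    expected record n/(n+1) at every position n = 1..N."""
    x = rng.random((trials, N))
    running_max = np.maximum.accumulate(x, axis=1)
    expected = np.arange(1, N + 1) / np.arange(2, N + 2)
    return np.mean(np.all(running_max > expected, axis=1))

# Estimate beta from the slope of log S_N versus log N:
Ns = np.array([8, 16, 32, 64, 128])
S = np.array([superior_fraction(int(n)) for n in Ns])
beta = -np.polyfit(np.log(Ns), np.log(S), 1)[0]
print(dict(zip(Ns.tolist(), S.round(4).tolist())), "beta ~", round(beta, 3))
```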

  1. Frenillo labial superior doble

    Directory of Open Access Journals (Sweden)

    Carlos Albornoz López del Castillo

    Full Text Available Non-syndromic double upper labial frenulum is a developmental anomaly that we did not find reported in the literature review performed. We present an 11-year-old girl who was referred to the Maxillofacial Surgery service of the "Eduardo Agramonte Piña" Hospital in Camagüey for a double upper labial frenulum with a low insertion. The clinical symptoms associated with this anomaly are described, together with the surgical treatment used to correct it: a frenectomy and plasty over the abnormal frenal muscular band that produced excess tissue in the labial mucosa. We consider the description of this case of particular interest, since no similar report was found in the literature reviewed.

  2. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, using which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques but most of them did not emphasize much the misclassified instances and storage space. In this paper, a classification model is proposed that takes into account the misclassified instances and storage space. The classification model is efficiently developed using a tree structure for reducing the storage complexity and uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied on the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare instances. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiment. Experimental analysis shows that the accuracy of the proposed classification model exceeds that of the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes, Support Vector Machine, Instance Based, ID3 and J48 classifiers.
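
    The boosting step referred to above, raising the weights of instances misclassified in the previous round, is shown here as a generic discrete AdaBoost loop with decision stumps; this is a stand-in illustration, not the paper's CCFIS/LCS classifier.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, n_rounds=50):
    """Discrete AdaBoost with stumps; y must be a numpy array with values in {-1, +1}."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                      # instance weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        # Boost the weights of the misclassified instances for the next round.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)
```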

  3. Semi-Supervised Learning for Classification of Protein Sequence Data

    Directory of Open Access Journals (Sweden)

    Brian R. King

    2008-01-01

    Full Text Available Protein sequence data continue to become available at an exponential rate. Annotation of functional and structural attributes of these data lags far behind, with only a small fraction of the data understood and labeled by experimental methods. Classification methods that are based on semi-supervised learning can increase the overall accuracy of classifying partly labeled data in many domains, but very few methods exist that have shown their effect on protein sequence classification. We show how proven methods from text classification can be applied to protein sequence data, as we consider both existing and novel extensions to the basic methods, and demonstrate restrictions and differences that must be considered. We demonstrate comparative results against the transductive support vector machine, and show superior results on the most difficult classification problems. Our results show that large repositories of unlabeled protein sequence data can indeed be used to improve predictive performance, particularly in situations where there are fewer labeled protein sequences available, and/or the data are highly unbalanced in nature.

  4. Classification of Medical Datasets Using SVMs with Hybrid Evolutionary Algorithms Based on Endocrine-Based Particle Swarm Optimization and Artificial Bee Colony Algorithms.

    Science.gov (United States)

    Lin, Kuan-Cheng; Hsieh, Yi-Hsiu

    2015-10-01

    The classification and analysis of data is an important issue in today's research. Selecting a suitable set of features makes it possible to classify an enormous quantity of data quickly and efficiently. Feature selection is generally viewed as a problem of feature subset selection, such as combination optimization problems. Evolutionary algorithms using random search methods have proven highly effective in obtaining solutions to problems of optimization in a diversity of applications. In this study, we developed a hybrid evolutionary algorithm based on endocrine-based particle swarm optimization (EPSO) and artificial bee colony (ABC) algorithms in conjunction with a support vector machine (SVM) for the selection of optimal feature subsets for the classification of datasets. The results of experiments using specific UCI medical datasets demonstrate that the accuracy of the proposed hybrid evolutionary algorithm is superior to that of basic PSO, EPSO and ABC algorithms, with regard to classification accuracy using subsets with a reduced number of features.
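
    The wrapper idea, searching over binary feature masks scored by SVM cross-validation accuracy, can be sketched with a simple stochastic hill-climber standing in for the EPSO/ABC hybrid; the mutation rate and iteration budget below are arbitrary assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def subset_score(X, y, mask):
    """Cross-validated SVM accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf", gamma="scale"), X[:, mask], y, cv=5).mean()

def select_features(X, y, n_iter=100, flip_prob=0.1):
    """Stochastic hill-climbing over feature masks (a crude stand-in for EPSO/ABC)."""
    mask = rng.random(X.shape[1]) < 0.5
    best, best_score = mask, subset_score(X, y, mask)
    for _ in range(n_iter):
        cand = best.copy()
        flips = rng.random(X.shape[1]) < flip_prob
        cand[flips] = ~cand[flips]
        score = subset_score(X, y, cand)
        if score >= best_score:
            best, best_score = cand, score
    return best, best_score
```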

  5. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  6. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification.

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-12-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value.

  7. Bosniak classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2016-01-01

    at MR and CEUS imaging and those at CT. PURPOSE: To compare diagnostic accuracy of MR, CEUS, and CT when categorizing complex renal cystic masses according to the Bosniak classification. MATERIAL AND METHODS: From February 2011 to June 2012, 46 complex renal cysts were prospectively evaluated by three...... readers. Each mass was categorized according to the Bosniak classification and CT was chosen as gold standard. Kappa was calculated for diagnostic accuracy and data was compared with pathological results. RESULTS: CT images found 27 BII, six BIIF, seven BIII, and six BIV. Forty-three cysts could...... one category lower. Pathologic correlation in six lesions revealed four malignant and two benign lesions. CONCLUSION: CEUS and MR both up- and downgraded renal cysts compared to CT, and until these non-radiation modalities have been refined and adjusted, CT should remain the gold standard...

  8. Sobredentadura total superior implantosoportada

    Directory of Open Access Journals (Sweden)

    Luis Orlando Rodríguez García

    2010-06-01

    Full Text Available We present the case of a patient with complete upper edentulism, rehabilitated in 2009 at the implantology service of the "Pedro Ortiz" Clinic, Habana del Este municipality, Havana, Cuba, by means of a prosthesis over osseointegrated implants, a technique that has been incorporated into dental practice in Cuba as an alternative to conventional treatment for fully edentulous patients. A protocol was followed comprising a surgical phase, a procedure with or without flap elevation, and early or immediate loading. The patient was a 56-year-old man who attended the multidisciplinary consultation worried because three prostheses had been made for him over the previous two years and none met the retention requirements he needed to feel secure and comfortable with them. The final result was the patient's complete satisfaction, with improvement in aesthetic and functional quality.

  9. Algorithms for Hyperspectral Signature Classification in Non-resolved Object Characterization Using Tabular Nearest Neighbor Encoding

    Science.gov (United States)

    Schmalz, M.; Key, G.

    Accurate spectral signature classification is key to the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications, signature classification accuracy depends on accurate spectral endmember determination [1]. However, in automated target recognition (ATR) applications, it is possible to circumvent the endmember detection problem by employing a Bayesian classifier. Previous approaches to Bayesian classification of spectral signatures have been rule-based, or predicated on a priori parameterized information obtained from offline training, as in the case of neural networks [1,2]. Unfortunately, class separation and classifier refinement results in these methods tend to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of inputs. This can lead to potentially significant classification errors in the presence of noise or densely interleaved signatures. In this paper, we present an emerging technology for nonimaging spectral signature classification based on a highly accurate but computationally efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. This allows the TNE programmer or user to determine parameters for classification accuracy, and to mathematically analyze the signatures for which TNE did not obtain classification matches. This dual approach to analysis (i.e., correct vs. incorrect classification) has been shown to

  10. Accuracy of Birth Certificate Data for Classifying Preterm Birth.

    Science.gov (United States)

    Stout, Molly J; Macones, George A; Tuuli, Methodius G

    2017-05-01

    Classifying preterm birth as spontaneous or indicated is critical both for clinical care and research, yet the accuracy of classification based on different data sources is unclear. We examined the accuracy of preterm birth classification as spontaneous or indicated based on birth certificate data. This is a retrospective cohort study of 123 birth certificates from preterm births in Missouri. Correct classification of spontaneous or indicated preterm birth subtype was based on multi-provider (RN, MFM Fellow, MFM attending) consensus after full medical record review. A categorisation algorithm based on clinical data available in the birth certificate was designed a priori and classification was performed by a single investigator according to the algorithm. Accuracy of birth certificate classification as spontaneous or indicated was compared to the consensus classification. Errors in misclassification were explored. Classification based on birth certificates was correct for 66% of preterm births. Most errors in classification by birth certificate occurred in classifying a birth as spontaneous when it was in fact indicated. The vast majority of errors occurred when preterm rupture of membranes (≥12 h) was checked on the birth certificate causing classification as spontaneous when there was a maternal or fetal indication for delivery. Birth certificate classification overestimated spontaneous preterm birth and underestimated indicated preterm birth compared to classification performed from medical record review. Revisions to birth certificate clinical data would allow more accurate population level surveillance of preterm birth subtypes. © 2017 John Wiley & Sons Ltd.

  11. A comparison of classification techniques for glacier change detection using multispectral images

    OpenAIRE

    Rahul Nijhawan; Pradeep Garg; Praveen Thakur

    2016-01-01

    The main aim of this paper is to compare the classification accuracies of glacier change detection by the following classifiers: a sub-pixel classification algorithm, indices-based supervised classification and an object-based algorithm using Landsat imagery. It was observed that the shadow effect, which was not removed in the sub-pixel based classification, was removed by the indices method. The accuracy was further improved by object-based classification. The objective of the paper is to analyse different classific...

  12. Prediction of Depression in Cancer Patients With Different Classification Criteria, Linear Discriminant Analysis versus Logistic Regression.

    Science.gov (United States)

    Shayan, Zahra; Mohammad Gholi Mezerji, Naser; Shayan, Leila; Naseri, Parisa

    2015-11-03

    Logistic regression (LR) and linear discriminant analysis (LDA) are two popular statistical models for prediction of group membership. Although they are very similar, the LDA makes more assumptions about the data. When categorical and continuous variables are used simultaneously, the optimal choice between the two models is questionable. In most studies, classification error (CE) is used to discriminate between subjects in several groups, but this index is not suitable to predict the accuracy of the outcome. The present study compared LR and LDA models using classification indices. This cross-sectional study selected 243 cancer patients. Sample sets of different sizes (n = 50, 100, 150, 200, 220) were randomly selected and the CE, B, and Q classification indices were calculated by the LR and LDA models. CE revealed a lack of superiority for one model over the other, but the results showed that LR performed better than LDA for the B and Q indices in all situations. No significant effect for sample size on CE was noted for selection of an optimal model. Assessment of the accuracy of prediction of real data indicated that the B and Q indices are appropriate for selection of an optimal model. The results of this study showed that LR performs better in some cases and LDA in others when based on CE. The CE index is not appropriate for classification, although the B and Q indices performed better and offered more efficient criteria for comparison and discrimination between groups.

  13. A novel algorithm for ventricular arrhythmia classification using a fuzzy logic approach.

    Science.gov (United States)

    Weixin, Nong

    2016-12-01

    In the present study, it has been shown that an unnecessary implantable cardioverter-defibrillator (ICD) shock is often delivered to patients with an ambiguous ECG rhythm in the overlap zone between ventricular tachycardia (VT) and ventricular fibrillation (VF); these shocks significantly increase mortality. Therefore, accurate classification of the arrhythmia into VT, organized VF (OVF) or disorganized VF (DVF) is crucial to assist ICDs to deliver appropriate therapy. A classification algorithm using a fuzzy logic classifier was developed for accurately classifying the arrhythmias into VT, OVF or DVF. Compared with other studies, our method aims to combine ten ECG detectors that are calculated in the time domain and the frequency domain in addition to different levels of complexity for detecting subtle structure differences between VT, OVF and DVF. The classification in the overlap zone between VT and VF is refined by this study to avoid ambiguous identification. The present method was trained and tested using public ECG signal databases. A two-level classification was performed to first detect VT with an accuracy of 92.6 %, and then the discrimination between OVF and DVF was detected with an accuracy of 84.5 %. The validation results indicate that the proposed method has superior performance in identifying the organization level between the three types of arrhythmias (VT, OVF and DVF) and is promising for improving the appropriate therapy choice and decreasing the possibility of sudden cardiac death.

  14. Classification of first-episode psychosis: a multi-modal multi-feature approach integrating structural and diffusion imaging.

    Science.gov (United States)

    Peruzzo, Denis; Castellani, Umberto; Perlini, Cinzia; Bellani, Marcella; Marinelli, Veronica; Rambaldelli, Gianluca; Lasalvia, Antonio; Tosato, Sarah; De Santi, Katia; Murino, Vittorio; Ruggeri, Mirella; Brambilla, Paolo

    2015-06-01

    Currently, most of the classification studies of psychosis focused on chronic patients and employed single machine learning approaches. To overcome these limitations, we here compare, to our best knowledge for the first time, different classification methods of first-episode psychosis (FEP) using multi-modal imaging data exploited on several cortical and subcortical structures and white matter fiber bundles. 23 FEP patients and 23 age-, gender-, and race-matched healthy participants were included in the study. An innovative multivariate approach based on multiple kernel learning (MKL) methods was implemented on structural MRI and diffusion tensor imaging. MKL provides the best classification performances in comparison with the more widely used support vector machine, enabling the definition of a reliable automatic decisional system based on the integration of multi-modal imaging information. Our results show a discrimination accuracy greater than 90 % between healthy subjects and patients with FEP. Regions with an accuracy greater than 70 % on different imaging sources and measures were middle and superior frontal gyrus, parahippocampal gyrus, uncinate fascicles, and cingulum. This study shows that multivariate machine learning approaches integrating multi-modal and multisource imaging data can classify FEP patients with high accuracy. Interestingly, specific grey matter structures and white matter bundles reach high classification reliability when using different imaging modalities and indices, potentially outlining a prefronto-limbic network impaired in FEP with particular regard to the right hemisphere.

  15. Improving the Accuracy of Tagging Recommender System by Using Classification

    Institute of Scientific and Technical Information of China (English)

    谌颃

    2011-01-01

    Collaborative tagging systems have become more and more popular and have recently achieved widespread success owing to their flexibility and conceptual comprehensibility. Recommender systems can adopt tagging to achieve better performance. In this paper we consider that items can be categorized into different classes in which users show different interests, and we adopt a two-step recommender method called TRSUC (Tagging Recommender Systems by Using Classification), which can be described as an Inner-Class Recommender or Global Recommender and which uses tags as the intermediary entity between users and items. Experiments using MovieLens as the dataset show that we acquire better results than recommender algorithms that do not classify the items.

  16. Diagnostic accuracy of a new instrument for detecting cognitive dysfunction in an emergent psychiatric population: the Brief Cognitive Screen.

    Science.gov (United States)

    Cercy, Steven P; Simakhodskaya, Zoya; Elliott, Aaron

    2010-03-01

    In certain clinical contexts, the sensitivity of the Mini-Mental State Examination (MMSE) is limited. The authors developed a new cognitive screening instrument, the Brief Cognitive Screen (BCS), with the aim of improving diagnostic accuracy for cognitive dysfunction in the psychiatric emergency department (ED) in a quick and convenient format. The BCS, consisting of the Oral Trail Making Test (OTMT), animal fluency, the Clock Drawing Test (CDT), and the MMSE, was administered to 32 patients presenting with emergent psychiatric conditions. Comprehensive neuropsychological evaluation served as the criterion standard for determining cognitive dysfunction. Diagnostic accuracy of the MMSE was determined using the traditional clinical cutoff and receiver operating characteristic (ROC) curve analyses. Diagnostic accuracy of individual BCS components and BCS Summary Scores was determined by ROC analyses. At the traditional clinical cutoff, MMSE sensitivity (46.4%) and total diagnostic accuracy (53.1%) were inadequate. Under ROC analyses, the diagnostic accuracy of the full BCS Summary Score (area under the curve [AUC]=0.857) was comparable to the MMSE (AUC=0.828). However, a reduced BCS Summary Score consisting of OTMT Part B (OTMT-B), animal fluency, and the CDT yielded classification accuracy (AUC=0.946) that was superior to the MMSE. Preliminary findings suggest the BCS is an effective, convenient alternative cognitive screening instrument for use in emergent psychiatric populations. Copyright (c) 2010 by the Society for Academic Emergency Medicine.

  17. Multi-scale classification based lesion segmentation for dermoscopic images.

    Science.gov (United States)

    Abedini, Mani; Codella, Noel; Chakravorty, Rajib; Garnavi, Rahil; Gutman, David; Helba, Brian; Smith, John R

    2016-08-01

    This paper presents a robust segmentation method based on multi-scale classification to identify the lesion boundary in dermoscopic images. Our proposed method leverages a collection of classifiers which are trained at various resolutions to categorize each pixel as "lesion" or "surrounding skin". In detection phase, trained classifiers are applied on new images. The classifier outputs are fused at pixel level to build probability maps which represent lesion saliency maps. In the next step, Otsu thresholding is applied to convert the saliency maps to binary masks, which determine the border of the lesions. We compared our proposed method with existing lesion segmentation methods proposed in the literature using two dermoscopy data sets (International Skin Imaging Collaboration and Pedro Hispano Hospital) which demonstrates the superiority of our method with Dice Coefficient of 0.91 and accuracy of 94%.
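
    A compact sketch of the fusion-and-threshold stage: per-scale lesion-probability maps are resized to a common shape, averaged into a saliency map, and binarized with Otsu thresholding. Equal weighting of the scales is an assumption made for illustration.

```python
import numpy as np
from skimage.transform import resize
from skimage.filters import threshold_otsu

def fuse_and_threshold(prob_maps, out_shape):
    """prob_maps: per-scale lesion-probability images (values in [0, 1]).
    Returns the fused saliency map and a binary lesion mask."""
    resized = [resize(p, out_shape, order=1, preserve_range=True) for p in prob_maps]
    saliency = np.mean(resized, axis=0)
    mask = saliency > threshold_otsu(saliency)
    return saliency, mask
```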

  18. Classification of interstitial lung disease patterns using local DCT features and random forest.

    Science.gov (United States)

    Anthimopoulos, M; Christodoulidis, S; Christe, A; Mougiakakou, S

    2014-01-01

    Over the last decade, a plethora of computer-aided diagnosis (CAD) systems have been proposed aiming to improve the accuracy of the physicians in the diagnosis of interstitial lung diseases (ILD). In this study, we propose a scheme for the classification of HRCT image patches with ILD abnormalities as a basic component towards the quantification of the various ILD patterns in the lung. The feature extraction method relies on local spectral analysis using a DCT-based filter bank. After convolving the image with the filter bank, q-quantiles are computed for describing the distribution of local frequencies that characterize image texture. Then, the gray-level histogram values of the original image are added forming the final feature vector. The classification of the already described patches is done by a random forest (RF) classifier. The experimental results prove the superior performance and efficiency of the proposed approach compared against the state-of-the-art.
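
    An approximate sketch of the feature pipeline, with a blockwise 2-D DCT standing in for the DCT-based filter bank: quantiles of the local frequency responses are concatenated with a gray-level histogram and fed to a random forest. Block size, quantile set and histogram bins are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.ensemble import RandomForestClassifier

QUANTILES = (0.1, 0.25, 0.5, 0.75, 0.9)

def dct_texture_features(patch, block=8, n_bins=16):
    """Quantiles of blockwise DCT magnitudes plus a normalized gray-level histogram."""
    h, w = (patch.shape[0] // block) * block, (patch.shape[1] // block) * block
    blocks = patch[:h, :w].reshape(h // block, block, w // block, block)
    responses = np.abs(dctn(blocks, axes=(1, 3), norm="ortho"))
    per_freq = responses.transpose(1, 3, 0, 2).reshape(block * block, -1)
    quant = np.quantile(per_freq, QUANTILES, axis=1).ravel()
    hist, _ = np.histogram(patch, bins=n_bins,
                           range=(patch.min(), patch.max() + 1e-9))
    return np.concatenate([quant, hist / hist.sum()])

# X = np.vstack([dct_texture_features(p) for p in patches])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
```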

  19. An Experimental Comparative Study on Three Classification Algorithms

    Institute of Scientific and Technical Information of China (English)

    蔡巍; 王永成; 李伟; 尹中航

    2003-01-01

    The classification algorithm is one of the key techniques affecting the performance of an automatic text classification system and plays an important role in automatic classification research. This paper comparatively analyzes k-NN, VSM and the hybrid classification algorithm presented by our research group. Some 2000 pieces of Internet news provided by ChinaInfoBank are used in the experiment. The results show that the performance of the hybrid algorithm presented by the group is superior to that of the other two algorithms.

  20. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce...... the convolutional classification RBM, a combination of the existing convolutional RBM and classification RBM, and use it for discriminative feature learning. We evaluate the classification accuracy of convolutional and non-convolutional classification RBMs on two lung CT problems. We find that RBM-learned features...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....

  1. Real-time visual concept classification

    NARCIS (Netherlands)

    Uijlings, J.R.R.; Smeulders, A.W.M.; Scha, R.J.H.

    2010-01-01

    As datasets grow increasingly large in content-based image and video retrieval, computational efficiency of concept classification is important. This paper reviews techniques to accelerate concept classification, where we show the trade-off between computational efficiency and accuracy. As a basis,

  2. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilages in the knee. This paper used the kernel-based probability density estimation method to model the distributions of the VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed by using the Fisher’s linear discriminant analysis, support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion was able to provide the total classification accuracy of 86.67% and the area (Az of 0.9096 under the receiver operating characteristics curve, which were superior to the results obtained by either the Fisher’s linear discriminant analysis (accuracy: 81.33%, Az: 0.8564 or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533. Such results demonstrated the merits of the bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for analysis of knee joint VAG signals.
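
    A minimal version of the bivariate density estimation and maximal-posterior decision rule: one Gaussian KDE per class on the two VAG features, with the posterior taken proportional to the class prior times the estimated likelihood. SciPy's default bandwidth rule is an assumption; the paper's kernel choice is not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_map_classifier(X, y):
    """X: (n_samples, 2) bivariate features; y: class labels.
    Returns classes, per-class KDEs and priors for a maximal-posterior rule."""
    y = np.asarray(y)
    classes = np.unique(y)
    kdes = {c: gaussian_kde(X[y == c].T) for c in classes}
    priors = {c: np.mean(y == c) for c in classes}
    return classes, kdes, priors

def map_predict(X, classes, kdes, priors):
    """Assign each sample to the class with maximal prior * estimated likelihood."""
    post = np.column_stack([priors[c] * kdes[c](X.T) for c in classes])
    return classes[post.argmax(axis=1)]
```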

  3. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments

    Science.gov (United States)

    Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk

    2016-07-01

    Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that the overall accuracies decline with increasing segmentation scale. All other investigated methods except for RF and SVM are more prone to obtain a lower accuracy due to the broken objects at fine scales. In contrast to some previous studies, the RF classifiers yielded the best results and the k-nearest neighbor classifier yielded the worst results, in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The results of training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it was suggested that RF should be considered in most cases for agricultural mapping.

  4. A novel feature extracting method of QRS complex classification for mobile ECG signals

    Science.gov (United States)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the larger activity range of patients and the lower signal-to-noise ratio in mobile cardiac telemonitoring systems, and cannot meet the identification needs of the ECG signal. Based on an individual sinus heart rhythm template built from mobile ECG signals in a time window, we present a semblance index to extract the classification features of the QRS complex precisely and expeditiously. The relative approximation r2 and the absolute error r3 are used as parameters estimating the semblance between a test QRS complex and the template. The evaluation parameters corresponding to QRS width and type are examined to choose the proper index. The results show that 99.99 percent of the QRS complexes of sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy for the sinus and supraventricular classes is not better than that of r2. Using the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy for the ventricular class is superior to r2. To combine the respective strengths of the three parameters, a nonlinear weighted combination of QRS width, r2 and r3 is introduced, and the total classification accuracy reaches 99.48% by combining the indexes.

  5. Classification of Medical Brain Images

    Institute of Scientific and Technical Information of China (English)

    Pan Haiwei(潘海为); Li Jianzhong; Zhang Wei

    2003-01-01

    Since brain tumors endanger people's quality of life and even their lives, the accuracy of classification becomes all the more important. Conventional classification techniques are designed for datasets of characters and numbers; it is difficult, however, to apply them to datasets that include brain images together with medical history (alphanumeric data), especially while guaranteeing accuracy. For such datasets, this paper combines knowledge from the medical field and improves the traditional decision tree. The new classification algorithm, guided by medical knowledge, not only adds interaction with the doctors but also enhances the quality of classification. The algorithm has been applied to real brain CT images and a valuable rule has been obtained from the experiments. This paper shows that the algorithm works well for real CT data.

  6. Application of Data Mining in Protein Sequence Classification

    Directory of Open Access Journals (Sweden)

    Suprativ Saha

    2012-11-01

    Full Text Available Protein sequence classification involves feature selection for accurate classification. Popular protein sequence classification techniques involve extraction of specific features from the sequences. Researchers apply well-known classification techniques such as neural networks, genetic algorithms, Fuzzy ARTMAP and rough set classifiers for accurate classification. This paper presents a review of three different classification models, namely the neural network model, the fuzzy ARTMAP model and the rough set classifier model. This is followed by a new technique for classifying protein sequences. The proposed model is implemented with a purpose-built tool and tries to reduce the computational overheads encountered by earlier approaches and to increase the accuracy of classification.

  7. Large margin classification with indefinite similarities

    KAUST Repository

    Alabdulmohsin, Ibrahim

    2016-01-07

    Classification with indefinite similarities has attracted attention in the machine learning community. This is partly due to the fact that many similarity functions that arise in practice are not symmetric positive semidefinite, i.e. the Mercer condition is not satisfied, or the Mercer condition is difficult to verify. Examples of such indefinite similarities in machine learning applications are ample including, for instance, the BLAST similarity score between protein sequences, human-judged similarities between concepts and words, and the tangent distance or the shape matching distance in computer vision. Nevertheless, previous works on classification with indefinite similarities are not fully satisfactory. They have either introduced sources of inconsistency in handling past and future examples using kernel approximation, settled for local-minimum solutions using non-convex optimization, or produced non-sparse solutions by learning in Krein spaces. Despite the large volume of research devoted to this subject lately, we demonstrate in this paper how an old idea, namely the 1-norm support vector machine (SVM) proposed more than 15 years ago, has several advantages over more recent work. In particular, the 1-norm SVM method is conceptually simpler, which makes it easier to implement and maintain. It is competitive, if not superior to, all other methods in terms of predictive accuracy. Moreover, it produces solutions that are often sparser than more recent methods by several orders of magnitude. In addition, we provide various theoretical justifications by relating 1-norm SVM to well-established learning algorithms such as neural networks, SVM, and nearest neighbor classifiers. Finally, we conduct a thorough experimental evaluation, which reveals that the evidence in favor of 1-norm SVM is statistically significant.
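
    The 1-norm SVM over (possibly indefinite) similarity features can be written as a linear program; the sketch below, solved with SciPy's linprog, expands the decision function over similarities to the training points. The formulation details and the default C are assumptions made for illustration, not the authors' reference implementation.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(S, y, C=1.0):
    """1-norm SVM with similarity features.
    S: (n, n) similarity matrix between training points (need not be PSD);
    y: labels in {-1, +1}.  Decision function: f(x) = s(x) . w + b."""
    n = S.shape[0]
    # Variables: [u (n), v (n), b+, b-, xi (n)], all >= 0, with w = u - v, b = b+ - b-.
    c = np.concatenate([np.ones(2 * n), [0.0, 0.0], C * np.ones(n)])
    yS = y[:, None] * S
    # Margin constraints y_i (S_i . w + b) >= 1 - xi_i, rewritten as A_ub x <= b_ub.
    A_ub = np.hstack([-yS, yS, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    w = res.x[:n] - res.x[n:2 * n]
    b = res.x[2 * n] - res.x[2 * n + 1]
    return w, b

def predict(S_test, w, b):
    """S_test: similarities of test points to the training points, shape (m, n)."""
    return np.sign(S_test @ w + b)
```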

  8. [Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].

    Science.gov (United States)

    Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong

    2013-03-01

    Model selection for support vector machine (SVM) involving kernel and the margin parameter values selection is usually time-consuming, impacts training efficiency of SVM model and final classification accuracies of SVM hyperspectral remote sensing image classifier greatly. Firstly, based on combinatorial optimization theory and cross-validation method, artificial immune clonal selection algorithm is introduced to the optimal selection of SVM (CSSVM) kernel parameter a and margin parameter C to improve the training efficiency of SVM model. Then an experiment of classifying AVIRIS in India Pine site of USA was performed for testing the novel CSSVM, as well as a traditional SVM classifier with general Grid Searching cross-validation method (GSSVM) for comparison. And then, evaluation indexes including SVM model training time, classification overall accuracy (OA) and Kappa index of both CSSVM and GSSVM were all analyzed quantitatively. It is demonstrated that OA of CSSVM on test samples and whole image are 85.1% and 81.58, the differences from that of GSSVM are both within 0.08% respectively; And Kappa indexes reach 0.8213 and 0.7728, the differences from that of GSSVM are both within 0.001; While the ratio of model training time of CSSVM and GSSVM is between 1/6 and 1/10. Therefore, CSSVM is fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
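
    The grid-searching baseline (GSSVM) that the clonal-selection variant is compared against takes only a few lines with scikit-learn; the parameter grid and kernel here are illustrative rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def grid_search_svm(X, y):
    """Cross-validated grid search over the RBF-SVM margin parameter C and
    kernel width gamma (the GSSVM baseline described above)."""
    grid = {
        "C": np.logspace(-2, 4, 7),
        "gamma": np.logspace(-4, 1, 6),
    }
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, n_jobs=-1)
    search.fit(X, y)
    return search.best_estimator_, search.best_params_, search.best_score_
```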

  9. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...
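
    A minimal sketch of the "simplest form" mentioned above: classifying brain voxels independently from their intensity alone with a three-class Gaussian mixture fitted by EM (scikit-learn). Real pipelines add atlas priors, bias-field correction and spatial regularisation; the T1-weighted intensity ordering is an assumption.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def classify_tissues(intensities):
        """intensities: 1-D array of brain-masked voxel intensities."""
        x = intensities.reshape(-1, 1)
        gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
        labels = gmm.fit_predict(x)
        # Order the components by mean intensity so 0/1/2 map to CSF/GM/WM on T1 images.
        order = np.argsort(gmm.means_.ravel())
        remap = np.empty(3, dtype=int)
        remap[order] = np.arange(3)
        return remap[labels]
    ```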

  10. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation which is an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.

  11. 75 FR 28542 - Superior Resource Advisory Committee

    Science.gov (United States)

    2010-05-21

    ... orient the new Superior Resource Advisory Committee members on their roles and responsibilities. DATES... of the roles and responsibilities of the Superior Resource Advisory Committee members; Election of... Forest Service Superior Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice...

  12. [The superior laryngeal nerve and the superior laryngeal artery].

    Science.gov (United States)

    Lang, J; Nachbaur, S; Fischer, K; Vogel, E

    1987-01-01

    Length, diameter and anastomoses of the nervus vagus and its ganglion inferius were measured in 44 halved heads. On average, 8.65 fiber bundles of the vagus nerve leave the retro-olivary area. The superior ganglion of the 10th cranial nerve lies in the area of the jugular foramen. In this area, 1.48 (mean value) anastomoses with the 9th cranial nerve were found. The ramus internus of the accessory nerve, which has a length of 9.75 mm, branches off 11.34 mm below the margo terminalis sigmoidea. Further anastomoses with the 10th cranial nerve were found. The inferior ganglion of the 10th nerve had a length of 25.47 mm and a diameter of 3.46 mm. Five mm below the ganglion the 10th nerve had a width of 2.9 mm and a thickness of 1.5 mm. The mean length of the superior sympathetic ganglion was 26.6 mm, its width 7.2 mm and its thickness 3.4 mm. In nearly all specimens, anastomoses of the superior sympathetic ganglion with the ansa cervicalis profunda and the inferior ganglion of the 10th cranial nerve were found. The superior laryngeal nerve branches off about 36 mm below the margo terminalis sigmoidea. The width of this nerve was 1.9 mm, its thickness 0.8 mm on the right and 1.0 mm on the left side. The division into the internal and external rami was found about 21 mm below its origin. Between the n. vagus and the thyreohyoid membrane the ramus internus had a length of 64 mm; the length of the external ramus between the vagal nerve and the inferior pharyngeal constrictor muscle was 89 mm. Its mean length below the thyreopharyngeal part was 10.7 mm, and on average 8.6 branchlets to the cricothyroid muscle were counted. The superior laryngeal artery had its origin in the superior thyroideal artery in 80% of cases; in 6.8% this vessel was a branch of the external carotid artery. Its average outer diameter was 1.23 mm on the right side and 1.39 mm on the left. The length of this vessel between its origin and the thyreohyoid membrane was 34 mm. In 7% on the right side and in 13% on the left, the superior

  13. Hardwood species classification with DWT based hybrid texture feature extraction techniques

    Indian Academy of Sciences (India)

    Arvind R Yadav; R S Anand; M L Dewal; Sangeeta Gupta

    2015-12-01

    In this work, discrete wavelet transform (DWT) based hybrid texture feature extraction techniques have been used to categorize the microscopic images of hardwood species into 75 different classes. Initially, the DWT has been employed to decompose the image up to 7 levels using the Daubechies (db3) wavelet as the decomposition filter. Further, first-order statistics (FOS) and four variants of local binary pattern (LBP) descriptors are used to acquire distinct features of these images at various levels. The linear support vector machine (SVM), radial basis function (RBF) kernel SVM and random forest classifiers have been employed for classification. The classification accuracies obtained with state-of-the-art and DWT based hybrid texture features using various classifiers are compared. The DWT based FOS-uniform local binary pattern (DWTFOSLBPu2) texture features at the 4th level of image decomposition produced the best classification accuracy of 97.67 ± 0.79% and 98.40 ± 0.64% for grayscale and RGB images, respectively, using the linear SVM classifier. Reduction of the feature dataset by the minimal redundancy maximal relevance (mRMR) feature selection method is achieved, and the best classification accuracies of 99.00 ± 0.79% and 99.20 ± 0.42% have been obtained for the DWT based FOS-LBP histogram Fourier features (DWTFOSLBP-HF) technique at the 5th and 6th levels of image decomposition for grayscale and RGB images, respectively, using the linear SVM classifier. The DWTFOSLBP-HF features selected with the mRMR method have also established superiority among the DWT based hybrid texture feature extraction techniques for databases randomly divided into different proportions of training and test datasets.
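
    A minimal sketch of a DWT-based hybrid texture descriptor in the spirit of the DWTFOSLBPu2 features described above, not the authors' exact implementation; it assumes grayscale images as 2-D float arrays and uses PyWavelets, scikit-image and scikit-learn.

    ```python
    import numpy as np
    import pywt
    from skimage.feature import local_binary_pattern
    from sklearn.svm import LinearSVC

    def dwt_fos_lbp_features(image, level=4, wavelet="db3"):
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        approx = coeffs[0]                       # approximation sub-band at `level`
        # First-order statistics (FOS) of the approximation sub-band
        fos = [approx.mean(), approx.std(), approx.min(), approx.max()]
        # Uniform LBP histogram (LBPu2) of the same sub-band
        lbp = local_binary_pattern(approx, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
        return np.concatenate([fos, hist])

    # Usage sketch:
    # X = np.array([dwt_fos_lbp_features(img) for img in images])
    # clf = LinearSVC().fit(X, species_labels)
    ```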

  14. Ontology-Based Classification System Development Methodology

    Directory of Open Access Journals (Sweden)

    Grabusts Peter

    2015-12-01

    Full Text Available The aim of the article is to analyse and develop an ontology-based classification system methodology that uses decision tree learning with statement propositionalized attributes. Classical decision tree learning algorithms, as well as decision tree learning with taxonomy and propositionalized attributes, have been reviewed. Thus, domain ontology can be extracted from the data sets and can be used for data classification with the help of a decision tree. The use of ontology methods in decision tree-based classification systems has been researched. Using such methodologies, the classification accuracy in some cases can be improved.

  15. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  16. Xenolog classification.

    Science.gov (United States)

    Darby, Charlotte A; Stolzer, Maureen; Ropp, Patrick J; Barker, Daniel; Durand, Dannie

    2017-03-01

    Orthology analysis is a fundamental tool in comparative genomics. Sophisticated methods have been developed to distinguish between orthologs and paralogs and to classify paralogs into subtypes depending on the duplication mechanism and timing, relative to speciation. However, no comparable framework exists for xenologs: gene pairs whose history, since their divergence, includes a horizontal transfer. Further, the diversity of gene pairs that meet this broad definition calls for classification of xenologs with similar properties into subtypes. We present a xenolog classification that uses phylogenetic reconciliation to assign each pair of genes to a class based on the event responsible for their divergence and the historical association between genes and species. Our classes distinguish between genes related through transfer alone and genes related through duplication and transfer. Further, they separate closely-related genes in distantly-related species from distantly-related genes in closely-related species. We present formal rules that assign gene pairs to specific xenolog classes, given a reconciled gene tree with an arbitrary number of duplications and transfers. These xenology classification rules have been implemented in software and tested on a collection of ∼13 000 prokaryotic gene families. In addition, we present a case study demonstrating the connection between xenolog classification and gene function prediction. The xenolog classification rules have been implemented in Notung 2.9, a freely available phylogenetic reconciliation software package. http://www.cs.cmu.edu/~durand/Notung . Gene trees are available at http://dx.doi.org/10.7488/ds/1503 . durand@cmu.edu. Supplementary data are available at Bioinformatics online.

  17. What are Millian Qualitative Superiorities?

    Directory of Open Access Journals (Sweden)

    Jonathan Riley

    2008-04-01

    Full Text Available In an article published in Prolegomena 2006, Christoph Schmidt-Petri has defended his interpretation and attacked mine of Mill’s idea that higher kinds of pleasure are superior in quality to lower kinds, regardless of quantity. Millian qualitative superiorities as I understand them are infinite superiorities. In this paper, I clarify my interpretation and show how Schmidt-Petri has misrepresented it and ignored the obvious textual support for it. As a result, he fails to understand how genuine Millian qualitative superiorities determine the novel structure of Mill’s pluralistic utilitarianism, in which a social code of justice that distributes equal rights and duties takes absolute priority over competing considerations. Schmidt-Petri’s own interpretation is a non-starter, because it does not even recognize that Mill is talking about different kinds of pleasant feelings, such that the higher kinds are intrinsically more valuable than the lower. I conclude by outlining why my interpretation is free of any metaphysical commitment to the “essence” of pleasure.

  18. Isolated superior mesenteric artery dissection

    Directory of Open Access Journals (Sweden)

    Lalitha Palle

    2010-01-01

    Full Text Available Isolated superior mesenteric artery (SMA dissection without involvement of the aorta and the SMA origin is unusual. We present a case of an elderly gentleman who had chronic abdominal pain, worse after meals. CT angiography, performed on a 64-slice CT scanner, revealed SMA dissection with a thrombus. A large artery of Drummond was also seen. The patient was managed conservatively.

  19. Gene selection and classification for cancer microarray data based on machine learning and similarity measures

    Directory of Open Access Journals (Sweden)

    Liu Qingzhong

    2011-12-01

    Full Text Available Background: Microarray data have a high dimension of variables and a small sample size. In microarray data analyses, two important issues are how to choose genes, which provide reliable and good prediction for disease status, and how to determine the final gene set that is best for classification. Associations among genetic markers mean one can exploit information redundancy to potentially reduce classification cost in terms of time and money. Results: To deal with redundant information and improve classification, we propose a gene selection method, Recursive Feature Addition, which combines supervised learning and statistical similarity measures. To determine the final optimal gene set for prediction and classification, we propose an algorithm, Lagging Prediction Peephole Optimization. By using six benchmark microarray gene expression data sets, we compared Recursive Feature Addition with recently developed gene selection methods: Support Vector Machine Recursive Feature Elimination, Leave-One-Out Calculation Sequential Forward Selection and several others. Conclusions: On average, with the use of popular learning machines including the Nearest Mean Scaled Classifier, Support Vector Machine, Naive Bayes Classifier and Random Forest, Recursive Feature Addition outperformed other methods. Our studies also showed that Lagging Prediction Peephole Optimization is superior to a random strategy; Recursive Feature Addition with Lagging Prediction Peephole Optimization obtained better testing accuracies than the gene selection method varSelRF.
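
    A minimal sketch of greedy forward gene addition in the spirit of Recursive Feature Addition: at each step the gene that most improves cross-validated accuracy is added. The paper's statistical-similarity tie-breaking and the Lagging Prediction Peephole Optimization step are not reproduced, and the choice of base classifier is an assumption.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def forward_feature_addition(X, y, n_features=20, cv=5):
        """X: (samples, genes) expression matrix; y: class labels."""
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < n_features and remaining:
            scores = []
            for g in remaining:
                acc = cross_val_score(GaussianNB(), X[:, selected + [g]], y, cv=cv).mean()
                scores.append((acc, g))
            best_acc, best_g = max(scores)   # greedy choice of the next gene
            selected.append(best_g)
            remaining.remove(best_g)
        return selected
    ```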

  20. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification.

    Science.gov (United States)

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R

    2013-01-01

    Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html.
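
    Not the authors' penalized Crammer-Singer formulation, but a closely related sparse baseline as a sketch: an L1-penalised one-vs-rest linear SVM, where the L1 penalty plays the variable-selection role of the soft-thresholding penalties described above. Data names are placeholders.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1)
    # clf.fit(X_train, y_train)        # X: samples x genes, y: tumor classes
    # Genes retained by at least one class-specific weight vector:
    # selected_genes = np.where(np.any(clf.coef_ != 0, axis=0))[0]
    ```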

  1. A escrita no Ensino Superior

    Directory of Open Access Journals (Sweden)

    Maria Conceição Pillon Christofoli

    2013-01-01

    Full Text Available http://dx.doi.org/10.5902/198464445865 O presente artigo trata de apresentar resultados oriundos de pesquisa realizada no Ensino Superior, enfocando a escrita em contextos universitários. Depoimentos por parte dos acadêmicos evidenciam certa resistência ao ato de escrever, o que acaba muitas vezes distanciando o sujeito da produção de um texto. Assim sendo, mesmo que parciais, os resultados até então analisados dão conta de que: pressuposto 1 – há ruptura da ideia de coerência entre o que pensamos, o que conseguimos escrever, o que entende nosso interlocutor; pressuposto 2 – a autocorreção de textos como exercício de pesquisa é imprescindível para a qualificação da escrita; pressuposto 3 – os diários de aula representam rico instrumento para a qualificação da escrita no Ensino Superior; pressuposto 4 – há necessidade de que o aluno do Ensino Superior escreva variados tipos de escrita, ainda que a universidade cumpra com seu papel, enfatizando a escrita acadêmica; pressuposto 5 – o trabalho com a escrita no Ensino Superior deve enfatizar os componentes básicos da expressão escrita: o código escrito e a composição da escrita. Palavras-chave: Escrita; Ensino Superior; formação de professores.

  2. Acurácia dos achados mamográficos do câncer de mama: correlação da classificação BI-RADS e achados histológicos Accuracy of mammographic findings in breast cancer: correlation between BI-RADS classification and histological findings

    Directory of Open Access Journals (Sweden)

    José Hermes Ribas do Nascimento

    2010-04-01

    Full Text Available OBJETIVO: A proposta deste estudo foi avaliar a acurácia da classificação BI-RADS® na mamografia. Os pontos secundários foram descrever a frequência de apresentação dos diferentes achados mamográficos e avaliar a concordância entre observadores. MATERIAIS E MÉTODOS: Os exames de 115 pacientes, encaminhados para core biopsy, foram reavaliados independentemente por dois médicos especialistas, cegados, utilizando a recomendação do BI-RADS. Posteriormente, os exames foram comparados com a histologia. A acurácia da classificação BI-RADS na mamografia foi avaliada. A concordância entre os médicos foi calculada pela estatística kappa (κ de Cohen e as diferenças nos grupos de comparação foram analisadas com teste qui-quadrado. RESULTADOS: Esta pesquisa demonstrou que a acurácia mamográfica oscilou de 75% a 62% na diferenciação entre lesões benignas de malignas com o uso do BI-RADS. Houve importante concordância na descrição das margens dos nódulos (κ= 0,66. Baixa concordância foi identificada na descrição dos contornos (formas dos nódulos (κ= 0,40 e na descrição das calcificações, tanto em relação à sua distribuição (κ= 0,24 como também em relação à morfologia (κ= 0,36. CONCLUSÃO: O presente estudo demonstrou que o método é acurado na diferenciação de lesões benignas de malignas. A concordância foi fraca na análise das calcificações quanto a morfologia e distribuição, no entanto, identificou-se elevação progressiva dos valores preditivos positivos nas subcategorias 4.OBJECTIVE: The present study was aimed at evaluating the BI-RADS® classification accuracy in mammography. Additionally, the frequency of different findings was described and the interobserver agreement was evaluated. MATERIALS AND METHODS: Mammographic images of 115 patients were independently and blindly reviewed by two specialists in compliance with BI-RADS recommendations, and later compared with histological data. The

  3. Tissue classification of large-scale multi-site MR data using fuzzy k-nearest neighbor method

    Science.gov (United States)

    Ghayoor, Ali; Paulsen, Jane S.; Kim, Regina E. Y.; Johnson, Hans J.

    2016-03-01

    This paper describes enhancements to automate classification of brain tissues for multi-site degenerative magnetic resonance imaging (MRI) data analysis. Processing of large collections of MR images is a key research technique to advance our understanding of the human brain. Previous studies have developed a robust multi-modal tool for automated tissue classification of large-scale data based on expectation maximization (EM) method initialized by group-wise prior probability distributions. This work aims to augment the EM-based classification using a non-parametric fuzzy k-Nearest Neighbor (k-NN) classifier that can model the unique anatomical states of each subject in the study of degenerative diseases. The presented method is applicable to multi-center heterogeneous data analysis and is quantitatively validated on a set of 18 synthetic multi-modal MR datasets having six different levels of noise and three degrees of bias-field provided with known ground truth. Dice index and average Hausdorff distance are used to compare the accuracy and robustness of the proposed method to a state-of-the-art classification method implemented based on EM algorithm. Both evaluation measurements show that presented enhancements produce superior results as compared to the EM only classification.
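
    A minimal sketch of a fuzzy k-NN classifier (Keller-style distance-weighted memberships) of the kind used to refine the EM-based labels above; the multi-modal MR feature extraction is not shown, and feature and label names are illustrative.

    ```python
    import numpy as np

    def fuzzy_knn_predict(X_train, y_train, X_test, k=7, m=2.0, n_classes=3):
        """X_*: (samples, features) arrays; y_train: integer class labels."""
        preds = []
        for x in X_test:
            d = np.linalg.norm(X_train - x, axis=1)
            idx = np.argsort(d)[:k]
            w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))   # fuzzifier m
            memberships = np.zeros(n_classes)
            for c in range(n_classes):
                memberships[c] = np.sum(w * (y_train[idx] == c)) / np.sum(w)
            preds.append(np.argmax(memberships))
        return np.array(preds)
    ```

    The class memberships, rather than only the hard labels, are what allow the method to model the gradual tissue changes seen in degenerative disease.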

  4. Comparison of LDA and SPRT on Clinical Dataset Classifications.

    Science.gov (United States)

    Lee, Chih; Nkounkou, Brittany; Huang, Chun-Hsi

    2011-04-19

    In this work, we investigate the well-known classification algorithm LDA as well as its close relative SPRT. SPRT affords many theoretical advantages over LDA. It allows specification of desired classification error rates α and β and is expected to be faster in predicting the class label of a new instance. However, SPRT is not as widely used as LDA in the pattern recognition and machine learning community. For this reason, we investigate LDA, SPRT and a modified SPRT (MSPRT) empirically using clinical datasets from Parkinson's disease, colon cancer, and breast cancer. We assume the same normality assumption as LDA and propose variants of the two SPRT algorithms based on the order in which the components of an instance are sampled. Leave-one-out cross-validation is used to assess and compare the performance of the methods. The results indicate that two variants, SPRT-ordered and MSPRT-ordered, are superior to LDA in terms of prediction accuracy. Moreover, on average SPRT-ordered and MSPRT-ordered examine fewer components than LDA before arriving at a decision. These advantages imply that SPRT-ordered and MSPRT-ordered are the preferred algorithms over LDA when the normality assumption can be justified for a dataset.

  5. Target Price Accuracy

    Directory of Open Access Journals (Sweden)

    Alexander G. Kerl

    2011-04-01

    Full Text Available This study analyzes the accuracy of forecasted target prices within analysts’ reports. We compute a measure for target price forecast accuracy that evaluates the ability of analysts to exactly forecast the ex-ante (unknown) 12-month stock price. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. The potential conflicts of interests between an analyst and a covered company do not bias forecast accuracy.

  6. Rethinking Empathic Accuracy

    OpenAIRE

    Meadors, Joshua

    2014-01-01

    The present study is a methodological examination of the implicit empathic accuracy measure introduced by Zaki, Ochsner, and Bolger (2008). Empathic accuracy (EA) is defined as the ability to understand another person's thoughts and feelings (Ickes, 1993). Because this definition is similar to definitions of cognitive empathy (e.g., Shamay-Tsoory, 2011) and because affective empathy does not appear to be related to empathic accuracy (Zaki et al., 2008), the Basic Empathy Scale--which measures...

  7. Pensamiento Superior y Desarrollo Territorial

    Directory of Open Access Journals (Sweden)

    Víctor Manuel Racancoj Alonzo

    2015-04-01

    Full Text Available This reflection seeks to explain the fundamental role that higher-order thinking plays in the formulation and practice of local territorial development models, so that they contribute substantively to transforming the adverse socioeconomic conditions experienced today by indigenous and rural communities in many countries, such as Guatemala, a situation that can be summarized as high rates of poverty and malnutrition. Higher-order thinking, however, must be a competence of the population that belongs to the local setting, for only if this condition exists will territorial development be valid and viable. To achieve higher-order thinking competences in local spaces, obstacles must be overcome in the university model we are accustomed to seeing and thinking of today; models characterized by a colonial heritage, a disconnect from the economic, cultural, social and political problems of society, and the denial of ancestral knowledge.

  8. Superior sulcus tumors (Pancoast tumors).

    Science.gov (United States)

    Marulli, Giuseppe; Battistella, Lucia; Mammana, Marco; Calabrese, Francesca; Rea, Federico

    2016-06-01

    Superior Sulcus Tumors, frequently termed Pancoast tumors, are a wide range of tumors invading the apical chest wall. Due to their localization in the apex of the lung, with potential invasion of the lower part of the brachial plexus, the first ribs, vertebrae, subclavian vessels or the stellate ganglion, superior sulcus tumors cause characteristic symptoms, like arm or shoulder pain or Horner's syndrome. The management of superior sulcus tumors has dramatically evolved over the past 50 years. Originally they were deemed universally fatal; in 1956, Shaw and Paulson introduced a new treatment paradigm with combined radiotherapy and surgery ensuring 5-year survival of approximately 30%. During the 1990s, following the need to improve systemic as well as local control, a trimodality approach including induction concurrent chemoradiotherapy followed by surgical resection was introduced, reaching 5-year survival rates up to 44% and becoming the standard of care. Many efforts have also been pursued to obtain higher complete resection rates using appropriate surgical approaches and involving a multidisciplinary team including spine or vascular surgeons. Other potential treatment options under consideration include prophylactic cranial irradiation and the addition of other chemotherapy agents or biologic agents to the trimodality approach.

  9. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    Science.gov (United States)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, x) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional NNs, autoassociative morphological memories (AMM) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns have been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models that can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and
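
    A minimal sketch of an autoassociative morphological memory (AMM) in the lattice algebra described above, following the standard min/max-plus construction; the dendritic extension used in the paper is not reproduced.

    ```python
    import numpy as np

    def amm_store(X):
        """X: (k, n) array, one pattern per row. Returns the W memory, w_ij = min_k (x_i - x_j)."""
        diffs = X[:, :, None] - X[:, None, :]      # shape (k, n, n)
        return diffs.min(axis=0)

    def amm_recall(W, x):
        """Max-plus recall: y_i = max_j (w_ij + x_j)."""
        return (W + x[None, :]).max(axis=1)

    # Perfect recall of the stored (noise-free) patterns:
    # X = np.array([[0., 1., 2.], [2., 0., 1.]]); W = amm_store(X)
    # assert np.allclose(amm_recall(W, X[0]), X[0])
    ```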

  10. Spectral organization of the human lateral superior temporal gyrus revealed by intracranial recordings.

    Science.gov (United States)

    Nourski, Kirill V; Steinschneider, Mitchell; Oya, Hiroyuki; Kawasaki, Hiroto; Jones, Robert D; Howard, Matthew A

    2014-02-01

    The place of the posterolateral superior temporal (PLST) gyrus within the hierarchical organization of the human auditory cortex is unknown. Understanding how PLST processes spectral information is imperative for its functional characterization. Pure-tone stimuli were presented to subjects undergoing invasive monitoring for refractory epilepsy. Recordings were made using high-density subdural grid electrodes. Pure tones elicited robust high gamma event-related band power responses along a portion of PLST adjacent to the transverse temporal sulcus (TTS). Responses were frequency selective, though typically broadly tuned. In several subjects, mirror-image response patterns around a low-frequency center were observed, but typically, more complex and distributed patterns were seen. Frequency selectivity was greatest early in the response. Classification analysis using a sparse logistic regression algorithm yielded above-chance accuracy in all subjects. Classifier performance typically peaked at 100-150 ms after stimulus onset, was comparable for the left and right hemisphere cases, and was stable across stimulus intensities. Results demonstrate that representations of spectral information within PLST are temporally dynamic and contain sufficient information for accurate discrimination of tone frequencies. PLST adjacent to the TTS appears to be an early stage in the hierarchy of cortical auditory processing. Pure-tone response patterns may aid auditory field identification.

  11. Change Detection Accuracy and Image Properties: A Study Using Simulated Data

    Directory of Open Access Journals (Sweden)

    Abdullah Almutairi

    2010-06-01

    Full Text Available Simulated data were used to investigate the relationships between image properties and change detection accuracy in a systematic manner. The image properties examined were class separability, radiometric normalization and image spectral band-to-band correlation. The change detection methods evaluated were post-classification comparison, direct classification of multidate imagery, image differencing, principal component analysis, and change vector analysis. The simulated data experiments showed that the relative accuracy of the change detection methods varied with changes in image properties, thus confirming the hypothesis that caution should be used in generalizing from studies that use only a single image pair. In most cases, direct classification and post-classification comparison were the least sensitive to changes in the image properties of class separability, radiometric normalization error and band correlation. Furthermore, these methods generally produced the highest accuracy, or were amongst those with a high accuracy. PCA accuracy was highly variable; the use of four principal components consistently resulted in substantial decreased classification accuracy relative to using six components, or classification using the original six bands. The accuracy of image differencing also varied greatly in the experiments. Of the three methods that require radiometric normalization, image differencing was the method most affected by radiometric error, relative to change vector and classification methods, for classes that have moderate and low separability. For classes that are highly separable, image differencing was relatively unaffected by radiometric normalization error. CVA was found to be the most accurate method for classes with low separability and all but the largest radiometric errors. CVA accuracy tended to be the least affected by changes in the degree of band correlation in situations where the class means were moderately dispersed, or
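
    A minimal sketch of the differencing-style detectors compared above: a band-wise difference of two co-registered, radiometrically normalised images, summarised as a change-vector magnitude and thresholded at a multiple of its standard deviation. The threshold rule and the assumption of common radiometric scale are illustrative; they are exactly the factors the study identifies as drivers of sensitivity.

    ```python
    import numpy as np

    def difference_change_map(img_t1, img_t2, k=2.0):
        """img_t1, img_t2: (rows, cols, bands) arrays on a common radiometric scale."""
        diff = img_t2.astype(float) - img_t1.astype(float)
        magnitude = np.sqrt((diff ** 2).sum(axis=-1))     # change-vector-style magnitude
        thresh = magnitude.mean() + k * magnitude.std()   # simple global threshold
        return magnitude > thresh                          # boolean change mask
    ```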

  12. Automated Periodontal Diseases Classification System

    Directory of Open Access Journals (Sweden)

    Aliaa A. A. Youssif

    2012-01-01

    Full Text Available This paper presents an efficient and innovative system for automated classification of periodontal diseases. The strength of our technique lies in the fact that it incorporates knowledge from the patients' clinical data, along with the features automatically extracted from the Haematoxylin and Eosin (H&E)-stained microscopic images. Our system uses image processing techniques based on color deconvolution, morphological operations, and watershed transforms for epithelium & connective tissue segmentation, nuclear segmentation, and extraction of the microscopic immunohistochemical features for the nuclei, dilated blood vessels & collagen fibers. Also, Feedforward Backpropagation Artificial Neural Networks are used for the classification process. We report 100% classification accuracy in correctly identifying the different periodontal diseases observed in our 30-sample dataset.

  13. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    of the Aachen Aphasia Test (AAT). First a coarse classification was achieved by using an assessment of spontaneous speech of the patient. This classifier produced correct results in 87% of the test cases. For a second test, data analysis tools were used to select four features out of the 30 available test...... be done in about half an hour in a free interview. The results of the classifiers were analyzed regarding their accuracy dependent on the diagnosis....

  14. TEXT CLASSIFICATION TOWARD A SCIENTIFIC FORUM

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Text mining, also known as discovering knowledge from the text, which has emerged as a possible solution for the current information explosion, refers to the process of extracting non-trivial and useful patterns from unstructured text. Among the general tasks of text mining such as text clustering, summarization, etc., text classification is a subtask of intelligent information processing, which employs supervised learning to construct a classifier from training text by which to predict the class of unlabeled text. Because of its simplicity and objectivity in performance evaluation, text classification is usually used as a standard tool to determine the advantage or weakness of a text processing method, such as text representation, text feature selection, etc. In this paper, text classification is carried out to classify the Web documents collected from the XSSC Website (http://www.xssc.ac.cn). The performance of support vector machine (SVM) and back propagation neural network (BPNN) is compared on this task. Specifically, binary text classification and multi-class text classification were conducted on the XSSC documents. Moreover, the classification results of both methods are combined to improve the accuracy of classification. An experiment is conducted to show that BPNN can compete with SVM in binary text classification; but for multi-class text classification, SVM performs much better. Furthermore, the classification is improved in both the binary and multi-class settings with the combined method.
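
    A minimal sketch of the SVM vs. back-propagation neural network comparison on document classification; the XSSC corpus is not available here, so the document loader is a placeholder and the TF-IDF representation and network size are assumptions.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    def compare_svm_bpnn(docs, labels):
        """docs: list of raw text strings; labels: class ids."""
        X = TfidfVectorizer(max_features=5000).fit_transform(docs)
        svm_acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
        bpnn_acc = cross_val_score(
            MLPClassifier(hidden_layer_sizes=(100,), max_iter=500), X, labels, cv=5
        ).mean()
        return svm_acc, bpnn_acc
    ```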

  15. The effects of shadow removal on across-date settlement type classification of quickbird images

    CSIR Research Space (South Africa)

    Luus, FPS

    2012-07-01

    Full Text Available QuickBird imagery acquired on separate dates may have significant differences in viewing- and illumination geometries, which can negatively impact across-date settlement type classification accuracy. The effect of cast shadows on classification...

  16. Morphological variation of siscowet lake trout in Lake Superior

    Science.gov (United States)

    Bronte, C.R.; Moore, S.A.

    2007-01-01

    Historically, Lake Superior has contained many morphologically distinct forms of the lake trout Salvelinus namaycush that have occupied specific depths and locations and spawned at specific times of the year. Today, as was probably the case historically, the siscowet morphotype is the most abundant. Recent interest in harvesting siscowets to extract oil containing omega-3 fatty acids will require additional knowledge of the biology and stock structure of these lightly exploited populations. The objective of this study was to determine whether shape differences exist among siscowet populations across Lake Superior and whether these shape differences can be used to infer stock structure. Morphometric analysis (truss protocol) was used to differentiate among siscowets sampled from 23 locations in Lake Superior. We analyzed 31 distance measurements among 14 anatomical landmarks taken from digital images of fish recorded in the field. Cluster analysis of size-corrected data separated fish into three geographic groups: The Isle Royale, eastern (Michigan), and western regions (Michigan). Finer scales of stock structure were also suggested. Discriminant function analysis demonstrated that head measurements contributed to most of the observed variation. Cross-validation classification rates indicated that 67–71% of individual fish were correctly classified to their region of capture. This is the first study to present shape differences associated with location within a lake trout morphotype in Lake Superior.

  17. Least squares twin support vector machine with Universum data for classification

    Science.gov (United States)

    Xu, Yitian; Chen, Mei; Li, Guohui

    2016-11-01

    Universum, a third class not belonging to either class of the classification problem, allows prior knowledge to be incorporated into the learning process. A lot of previous work has demonstrated that the Universum is helpful to supervised and semi-supervised classification. Moreover, Universum has already been introduced into the support vector machine (SVM) and the twin support vector machine (TSVM) to enhance generalisation performance. To further increase the generalisation performance, we propose a least squares TSVM with Universum data (?-TSVM) in this paper. Our ?-TSVM possesses the following advantages: first, it exploits Universum data to improve generalisation performance. Besides, it implements the structural risk minimisation principle by adding a regularisation term to the objective function. Finally, it costs less computing time by solving two small-sized systems of linear equations instead of a single larger-sized quadratic programming problem. To verify the validity of our proposed algorithm, we conduct various experiments varying the number of labelled samples and the number of Universum data on data-sets including seven benchmark data-sets, Toy data, MNIST and Face images. Empirical experiments indicate that Universum data help improve and even stabilise prediction accuracy. Especially when fewer labelled samples are given, ?-TSVM is far superior to the improved LS-TSVM (ILS-TSVM), and slightly superior to the ?-TSVM.

  18. Specific Property of Ultrafine Particle Classification

    Institute of Scientific and Technical Information of China (English)

    LI Guo-hua; HUANG Zhi-chu; ZHANG You-lin

    2003-01-01

    In the process of ultrafine particle classification, the separation curve, which reflects the characteristics of the separating process, is frequently influenced by the characteristics of the separation flow field and operating parameters, etc. This paper introduces the concept of system deviation and deduces the calculating method of the separation curves. Meanwhile, it analyses the influences of the classification flow field's specific properties and some operating parameters on the separation curves. The results show that, in the process of ultrafine particle classification, the local vortex in the separation field improves the separation efficiency to a certain degree, but the accuracy will decrease; the coacervation action of particles will seriously influence the classification accuracy.

  19. Compressed classification learning with Markov chain samples.

    Science.gov (United States)

    Cao, Feilong; Dai, Tenghui; Zhang, Yongquan; Tan, Yuanpeng

    2014-02-01

    In this article, we address the problem of compressed classification learning. A generalization bound for the support vector machine (SVM) compressed classification algorithm with uniformly ergodic Markov chain samples is established. This bound indicates that the accuracy of the SVM classifier in the compressed domain is close to that of the best classifier in the data domain. In this sense, compressed learning is shown to avoid the curse of dimensionality in the learning process. In addition, we show that compressed classification learning reduces the learning time at the price of decreasing the classification accuracy, but the decrement can be controlled. The numerical experiments further verify the results claimed in this article.
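
    A minimal sketch of compressed classification learning in practice: project the data to a lower dimension with a random matrix, then train an SVM in the compressed domain. The Markov-chain sampling analysis of the paper is theoretical and not shown; the target dimension is an illustrative assumption.

    ```python
    from sklearn.random_projection import GaussianRandomProjection
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    compressed_svm = make_pipeline(
        GaussianRandomProjection(n_components=64, random_state=0),  # compression step
        LinearSVC(),                                                # learning in the compressed domain
    )
    # compressed_svm.fit(X_train, y_train)
    # accuracy = compressed_svm.score(X_test, y_test)
    ```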

  20. Entidades fiscalizadoras superiores y accountability

    OpenAIRE

    Estela Moreno, María

    2016-01-01

    OBJECTIVES OF THE THESIS: The general objective of this work is to establish the level of effectiveness of Supreme Audit Institutions (Entidades Fiscalizadoras Superiores, EFS) as an assigned agency and tool of horizontal accountability, through an assessment of their institutional design and of the quality of their final products, the audit reports, with the following specific objectives: 1. To survey the notions of accountability, updating the state of the art on the question. 2. To analyze the ...

  1. Transportation Modes Classification Using Sensors on Smartphones

    Directory of Open Access Journals (Sweden)

    Shih-Hau Fang

    2016-08-01

    Full Text Available This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper include the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms including decision trees, K-nearest neighbor, and support vector machine to classify the user’s transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy, in which the support vector machine provides the best performance in classification accuracy whereas it consumes the largest prediction time. This paper also investigates the vehicle classification mode and compares the results with that of the transportation modes.

  2. Transportation Modes Classification Using Sensors on Smartphones.

    Science.gov (United States)

    Fang, Shih-Hau; Liao, Hao-Hsiang; Fei, Yu-Xiang; Chen, Kai-Hsiang; Huang, Jen-Wei; Lu, Yu-Ding; Tsao, Yu

    2016-08-19

    This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper include the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms including decision trees, K-nearest neighbor, and support vector machine to classify the user's transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy, in which the support vector machine provides the best performance in classification accuracy whereas it consumes the largest prediction time. This paper also investigates the vehicle classification mode and compares the results with that of the transportation modes.
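
    A minimal sketch of window-based feature extraction from a smartphone sensor stream followed by one of the classifiers named above (a decision tree); window length and the feature set are illustrative assumptions, not the paper's exact features.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def window_features(signal, window=256):
        """signal: (n_samples, 3) accelerometer/gyroscope/magnetometer stream; one feature row per window."""
        feats = []
        for start in range(0, len(signal) - window + 1, window):
            w = signal[start:start + window]
            mag = np.linalg.norm(w, axis=1)          # orientation-independent magnitude
            feats.append([mag.mean(), mag.std(), mag.min(), mag.max(),
                          np.abs(np.diff(mag)).mean()])
        return np.array(feats)

    # Usage sketch:
    # X = np.vstack([window_features(s) for s in sensor_streams])
    # clf = DecisionTreeClassifier(max_depth=8).fit(X, mode_labels)
    ```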

  3. Variation in the oxytocin receptor gene is associated with behavioral and neural correlates of empathic accuracy

    DEFF Research Database (Denmark)

    Laursen, Helle Ruff; Siebner, Hartwig Roman; Haren, Tina

    2014-01-01

    , but not the SLC6A4 5-HTTLPR, were associated with significant differences in empathic accuracy, with CC- and AA-carriers, respectively, displaying higher empathic accuracy. For OXTR rs2268498 there was also a genotype difference in the correlation between empathic accuracy and activity in the superior temporal...

  4. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  5. Optimizing selection of training and auxiliary data for operational land cover classification for the LCMAP initiative

    Science.gov (United States)

    Zhu, Zhe; Gallant, Alisa L.; Woodcock, Curtis E.; Pengra, Bruce; Olofsson, Pontus; Loveland, Thomas R.; Jin, Suming; Dahal, Devendra; Yang, Limin; Auch, Roger F.

    2016-12-01

    The U.S. Geological Survey's Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4-8. Training data were selected from map products of the U.S. Geological Survey's Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers contained within the training data based on their spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening "Function of mask" (Fmask) statistics, and (c) two variables from the change detection results of CCDC. We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow
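
    A minimal sketch of the training-sample allocation rule described above: pixels are drawn proportionally to class occurrence, with a floor of 600 and a cap of 8,000 samples per class within a total budget of about 20,000 pixels. The function names and the handling of very rare classes (take all available pixels) are assumptions.

    ```python
    def allocate_training_samples(class_counts, total=20000, floor=600, cap=8000):
        """class_counts: dict {class_name: number of available pixels}. Returns samples per class."""
        n_total = sum(class_counts.values())
        alloc = {}
        for c, n in class_counts.items():
            proportional = round(total * n / n_total)            # proportional share
            alloc[c] = min(max(proportional, floor), cap, n)     # apply floor, cap, availability
        return alloc

    # Example: allocate_training_samples({"water": 5_000, "forest": 600_000, "developed": 150_000})
    ```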

  6. Wavelet features in motion data classification

    Science.gov (United States)

    Szczesna, Agnieszka; Świtoński, Adam; Słupik, Janusz; Josiński, Henryk; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of motion data classification based on the results of multiresolution analysis implemented in the form of a quaternion lifting scheme. The scheme operates directly on time series of rotations coded in the form of a unit quaternion signal. In this work, new features derived from wavelet energy and entropy are proposed. To validate the approach, a gait database containing data of 30 different humans is used. The obtained results are satisfactory: the classification has an accuracy of over 91%.

  7. Avaliação da classificação digital de povoamentos florestais em imagens de satélite através de índices de acurácia Digital classification assessment of forest stands in satellite images using accuracy indices

    Directory of Open Access Journals (Sweden)

    Édson Luis Bolfe

    2004-02-01

    Full Text Available The use of forest raw products has increased significantly in the last decades. The search for high productivity has led to the introduction of exotic species, mainly Eucalyptus sp. and Pinus sp. This work evaluated the precision of the digital classification obtained from surveying planted and natural forest stands in the region of the map sheet of Cachoeira do Sul - RS, using geoprocessing techniques, remote sensing, GIS (geographic information system) and GPS (global positioning system). It was verified that the area is occupied by natural vegetation (35.54%), Pinus sp. (1.89%) and Eucalyptus sp. (0.77%), with the following precision values for the supervised digital classification: overall accuracy (85.23%), Kappa (84.90%) and Tau (77.74%). It was concluded that the three accuracy indices can be used, although Kappa and Tau were more consistent.
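
    A minimal sketch of the three agreement indices compared above, computed from a confusion matrix: overall accuracy, Cohen's Kappa, and the Tau coefficient with equal a-priori class probabilities (1/M).

    ```python
    import numpy as np

    def accuracy_indices(cm):
        """cm: (M, M) confusion matrix, rows = reference classes, columns = classified classes."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                     # overall accuracy
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2     # chance agreement for Kappa
        kappa = (po - pe) / (1 - pe)
        pr = 1.0 / cm.shape[0]                                    # equal-probability prior for Tau
        tau = (po - pr) / (1 - pr)
        return po, kappa, tau
    ```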

  8. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    Science.gov (United States)

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to a microarray gene expression profile in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used, which include: colon, leukemia, and lung. In addition, another three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique: mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification.

  9. Negative node count improvement prognostic prediction of the seventh edition of the TNM classification for gastric cancer.

    Science.gov (United States)

    Deng, Jingyu; Zhang, Rupeng; Zhang, Li; Liu, Yong; Hao, Xishan; Liang, Han

    2013-01-01

    To demonstrate that the seventh edition of the tumor-node-metastasis (TNM) classification for gastric cancer (GC) should be updated with the number of negative lymph nodes for the improvement of its prognostic prediction accuracy. Clinicopathological data of 769 GC patients who underwent curative gastrectomy with lymphadenectomy between 1997 and 2006 were retrospectively analyzed to demonstrate that the prognostic efficiency of the seventh edition of the TNM classification can be improved by combining it with the number of negative lymph nodes. With the Cox regression multivariate analysis, the seventh edition of the TNM classification, the number of negative nodes, the type of gastrectomy, and the depth of tumor invasion (T stage) were identified as independent factors for predicting the overall survival of GC patients. Furthermore, we confirmed that the T stage-N stage-number of negative lymph nodes-metastasis (TNnM) classification is the most appropriate prognostic predictor of GC patients by using a case-control matched design and multinomial logistic regression. Finally, we were able to clarify that the TNnM classification may provide more precise survival differences among the different TNM sub-stages of GC by using the measure of agreement (Kappa coefficient), the McNemar value, the Akaike information criterion, and the Bayesian Information Criterion compared with the seventh edition of the TNM classification. The number of negative nodes, as an important prognostic predictor of GC, can improve the prognostic prediction efficiency of the seventh edition of the TNM classification for GC, which should be recommended for conventional clinical applications.

  10. Negative node count improvement prognostic prediction of the seventh edition of the TNM classification for gastric cancer.

    Directory of Open Access Journals (Sweden)

    Jingyu Deng

    Full Text Available OBJECTIVE: To demonstrate that the seventh edition of the tumor-node-metastasis (TNM) classification for gastric cancer (GC) should be updated with the number of negative lymph nodes for the improvement of its prognostic prediction accuracy. METHODS: Clinicopathological data of 769 GC patients who underwent curative gastrectomy with lymphadenectomy between 1997 and 2006 were retrospectively analyzed to demonstrate that the prognostic efficiency of the seventh edition of the TNM classification can be improved by combining it with the number of negative lymph nodes. RESULTS: With the Cox regression multivariate analysis, the seventh edition of the TNM classification, the number of negative nodes, the type of gastrectomy, and the depth of tumor invasion (T stage) were identified as independent factors for predicting the overall survival of GC patients. Furthermore, we confirmed that the T stage-N stage-number of negative lymph nodes-metastasis (TNnM) classification is the most appropriate prognostic predictor of GC patients by using a case-control matched design and multinomial logistic regression. Finally, we were able to clarify that the TNnM classification may provide more precise survival differences among the different TNM sub-stages of GC by using the measure of agreement (Kappa coefficient), the McNemar value, the Akaike information criterion, and the Bayesian Information Criterion compared with the seventh edition of the TNM classification. CONCLUSION: The number of negative nodes, as an important prognostic predictor of GC, can improve the prognostic prediction efficiency of the seventh edition of the TNM classification for GC, which should be recommended for conventional clinical applications.

  11. The sentence superiority effect revisited.

    Science.gov (United States)

    Snell, Joshua; Grainger, Jonathan

    2017-11-01

    A sentence superiority effect was investigated using post-cued word-in-sequence identification with the rapid parallel visual presentation (RPVP) of four horizontally aligned words. The four words were presented for 200ms followed by a post-mask and cue for partial report. They could form a grammatically correct sentence or were formed of the same words in a scrambled agrammatical sequence. Word identification was higher in the syntactically correct sequences, and crucially, this sentence superiority effect did not vary as a function of the target's position in the sequence. Cloze probability measures for words at the final, arguably most predictable position, revealed overall low values that did not interact with the effects of sentence context, suggesting that these effects were not driven by word predictability. The results point to a level of parallel processing across multiple words that enables rapid extraction of their syntactic categories. These generate a sentence-level representation that constrains the recognition process for individual words, thus facilitating parallel word processing when the sequence is grammatically sound. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. A Hidden Markov Models Approach for Crop Classification: Linking Crop Phenology to Time Series of Multi-Sensor Remote Sensing Data

    Directory of Open Access Journals (Sweden)

    Sofia Siachalou

    2015-03-01

    Full Text Available Vegetation monitoring and mapping based on multi-temporal imagery has recently received much attention due to the plethora of medium-high spatial resolution satellites and the improved classification accuracies attained compared to uni-temporal approaches. Efficient image processing strategies are needed to exploit the phenological information present in temporal image sequences and to limit data redundancy and computational complexity. Within this framework, we implement the theory of Hidden Markov Models in crop classification, based on the time-series analysis of phenological states, inferred by a sequence of remote sensing observations. More specifically, we model the dynamics of vegetation over an agricultural area of Greece, characterized by spatio-temporal heterogeneity and small-sized fields, using RapidEye and Landsat ETM+ imagery. In addition, the classification performance of image sequences with variable spatial and temporal characteristics is evaluated and compared. The classification model considering one RapidEye and four pan-sharpened Landsat ETM+ images was found superior, resulting in a conditional kappa from 0.77 to 0.94 per class and an overall accuracy of 89.7%. The results highlight the potential of the method for operational crop mapping in Euro-Mediterranean areas and provide some hints for optimal image acquisition windows regarding major crop types in Greece.
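
    A minimal sketch of the per-class HMM idea follows, using the third-party hmmlearn package and synthetic per-pixel time series. The crop names, feature values, and model sizes are illustrative assumptions, not the RapidEye/Landsat ETM+ pipeline used in the paper: one Gaussian HMM is trained per crop class, and a new pixel is assigned to the class whose model yields the highest sequence log-likelihood.

    ```python
    import numpy as np
    from hmmlearn import hmm   # third-party package: pip install hmmlearn

    rng = np.random.default_rng(0)

    def toy_series(n_pixels, n_dates, offset):
        # one NDVI-like value per acquisition date, shape (n_dates, 1) per pixel
        return [offset + 0.05 * rng.standard_normal((n_dates, 1))
                + np.linspace(0.1, 0.8, n_dates)[:, None] for _ in range(n_pixels)]

    train = {"wheat": toy_series(20, 8, 0.10), "maize": toy_series(20, 8, 0.30)}

    # train one Gaussian HMM per crop class on its pixel time series
    models = {}
    for crop, series in train.items():
        X = np.vstack(series)                    # stacked observations
        lengths = [len(s) for s in series]       # per-sequence lengths
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=50, random_state=0)
        m.fit(X, lengths)
        models[crop] = m

    # classify a new pixel by the model with the highest log-likelihood
    pixel = toy_series(1, 8, 0.28)[0]
    print(max(models, key=lambda c: models[c].score(pixel)))
    ```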

  13. Data selection in EEG signals classification.

    Science.gov (United States)

    Wang, Shuaifang; Li, Yan; Wen, Peng; Lai, David

    2016-03-01

    Alcoholism can be detected by analyzing electroencephalogram (EEG) signals. However, analyzing multi-channel EEG signals is a challenging task, which often requires complicated calculations and long execution times. This paper proposes three data selection methods to extract representative data from the EEG signals of alcoholics. The methods are the principal component analysis based on graph entropy (PCA-GE), the channel selection based on graph entropy (GE) difference, and the mathematical combination channel selection. For comparison purposes, the selected data from the three methods are then classified by three classifiers: the J48 decision tree, the K-nearest neighbor, and the Kstar, separately. The experimental results show that the proposed methods are successful in selecting data without compromising the classification accuracy in discriminating the EEG signals from alcoholics and non-alcoholics. Among them, the proposed PCA-GE method uses only 29.69% of the whole data and 29.5% of the computation time but achieves a 94.5% classification accuracy. The channel selection method based on the GE difference also gains a 91.67% classification accuracy by using only 29.69% of the full size of the original data. Using as little data as possible without sacrificing the final classification accuracy is useful for online EEG analysis and classification application design.
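
    A minimal sketch of the "select a channel subset, then classify" workflow is shown below. The data are synthetic, the labels are toy values, and per-channel variance stands in for the paper's graph-entropy ranking, so the numbers printed are only illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_channels = 120, 64
    X = rng.standard_normal((n_trials, n_channels))   # synthetic per-trial EEG features
    y = rng.integers(0, 2, size=n_trials)             # toy labels: 0 = control, 1 = alcoholic

    # keep roughly 30% of channels; variance is a simple stand-in for the
    # graph-entropy criteria described in the abstract
    n_keep = int(0.3 * n_channels)
    keep = np.argsort(X.var(axis=0))[-n_keep:]

    for name, data in [("all channels", X), ("selected channels", X[:, keep])]:
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), data, y, cv=5).mean()
        print(f"{name}: CV accuracy = {acc:.2f}")   # near chance level on random data
    ```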

  14. Accuracy of Sphygmomanometers

    OpenAIRE

    Basak, Okay

    2014-01-01

    One of the factors affecting the accuracy of blood pressure readings is the equipment used. Defects or inaccuracy of aneroid sphygmomanometers may be a source of error in blood pressure measurement. We inspected 100 sphygmomanometers for physical defects and assessed their accuracy against a standard mercury manometer at four different pressure points. Forty-six of the 100 sphygmomanometers were determined to be intolerant (deviation from the mercury manometer by greater than ±3 mm Hg at two or more...

  15. Accuracy of migrant landbird habitat maps produced from LANDSAT TM data: Two case studies in southern Belize

    Science.gov (United States)

    Spruce, J.P.; Sader, S.; Robbins, C.S.; Dowell, B.A.; Wilson, Marcia H.; Sader, Steven A.

    1995-01-01

    The study investigated the utility of Landsat TM data used to produce geo-referenced habitat maps for two study areas (Toledo and Stann Creek). Locational and non-site-specific map accuracy was evaluated by stratified random sampling and statistical analysis of satellite classification results (SCR) versus air photo interpretation results (PIR) for the overall classification and individual classes. The effect of classification scheme specificity on map accuracy was also assessed. A decision criterion was developed for the minimum acceptable level of map performance (i.e., classification accuracy and scheme specificity). A satellite map was deemed acceptable if it had a useful degree of classification specificity, plus an adequate overall locational or non-site-specific agreement between SCR and PIR. For the most detailed revised classification, overall locational accuracy ranges from 52% (5 classes) for the Toledo site to 63% (9 classes) for the Stann Creek site. For the least detailed revised classification, overall locational accuracy ranges from 91% (2 classes) for Toledo to 86% (5 classes) for Stann Creek. Considering both locational and non-site-specific accuracy results, the most detailed yet sufficiently accurate classification for both sites includes low/medium/tall broadleaf forest, broadleaf forest scrub and herb-dominated openings. For these classifications, the overall locational accuracy is 72% for Toledo (4 classes) and 75% for Stann Creek (7 classes). This level of classification detail is suitable for aiding many analyses of migrant landbird habitat use.
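
    The accuracy figures quoted above come from comparing map classes against reference classes. A short sketch of how overall (locational) accuracy and per-class producer's and user's accuracies are derived from a confusion matrix is given below; the class names echo the abstract, but the counts are invented for illustration.

    ```python
    import numpy as np

    # rows = photo-interpretation reference class, columns = satellite map class
    classes = ["broadleaf forest", "forest scrub", "herb openings"]
    cm = np.array([[50,  8,  2],
                   [10, 35,  5],
                   [ 3,  6, 31]])

    overall = np.trace(cm) / cm.sum()            # overall locational accuracy
    producers = np.diag(cm) / cm.sum(axis=1)     # omission-error view (per reference class)
    users = np.diag(cm) / cm.sum(axis=0)         # commission-error view (per map class)

    print(f"overall accuracy: {overall:.2%}")
    for c, p, u in zip(classes, producers, users):
        print(f"{c}: producer's {p:.2%}, user's {u:.2%}")
    ```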

  16. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  17. Classification in Australia.

    Science.gov (United States)

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  18. Classification of Electrocardiogram Signals With Extreme Learning Machine and Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    S. Karpagachelvi

    2011-01-01

    Full Text Available The ECG is one of the most effective diagnostic tools to detect cardiac diseases. It is a method to measure and record different electrical potentials of the heart. The electrical potential generated by electrical activity in cardiac tissue is measured on the surface of the human body. Current flow, in the form of ions, signals contraction of cardiac muscle fibers leading to the heart's pumping action. This ECG can be classified as normal and abnormal signals. In this paper, a thorough experimental study was conducted to show the superiority of the generalization capability of the Relevance Vector Machine (RVM) compared with the Extreme Learning Machine (ELM) approach in the automatic classification of ECG beats. The generalization performance of the ELM classifier has not achieved the maximum accuracy of ECG signal classification. To achieve the maximum accuracy, the RVM classifier is designed by searching for the best values of the parameters that tune its discriminant function and, upstream, by looking for the best subset of features that feed the classifier. The experiments were conducted on the ECG data from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database to classify five kinds of abnormal waveforms and normal beats. In particular, the sensitivity of the RVM classifier is tested and compared with that of the ELM. Both approaches are compared using raw input data and preprocessed data. The obtained results clearly confirm the superiority of the RVM approach when compared to traditional classifiers.

  19. A BENCHMARK TO SELECT DATA MINING BASED CLASSIFICATION ALGORITHMS FOR BUSINESS INTELLIGENCE AND DECISION SUPPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    Pardeep Kumar

    2012-09-01

    Full Text Available In today's business scenario, we perceive major changes in how managers use computerized support in making decisions. As more decision-makers use computerized support in decision making, decision support systems (DSS) are developing from their beginnings as personal support tools and are becoming a common resource in an organization. DSS serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance. Data mining has a vital role in extracting important information to support the decision making of a decision support system. It has been an active field of research over the last two to three decades. Integration of data mining and decision support systems (DSS) can lead to improved performance and can enable the tackling of new types of problems. Artificial Intelligence methods are improving the quality of decision support and have become embedded in many applications, ranging from anti-lock automobile brakes to today's interactive search engines. It provides various machine learning techniques to support data mining. Classification is one of the main and most valuable tasks of data mining. Several types of classification algorithms have been suggested, tested and compared to determine the future trends based on unseen data. No single algorithm has been found to be superior to all others for all data sets. Various issues such as predictive accuracy, training time to build the model, robustness and scalability must be considered and can involve trade-offs, further complicating the quest for an overall superior method. The objective of this paper is to compare various classification algorithms that have been frequently used in data mining for decision support systems. Three decision tree-based algorithms, one artificial neural network, one statistical algorithm, one support vector machine with and without AdaBoost, and one clustering algorithm are tested and compared on
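
    A benchmark of this kind boils down to looping several classifiers over the same data with cross-validation. The sketch below uses a built-in scikit-learn dataset and a representative (not identical) set of classifiers as stand-ins for the algorithm roster described in the paper.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.svm import SVC
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.neural_network import MLPClassifier

    X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset

    candidates = {
        "decision tree": DecisionTreeClassifier(random_state=0),
        "naive Bayes": GaussianNB(),
        "SVM": make_pipeline(StandardScaler(), SVC()),
        "AdaBoost": AdaBoostClassifier(random_state=0),
        "neural network": make_pipeline(StandardScaler(),
                                        MLPClassifier(max_iter=1000, random_state=0)),
    }

    # predictive accuracy is only one of the criteria discussed in the paper;
    # training time, robustness and scalability would need separate measurements
    for name, clf in candidates.items():
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name:15s} mean accuracy = {scores.mean():.3f}")
    ```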

  20. Hyperspectral image classification based on NMF Features Selection Method

    Science.gov (United States)

    Abe, Bolanle T.; Jordaan, J. A.

    2013-12-01

    Hyperspectral instruments are capable of collecting hundreds of images corresponding to wavelength channels for the same area on the earth surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and pose a problem known as the curse of dimensionality. In addition, higher correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is very crucial so as to obtain good classification accuracy results. This paper presents a new feature selection technique. A Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain reduced relevant features in the input domain of each class label. This aims to reduce classification error and the dimensionality of the classification problem. The Indian Pines dataset from Northwest Indiana is used to evaluate the performance of the proposed method through experiments of feature selection and classification. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is selected as a tool to implement the classification using Support Vector Machines and Neural Networks. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how the feature-set size affects classification accuracy. The results obtained show that the performances of the classifiers are significant. The study makes a positive contribution to the problems of hyperspectral imagery by exploring NMF, SVMs and NN to improve classification accuracy. The performances of the classifiers are valuable for decision makers weighing trade-offs between method accuracy and method complexity.
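
    The band-reduction idea can be sketched with scikit-learn rather than WEKA, which is what the paper actually used: non-negative "spectra" are factorized with NMF and the reduced features are fed to an SVM. The data below are random and the component count is an arbitrary assumption, so this only illustrates the pipeline shape.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_pixels, n_bands = 500, 200
    X = rng.random((n_pixels, n_bands))          # synthetic non-negative "spectra"
    y = rng.integers(0, 5, size=n_pixels)        # toy land-cover labels

    # reduce the spectral dimension with NMF (input must be non-negative)
    W = NMF(n_components=10, init="nndsvda", max_iter=500,
            random_state=0).fit_transform(X)

    for name, data in [("all bands", X), ("NMF features", W)]:
        acc = cross_val_score(SVC(), data, y, cv=3).mean()
        print(f"{name}: CV accuracy = {acc:.2f}")
    ```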

  1. Innovating Web Page Classification Through Reducing Noise

    Institute of Scientific and Technical Information of China (English)

    LI Xiaoli (李晓黎); SHI Zhongzhi(史忠植)

    2002-01-01

    This paper presents a new method that eliminates noise in Web page classification. It first describes the presentation of a Web page based on HTML tags. Then through a novel distance formula, it eliminates the noise in similarity measure. After carefully analyzing Web pages, we design an algorithm that can distinguish related hyperlinks from noisy ones. We can utilize non-noisy hyperlinks to improve the performance of Web page classification (the CAWN algorithm). For any page, we can classify it through the text and category of neighbor pages related to the page. The experimental results show that our approach improved classification accuracy.

  2. 78 FR 21116 - Superior Supplier Incentive Program

    Science.gov (United States)

    2013-04-09

    ... Department of the Navy Superior Supplier Incentive Program AGENCY: Department of the Navy, DoD. ACTION... policy that will establish a Superior Supplier Incentive Program (SSIP). Under the SSIP, contractors that..., performance, quality, and business relations would be granted Superior Supplier Status (SSS). Contractors...

  3. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...... classification (as by using transport layer port numbers, Deep Packet Inspection (DPI), statistical classification) and assessed their usefulness in particular areas. We found that the classification techniques based on port numbers are not accurate anymore as most applications use dynamic port numbers, while...... DPI is relatively slow, requires a lot of processing power, and causes a lot of privacy concerns. Statistical classifiers based on Machine Learning Algorithms (MLAs) were shown to be fast and accurate. At the same time, they do not consume a lot of resources and do not cause privacy concerns. However...

  4. Hyperspectral image classification using functional data analysis.

    Science.gov (United States)

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.

  5. Analytic radar micro-Doppler signatures classification

    Science.gov (United States)

    Oh, Beom-Seok; Gu, Zhaoning; Wang, Guan; Toh, Kar-Ann; Lin, Zhiping

    2017-06-01

    Due to its capability of capturing the kinematic properties of a target object, radar micro-Doppler signatures (m-DS) play an important role in radar target classification. This is particularly evident from the remarkable number of research papers published every year on m-DS for various applications. However, most of these works rely on the support vector machine (SVM) for target classification. It is well known that training an SVM is computationally expensive due to its nature of search to locate the supporting vectors. In this paper, the classifier learning problem is addressed by a total error rate (TER) minimization where an analytic solution is available. This largely reduces the search time in the learning phase. The analytically obtained TER solution is globally optimal with respect to the classification total error count rate. Moreover, our empirical results show that TER outperforms SVM in terms of classification accuracy and computational efficiency on a five-category radar classification problem.

  6. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    Science.gov (United States)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

    Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmarking is performed for widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, with superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
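
    The "texture features plus SVM with grid search" part of this pipeline can be sketched as follows. The images are synthetic noise patches (not VLE frames), the GLCM statistics are standard Haralick-style properties rather than the authors' modified features, and the parameter grid is an arbitrary assumption.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # scikit-image >= 0.19
    from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)

    def glcm_features(img):
        glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # synthetic 64x64 grayscale "frames": class 1 is noisier than class 0
    imgs = [np.clip(rng.normal(32, 4 + 6 * label, (64, 64)), 0, 63).astype(np.uint8)
            for label in (0, 1) for _ in range(30)]
    labels = np.repeat([0, 1], 30)

    X = np.array([glcm_features(im) for im in imgs])

    grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}, cv=5)
    grid.fit(X, labels)
    print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 2))
    ```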

  7. superior en México

    Directory of Open Access Journals (Sweden)

    César Mureddu Torres

    2008-01-01

    Full Text Available This article develops some of the challenges brought about by access to the information available on the Internet and what this implies. It mainly addresses the consequences of the current presence of a so-called knowledge society if the confusion between knowledge and information is maintained. For this reason, the mere management of information cannot be taken as defining the higher education function entrusted to universities. To do so would be to commit an error even more serious than the theoretical confusion between the aforementioned terms.

  8. Product Image Classification Based on Fusion Features

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-hui; LIU Jing-jing; YANG Li-jun

    2015-01-01

    Two key challenges raised by a product image classification system are classification precision and classification time. In some categories, the classification precision of the latest techniques in product image classification systems is still low. In this paper, we propose a local texture descriptor termed the fan refined local binary pattern, which captures more detailed information by integrating the spatial distribution into the local binary pattern feature. We compare our approach with different methods on a subset of product images from Amazon/eBay and parts of PI100, and the experimental results demonstrate that our proposed approach is superior to the existing methods. The highest classification precision is increased by 21% and the average classification time is reduced by 2/3.
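
    The proposed descriptor builds on the local binary pattern. As a hedged illustration of the underlying idea (not the fan refined variant itself), the sketch below computes a standard uniform LBP histogram for a toy image; that histogram is the kind of texture feature vector that would be fed to a classifier.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)  # toy product image

    P, R = 8, 1                                    # number of neighbours and radius
    lbp = local_binary_pattern(image, P, R, method="uniform")

    # histogram of LBP codes is the texture feature vector
    n_bins = P + 2                                 # uniform patterns + one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    print(hist.round(3))
    ```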

  9. Neural network parameters affecting image classification

    Directory of Open Access Journals (Sweden)

    K.C. Tiwari

    2001-07-01

    Full Text Available The study assesses the behaviour and impact of various neural network parameters and their effects on the classification accuracy of remotely sensed images, resulting in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts by identifying suitable areas for vehicular movement. Five parameters, namely training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor, were selected. In each case, sets of values were decided based on earlier reported work. Neural network-based classifications were carried out for as many as 450 combinations of these parameters. Finally, a graphical analysis of the results obtained was carried out to understand the relationship among these parameters. A table of recommended values for these parameters for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments as well as a more intricate analysis of the relationships.
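
    The parameter sweep described above can be approximated with scikit-learn's MLPClassifier. The dataset, parameter values, and grid size below are assumptions chosen to keep the example small; they are not the original IRS-1B LISS II experiment or its 450 combinations.

    ```python
    from itertools import product
    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_digits(return_X_y=True)            # stand-in image training samples

    results = []
    # sweep hidden nodes, learning rate, and momentum factor
    for hidden, lr, mom in product([16, 32, 64], [0.01, 0.1], [0.5, 0.9]):
        clf = MLPClassifier(hidden_layer_sizes=(hidden,), solver="sgd",
                            learning_rate_init=lr, momentum=mom,
                            max_iter=300, random_state=0)
        acc = cross_val_score(clf, X, y, cv=3).mean()
        results.append((acc, hidden, lr, mom))

    acc, hidden, lr, mom = max(results)
    print(f"best: {hidden} hidden nodes, lr={lr}, momentum={mom}, accuracy={acc:.3f}")
    ```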

  10. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. would have to be provided with much more labeled data.
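
    A simplified stand-in for the per-class GMM idea is sketched below: one Gaussian mixture is fitted to the small labeled subset of each class, and the remaining (unlabeled) samples are assigned to the class whose mixture gives the highest log-density. The data, labeled fraction, and component count are illustrative assumptions rather than the paper's scheme.

    ```python
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.mixture import GaussianMixture

    X, y = make_blobs(n_samples=600, centers=3, cluster_std=2.0, random_state=0)

    # pretend only 10% of the samples carry labels
    rng = np.random.default_rng(0)
    labeled = rng.choice(len(X), size=60, replace=False)

    # describe each class's feature space with one GMM fitted on its labeled samples
    gmms = {c: GaussianMixture(n_components=2, random_state=0)
               .fit(X[labeled][y[labeled] == c]) for c in np.unique(y[labeled])}

    # assign every sample to the class whose GMM gives the highest log-density
    order = sorted(gmms)
    scores = np.column_stack([gmms[c].score_samples(X) for c in order])
    pred = np.array(order)[scores.argmax(axis=1)]
    print("agreement with true labels: %.2f" % (pred == y).mean())
    ```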

  11. A comparison of classification techniques for glacier change detection using multispectral images

    Directory of Open Access Journals (Sweden)

    Rahul Nijhawan

    2016-09-01

    Full Text Available The main aim of this paper is to compare the classification accuracies of glacier change detection for the following classifiers: a sub-pixel classification algorithm, indices-based supervised classification and an object-based algorithm using Landsat imageries. It was observed that the shadow effect was not removed in the sub-pixel based classification, whereas it was removed by the indices method. Further, the accuracy was improved by object-based classification. The objective of the paper is to analyse different classification algorithms and interpret which one gives the best results in mountainous regions. The study showed that the object-based method was best in mountainous regions, as optimum results were obtained in the shadow-covered regions.

  12. [Examination of the hypothesis 'the factors and mechanisms of superiority'].

    Science.gov (United States)

    Sierra-Fitzgerald, O; Quevedo-Caicedo, J; López-Calderón, M G

    INTRODUCTION. The hypothesis of Geschwind and Galaburda suggests that specific cognitive superiority arises as a result of an alteration in the development of the nervous system. In this article we review the coexistence of superiority and inferiority. PATIENTS AND METHODS. A study was made of six children aged between 6 and 8 years at the Instituto de Belles Artes Antonio Maria Valencia in Cali, Colombia, with an educational level between second and third grade of primary school and of medium-low socioeconomic status. The children were considered to have superior musical ability by music experts, which is the way in which the concept of superiority was tested. The concept of inferiority was tested by neuropsychological test scores 1.5 SD or more below normal for the same age. We estimated the perinatal neurological risk in each case. Subsequently the children's general intelligence and specific cognitive abilities were evaluated. In the first case the WISC-R and MSCA were used. The neuropsychological profiles were obtained by broad evaluation using a verbal fluency test, a test using counters, the Boston vocabulary test, the Wechsler memory scale, a sequential verbal memory test, a superimposed figures test, the Piaget-Head battery, the Rey-Osterrieth complex figure and the Wisconsin card classification test. The RESULTS showed slight/moderate deficits in practical construction ability and mild defects of memory and concept abilities. In general the results supported the hypothesis tested. The mechanisms of superiority proposed in the classical hypothesis mainly involve the contralateral hemisphere; in this study the ipsilateral mechanism was more important.

  13. Segmentation and classification of colon glands with deep convolutional neural networks and total variation regularization

    Directory of Open Access Journals (Sweden)

    Philipp Kainz

    2017-10-01

    Full Text Available Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNNs) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.

  14. Application of Contrast-Enhanced Ultrasound in Cystic Pancreatic Lesions Using a Simplified Classification Diagnostic Criterion

    Directory of Open Access Journals (Sweden)

    Zhihui Fan

    2015-01-01

    Full Text Available Objective. Classification diagnosis was performed for cystic pancreatic lesions using ultrasound (US) and contrast-enhanced ultrasound (CEUS) to explore the diagnostic value of CEUS by comparison with enhanced CT. Methods. Sixty-four cases with cystic pancreatic lesions were included in this study. The cystic lesions of the pancreas were classified into four types by US, CEUS, and CT: type I unilocular cysts; type II microcystic lesions; type III macrocystic lesions; and type IV cystic lesions with solid components or irregular thickening of the cystic wall or septa. Results. Eighteen type I, 7 type II, 10 type III, and 29 type IV cases were diagnosed by CT. The classification results by US were as follows: 6 type I; 5 type II; 4 type III; and 49 type IV cases. Compared with the results by enhanced CT, the kappa value was 0.36. Using CEUS, 15, 6, 12, and 31 cases were diagnosed as types I–IV, respectively. The kappa value was 0.77. Conclusion. CEUS has obvious superiority over US in the classification diagnostic accuracy of cystic pancreatic lesions, and CEUS results showed substantial agreement with enhanced CT. CEUS could contribute to the differential diagnosis of cystic pancreatic diseases.
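
    The agreement statistic used above is Cohen's kappa, which compares two raters' class assignments while correcting for chance agreement. The sketch below computes it with scikit-learn; the type labels follow the abstract but the per-lesion assignments are invented.

    ```python
    from sklearn.metrics import cohen_kappa_score

    # classification types I-IV assigned to the same (hypothetical) lesions
    ceus = ["I", "I", "II", "III", "IV", "IV", "IV", "III", "II", "I", "IV", "IV"]
    ct   = ["I", "I", "II", "III", "IV", "IV", "III", "III", "II", "II", "IV", "IV"]

    print("kappa (CEUS vs. enhanced CT):", round(cohen_kappa_score(ceus, ct), 2))
    ```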

  15. Stochastic gradient boosting classification trees for forest fuel types mapping through airborne laser scanning and IRS LISS-III imagery

    Science.gov (United States)

    Chirici, G.; Scotti, R.; Montaghi, A.; Barbati, A.; Cartisano, R.; Lopez, G.; Marchetti, M.; McRoberts, R. E.; Olsson, H.; Corona, P.

    2013-12-01

    This paper presents an application of Airborne Laser Scanning (ALS) data in conjunction with an IRS LISS-III image for mapping forest fuel types. For two study areas of 165 km2 and 487 km2 in Sicily (Italy), 16,761 plots of size 30-m × 30-m were distributed using a tessellation-based stratified sampling scheme. ALS metrics and spectral signatures from IRS extracted for each plot were used as predictors to classify forest fuel types observed and identified by photointerpretation and fieldwork. Following use of traditional parametric methods that produced unsatisfactory results, three non-parametric classification approaches were tested: (i) classification and regression tree (CART), (ii) the CART bagging method called Random Forests, and (iii) the CART bagging/boosting stochastic gradient boosting (SGB) approach. This contribution summarizes previous experiences using ALS data for estimating forest variables useful for fire management in general and for fuel type mapping, in particular. It summarizes characteristics of classification and regression trees, presents the pre-processing operation, the classification algorithms, and the achieved results. The results demonstrated superiority of the SGB method with overall accuracy of 84%. The most relevant ALS metric was canopy cover, defined as the percent of non-ground returns. Other relevant metrics included the spectral information from IRS and several other ALS metrics such as percentiles of the height distribution, the mean height of all returns, and the number of returns.
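
    Stochastic gradient boosting of classification trees is available in scikit-learn, where setting subsample below 1.0 provides the "stochastic" part. The sketch below uses synthetic stand-ins for the ALS metrics and IRS spectral predictors, so the accuracy printed is not comparable with the 84% reported in the paper.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # synthetic stand-ins for predictors such as canopy cover, height percentiles
    # and spectral bands; four toy fuel-type classes
    X, y = make_classification(n_samples=1000, n_features=12, n_informative=6,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    sgb = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                     subsample=0.5, random_state=0)  # subsample<1 => stochastic
    sgb.fit(X_tr, y_tr)
    print("overall accuracy:", round(accuracy_score(y_te, sgb.predict(X_te)), 2))
    print("most relevant predictor index:", sgb.feature_importances_.argmax())
    ```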

  16. Fusion analysis of functional MRI data for classification of individuals based on patterns of activation.

    Science.gov (United States)

    Ramezani, Mahdi; Abolmaesumi, Purang; Marble, Kris; Trang, Heather; Johnsrude, Ingrid

    2015-06-01

    Classification of individuals based on patterns of brain activity observed in functional MRI contrasts may be helpful for diagnosis of neurological disorders. Prior work on classification based on these patterns has primarily focused on using a single contrast, which does not take advantage of complementary information that may be available in multiple contrasts. Where multiple contrasts are used, the objective has been only to identify the joint, distinct brain activity patterns that differ between groups of subjects; not to use the information to classify individuals. Here, we use joint Independent Component Analysis (jICA) within a Support Vector Machine (SVM) classification method, and take advantage of the relative contribution of activation patterns generated from multiple fMRI contrasts to improve classification accuracy. Young (age: 19-26) and older (age: 57-73) adults (16 each) were scanned while listening to noise alone and to speech degraded with noise, half of which contained meaningful context that could be used to enhance intelligibility. Functional contrasts based on these conditions (and a silent baseline condition) were used within jICA to generate spatially independent joint activation sources and their corresponding modulation profiles. Modulation profiles were used within a non-linear SVM framework to classify individuals as young or older. Results demonstrate that a combination of activation maps across the multiple contrasts yielded an area under the ROC curve of 0.86, superior to classification resulting from individual contrasts. Moreover, class separability, measured by a divergence criterion, was substantially higher when using the combination of activation maps.

  17. New Dandelion Algorithm Optimizes Extreme Learning Machine for Biomedical Classification Problems

    Directory of Open Access Journals (Sweden)

    Xiguang Li

    2017-01-01

    Full Text Available Inspired by the behavior of dandelion sowing, a novel swarm intelligence algorithm, namely the dandelion algorithm (DA), is proposed for global optimization of complex functions in this paper. In DA, the dandelion population is divided into two subpopulations, and different subpopulations undergo different sowing behaviors. Moreover, another sowing method is designed to jump out of local optima. In order to demonstrate the validity of DA, we compare the proposed algorithm with other existing algorithms, including the bat algorithm, particle swarm optimization, and the enhanced fireworks algorithm. Simulations show that the proposed algorithm appears to be much superior to the other algorithms. At the same time, the proposed algorithm can be applied to optimize the extreme learning machine (ELM) for biomedical classification problems, and the effect is considerable. Finally, we use different fusion methods to form different fusion classifiers, and the fusion classifiers can achieve higher accuracy and better stability to some extent.

  18. Induction of decision trees and Bayesian classification applied to diagnosis of sport injuries.

    Science.gov (United States)

    Zelic, I; Kononenko, I; Lavrac, N; Vuga, V

    1997-12-01

    Machine learning techniques can be used to extract knowledge from data stored in medical databases. In our application, various machine learning algorithms were used to extract diagnostic knowledge which may be used to support the diagnosis of sport injuries. The applied methods include variants of the Assistant algorithm for top-down induction of decision trees, and variants of the Bayesian classifier. The available dataset was insufficient for reliable diagnosis of all sport injuries considered by the system. Consequently, expert-defined diagnostic rules were added and used as pre-classifiers or as generators of additional training instances for diagnoses for which only few training examples were available. Experimental results show that the classification accuracy and the explanation capability of the naive Bayesian classifier with the fuzzy discretization of numerical attributes were superior to other methods and estimated as the most appropriate for practical use.

  19. Integrated knowledge-based modeling and its application for classification problems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Knowledge discovered directly from data can hardly avoid being biased towards the collected experimental data, whereas expert systems are always hampered by the manual knowledge acquisition bottleneck. It is therefore plausible that integrating the knowledge embedded in data with that possessed by experts can lead to a superior modeling approach. Aiming at classification problems, a novel integrated knowledge-based modeling methodology, oriented by experts and driven by data, is proposed. It starts with experts identifying modeling parameters; the input space is then partitioned, followed by fuzzification. Afterwards, single rules are generated and then aggregated to form a rule base, on which a fuzzy inference mechanism is proposed. The experts are allowed to make necessary changes to the rule base to improve the model accuracy. A real-world application, welding fault diagnosis, is presented to demonstrate the effectiveness of the methodology.

  20. Deep Learning in Label-free Cell Classification

    National Research Council Canada - National Science Library

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-01-01

    .... Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification...

  1. Evaluating Measurement Accuracy

    CERN Document Server

    Rabinovich, Semyon G

    2010-01-01

    The goal of Evaluating Measurement Accuracy: A Practical Approach is to present methods for estimating the accuracy of measurements performed in industry, trade, and scientific research. Although multiple measurements are the focus of current theory, single measurements are the ones most commonly used. This book answers fundamental questions not addressed by present theory, such as how to discover the complete uncertainty of a measurement result. In developing a general theory of processing experimental data, this book, for the first time, presents the postulates of the theory of measurements. It introduces several new terms and definitions about the relationship between the accuracy of measuring instruments and measurements utilizing these instruments. It also offers well-grounded and practical methods for combining the components of measurement inaccuracy. From developing the theory of indirect measurements to proposing new methods of reduction in place of the traditional ones, this work encompasses the ful...

  2. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery

    OpenAIRE

    Li, Guiying; Lu, Dengsheng; MORAN, EMILIO; Hetrick, Scott

    2011-01-01

    This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms – maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based clas...

  3. Escuela Superior de Palos Verdes

    Directory of Open Access Journals (Sweden)

    Neutra, Richard J.

    1965-02-01

    Full Text Available Before initiating the building operations for the «Palos Verdes» School, the site was divided into two large horizontal surfaces at different levels. The lower one accommodates the playing fields, a car park, the physical training building, and the shop and ancillary buildings. On the higher surface, to the west of the access road, there is a car park together with the building and plot of ground devoted to agricultural training; to the east, another car park, the building for literary subjects, and the general-purpose building. These are complemented by a series of blocks, arranged in parallel rows, which house the administrative offices, the art school, the crafts school, the general classrooms, and those devoted to the higher courses. The fascinating aspect of this school is the outstanding penetration of the architect's mind into the essential function of the project. Its most evident merit is the sense of comradeship and harmony that permeates the whole architectural manifold.

  4. The Truth about Accuracy

    NARCIS (Netherlands)

    Buekens, F.A.I.; Truyen, Frederick; Martini, Carlo; Boumans, Marcel

    2014-01-01

    When we evaluate the outcomes of investigative actions as justified or unjustified, good or bad, rational or irrational, we make, in a broad sense of the term, evaluative judgements about them. We look at operational accuracy as a desirable and evaluable quality of the outcomes and explore how the c

  5. Preprocessing for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz

    2016-09-01

    The performance of binary classification of breast cancer suffers from a high imbalance between classes. In this article we present a preprocessing module designed to counteract the discrepancy in training examples. The preprocessing module is based on standardization, the Synthetic Minority Oversampling Technique, and undersampling. We show how each algorithm influences classification accuracy. Results indicate that the described module improves the overall Area Under the Curve by up to 10% on the tested dataset. Furthermore, we propose other methods of dealing with imbalanced datasets in breast cancer classification.
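
    The combination of standardization, SMOTE and undersampling can be sketched with the third-party imbalanced-learn package. The data below are synthetic and the sampling ratios are assumptions; the point is only to show how the class counts change at each step.

    ```python
    from collections import Counter
    from sklearn.datasets import make_classification
    from sklearn.preprocessing import StandardScaler
    from imblearn.over_sampling import SMOTE              # pip install imbalanced-learn
    from imblearn.under_sampling import RandomUnderSampler

    # synthetic, heavily imbalanced two-class problem (stand-in for thermogram features)
    X, y = make_classification(n_samples=1000, n_features=20, weights=[0.95, 0.05],
                               random_state=0)
    print("before:", Counter(y))

    X = StandardScaler().fit_transform(X)                  # standardization
    X_os, y_os = SMOTE(sampling_strategy=0.5, random_state=0).fit_resample(X, y)
    X_bal, y_bal = RandomUnderSampler(sampling_strategy=1.0,
                                      random_state=0).fit_resample(X_os, y_os)
    print("after SMOTE + undersampling:", Counter(y_bal))
    ```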

  6. Analysis of the classification of US and Canadian intensive test sites using the Image 100 hybrid classification system

    Science.gov (United States)

    Hocutt, W. T. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Labeling of wheat rather than total grains, particularly with only one acquisition, led to significant overestimates in some segments. The Image-100 software and procedures were written to facilitate classification of the LACIE segments but were not designed to record data for later accuracy assessment. A much better evaluation would have been possible if accuracy assessment data had been collected following each satisfactory classification.

  7. Random forests for classification in ecology.

    Science.gov (United States)

    Cutler, D Richard; Edwards, Thomas C; Beard, Karen H; Cutler, Adele; Hess, Kyle T; Gibson, Jacob; Lawler, Joshua J

    2007-11-01

    Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature.
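
    A minimal sketch of the presence/absence workflow is shown below: a random forest is trained on synthetic environmental predictors, out-of-bag accuracy serves as the cross-validation-like estimate, and the built-in variable importances rank the predictors. The data and forest size are assumptions, not the Lava Beds, lichen, or cavity-nesting bird datasets.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # synthetic species presence/absence data with environmental predictors
    X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                               random_state=0)

    rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf.fit(X, y)

    print("out-of-bag accuracy:", round(rf.oob_score_, 3))
    # top three predictors by RF variable importance
    for i, imp in sorted(enumerate(rf.feature_importances_), key=lambda t: -t[1])[:3]:
        print(f"predictor {i}: importance {imp:.3f}")
    ```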

  8. Myoelectric walking mode classification for transtibial amputees.

    Science.gov (United States)

    Miller, Jason D; Beazer, Mahyo Seyedali; Hahn, Michael E

    2013-10-01

    Myoelectric control algorithms have the potential to detect an amputee's motion intent and allow the prosthetic to adapt to changes in walking mode. The development of a myoelectric walking mode classifier for transtibial amputees is outlined. Myoelectric signals from four muscles (tibialis anterior, medial gastrocnemius (MG), vastus lateralis, and biceps femoris) were recorded for five nonamputee subjects and five transtibial amputees over a variety of walking modes: level ground at three speeds, ramp ascent/descent, and stair ascent/descent. These signals were decomposed into relevant features (mean absolute value, variance, wavelength, number of slope sign changes, number of zero crossings) over three subwindows from the gait cycle and used to test the ability of classification algorithms for transtibial amputees using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Detection of all seven walking modes had an accuracy of 97.9% for the amputee group and 94.7% for the nonamputee group. Misclassifications occurred most frequently between different walking speeds due to the similar nature of the gait pattern. Stair ascent/descent had the best classification accuracy with 99.8% for the amputee group and 100.0% for the nonamputee group. Stability of the developed classifier was explored using an electrode shift disturbance for each muscle. Shifting the electrode placement of the MG had the most pronounced effect on the classification accuracy for both samples. No increase in classification accuracy was observed when using SVM compared to LDA for the current dataset.
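
    The five time-domain features named in the abstract are straightforward to compute per analysis window, after which an LDA classifier can be trained. The sketch below uses synthetic EMG windows for two walking modes; the threshold, window length, and amplitude difference are illustrative assumptions (waveform length is used for the feature the abstract calls "wavelength").

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def emg_features(w, eps=1e-3):
        diff = np.diff(w)
        return np.array([
            np.mean(np.abs(w)),                                   # mean absolute value
            np.var(w),                                            # variance
            np.sum(np.abs(diff)),                                 # waveform length
            np.sum((diff[:-1] * diff[1:] < 0) &                   # slope sign changes
                   (np.abs(diff[:-1]) > eps) & (np.abs(diff[1:]) > eps)),
            np.sum(w[:-1] * w[1:] < 0),                           # zero crossings
        ])

    rng = np.random.default_rng(0)
    # toy windows from one muscle for two walking modes (different amplitude levels)
    windows = [a * rng.standard_normal(200) for a in (0.5, 1.5) for _ in range(40)]
    modes = np.repeat([0, 1], 40)

    X = np.array([emg_features(w) for w in windows])
    print("LDA CV accuracy:",
          round(cross_val_score(LinearDiscriminantAnalysis(), X, modes, cv=5).mean(), 2))
    ```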

  9. On music genre classification via compressive sampling

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    Recent work \\cite{Chang2010} combines low-level acoustic features and random projection (referred to as ``compressed sensing'' in \\cite{Chang2010}) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts...

  10. Sobredentadura total superior implantosoportada Superior total overdenture on implants

    Directory of Open Access Journals (Sweden)

    Luis Orlando Rodríguez García

    2010-06-01

    Full Text Available A case is presented of a patient with a fully edentulous upper jaw, rehabilitated in 2009 at the implantology consultation of the "Pedro Ortiz" Clinic in Habana del Este municipality, Havana City, Cuba, by means of a prosthesis supported on osseointegrated implants, a technique that has been incorporated into dental practice in Cuba as an alternative to conventional treatment of totally edentulous patients. A protocol was followed that comprised a surgical phase, a procedure with or without flap elevation, and early or immediate loading. A 56-year-old male patient attended the multidisciplinary consultation, concerned because three prostheses had been made for him over the previous two years and none met the retention requirements he needed to feel secure and comfortable with them. The final result was the total satisfaction of the patient, with improved aesthetic and functional quality.

  11. Galaxy Classifications with Deep Learning

    Science.gov (United States)

    Lukic, Vesna; Brüggen, Marcus

    2017-06-01

    Machine learning techniques have proven to be increasingly useful in astronomical applications over the last few years, for example in object classification, estimating redshifts and data mining. One example of object classification is classifying galaxy morphology. This is a tedious task to do manually, especially as the datasets become larger with surveys that have a broader and deeper search-space. The Kaggle Galaxy Zoo competition presented the challenge of writing an algorithm to find the probability that a galaxy belongs in a particular class, based on SDSS optical spectroscopy data. The use of convolutional neural networks (convnets) proved to be a popular solution to the problem, as they have also produced unprecedented classification accuracies in other image databases such as the database of handwritten digits (MNIST) and the large database of images (CIFAR). We experiment with the convnets that comprised the winning solution, but using broad classifications. The effect of changing the number of layers is explored, as well as using a different activation function, to help in developing an intuition of how the networks function and to see how they can be applied to radio galaxy images.

  12. Autonomous Ship Classification By Moment Invariants

    Science.gov (United States)

    Zvolanek, Budimir

    1981-12-01

    An algorithm to classify ships from images generated by an infrared (IR) imaging sensor is described. The algorithm is based on decision-theoretic classification of Moment Invariant Functions (MIFs). The MIFs are computed from two-dimensional gray-level images to form a feature vector uniquely describing the ship. The MIF feature vector is classified by a Distance-Weighted k-Nearest Neighbor (D-W k-NN) decision rule to identify the ship type. A significant advantage of the MIF feature extraction coupled with D-W k-NN classification is the invariance of the classification accuracies to ship/sensor orientation (aspect, depression, roll angles) and range. The accuracy observed from a set of simulated IR test images reveals good potential of the classifier algorithm for ship screening.
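
    The moment-invariant plus distance-weighted k-NN pipeline can be sketched with scikit-image's Hu moments (one common family of moment invariant functions, though not necessarily the exact MIFs used here) and scikit-learn's distance-weighted k-NN. The toy silhouettes and class definitions are invented for illustration.

    ```python
    import numpy as np
    from skimage.measure import moments_central, moments_normalized, moments_hu
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def silhouette(kind, size=64):
        """Toy binary silhouettes: an elongated target vs. a compact one."""
        img = np.zeros((size, size))
        r, c = rng.integers(25, 39, size=2)
        if kind == 0:
            img[r - 3:r + 3, c - 20:c + 20] = 1.0     # long, thin shape
        else:
            img[r - 10:r + 10, c - 10:c + 10] = 1.0   # squarer shape
        return img

    def hu_features(img):
        nu = moments_normalized(moments_central(img))
        return moments_hu(nu)                          # seven moment invariants

    X = np.array([hu_features(silhouette(k)) for k in (0, 1) for _ in range(30)])
    y = np.repeat([0, 1], 30)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # distance-weighted k-NN, in the spirit of the D-W k-NN decision rule
    clf = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    ```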

  13. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  14. On the accuracy of language trees.

    Directory of Open Access Journals (Sweden)

    Simone Pompei

    Full Text Available Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information are typically lists of homologous (lexical, phonological, syntactic features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of inverse problems: starting from present, incomplete and often noisy, information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve

  15. Modulation classification based on spectrogram

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated and evaluated by theoretical analysis and extensive simulation studies. The method taking moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arriving sequence of symbols.

  16. Reconceptualizing metacomprehension calibration accuracy

    OpenAIRE

    Egan, Rylan Graham

    2012-01-01

    Accurate judgment of text comprehension is compulsory for learners to effectively self-regulate learning from text. Unfortunately, until relatively recently the literature on text comprehension judgment, termed metacomprehension, has shown learners to be inaccurate in their judgments. Over the last decade researchers have discovered that when learners use delayed summaries of text to make judgments metacomprehension accuracy increases. In contrast, when learners use individual differences (e....

  17. Geoid undulation accuracy

    Science.gov (United States)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and to a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  18. Kaloko-Honokohau National Historical Park Vegetation Mapping Project - Field Plots, Observation and Accuracy Assessment Points

    Data.gov (United States)

    National Park Service, Department of the Interior — This metadata is for the 2008 vegetation (classification) field plots (spatial database) and 2010 accuracy assessment points (spatial database) created from the...

  19. Regional manifold learning for disease classification.

    Science.gov (United States)

    Ye, Dong Hye; Desjardins, Benoit; Hamm, Jihun; Litt, Harold; Pohl, Kilian M

    2014-06-01

    While manifold learning from images itself has become widely used in medical image analysis, the accuracy of existing implementations suffers from viewing each image as a single data point. To address this issue, we parcellate images into regions and then separately learn the manifold for each region. We use the regional manifolds as low-dimensional descriptors of high-dimensional morphological image features, which are then fed into a classifier to identify regions affected by disease. We produce a single ensemble decision for each scan by the weighted combination of these regional classification results. Each weight is determined by the regional accuracy of detecting the disease. When applied to cardiac magnetic resonance imaging of 50 normal controls and 50 patients with reconstructive surgery of Tetralogy of Fallot, our method achieves significantly better classification accuracy than approaches learning a single manifold across the entire image domain.
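
    A minimal sketch of the ensemble step described above, using synthetic per-region descriptors rather than the cardiac MRI data: one classifier per region, with the final decision formed as a weighted combination whose weights are the regional accuracies.

```python
# Synthetic stand-in for regional manifold descriptors; only the weighted-ensemble idea is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_scans, n_regions, dim = 200, 5, 20
X = rng.standard_normal((n_scans, n_regions, dim))   # hypothetical low-dimensional regional features
y = rng.integers(0, 2, n_scans)                      # 0 = control, 1 = patient (toy labels)
X[y == 1, :2, :] += 0.8                              # make the first two regions informative

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

models, weights = [], []
for r in range(n_regions):
    m = LogisticRegression(max_iter=1000).fit(Xtr[:, r, :], ytr)
    models.append(m)
    weights.append(m.score(Xtr[:, r, :], ytr))       # regional detection accuracy as the weight
weights = np.array(weights) / np.sum(weights)

# Weighted combination of regional class probabilities gives one ensemble decision per scan.
proba = sum(w * m.predict_proba(Xte[:, r, :]) for r, (m, w) in enumerate(zip(models, weights)))
print("ensemble accuracy:", (proba.argmax(axis=1) == yte).mean())
```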

  20. Classification for Inconsistent Decision Tables

    KAUST Repository

    Azad, Mohammad

    2016-09-28

    Decision trees have been used widely to discover patterns from consistent data sets. But if the data set is inconsistent, i.e., it contains groups of examples with equal values of conditional attributes but different labels, then discovering the essential patterns or knowledge from the data set is challenging. Three approaches (generalized, most common and many-valued decision) have been considered to handle such inconsistency. The decision tree model has been used to compare the classification results among the three approaches. The many-valued decision approach outperforms the other approaches, and the M_ws_entM greedy algorithm gives faster and better prediction accuracy.

  1. Object Based and Pixel Based Classification Using Rapideye Satellite Imager of ETI-OSA, Lagos, Nigeria

    Directory of Open Access Journals (Sweden)

    Esther Oluwafunmilayo Makinde

    2016-12-01

    Full Text Available Several studies have been carried out to find an appropriate method to classify remote sensing data. Traditional classification approaches are all pixel-based and do not utilize the spatial information within an object, which is an important source of information for image classification. Thus, this study compared pixel-based and object-based classification algorithms using a RapidEye satellite image of Eti-Osa LGA, Lagos. In the object-oriented approach, the image was segmented into homogeneous areas by suitable parameters such as scale parameter, compactness, shape, etc. Classification based on segments was done by a nearest neighbour classifier. In the pixel-based classification, the spectral angle mapper was used to classify the images. The user accuracy for each class using object-based classification was 98.31% for waterbody, 92.31% for vegetation, 86.67% for bare soil and 90.57% for built-up areas, while the user accuracy for the pixel-based classification was 98.28% for waterbody, 84.06% for vegetation, 86.36% for bare soil and 79.41% for built-up areas. These classification techniques were subjected to accuracy assessment, and the overall accuracy of the object-based classification was 94.47%, while that of the pixel-based classification was 86.64%. The results of classification and accuracy assessment show that the object-based approach gave more accurate and satisfying results.

  2. Sugarcane Land Classification with Satellite Imagery using Logistic Regression Model

    Science.gov (United States)

    Henry, F.; Herwindiati, D. E.; Mulyono, S.; Hendryli, J.

    2017-03-01

    This paper discusses the classification of sugarcane plantation area from Landsat-8 satellite imagery. The classification process uses a binary logistic regression method with time series of the normalized difference vegetation index as input. The process is divided into two steps: training and classification. The purpose of the training step is to identify the best parameters of the regression model using a gradient descent algorithm. The best fit of the model can then be used to classify sugarcane and non-sugarcane areas. The experiment shows high accuracy and successfully maps the sugarcane plantation area, obtaining a Cohen's Kappa value of 0.7833 (strong) with 89.167% accuracy.
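
    A minimal sketch of the training and classification steps under stated assumptions (synthetic NDVI time series rather than the Landsat-8 data): binary logistic regression fitted by gradient descent and then used to label sugarcane vs non-sugarcane pixels.

```python
# Synthetic NDVI series; shows logistic regression trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_dates = 500, 12                      # 12 NDVI observations per pixel (assumed)
y = rng.integers(0, 2, n_pixels)                 # 1 = sugarcane, 0 = other (toy labels)
season = np.sin(np.linspace(0, 2 * np.pi, n_dates))
X = 0.4 + 0.1 * rng.standard_normal((n_pixels, n_dates)) + 0.25 * np.outer(y, season)
X = np.hstack([np.ones((n_pixels, 1)), X])       # prepend a bias term

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(X.shape[1])
lr = 0.5
for _ in range(2000):                            # gradient descent on the cross-entropy loss
    p = sigmoid(X @ w)
    w -= lr * X.T @ (p - y) / n_pixels

pred = (sigmoid(X @ w) >= 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```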

  3. FUSION OF WAVELET AND CURVELET COEFFICIENTS FOR GRAY TEXTURE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    M. Santhanalakshmi

    2014-05-01

    Full Text Available This study presents a framework for gray texture classification based on the fusion of wavelet and curvelet features. The two main frequency domain transformations, the Discrete Wavelet Transform (DWT) and the Discrete Curvelet Transform (DCT), are analyzed. The features are extracted from the DWT- and DCT-decomposed images separately and their performance is evaluated independently. Then a feature fusion technique is applied to increase the classification accuracy of the proposed approach. Brodatz texture images are used for this study. The results show that only two texture images, D105 and D106, are misclassified by the fusion approach, and a 99.74% classification accuracy is obtained.

  4. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    Science.gov (United States)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
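
    The marker-selection rule lends itself to a compact sketch. The following toy example (generic features, not the hyperspectral experiments) keeps a pixel as a marker only when all independently trained classifiers agree on its label.

```python
# Toy data; illustrates only the agreement-based marker selection step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.standard_normal((300, 10))
y_train = (X_train[:, 0] + 0.3 * rng.standard_normal(300) > 0).astype(int)
X_pixels = rng.standard_normal((1000, 10))            # "pixels" to be classified

classifiers = [SVC(), RandomForestClassifier(random_state=0), KNeighborsClassifier()]
preds = np.stack([c.fit(X_train, y_train).predict(X_pixels) for c in classifiers])

agree = np.all(preds == preds[0], axis=0)             # all classifiers chose the same class
markers = np.flatnonzero(agree)                       # indices of marker pixels
marker_labels = preds[0, markers]                     # their agreed class labels
print(f"{markers.size} of {X_pixels.shape[0]} pixels kept as markers")
```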

  5. Sensitivity of Support Vector Machine Classification to Various Training Features

    Directory of Open Access Journals (Sweden)

    Fuling Bian

    2013-07-01

    Full Text Available Remote sensing image classification is one of the most important techniques in image interpretation, which can be used for environmental monitoring, evaluation and prediction. Many algorithms have been developed for image classification in the literature. Support vector machine (SVM) is a kind of supervised classification that has been widely used recently. The classification accuracy produced by SVM may show variation depending on the choice of training features. In this paper, SVM was used for land cover classification using Quickbird images. Spectral and textural features were extracted for the classification and the results were analyzed thoroughly. Results showed that using more features in SVM does not necessarily yield better accuracy; different features are suitable for extracting different types of land cover. This study verifies the effectiveness and robustness of SVM in the classification of high spatial resolution remote sensing images.

  6. Generating Interpretable Fuzzy Systems for Classification Problems

    Directory of Open Access Journals (Sweden)

    Juan A. Contreras-Montes

    2009-12-01

    Full Text Available This paper presents a new method to generate interpretable fuzzy systems from training data to deal with classification problems. The antecedent partition uses triangular sets with 0.5 interpolation, avoiding the complex overlapping present in other methods. Singleton consequents are generated from the projection of the modal values of each triangular membership function into the output space. The least squares method is used to adjust the consequents. The proposed method achieves a higher average classification accuracy rate than existing methods, with a reduced number of rules and parameters and without sacrificing the interpretability of the fuzzy system. The proposed approach is applied to two classical classification problems: the Iris data and the Wisconsin Breast Cancer classification problem.
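
    A minimal sketch of the main ingredients under stated assumptions (a two-feature binary subset of Iris and a uniform 3x3 rule grid): triangular membership functions whose neighbours cross at 0.5, and singleton consequents fitted by least squares. It is not the authors' exact construction.

```python
# Simplified zero-order fuzzy classifier: triangular antecedents, least-squares singleton consequents.
import numpy as np
from sklearn.datasets import load_iris

def tri_memberships(x, centers):
    """Triangular membership functions; adjacent sets cross at 0.5."""
    mu = np.zeros((x.size, centers.size))
    for j, c in enumerate(centers):
        left = centers[j - 1] if j > 0 else c - 1.0
        right = centers[j + 1] if j < centers.size - 1 else c + 1.0
        mu[:, j] = np.clip(np.minimum((x - left) / (c - left),
                                      (right - x) / (right - c)), 0.0, 1.0)
    return mu

iris = load_iris()
mask = iris.target < 2                               # binary subproblem for simplicity
X, y = iris.data[mask][:, :2], iris.target[mask].astype(float)

centers = [np.linspace(X[:, d].min(), X[:, d].max(), 3) for d in range(2)]
mu0 = tri_memberships(X[:, 0], centers[0])
mu1 = tri_memberships(X[:, 1], centers[1])

# One rule per pair of fuzzy sets (3 x 3 = 9 rules); firing strength = product of memberships.
firing = np.einsum("ni,nj->nij", mu0, mu1).reshape(len(X), -1)
firing /= firing.sum(axis=1, keepdims=True)

consequents, *_ = np.linalg.lstsq(firing, y, rcond=None)   # least-squares singleton consequents
pred = (firing @ consequents >= 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```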

  7. Classification of cognitive states using functional MRI data

    Science.gov (United States)

    Yang, Ye; Pal, Ranadip; O'Boyle, Michael

    2010-03-01

    A fundamental goal of the analysis of fMRI data is to locate areas of brain activation that can differentiate various cognitive tasks. Traditionally, researchers have approached fMRI analysis through characterizing the relationship between cognitive variables and individual brain voxels. In recent years, multivariate approaches (analyzing more than one voxel at once) to fMRI data analysis have gained importance. But in the majority of multivariate approaches, the voxels used for classification are selected based on prior biological knowledge or the discriminating power of individual voxels. We used a sequential floating forward search (SFFS) feature selection approach for selecting the voxels and applied it to distinguish the cognitive states of whether a subject is doing a reasoning or a counting task. We obtained superior classifier performance by using the sequential approach as compared to selecting the features with the best individual classifier performance. We analyzed the problem of over-fitting in this extremely high dimensional feature space with limited training samples. For estimating the accuracy of the classifier, we employed various estimation methods and discussed their importance in this small sample scenario. We also modified the feature selection algorithm by adding spatial information to incorporate the biological constraint that spatially nearby voxels tend to represent similar things.
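
    A minimal sketch of the selection idea on toy data: plain sequential forward selection of "voxels" by cross-validated accuracy. The study used the floating variant (SFFS), which also allows conditional removal steps; this simplified version shows only the forward search.

```python
# Toy high-dimensional, small-sample data; plain forward selection, not full SFFS.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_samples, n_voxels = 40, 200
y = rng.integers(0, 2, n_samples)
X = rng.standard_normal((n_samples, n_voxels))
X[:, :5] += 0.9 * y[:, None]                     # a few genuinely informative voxels

selected, remaining = [], list(range(n_voxels))
for _ in range(5):                               # grow the subset up to 5 voxels
    scores = [cross_val_score(LinearSVC(), X[:, selected + [v]], y, cv=5).mean()
              for v in remaining]
    best = remaining[int(np.argmax(scores))]
    selected.append(best)
    remaining.remove(best)
    print(f"added voxel {best}, CV accuracy = {max(scores):.3f}")
```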

  8. Paso superior en una ladera

    Directory of Open Access Journals (Sweden)

    Bender, O.

    1965-07-01

    Full Text Available The Redwood highway, through the Californian forest, runs on a viaduct as it crosses a mountain slope of about 45° inclination. The firm ground is fairly deep, and as an additional constructional difficulty, it was necessary to respect the natural beauty of the countryside. A structure of portal frames was built, forming a number of short spans. These spans were bridged with metal girders, on which a 19 m wide deck was placed. The columns are hollow and have a transversal cross beam to join each pair. There was difficulty in excavating the foundations for the columns, as it was necessary to dig through the soft top soil and also prevent this soil from damaging the trunks of the forest trees. Another significant difficulty in the construction of this viaduct was access to the working site, since there were no suitable platforms from which to operate the appropriate machinery. This made it necessary to do a lot of the work by manual operation. As one of the edges of the deck is very close to the mountain side, a supporting beam was erected on this side. It was made of concrete, on metal piles. The formwork for the deck structure was placed on the concrete stems of the supporting piles.

  9. Automated Decision Tree Classification of Corneal Shape

    Science.gov (United States)

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification
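
    A minimal sketch of the classification step under stated assumptions (synthetic stand-in coefficients, and scikit-learn's CART trees rather than C4.5): a decision tree trained on Zernike-style shape coefficients, evaluated with accuracy and ROC AUC as in the study design.

```python
# Synthetic coefficients only; shows decision-tree induction and the attributes it selects.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_eyes, n_coeffs = 244, 36                          # 36 Zernike coefficients per surface fit
y = np.r_[np.zeros(132), np.ones(112)].astype(int)  # 0 = normal, 1 = keratoconus (counts from the study)
X = rng.standard_normal((n_eyes, n_coeffs))
X[y == 1, :4] += 1.0                                # pretend a few coefficients carry the signal

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)

print("accuracy:", tree.score(Xte, yte))
print("ROC AUC :", roc_auc_score(yte, tree.predict_proba(Xte)[:, 1]))
print("coefficients used:", np.flatnonzero(tree.feature_importances_ > 0))
```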

  10. An applied research on remote sensing classification in the Loess Plateau

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Due to the complex terrain of the Loess Plateau, the classification accuracy is unsatisfactory when a single supervised classification is used in the remote sensing investigation of sloping fields. Taking the loess hill and gully area of northern Shaanxi Province as a test area, a research project was conducted to extract sloping fields and other land use categories by applying an integrated classification. Based on an integration of supervised and unsupervised classification, the sampling method is remarkably improved. The results show that the classification accuracy is satisfactory with this method, which is of critical significance in obtaining up-to-date information on sloping fields and should be helpful in the state key project of converting farmland to forest and grassland on slope land in this area. This research sought to improve the application accuracy of image classification in complex terrain areas.

  11. Land Cover Classification Using ALOS Imagery For Penang, Malaysia

    Science.gov (United States)

    Sim, C. K.; Abdullah, K.; MatJafri, M. Z.; Lim, H. S.

    2014-02-01

    This paper presents the potential of integrating optical and radar remote sensing data to improve automatic land cover mapping. The analysis involved standard image processing and consists of spectral signature extraction and application of a statistical decision rule to identify land cover categories. A maximum likelihood classifier is utilized to determine the different land cover categories. Ground reference data from sites throughout the study area are collected for training and validation. The land cover information was extracted from the digital data using the PCI Geomatica 10.3.2 software package. The variations in classification accuracy due to a number of radar image processing techniques are studied. The relationship between the processing window and the land classification is also investigated. The classification accuracies from the optical and radar feature combinations are studied. Our research finds that fusion of radar and optical data significantly improved classification accuracies. This study indicates that land cover/use can be mapped accurately by using this approach.

  12. Multi-Level Audio Classification Architecture

    Directory of Open Access Journals (Sweden)

    Jozef Vavrek

    2015-01-01

    Full Text Available A multi-level classification architecture for solving binary discrimination problems is proposed in this paper. The main idea of the proposed solution is derived from the fact that solving one binary discrimination problem multiple times can reduce the overall misclassification error. We aimed our effort towards building a classification architecture employing a combination of multiple binary SVM (Support Vector Machine) classifiers for solving the two-class discrimination problem. Therefore, we developed a binary discrimination architecture employing the SVM classifier (BDASVM) with the intention of using it for the classification of broadcast news (BN) audio data. The fundamental element of BDASVM is the binary decision (BD) algorithm that performs discrimination between each pair of acoustic classes utilizing a decision function modeled by a separating hyperplane. The overall classification accuracy is conditioned by finding the optimal parameters for the discrimination function, resulting in higher computational complexity. The final form of the proposed BDASVM is created by combining four BDSVM discriminators supplemented by a decision table. Experimental results show that the proposed classification architecture can decrease the overall classification error in comparison with the binary decision trees SVM (BDTSVM) architecture.

  13. Pseudodisplacements of superior vena cava catheter in the persistent left superior vena cava

    Energy Technology Data Exchange (ETDEWEB)

    Jantsch, H.; Draxler, V.; Muhar, U.; Schlemmer, M.; Waneck, R.

    1983-01-01

    Pseudodisplacement of a left-sided superior vena cava catheter in a persistent left superior vena cava may be expected in 0.37% of adults and in 2.5% of a group of children with congenital heart disease. Embryology, anatomy and clinical implications are discussed on the basis of our own cases. Provided its calibre is sufficient, the persistent left superior vena cava is a suitable vessel for a superior vena cava catheter.

  14. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases th...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  15. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Directory of Open Access Journals (Sweden)

    Hongxia Li

    2013-08-01

    Full Text Available With the development of computer science and information technology, libraries are becoming increasingly digital and networked. The digitization process converts books into digital information, and high-quality preservation and management are achieved by computer technology as well as text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, which realizes independent component analysis for complex network text classification. Through the ICA clustering algorithm, clustering extraction of characteristic words for text classification is achieved and the visualization of text retrieval is improved. Finally, we make a comparative analysis of the collocation algorithm and the ICA clustering algorithm through text classification and keyword search experiments, reporting the clustering degree and accuracy of each algorithm. Through simulation analysis, we find that the ICA clustering algorithm improves the clustering degree by 1.2% compared with the text classification baseline, and accuracy can be improved by 11.1% at most. It improves the efficiency and accuracy of text classification retrieval and also provides a theoretical reference for text retrieval classification of eBooks.

  16. Aircraft Operations Classification System

    Science.gov (United States)

    Harlow, Charles; Zhu, Weihong

    2001-01-01

    Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study. The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events.

  17. Machine Learning Algorithms for Automatic Classification of Marmoset Vocalizations

    Science.gov (United States)

    Ribeiro, Sidarta; Pereira, Danillo R.; Papa, João P.; de Albuquerque, Victor Hugo C.

    2016-01-01

    Automatic classification of vocalization type could potentially become a useful tool for the acoustic monitoring of captive colonies of highly vocal primates. However, for classification to be useful in practice, a reliable algorithm that can be successfully trained on small datasets is necessary. In this work, we consider seven different classification algorithms with the goal of finding a robust classifier that can be successfully trained on small datasets. We found good classification performance (accuracy > 0.83 and F1-score > 0.84) using the Optimum Path Forest classifier. The dataset and algorithms are made publicly available. PMID:27654941

  18. Distance-based features in pattern classification

    Directory of Open Access Journals (Sweden)

    Lin Wei-Yang

    2011-01-01

    Full Text Available Abstract In data mining and pattern classification, feature extraction and representation methods are a very important step since the extracted features have a direct and significant impact on classification accuracy. In the literature, a number of novel feature extraction and representation methods have been proposed. However, many of them only focus on specific domain problems. In this article, we introduce a novel distance-based feature extraction method for various pattern classification problems. Specifically, two distances are extracted, which are based on (1) the distance between the data and its intra-cluster center and (2) the distance between the data and its extra-cluster centers. Experiments based on ten datasets containing different numbers of classes, samples, and dimensions are examined. The experimental results using naïve Bayes, k-NN, and SVM classifiers show that concatenating the distance-based features to the original features provided by the datasets can improve classification accuracy, except for image-related datasets. In particular, the distance-based features are suitable for datasets with smaller numbers of classes and samples and lower feature dimensionality. Moreover, two datasets with similar characteristics are further used to validate this finding. The result is consistent with the first experiment: adding the distance-based features can improve the classification performance.
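
    A minimal sketch of the two distance-based features on a generic dataset (not one of the ten used in the article): the distance to the sample's own class centroid and the distances to the other classes' centroids, concatenated to the original features. Centroids are computed on the full set here for brevity; in practice they should come from the training folds only.

```python
# Wine data as a stand-in; shows building and concatenating the distance-based features.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)
classes = np.unique(y)
centroids = np.stack([X[y == c].mean(axis=0) for c in classes])

d_all = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)   # sample-to-centroid distances
d_intra = d_all[np.arange(len(X)), y][:, None]                          # own-class (intra-cluster) centre
d_extra = np.stack([d_all[i, classes != y[i]] for i in range(len(X))])  # other-class (extra-cluster) centres

X_aug = np.hstack([X, d_intra, d_extra])
knn = KNeighborsClassifier(n_neighbors=5)
print("original :", cross_val_score(knn, X, y, cv=5).mean())
print("augmented:", cross_val_score(knn, X_aug, y, cv=5).mean())
```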

  19. LDA boost classification: boosting by topics

    Science.gov (United States)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features. In this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving accuracy by integrating weak classifiers into a strong classifier in a more rational way, with Mutual Information used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost performs categorization in a low-dimensional space and has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.

  20. Enhancement of galaxy images for improved classification

    Science.gov (United States)

    Jenkinson, John; Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, the classification accuracy of galaxy images is demonstrated to be improved by enhancing the galaxy images. Galaxy images often contain faint regions that are of similar intensity to stars and the image background, resulting in data loss during background subtraction and galaxy segmentation. Enhancement darkens these faint regions, enabling them to be distinguished from other objects in the image and from the image background, relative to their original intensities. The heap transform is employed for the purpose of enhancement. Segmentation then produces a galaxy image which closely resembles the structure of the original galaxy image, and one that is suitable for further processing and classification. Six morphological feature descriptors are applied to the segmented images after a preprocessing stage and used to extract the galaxy image structure for use in training the classifier. The support vector machine learning algorithm performs training and validation of the original and enhanced data, and a comparison between the classification accuracy of each data set is included. Principal component analysis is used to compress the data sets for the purpose of classification visualization and a comparison between the reduced and original feature spaces. Future directions for this research include galaxy image enhancement by various methods, and classification performed with the use of a sparse dictionary. Both future directions are introduced.

  1. Towards Understanding Spontaneous Speech Word Accuracy vs. Concept Accuracy

    CERN Document Server

    Boros, M; Gallwitz, F; Goerz, G; Hanrieder, G; Niemann, H

    1996-01-01

    In this paper we describe an approach to automatic evaluation of both the speech recognition and understanding capabilities of a spoken dialogue system for train time table information. We use word accuracy for recognition and concept accuracy for understanding performance judgement. Both measures are calculated by comparing these modules' output with a correct reference answer. We report evaluation results for a spontaneous speech corpus with about 10000 utterances. We observed a nearly linear relationship between word accuracy and concept accuracy.

  2. Classification of cultivated plants.

    NARCIS (Netherlands)

    Brandenburg, W.A.

    1986-01-01

    Agricultural practice demands principles for classification, starting from the basal entity in cultivated plants: the cultivar. In establishing biosystematic relationships between wild, weedy and cultivated plants, the species concept needs re-examination. Combining of botanic classification, based

  3. Classification of Induced Magnetic Field Signals for the Microstructural Characterization of Sigma Phase in Duplex Stainless Steels

    Directory of Open Access Journals (Sweden)

    Edgard M. Silva

    2016-07-01

    Full Text Available Duplex stainless steels present excellent mechanical and corrosion resistance properties. However, when heat treated at temperatures above 600 °C, the undesirable tertiary sigma phase is formed. This phase presents high hardness, around 900 HV, and is rich in chromium; the material toughness is compromised when the amount of this phase reaches 4% or more. This work aimed to develop a solution for the detection of this phase in duplex stainless steels through the computational classification of induced magnetic field signals. The proposed solution is based on an Optimum Path Forest classifier, which was revealed to be more robust and effective than Bayes, Artificial Neural Network and Support Vector Machine based classifiers. The induced magnetic field was produced by the interaction between an applied external field and the microstructure. Samples of the 2205 duplex stainless steel were thermally aged in order to obtain different amounts of sigma phase (up to 18% in content). The classification results were compared against those obtained by the Charpy impact energy test, the amount of sigma phase, and analysis of the fracture surface by scanning electron microscopy and X-ray diffraction. The proposed solution achieved a classification accuracy above 95% and was revealed to be robust to signal noise, and is therefore a valid testing tool to be used in this domain.

  4. Distributed multi-dimensional hidden Markov model: theory and application in multiple-object trajectory classification and recognition

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq

    2008-01-01

    In this paper, we propose a novel distributed causal multi-dimensional hidden Markov model (DHMM). The proposed model can represent, for example, multiple motion trajectories of objects and their interaction activities in a scene; it is capable of conveying not only the dynamics of each trajectory, but also the interaction information between multiple trajectories, which can be critical in many applications. We first provide a solution for the non-causal, multi-dimensional hidden Markov model (HMM) by distributing the non-causal model into multiple distributed causal HMMs. We approximate the simultaneous solution of multiple HMMs on a sequential processor by an alternate updating scheme. Subsequently we provide three algorithms for the training and classification of our proposed model. A new Expectation-Maximization (EM) algorithm suitable for estimation of the new model is derived, in which a novel General Forward-Backward (GFB) algorithm is proposed for recursive estimation of the model parameters. A new conditional independent subset-state sequence structure decomposition of state sequences is proposed for the 2D Viterbi algorithm. The new model can be applied to many other areas such as image segmentation and image classification. Simulation results in classification of multiple interacting trajectories demonstrate the superior performance and higher accuracy of our distributed HMM in comparison to previous models.

  5. Cooperative strategy for a dynamic ensemble of classification models in clinical applications: the case of MRI vertebral compression fractures.

    Science.gov (United States)

    Casti, Paola; Mencattini, Arianna; Nogueira-Barbosa, Marcello H; Frighetto-Pereira, Lucas; Azevedo-Marques, Paulo Mazzoncini; Martinelli, Eugenio; Di Natale, Corrado

    2017-06-14

    In clinical practice, constructive consultation among experts improves the reliability of the diagnosis and leads to the definition of the treatment plan for the patient. Aggregation of the different opinions collected from many experts can be performed at the level of patient information, abnormality delineation, or final assessment. In this study, we present a novel cooperative strategy that exploits the dynamic contribution of the classification models composing the ensemble to make the final class assignment. As a proof of concept, we applied the proposed approach to the assessment of malignant infiltration in 103 vertebral compression fractures in magnetic resonance images. The results obtained with repeated random subsampling and receiver operating characteristic analysis indicate that the cooperative system statistically improved ([Formula: see text]) the classification accuracy of the individual modules as well as that based on the manual segmentation of the fractures provided by the experts. The performance was also compared with that of standard ensemble classification algorithms, showing superior results.

  6. Superiority in value and the repugnant conclusion

    DEFF Research Database (Denmark)

    Jensen, Karsten Klint

    2007-01-01

    James Griffin has considered a weak form of superiority in value a possible remedy to the Repugnant Conclusion. In this paper, I demonstrate that, in a context where value is additive, this weaker form collapses into a stronger form of superiority. And in a context where value is non-additive, weak superiority does not amount to a radical value difference at all. I then spell out the consequences of these results for different interpretations of Griffin's suggestion regarding population ethics. None of them comes out very successful, but perhaps they nevertheless retain some interest.

  7. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Institute of Scientific and Technical Information of China (English)

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi

    2014-01-01

    To address the low classification accuracy and poor utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features, and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed (GNWSF–SRC) method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
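
    A minimal sketch of the sparse representation classification (SRC) step only, on generic image features rather than the Gabor/NWSF hyperspectral features: each test sample is sparsely coded over the dictionary of training samples (here with orthogonal matching pursuit), and assigned to the class whose atoms yield the smallest reconstruction error.

```python
# Digits data as a stand-in; illustrates residual-based SRC, not the GNWSF feature extraction.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import normalize

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=200, random_state=0)
D = normalize(Xtr).T                     # dictionary: one l2-normalized atom per training sample
classes = np.unique(ytr)

def src_predict(x, n_nonzero=30):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, x)
    alpha = omp.coef_
    # Class-wise reconstruction residuals: keep only the coefficients of each class in turn.
    residuals = [np.linalg.norm(x - D @ np.where(ytr == c, alpha, 0.0)) for c in classes]
    return classes[int(np.argmin(residuals))]

pred = np.array([src_predict(x) for x in Xte])
print("SRC accuracy:", (pred == yte).mean())
```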

  8. Accuracy of tablet splitting.

    Science.gov (United States)

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant.

  9. A NEW SVM BASED EMOTIONAL CLASSIFICATION OF IMAGE

    Institute of Scientific and Technical Information of China (English)

    Wang Weining; Yu Yinglin; Zhang Jianchao

    2005-01-01

    How high-level emotional representations of art paintings can be inferred from perceptual-level features suited to the particular classes (dynamic vs. static classification) is presented. The key points are feature selection and classification. Based on the strong relationship between the notable lines of an image and human sensations, a novel feature vector, the WLDLV (Weighted Line Direction-Length Vector), is proposed, which includes both orientation and length information of lines in an image. Classification is performed by SVM (Support Vector Machine) and images can be classified into dynamic and static. Experimental results demonstrate the effectiveness and superiority of the algorithm.

  10. Genetic Feature Selection for Texture Classification

    Institute of Scientific and Technical Information of China (English)

    PAN Li; ZHENG Hong; ZHANG Zuxun; ZHANG Jianqing

    2004-01-01

    This paper presents a novel approach to feature subset selection using genetic algorithms. This approach has the ability to accommodate multiple criteria such as the accuracy and cost of classification into the process of feature selection and finds the effective feature subset for texture classification. On the basis of the effective feature subset selected, a method is described to extract the objects which are higher than their surroundings, such as trees or forest, in the color aerial images. The methodology presented in this paper is illustrated by its application to the problem of trees extraction from aerial images.
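
    A minimal sketch of the multi-criteria idea on generic data (not the aerial texture data): a simple genetic algorithm over binary feature masks whose fitness combines cross-validated accuracy with a per-feature cost penalty. The GA operators and constants are assumptions, not the paper's algorithm.

```python
# Simple GA for feature subset selection; fitness = CV accuracy minus a feature-cost penalty.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, pop_size, n_gen, cost = X.shape[1], 20, 15, 0.005

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - cost * mask.sum()

pop = rng.random((pop_size, n_feat)) < 0.5            # random initial population of feature masks
for _ in range(n_gen):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                  # one-point crossover
        child = np.r_[a[:cut], b[cut:]]
        flip = rng.random(n_feat) < 0.02               # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), " fitness:", round(fitness(best), 4))
```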

  11. Delineation of sympatric morphotypes of lake trout in Lake Superior

    Science.gov (United States)

    Moore, Seth A.; Bronte, Charles R.

    2001-01-01

    Three morphotypes of lake trout Salvelinus namaycush are recognized in Lake Superior: lean, siscowet, and humper. Absolute morphotype assignment can be difficult. We used a size-free, whole-body morphometric analysis (truss protocol) to determine whether differences in body shape existed among lake trout morphotypes. Our results showed discrimination where traditional morphometric characters and meristic measurements failed to detect differences. Principal components analysis revealed some separation of all three morphotypes based on head and caudal peduncle shape, but it also indicated considerable overlap in score values. Humper lake trout have smaller caudal peduncle widths to head length and depth characters than do lean or siscowet lake trout. Lean lake trout had larger head measures to caudal widths, whereas siscowet had higher caudal peduncle to head measures. Backward stepwise discriminant function analysis retained two head measures, three midbody measures, and four caudal peduncle measures; correct classification rates when using these variables were 83% for leans, 80% for siscowets, and 83% for humpers, which suggests the measures we used for initial classification were consistent. Although clear ecological reasons for these differences are not readily apparent, patterns in misclassification rates may be consistent with evolutionary hypotheses for lake trout within the Laurentian Great Lakes.

  12. Millian superiorities and the repugnant conclusion

    DEFF Research Database (Denmark)

    Jensen, Karsten Klint

    2008-01-01

    James Griffin has considered a form of superiority in value that is weaker than lexical priority as a possible remedy to the Repugnant Conclusion. In this article, I demonstrate that, in a context where value is additive, this weaker form collapses into the stronger form of superiority. And in a context where value is non-additive, weak superiority does not amount to a radical value difference at all. I then spell out the consequences of these results for different interpretations of Griffin's suggestion regarding population ethics. None of them comes out very successful, but perhaps they nevertheless retain some interest.

  13. Measuring Financial Gains from Genetically Superior Trees

    Science.gov (United States)

    George Dutrow; Clark Row

    1976-01-01

    Planting genetically superior loblolly pines will probably yield high profits. Forest economists have made computer simulations that predict financial gains expected from a tree improvement program under actual field conditions.

  14. Superior mesenteric artery syndrome causing growth retardation

    Directory of Open Access Journals (Sweden)

    Halil İbrahim Taşcı

    2013-03-01

    Full Text Available Superior mesenteric artery syndrome is a rare and life-threatening clinical condition caused by compression of the third portion of the duodenum between the aorta and the proximal part of the superior mesenteric artery. This compression may lead to chronic intermittent, acute total or partial obstruction. Sudden weight loss and the related decrease in fat tissue are considered to be the etiological reason for acute stenosis. Weight loss accompanied by nausea, vomiting, anorexia, epigastric pain, and bloating are the leading complaints. Barium radiographs, computerized tomography, conventional angiography, and tomographic and magnetic resonance angiography are used in the diagnosis. There are medical and surgical approaches to treatment. We hereby present the case of a patient with superior mesenteric artery syndrome with delayed diagnosis. Key words: superior mesenteric artery syndrome, nausea-vomiting, anorexia

  15. Classification of cirrhotic patients with or without minimal hepatic encephalopathy and healthy subjects using resting-state attention-related network analysis.

    Directory of Open Access Journals (Sweden)

    Hua-Jun Chen

    Full Text Available BACKGROUND: Attention deficit is an early and key characteristic of minimal hepatic encephalopathy (MHE) and has been used as an indicator for MHE detection. The aim of this study is to classify cirrhotic patients with or without MHE (NMHE) and healthy controls (HC) using resting-state attention-related brain network analysis. METHODS AND FINDINGS: Resting-state fMRI was administered to 20 MHE patients, 21 NMHE patients, and 17 HCs. Three attention-related networks, including the dorsal attention network (DAN), ventral attention network (VAN), and default mode network (DMN), were obtained by independent component analysis. One-way analysis of covariance was performed to determine the regions of interest (ROIs) showing significant functional connectivity (FC) change. With the FC strength of the ROIs as indicators, Linear Discriminant Analysis (LDA) was conducted to differentiate MHE from HC or NMHE. Across the three groups, significant FC differences were found within the DAN (left superior/inferior parietal lobule and right inferior parietal lobule), VAN (right superior parietal lobule), and DMN (bilateral posterior cingulate gyrus and precuneus, and left inferior parietal lobule). With the FC strength of ROIs from the three networks as indicators, LDA yielded 94.6% classification accuracy between MHE and HC (100% sensitivity and 88.2% specificity) and 85.4% classification accuracy between MHE and NMHE (90.0% sensitivity and 81.0% specificity). CONCLUSIONS: Our results suggest that resting-state attention-related brain network analysis can be useful in the classification of subjects with MHE, NMHE, and HC and may provide new insight into MHE detection.
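
    A minimal sketch of the discriminant step under stated assumptions (random stand-in connectivity values, group sizes taken from the study): Linear Discriminant Analysis on ROI functional-connectivity strengths, evaluated with leave-one-out cross-validation.

```python
# Stand-in FC features; shows LDA with leave-one-out evaluation only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_rois = 7                                      # ROIs drawn from DAN, VAN and DMN
X_hc = rng.normal(0.0, 1.0, (17, n_rois))       # healthy controls
X_mhe = rng.normal(-0.8, 1.0, (20, n_rois))     # MHE patients (toy group effect on FC strength)
X = np.vstack([X_hc, X_mhe])
y = np.r_[np.zeros(17), np.ones(20)].astype(int)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("accuracy:", accuracy_score(y, pred))
print("confusion matrix:\n", confusion_matrix(y, pred))
```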

  16. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since this mask is to later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequently cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask

  17. Leiomyosarcoma of the superior vena cava.

    Science.gov (United States)

    de Chaumont, Arthus; Pierret, Charles; de Kerangal, Xavier; Le Moulec, Sylvestre; Laborde, François

    2014-08-01

    Leiomyosarcoma of the superior vena cava is a very rare tumor and only a few cases have been reported, with various techniques of vascular reconstruction. We describe a new case of leiomyosarcoma of the superior vena cava in a 61-year-old woman with extension to the brachiocephalic arterial trunk. Resection and vascular reconstruction were performed using, respectively, polytetrafluoroethylene and polyethylene terephtalate vascular grafts.

  18. Superior mesenteric artery compression syndrome - case report

    OpenAIRE

    Paulo Rocha França Neto; Rodrigo de Almeida Paiva; Antônio Lacerda Filho; Fábio Lopes de Queiroz; Teon Noronha

    2011-01-01

    Superior mesenteric artery syndrome is an entity generally caused by the loss of the intervening mesenteric fat pad, resulting in compression of the third portion of the duodenum by the superior mesenteric artery. This article reports the case of a patient with unresectable metastatic adenocarcinoma of the sigmoid colon who evolved with intense vomiting. An intestinal transit study was carried out, which showed marked gastric dilation extending to the third portion of the duodenum, compatible wi...

  19. A targeted change-detection procedure by combining change vector analysis and post-classification approach

    Science.gov (United States)

    Ye, Su; Chen, Dongmei; Yu, Jie

    2016-04-01

    In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on a one-class classifier, Support Vector Domain Description (SVDD), a novel algorithm named Three-layer SVDD Fusion (TLSF) is developed specifically for targeted change detection. The proposed algorithm combines one-class classification generated from change vector maps, as well as before- and after-change images, in order to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing this algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of our proposed algorithm is superior (Kappa statistics are 86.3% and 87.8% for Case 1 and 2, respectively), compared to applying SVDD to change vector analysis and post-classification comparison.
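
    A minimal sketch of the one-class idea behind targeted change detection, with scikit-learn's OneClassSVM standing in for SVDD and synthetic two-band change vectors (not the TLSF algorithm or the case-study imagery): train only on examples of the change type of interest, then flag pixels whose change vectors fall inside the learned description.

```python
# One-class classification of change vectors; OneClassSVM approximates SVDD here.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
target_train = rng.normal([2.0, -1.5], 0.3, size=(100, 2))        # user-labelled targeted changes
scene = np.vstack([rng.normal([2.0, -1.5], 0.3, size=(50, 2)),    # more targeted change
                   rng.normal(0.0, 1.0, size=(500, 2))])          # other change / no change

ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(target_train)
flagged = ocsvm.predict(scene) == 1            # +1 = inside the target-change description
print("pixels flagged as targeted change:", flagged.sum(), "of", len(scene))
```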

  20. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification makes it possible to organize, in an internationally consistent way, all the refrigerants used in the world thanks to a codification of the refrigerants according to their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), safety classification (toxicity, flammability, case of mixtures). (J.S.)

  1. Random forest algorithm for classification of multiwavelength data

    Institute of Scientific and Technical Information of China (English)

    Dan Gao; Yan-Xia Zhang; Yong-Heng Zhao

    2009-01-01

    We introduced a decision tree method called Random Forests for multiwavelength data classification. The data were adopted from different databases, including the Sloan Digital Sky Survey (SDSS) Data Release Five, USNO, FIRST and ROSAT. We then studied the discrimination of quasars from stars and the classification of quasars, stars and galaxies with the sample from the optical and radio bands and with that from the optical and X-ray bands. Moreover, feature selection and feature weighting based on Random Forests were investigated. The performances based on different input patterns were compared. The experimental results show that the random forest method is an effective method for astronomical object classification and can be applied to other classification problems faced in astronomy. In addition, Random Forests shows further advantages due to its own merits, e.g. feature selection, feature weighting and outlier detection.
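
    A minimal sketch on generic tabular data (not the SDSS/USNO/FIRST/ROSAT catalogues): Random Forest classification together with its built-in feature ranking, illustrating the classification and feature-weighting merits noted above.

```python
# Wine data as a stand-in; shows Random Forest accuracy and impurity-based feature ranking.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("test accuracy:", rf.score(Xte, yte))

ranking = np.argsort(rf.feature_importances_)[::-1]        # feature weighting via importances
for idx in ranking[:5]:
    print(f"feature {idx:2d}  importance {rf.feature_importances_[idx]:.3f}")
```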

  2. Spectral-Spatial Hyperspectral Image Classification Based on KNN

    Science.gov (United States)

    Huang, Kunshan; Li, Shutao; Kang, Xudong; Fang, Leyuan

    2016-12-01

    Fusion of spectral and spatial information is an effective way of improving the accuracy of hyperspectral image classification. In this paper, a novel spectral-spatial hyperspectral image classification method based on K nearest neighbor (KNN) is proposed, which consists of the following steps. First, the support vector machine is adopted to obtain initial classification probability maps which reflect the probability that each hyperspectral pixel belongs to different classes. Then, the obtained pixel-wise probability maps are refined with the proposed KNN filtering algorithm, which is based on matching and averaging nonlocal neighborhoods. The proposed method does not need sophisticated segmentation and optimization strategies while still being able to make full use of the nonlocal principle of real images by using KNN, and thus provides competitive classification with fast computation. Experiments performed on two real hyperspectral data sets show that the classification results obtained by the proposed method are comparable to several recently proposed hyperspectral image classification methods.

  3. Enhanced material classification using turbulence-degraded polarimetric imagery.

    Science.gov (United States)

    Hyde, Milo W; Schmidt, Jason D; Havrilla, Michael J; Cain, Stephen C

    2010-11-01

    An enhanced material-classification algorithm using turbulence-degraded polarimetric imagery is presented. The proposed technique improves upon an existing dielectric/metal material-classification algorithm by providing a more detailed object classification. This is accomplished by redesigning the degree-of-linear-polarization priors in the blind-deconvolution algorithm to include two subclasses of metals--an aluminum group classification (includes aluminum, copper, gold, and silver) and an iron group classification (includes iron, titanium, nickel, and chromium). This new classification provides functional information about the object that is not provided by existing dielectric/metal material classifiers. A discussion of the design of these new degree-of-linear-polarization priors is provided. Experimental results of two painted metal samples are also provided to verify the algorithm's accuracy.

  4. Quality-Oriented Classification of Aircraft Material Based on SVM

    Directory of Open Access Journals (Sweden)

    Hongxia Cai

    2014-01-01

    Full Text Available Existing material classification schemes are intended to improve inventory management. However, different materials have different quality-related attributes, especially in the aircraft industry. In order to reduce cost without sacrificing quality, we propose a quality-oriented material classification system considering the material quality character, quality cost, and quality influence. The Analytic Hierarchy Process helps to make feature selection and classification decisions. We use the improved Kraljic Portfolio Matrix to establish a three-dimensional classification model. The aircraft materials can be divided into eight types, including general, key, risk, and leveraged types. Aiming to improve the classification accuracy for various materials, the Support Vector Machine algorithm is introduced. Finally, we compare the SVM and a BP neural network in the application. The results show that the SVM algorithm is more efficient and accurate and that the quality-oriented material classification is valuable.

  5. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a “locality representation” method because it uses only the training samples of each class to represent the testing sample and it cannot embody the effectiveness of the “globality representation.” On the contrary, the CRC method cannot enjoy the locality benefit of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
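
    A minimal numpy sketch of the two residual rules being integrated is given below: class-wise least squares for the LRC-style "locality" residual and a ridge-regularized representation over all training samples for the CRC-style "globality" residual. Summing the two residuals is only an illustrative fusion rule, not necessarily the one used in the paper, and the data are synthetic.

```python
# Minimal sketch of representation-based classification residuals (illustrative only;
# the paper's exact integration rule may differ).
import numpy as np

def lrc_residuals(X_train, y_train, x):
    """LRC-style: represent x with each class's training samples separately."""
    res = {}
    for c in np.unique(y_train):
        Dc = X_train[y_train == c].T                     # dictionary of class c
        coef, *_ = np.linalg.lstsq(Dc, x, rcond=None)
        res[c] = np.linalg.norm(x - Dc @ coef)
    return res

def crc_residuals(X_train, y_train, x, lam=0.01):
    """CRC-style: represent x with ALL training samples (ridge-regularized)."""
    D = X_train.T
    coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    res = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        res[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return res

rng = np.random.default_rng(1)
X_train = rng.normal(size=(60, 30))            # 60 training faces, 30-dim features
y_train = np.repeat(np.arange(6), 10)          # 6 classes, 10 samples each
x = X_train[3] + 0.1 * rng.normal(size=30)     # a noisy query drawn from class 0

lrc, crc = lrc_residuals(X_train, y_train, x), crc_residuals(X_train, y_train, x)
combined = {c: lrc[c] + crc[c] for c in lrc}   # one simple way to integrate both
print(min(combined, key=combined.get))         # predicted class
```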

  6. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    Science.gov (United States)

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representations from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  8. Reticence, Accuracy and Efficacy

    Science.gov (United States)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to underestimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  9. Compensatory neurofuzzy model for discrete data classification in biomedical

    Science.gov (United States)

    Ceylan, Rahime

    2015-03-01

    Biomedical data fall into two main categories: signals and discrete data. Accordingly, studies in this area address either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for classifying ECG, EMG, or EEG signals. Likewise, the literature contains many models for classifying discrete data, such as the values obtained from blood analysis or biopsy during medical procedures. No single algorithm achieves a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in biomedical pattern recognition. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The classifier was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experiments show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate on the breast cancer dataset and a 69.08% accuracy rate on the diabetes dataset with only 10 iterations.

  10. Extreme learning machine and adaptive sparse representation for image classification.

    Science.gov (United States)

    Cao, Jiuwen; Zhang, Kai; Luo, Minxia; Yin, Chun; Lai, Xiaoping

    2016-09-01

    Recent research has shown the speed advantage of extreme learning machine (ELM) and the accuracy advantage of sparse representation classification (SRC) in the area of image classification. Those two methods, however, have their respective drawbacks, e.g., in general, ELM is known to be less robust to noise while SRC is known to be time-consuming. Consequently, ELM and SRC complement each other in computational complexity and classification accuracy. In order to unify such mutual complementarity and thus further enhance the classification performance, we propose an efficient hybrid classifier to exploit the advantages of ELM and SRC in this paper. More precisely, the proposed classifier consists of two stages: first, an ELM network is trained by supervised learning. Second, a discriminative criterion about the reliability of the obtained ELM output is adopted to decide whether the query image can be correctly classified or not. If the output is reliable, the classification will be performed by ELM; otherwise the query image will be fed to SRC. Meanwhile, in the stage of SRC, a sub-dictionary that is adaptive to the query image instead of the entire dictionary is extracted via the ELM output. The computational burden of SRC thus can be reduced. Extensive experiments on handwritten digit classification, landmark recognition and face recognition demonstrate that the proposed hybrid classifier outperforms ELM and SRC in classification accuracy with outstanding computational efficiency. Copyright © 2016 Elsevier Ltd. All rights reserved.
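
    The hybrid idea can be sketched as follows, with a random-feature ELM, a margin test standing in for the paper's reliability criterion, and a lasso-based sparse coder over a sub-dictionary restricted to the top ELM classes; the threshold, sub-dictionary rule, and data are all illustrative assumptions rather than the authors' design.

```python
# Hypothetical sketch of the ELM-then-SRC idea: if the ELM output looks reliable
# (large margin between the top two scores), keep its label; otherwise fall back to a
# sparse-representation classifier over a sub-dictionary picked from the top ELM classes.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_class, d = 5, 20
X_train = rng.normal(size=(100, d))
y_train = np.repeat(np.arange(n_class), 20)
Y = np.eye(n_class)[y_train]                          # one-hot targets

# --- ELM: random hidden layer + ridge-regularized linear readout ---
L = 50
W, b = rng.normal(size=(d, L)), rng.normal(size=L)
H = np.tanh(X_train @ W + b)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(L), H.T @ Y)

def classify(x, margin=0.3, n_sub=2):
    scores = np.tanh(x @ W + b) @ beta
    top = np.argsort(scores)[::-1]
    if scores[top[0]] - scores[top[1]] >= margin:     # reliable: trust the ELM
        return top[0]
    # unreliable: sparse coding over a sub-dictionary of the top ELM classes
    keep = np.isin(y_train, top[:n_sub])
    D = X_train[keep].T
    coef = Lasso(alpha=0.05, max_iter=5000).fit(D, x).coef_
    residuals = {c: np.linalg.norm(x - D[:, y_train[keep] == c] @ coef[y_train[keep] == c])
                 for c in top[:n_sub]}
    return min(residuals, key=residuals.get)

print(classify(X_train[5] + 0.1 * rng.normal(size=d)))
```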

  11. Superior oblique surgery: when and how?

    Directory of Open Access Journals (Sweden)

    Taylan Şekeroğlu H

    2013-08-01

    Full Text Available Hande Taylan Şekeroğlu,1 Ali Sefik Sanac,1 Umut Arslan,2 Emin Cumhur Sener1 (1Department of Ophthalmology, 2Department of Biostatistics, Hacettepe University Faculty of Medicine, Ankara, Turkey). Background: The purpose of this paper is to review different types of superior oblique muscle surgeries, to describe the main areas in clinical practice where superior oblique surgery is required or preferred, and to discuss the preferred types of superior oblique surgery with respect to their clinical outcomes. Methods: A consecutive nonrandomized retrospective series of patients who had undergone superior oblique muscle surgery as a single procedure were enrolled in the study. The diagnosis, clinical features, preoperative and postoperative vertical deviations in primary position, type of surgery, complications, and clinical outcomes were reviewed. The primary outcome measures were the type of strabismus and the type of superior oblique muscle surgery. The secondary outcome measure was the results of the surgeries. Results: The review identified 40 (20 male, 20 female) patients with a median age of 6 (2–45) years. Nineteen patients (47.5%) had Brown syndrome, eleven (27.5%) had fourth nerve palsy, and ten (25.0%) had horizontal deviations with A pattern. The most commonly performed surgery was superior oblique tenotomy in 29 (72.5%) patients, followed by superior oblique tuck in eleven (27.5%) patients. The amount of vertical deviation in the fourth nerve palsy and Brown syndrome groups (P = 0.01 for both) and the amount of A pattern in the A pattern group were significantly reduced postoperatively (P = 0.02). Conclusion: Surgery for the superior oblique muscle requires experience and appropriate preoperative evaluation in view of its challenging nature. The main indications are Brown syndrome, fourth nerve palsy, and A pattern deviations. Superior oblique surgery may be effective in terms of pattern collapse and correction of vertical deviations in primary position.

  12. Fast L1-based sparse representation of EEG for motor imagery signal classification.

    Science.gov (United States)

    Younghak Shin; Heung-No Lee; Balasingham, Ilangko

    2016-08-01

    Improvement of classification performance is one of the key challenges in electroencephalogram (EEG) based motor imagery brain-computer interface (BCI). Recently, the sparse representation based classification (SRC) method has been shown to provide satisfactory classification accuracy in motor imagery classification. In this paper, we aim to evaluate the performance of the SRC method in terms of not only its classification accuracy but also its computation time. For this purpose, we investigate the performance of recently developed fast L1 minimization methods for their use in SRC, such as homotopy and the fast iterative soft-thresholding algorithm (FISTA). From experimental analysis, we note that the SRC method with the fast L1 minimization algorithms provides robust classification performance compared to the support vector machine (SVM), in both time and accuracy.
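
    For reference, a compact FISTA implementation for the lasso problem min_x 0.5||Ax − b||² + λ||x||₁ that underlies SRC is sketched below on a synthetic dictionary; the step size comes from the spectral norm of A, and all parameters are illustrative.

```python
# A compact FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# the kind of L1 solver used inside SRC. Synthetic data; parameters are illustrative.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(64, 256))               # dictionary of training feature vectors
x_true = np.zeros(256); x_true[[10, 50, 200]] = [1.0, -0.5, 2.0]
b = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = fista(A, b)
print(np.argsort(np.abs(x_hat))[-3:])        # indices of the largest coefficients
```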

  13. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

    Full Text Available Electroencephalogram (EEG) recordings provide an important channel for brain-computer communication, but their classification accuracy is severely limited by unforeseeable signal variations related to artifacts. In this paper, we propose a classification method for time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject’s mental tasks by extracting useful information from the highly multivariate, non-invasive recordings of brain activity. To test the improvement in EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). Finally, the results showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate was 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.
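
    A minimal backpropagation-MLP baseline in the same spirit, using scikit-learn on synthetic stand-ins for P300 feature vectors (not the authors' network or data), might look like this:

```python
# Minimal sketch: a backpropagation MLP classifying synthetic "P300 vs. non-P300"
# feature vectors. Real EEG epochs would replace the random features; settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 64))                 # 600 epochs, 64 time/channel features
y = rng.integers(0, 2, size=600)               # 1 = target (P300), 0 = non-target
X[y == 1, :8] += 1.0                           # give targets a weak signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("test accuracy:", mlp.score(X_te, y_te))
```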

  14. Comparison of observer variability and accuracy of different criteria for lung scan interpretation.

    Science.gov (United States)

    Hagen, Petronella J; Hartmann, Ieneke J C; Hoekstra, Otto S; Stokkel, Marcel P M; Postmus, Pieter E; Prins, Martin H

    2003-05-01

    Different criteria have been advocated for the interpretation of ventilation/perfusion (V/Q) lung scans in patients with suspected pulmonary embolism (PE). Besides these predefined criteria, many physicians use an integration of the different sets of criteria and their own experience-the so-called Gestalt interpretation. The purpose of this study was to evaluate interobserver variability and accuracy of 3 sets of criteria: the Hull and PIOPED (Prospective Investigation of Pulmonary Embolism Diagnosis) criteria and the Gestalt interpretation. Two experienced observers interpreted V/Q scans of all 328 patients according to the 3 different schemes. The diagnostic classification obtained for the different sets of criteria was analyzed against the presence or absence of PE. The interobserver variabilities as assessed by the kappa statistics of the PIOPED and Hull criteria and for the Gestalt interpretation were 0.70 (95% confidence interval [CI], 0.64-0.76), 0.79 (95% CI, 0.73-0.85), and 0.65 (95% CI, 0.58-0.72), respectively. The differences in kappa values between the Hull and PIOPED criteria and between the Hull criteria and Gestalt interpretation were statistically significant (P PIOPED criteria was low probability. For 21 patients (12 with PE), the scans were intermediate probability according to the PIOPED criteria, whereas the result with the Hull criteria was high probability. Analysis of receiver-operating-characteristic curves yielded a comparable area under the curve for all sets of criteria (0.87-0.90). The Hull, PIOPED, and Gestalt interpretation of V/Q lung scans all have a good accuracy and interobserver variability. However, the reproducibility of the Hull criteria is superior in comparison with that of the other sets of criteria.

  15. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representative SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the receiver operating characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
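
    A toy PyTorch sketch of a bidirectional recurrent classifier over light-curve sequences is shown below; it is a simplified stand-in for the architecture in the linked repository, with made-up input dimensions (time plus four filter fluxes) and random data.

```python
# Simplified bidirectional LSTM classifier over light-curve sequences (illustrative only).
import torch
import torch.nn as nn

class LightCurveRNN(nn.Module):
    def __init__(self, n_features=5, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.rnn(x)               # (batch, time, 2*hidden)
        return self.head(out[:, -1, :])    # class logits from the final time step

x = torch.randn(8, 40, 5)                  # 8 light curves, 40 epochs, time + 4 filter fluxes
print(LightCurveRNN()(x).shape)            # torch.Size([8, 2])
```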

  16. A novel approach for three dimensional dendrite spine segmentation and classification

    Science.gov (United States)

    He, Tiancheng; Xue, Zhong; Wong, Stephen T. C.

    2012-02-01

    Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying the biochemical and genetic pathways by examining the morphological changes of the dendritic spines at the intracellular level. Automatic dendritic spine detection from high resolution microscopic images is an important step for such morphological studies. In this paper, a novel approach to automated dendritic spine detection is proposed based on a nonlinear degeneration model. Dendritic spines are recognized as small objects with variable shapes attached to dendritic backbones. We explore the problem of dendritic spine detection from a different angle, i.e., the nonlinear degeneration equation (NDE) is utilized to enhance the morphological differences between the dendrite and spines. Using NDE, we simulated degeneration for dendritic spine detection. Based on the morphological features, the shrinking rate on dendrite pixels is different from that on spines, so that spines can be detected and segmented after degeneration simulation. Then, to separate spines into different types, Gaussian curvatures were employed, and the biomimetic pattern recognition theory was applied for spine classification. In the experiments, we compared quantitatively the spine detection accuracy with previous methods, and the results showed the accuracy and superiority of our methods.

  17. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  18. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1989-09-01

    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  19. Fast Wavelet-Based Visual Classification

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification.

  20. Automated Classification of Seedlings Using Computer Vision

    DEFF Research Database (Denmark)

    Dyrmann, Mads; Christiansen, Peter

    on seven different species. The segmentation process finds plant elements through a colour segmentation method combining excessive green and excessive red and the Plant Stem Emerging Point algorithm to separate leaves from plants. These plant elements are then described by 50 different feature descriptors...... Fourier descriptor, and the proposed feature, that measures the distance between the adjacent Fourier approximations of contours. A subset of the features are selected through different selection methods in order to improve the classification accuracy for three different classifiers; the Multivariate...... Gaussian classifier, the k-Nearest Neighbour classifier and the Support Vector Machine classifier. Finally, classifier fusion is performed by using Bayes Belief Integration to combine the classification for the whole plant with the individual classifications of the leaves of the plant in order to identify...

  1. Unsupervised classification of remote multispectral sensing data

    Science.gov (United States)

    Su, M. Y.

    1972-01-01

    The new unsupervised classification technique for classifying multispectral remote sensing data, which can come either from a multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering, which is a one-pass sequential variance analysis, and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps produced by the unsupervised technique and the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
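
    The composite scheme can be illustrated with a simple stand-in: a one-pass distance-threshold clustering supplies initial centres, which then seed an iterative K-means refinement. The one-pass rule below is a simplification of the sequential variance analysis described in the abstract, and the data are synthetic.

```python
# Sketch of the two-part idea: a cheap one-pass clustering produces initial centres,
# which then seed an iterative K-means refinement.
import numpy as np
from sklearn.cluster import KMeans

def one_pass_clusters(X, radius=3.0):
    """One-pass clustering: absorb a sample into the nearest centre if it is close
    enough, otherwise start a new cluster. The radius governs the cluster count."""
    centers, counts = [X[0]], [1]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] < radius:
            counts[j] += 1
            centers[j] = centers[j] + (x - centers[j]) / counts[j]  # running mean
        else:
            centers.append(x.copy())
            counts.append(1)
    return np.array(centers)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(loc=m, size=(100, 4)) for m in (0.0, 5.0, 10.0)])

init = one_pass_clusters(X)
km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(X)
print(len(init), "initial clusters; refined inertia:", round(km.inertia_, 1))
```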

  2. Graduates employment classification using data mining approach

    Science.gov (United States)

    Aziz, Mohd Tajul Rizal Ab; Yusof, Yuhanis

    2016-08-01

    Data mining is a platform for extracting hidden knowledge from a collection of data. This study investigates a suitable classification model for classifying graduate employment at one of the MARA Professional Colleges (KPM) in Malaysia. The aim is to classify graduates as employed, unemployed, or pursuing further study. Five data mining algorithms offered in WEKA were used: Naïve Bayes, logistic regression, multilayer perceptron, k-nearest neighbor, and decision tree J48. Based on the results, logistic regression produces the highest classification accuracy, at 92.5%. This result was obtained using 80% of the data for training and 20% for testing. The resulting classification model will benefit the management of the college as it provides insight into the quality of the graduates they produce and into how their curriculum can be improved to cater to the needs of industry.
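
    The reported setup (80/20 split, logistic regression) translates directly to scikit-learn; the sketch below uses synthetic records with hypothetical features (CGPA, industrial training, age) rather than the KPM data.

```python
# Sketch of the reported setup (80/20 split, logistic regression) with scikit-learn
# instead of WEKA, on synthetic graduate records; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
X = np.column_stack([
    rng.uniform(2.0, 4.0, n),        # CGPA
    rng.integers(0, 2, n),           # industrial training completed
    rng.integers(18, 30, n),         # age
])
y = rng.choice(["employed", "unemployed", "further study"], size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```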

  3. Semantic Annotation to Support Automatic Taxonomy Classification

    DEFF Research Database (Denmark)

    Kim, Sanghee; Ahmed, Saeema; Wallace, Ken

    2006-01-01

    , the annotations identify which parts of a text are more important for understanding its contents. The extraction of salient sentences is a major issue in text summarisation. Commonly used methods are based on statistical analysis, but for subject-matter type texts, linguistically motivated natural language...... processing techniques, like semantic annotations, are preferred. An experiment to test the method using 140 documents collected from industry demonstrated that classification accuracy can be improved by up to 16%....

  4. Whisker-related afferents in superior colliculus.

    Science.gov (United States)

    Castro-Alamancos, Manuel A; Favero, Morgana

    2016-05-01

    Rodents use their whiskers to explore the environment, and the superior colliculus is part of the neural circuits that process this sensorimotor information. Cells in the intermediate layers of the superior colliculus integrate trigeminotectal afferents from trigeminal complex and corticotectal afferents from barrel cortex. Using histological methods in mice, we found that trigeminotectal and corticotectal synapses overlap somewhat as they innervate the lower and upper portions of the intermediate granular layer, respectively. Using electrophysiological recordings and optogenetics in anesthetized mice in vivo, we showed that, similar to rats, whisker deflections produce two successive responses that are driven by trigeminotectal and corticotectal afferents. We then employed in vivo and slice experiments to characterize the response properties of these afferents. In vivo, corticotectal responses triggered by electrical stimulation of the barrel cortex evoke activity in the superior colliculus that increases with stimulus intensity and depresses with increasing frequency. In slices from adult mice, optogenetic activation of channelrhodopsin-expressing trigeminotectal and corticotectal fibers revealed that cells in the intermediate layers receive more efficacious trigeminotectal, than corticotectal, synaptic inputs. Moreover, the efficacy of trigeminotectal inputs depresses more strongly with increasing frequency than that of corticotectal inputs. The intermediate layers of superior colliculus appear to be tuned to process strong but infrequent trigeminal inputs and weak but more persistent cortical inputs, which explains features of sensory responsiveness, such as the robust rapid sensory adaptation of whisker responses in the superior colliculus. Copyright © 2016 the American Physiological Society.

  5. Hyperspectral remote sensing image classification based on decision level fusion

    Institute of Scientific and Technical Information of China (English)

    Peijun Du; Wei Zhang; Junshi Xia

    2011-01-01

    To apply decision level fusion to hyperspectral remote sensing (HRS) image classification, three decision level fusion strategies are experimented on and compared, namely, linear consensus algorithm, improved evidence theory, and the proposed support vector machine (SVM) combiner. To evaluate the effects of the input features on classification performance, four schemes are used to organize input features for member classifiers. In the experiment, by using the operational modular imaging spectrometer (OMIS) II HRS image, the decision level fusion is shown as an effective way for improving the classification accuracy of the HRS image, and the proposed SVM combiner is especially suitable for decision level fusion. The results also indicate that the optimization of input features can improve the classification performance.

  6. Classification of THz pulse signals using two-dimensional cross-correlation feature extraction and non-linear classifiers.

    Science.gov (United States)

    Siuly; Yin, Xiaoxia; Hadjiloucas, Sillas; Zhang, Yanchun

    2016-04-01

    This work provides a performance comparison of four different machine learning classifiers: the multinomial logistic regression with ridge estimators (MLR) classifier, k-nearest neighbours (KNN), support vector machine (SVM) and naïve Bayes (NB), as applied to terahertz (THz) transient time domain sequences associated with pixelated images of different powder samples. Although the six substances considered have similar optical properties, their complex insertion losses in the THz part of the spectrum differ significantly because of differences in both their frequency-dependent THz extinction coefficients and their refractive indices and scattering properties. As scattering can be unquantifiable in many spectroscopic experiments, classification based solely on differences in complex insertion loss can be inconclusive. The problem is addressed using two-dimensional (2-D) cross-correlations between background and sample interferograms; these ensure good noise suppression of the datasets and provide a range of statistical features that are subsequently used as inputs to the above classifiers. A cross-validation procedure is adopted to assess the performance of the classifiers. First, measurements from samples with thicknesses of 2 mm were classified, then samples with thicknesses of 4 mm, and then 3 mm, and the success rate and consistency of each classifier were recorded. In addition, mixtures with thicknesses of 2 and 4 mm, as well as mixtures of 2, 3 and 4 mm, were presented simultaneously to all classifiers. This approach provided further cross-validation of the classification consistency of each algorithm. The results confirm the superiority in classification accuracy and robustness of the MLR (least accuracy 88.24%) and KNN (least accuracy 90.19%) algorithms, which consistently outperformed the SVM (least accuracy 74.51%) and NB (least accuracy 56.86%) classifiers for the same number of feature vectors across all studies.
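
    The pipeline of 2-D cross-correlation features followed by classifier comparison under cross-validation can be sketched with SciPy and scikit-learn as below; the patches and feature statistics are synthetic stand-ins for the THz interferograms.

```python
# Illustrative pipeline: 2-D cross-correlation of background vs. sample patches,
# simple statistics of the correlation surface as features, then cross-validated
# comparison of several classifiers. Data are synthetic stand-ins for THz pixels.
import numpy as np
from scipy.signal import correlate2d
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(7)

def features(background, sample):
    c = correlate2d(sample, background, mode="same")
    return [c.mean(), c.std(), c.max(), np.abs(c).sum()]

X, y = [], []
for label in range(3):                          # 3 "substances"
    for _ in range(30):
        bg = rng.normal(size=(16, 16))
        smp = bg * (0.5 + 0.2 * label) + 0.1 * rng.normal(size=(16, 16))
        X.append(features(bg, smp)); y.append(label)
X, y = np.array(X), np.array(y)

for name, clf in [("MLR", LogisticRegression(max_iter=1000)),
                  ("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("NB", GaussianNB())]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```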

  7. Scalable active learning for multiclass image classification.

    Science.gov (United States)

    Joshi, Ajay J; Porikli, Fatih; Papanikolopoulos, Nikolaos P

    2012-11-01

    Machine learning techniques for computer vision applications like object recognition, scene classification, etc., require a large number of training samples for satisfactory performance. Especially when classification is to be performed over many categories, providing enough training samples for each category is infeasible. This paper describes new ideas in multiclass active learning to deal with the training bottleneck, making it easier to train large multiclass image classification systems. First, we propose a new interaction modality for training which requires only yes-no type binary feedback instead of a precise category label. The modality is especially powerful in the presence of hundreds of categories. For the proposed modality, we develop a Value-of-Information (VOI) algorithm that chooses informative queries while also considering user annotation cost. Second, we propose an active selection measure that works with many categories and is extremely fast to compute. This measure is employed to perform a fast seed search before computing VOI, resulting in an algorithm that scales linearly with dataset size. Third, we use locality sensitive hashing to provide a very fast approximation to active learning, which gives sublinear time scaling, allowing application to very large datasets. The approximation provides up to two orders of magnitude speedups with little loss in accuracy. Thorough empirical evaluation of classification accuracy, noise sensitivity, imbalanced data, and computational performance on a diverse set of image datasets demonstrates the strengths of the proposed algorithms.
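
    For orientation, a far simpler pool-based uncertainty-sampling loop than the VOI and hashing scheme described above is sketched below; it only illustrates the basic active-learning idea of querying the most informative unlabeled samples.

```python
# A much simpler active-learning loop (pool-based uncertainty sampling) than the
# VOI/hashing scheme in the paper, included only to illustrate the basic idea.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_classes=4,
                           n_informative=10, random_state=0)
labeled = list(range(20))                       # start with 20 labeled samples
pool = [i for i in range(len(y)) if i not in labeled]

for _ in range(10):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    sorted_p = np.sort(proba, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]  # small margin = uncertain prediction
    query = pool[int(np.argmin(margins))]
    labeled.append(query)                        # "ask the user" for its label
    pool.remove(query)

print("accuracy after querying:", round(clf.score(X, y), 3))
```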

  8. Superior-subordinate relations as organizational processes

    DEFF Research Database (Denmark)

    Asmuss, Birte; Aggerholm, Helle Kryger; Oshima, Sae

    Since the emergence of the practice turn in social sciences (Golsorkhi et al. 2010), studies have shown a number of institutionally relevant aspects as achievements across time and by means of various resources (human and non-human) (Taylor & van Every 2000, Cooren et al. 2006). Such a process view...... on organizational practices relates closely to an increased focus on communication as being constitutive of the organization in general and the superior-subordinate relationship in specific. The current study aims to contribute to this line of research by investigating micro-practices involved in establishing...... superior-subordinate relations in a specific institutionalized setting: performance appraisal interviews (PAIs). While one main task of PAIs is to manage and integrate organizational and employee performance (Fletcher, 2001:473), PAIs are also organizational practices where superior-subordinate relations...

  9. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  10. Comparing ecoregional classifications for natural areas management in the Klamath Region, USA

    Science.gov (United States)

    Sarr, Daniel A.; Duff, Andrew; Dinger, Eric C.; Shafer, Sarah L.; Wing, Michael; Seavy, Nathaniel E.; Alexander, John D.

    2015-01-01

    We compared three existing ecoregional classification schemes (Bailey, Omernik, and World Wildlife Fund) with two derived schemes (Omernik Revised and Climate Zones) to explore their effectiveness in explaining species distributions and to better understand natural resource geography in the Klamath Region, USA. We analyzed presence/absence data derived from digital distribution maps for trees, amphibians, large mammals, small mammals, migrant birds, and resident birds using three statistical analyses of classification accuracy (Analysis of Similarity, Canonical Analysis of Principal Coordinates, and Classification Strength). The classifications were roughly comparable in classification accuracy, with Omernik Revised showing the best overall performance. Trees showed the strongest fidelity to the classifications, and large mammals showed the weakest fidelity. We discuss the implications for regional biogeography and describe how intermediate resolution ecoregional classifications may be appropriate for use as natural areas management domains.

  11. [Hard and soft classification method of multi-spectral remote sensing image based on adaptive thresholds].

    Science.gov (United States)

    Hu, Tan-Gao; Xu, Jun-Feng; Zhang, Deng-Rong; Wang, Jie; Zhang, Yu-Zhou

    2013-04-01

    Hard and soft classification techniques are the conventional methods of image classification for satellite data, but each has its own advantages and drawbacks. In order to obtain accurate classification results, we took advantage of both traditional hard classification methods (HCM) and soft classification models (SCM), and developed a new method called the hard and soft classification model (HSCM) based on adaptive threshold calculation. The authors tested the new method in land cover mapping applications. According to the confusion matrix results, the overall accuracies of HCM, SCM, and HSCM are 71.06%, 67.86%, and 71.10%, respectively, and the kappa coefficients are 60.03%, 56.12%, and 60.07%, respectively. Therefore, the HSCM is better than HCM and SCM. Experimental results proved that the new method can clearly improve land cover and land use classification accuracy.
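
    The evaluation quantities quoted above (confusion matrix, overall accuracy, kappa coefficient) can be computed as follows with scikit-learn on toy label arrays; the labels are synthetic, not the study's data.

```python
# Evaluation sketch: overall accuracy and kappa coefficient from a confusion matrix,
# the quantities used above to compare HCM, SCM, and HSCM.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

rng = np.random.default_rng(8)
reference = rng.integers(0, 4, size=500)                 # ground-truth land-cover classes
predicted = np.where(rng.random(500) < 0.75, reference,  # ~75% agreement with reference
                     rng.integers(0, 4, size=500))

print(confusion_matrix(reference, predicted))
print("overall accuracy:", round(accuracy_score(reference, predicted), 4))
print("kappa coefficient:", round(cohen_kappa_score(reference, predicted), 4))
```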

  12. Intelligent Hybrid Cluster Based Classification Algorithm for Social Network Analysis

    Directory of Open Access Journals (Sweden)

    S. Muthurajkumar

    2014-05-01

    Full Text Available In this paper, we propose a hybrid clustering-based classification algorithm, based on a mean approach, to effectively classify and mine the ordered sequences (paths) from weblog data in order to perform social network analysis. In the system proposed in this work for social pattern analysis, the sequences of human activities are typically analyzed by switching behaviors, which are likely to produce overlapping clusters. A robust modified boosting algorithm is proposed for the hybrid clustering-based classification to cluster the data. This work helps to connect the aggregated features from the network data with traditional indices used in social network analysis. Experimental results show that the proposed algorithm improves the decision results from data clustering when combined with the proposed classification algorithm, and hence provides better classification accuracy when tested on the weblog dataset. In addition, the algorithm improves predictive performance, especially for multiclass datasets, which increases the accuracy.

  13. Lake Superior Aquatic Invasive Species Complete Prevention Plan

    Science.gov (United States)

    The Lake Superior Aquatic Invasive Species Complete Prevention Plan is an expression of the best professional judgment of the members of the Lake Superior Task Force as to what is necessary to protect Lake Superior from new aquatic invasive species.

  14. Determining Geometric Accuracy in Turning

    Institute of Scientific and Technical Information of China (English)

    Kwong; Chi; Kit; A; Geddam

    2002-01-01

    Mechanical components machined to high levels of accuracy are vital to achieve various functional requirements in engineering products. In particular, the geometric accuracy of turned components plays an important role in determining the form, fit and function of mechanical assembly requirements. The geometric accuracy requirements of turned components are usually specified in terms of roundness, straightness, cylindricity and concentricity. In practice, the accuracy specifications achievable are infl...

  15. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  16. Endovascular treatment of superior vena cava syndrome

    DEFF Research Database (Denmark)

    Duvnjak, Stevo; Andersen, Poul Erik

    2011-01-01

    Abstract AIM: The aim of this study was to report our experience with palliative stent treatment of superior vena cava syndrome. METHODS: Between January 2008 and December 2009, 30 patients (mean age 60.7 years) were treated with stents because of stenosed superior vena cava. All patients presented...... there was an immediate clinical improvement with considerable reduction in the edema of upper extremities and head. There was, however, continuous dyspnea in five patients (17%) and two patients (7%) had persistent visible collateral venous circulations on the upper chest. There were no stent associated complications...

  17. Information gathering for CLP classification

    OpenAIRE

    Ida Marcello; Felice Giordano; Francesca Marina Costamagna

    2011-01-01

    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level and a list of harmonised classifications is included in the Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list it must be self-classified, based on available information, according to the requireme...

  18. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Directory of Open Access Journals (Sweden)

    Ufuk Çelik

    2015-01-01

    Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. With this main objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the biological immune system mechanism, which involves significant and distinct capabilities. These algorithms simulate capabilities of the immune system such as discrimination, learning, and memorization in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one algorithm, which yielded 71% accuracy.

  19. Modeling uncertainty in classification design of a computer-aided detection system

    Science.gov (United States)

    Hosseini, Rahil; Dehmeshki, Jamshid; Barman, Sarah; Mazinani, Mahdi; Qanadli, Salah

    2010-03-01

    A computerized image analysis technology suffers from imperfection, imprecision and vagueness of the input data and from their propagation through all individual components of the technology, including image enhancement, segmentation and pattern recognition. Furthermore, a Computerized Medical Image Analysis System (CMIAS) such as computer-aided detection (CAD) technology deals with another source of uncertainty that is inherent in the image-based practice of medicine. While several technology-oriented studies on developing CAD applications have been reported, no attempt has been made to address, model and integrate these types of uncertainty in the design of the system components, even though uncertainty issues directly affect the system's performance and accuracy. In this paper, the main uncertainty paradigms associated with CAD technologies are addressed. The influence of vagueness and imprecision in the classification stage of the CAD, as a second reader, on the validity of ROC analysis results is defined. In order to tackle the problem of uncertainty in the classification design of the CAD, two fuzzy methods are applied and evaluated for a lung nodule CAD application. A type-1 fuzzy logic system (T1FLS) and an extension of it, an interval type-2 fuzzy logic system (IT2FLS), are employed as methods with high potential for managing uncertainty issues. The novelty of the proposed classification methods is to address and handle all sources of uncertainty associated with a CAD system. The results reveal that IT2FLS is superior to T1FLS for tackling all sources of uncertainty and, significantly, the problem of inter- and intra-observer variability.

  20. Automatic classification of protein structures using physicochemical parameters.

    Science.gov (United States)

    Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam

    2014-09-01

    Protein classification is the first step to functional annotation; SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion in the number of three dimensional (3D) protein structures generated versus their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for both physicochemical parameters and spectrophores based machine learning algorithms. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90-96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
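
    The core pipeline, information-gain-style feature selection followed by a machine learning classifier, can be sketched with scikit-learn as below, using mutual information as the selection criterion and synthetic values in place of the sequence-derived physicochemical parameters.

```python
# Sketch of the pipeline: select features by mutual information (an information-gain
# analogue) and classify with a random forest. Synthetic values stand in for the
# physicochemical parameters derived from protein sequences.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 40))                 # 300 proteins, 40 physicochemical features
y = rng.integers(0, 5, size=300)               # 5 superfamilies
X[:, :5] += y[:, None] * 0.8                   # make the first 5 features informative

model = make_pipeline(SelectKBest(mutual_info_classif, k=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))
print("CV accuracy:", round(cross_val_score(model, X, y, cv=5).mean(), 3))
```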

  1. Magnetic resonance imaging evaluation of meniscoid superior labrum: normal variant or superior labral tear*

    Science.gov (United States)

    Simão, Marcelo Novelino; Vinson, Emily N.; Spritzer, Charles E.

    2016-01-01

    Objective The objective of this study was to determine the incidence of a "meniscoid" superior labrum. Materials and Methods This was a retrospective analysis of 582 magnetic resonance imaging examinations of shoulders. Of those 582 examinations, 110 were excluded, for a variety of reasons, and the final analysis therefore included 472 cases. Consensus readings were performed by three musculoskeletal radiologists using specific criteria to diagnose meniscoid labra. Results A meniscoid superior labrum was identified in 48 (10.2%) of the 472 cases evaluated. Arthroscopic proof was available in 21 cases (43.8%). In 10 (47.6%) of those 21 cases, the operative report did not mention a superior labral tear, thus suggesting the presence of a meniscoid labrum. In only one of those cases were there specific comments about a mobile superior labrum (i.e., meniscoid labrum). In the remaining 11 (52.4%), surgical correlation demonstrated superior labral tears. Conclusion A meniscoid superior labrum is not an infrequent finding. Depending upon assumptions and the requirement of surgical proof, the prevalence of a meniscoid superior labrum in this study was between 2.1% (surgically proven) and 4.8% (projected). However, superior labral tears are just as common and are often confused with meniscoid labra. PMID:27777474

  2. Cirrhosis Classification Based on Texture Classification of Random Features

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2014-01-01

    Full Text Available Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them to make a specific treatment with accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameters imaging modalities. So in this paper, multisequences MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not meet the clinical needs of cirrhosis and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the hepatic, leading to different texture patterns of different stages. So, extracting texture feature is the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective way, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle and advanced stage). CCTCRF does not need strong assumptions except the sparse character of image, contains sufficient texture information, includes concise and effective process, and makes case decision with high accuracy. Experimental results also illustrate the satisfying performance and they are also compared with typical NN with GLCM.

  3. Cirrhosis classification based on texture classification of random features.

    Science.gov (United States)

    Liu, Hui; Shao, Ying; Guo, Dongmei; Zheng, Yuanjie; Zhao, Zuowei; Qiu, Tianshuang

    2014-01-01

    Accurate staging of hepatic cirrhosis is important in investigating the cause and slowing down the effects of cirrhosis. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and assist them to make a specific treatment with accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameters imaging modalities. So in this paper, multisequences MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase, are applied. However, CAD does not meet the clinical needs of cirrhosis and few researchers are concerned with it at present. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the hepatic, leading to different texture patterns of different stages. So, extracting texture feature is the primary task. Compared with typical gray level cooccurrence matrix (GLCM) features, texture classification from random features provides an effective way, and we adopt it and propose CCTCRF for triple classification (normal, early, and middle and advanced stage). CCTCRF does not need strong assumptions except the sparse character of image, contains sufficient texture information, includes concise and effective process, and makes case decision with high accuracy. Experimental results also illustrate the satisfying performance and they are also compared with typical NN with GLCM.

  4. Thematic accuracy of the National Land Cover Database (NLCD) 2001 land cover for Alaska

    Science.gov (United States)

    Selkowitz, D.J.; Stehman, S.V.

    2011-01-01

    The National Land Cover Database (NLCD) 2001 Alaska land cover classification is the first 30-m resolution land cover product available covering the entire state of Alaska. The accuracy assessment of the NLCD 2001 Alaska land cover classification employed a geographically stratified three-stage sampling design to select the reference sample of pixels. Reference land cover class labels were determined via fixed wing aircraft, as the high resolution imagery used for determining the reference land cover classification in the conterminous U.S. was not available for most of Alaska. Overall thematic accuracy for the Alaska NLCD was 76.2% (s.e. 2.8%) at Level II (12 classes evaluated) and 83.9% (s.e. 2.1%) at Level I (6 classes evaluated) when agreement was defined as a match between the map class and either the primary or alternate reference class label. When agreement was defined as a match between the map class and primary reference label only, overall accuracy was 59.4% at Level II and 69.3% at Level I. The majority of classification errors occurred at Level I of the classification hierarchy (i.e., misclassifications were generally to a different Level I class, not to a Level II class within the same Level I class). Classification accuracy was higher for more abundant land cover classes and for pixels located in the interior of homogeneous land cover patches. © 2011.

  5. Concepts of Classification and Taxonomy. Phylogenetic Classification

    CERN Document Server

    Fraix-Burnet, Didier

    2016-01-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristics is to search for relationships between the objects or species in study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical since it was intended to distinguish between eatable and toxic aliments, or kind and dangerous animals. Simple resemblance was used and has been used for centuries. Basically, until the XVIIIth...

  6. Classification of Ultra-High Resolution Orthophotos Combined with DSM Using a Dual Morphological Top Hat Profile

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2015-12-01

    Full Text Available New aerial sensors and platforms (e.g., unmanned aerial vehicles (UAVs)) are capable of providing ultra-high resolution remote sensing data (less than a 30-cm ground sampling distance (GSD)). This type of data is an important source for interpreting sub-building level objects; however, it has not yet been fully explored. The large-scale differences of urban objects, the high spectral variability and the large perspective effect make the design of descriptive features difficult. Features representing the spatial information of the objects are therefore essential for dealing with the spectral ambiguity. In this paper, we propose a dual morphology top-hat profile (DMTHP) using both morphological reconstruction and erosion with different granularities. Because of the high dimensional feature space, we propose an adaptive scale selection procedure to reduce the feature dimension according to the training samples. The DMTHP is extracted from both images and Digital Surface Models (DSM) to obtain complementary information. A random forest classifier is used to classify the features hierarchically. Quantitative experiments are performed on aerial images with 9-cm GSD and UAV images with 5-cm GSD. Under our experiments, improvements of 10% and 2% in overall accuracy are obtained in comparison with the well-known differential morphological profile (DMP) feature, and superior performance is observed over the other tested features. Large format data with 20,000 × 20,000 pixels are used in a qualitative experiment with the proposed method, which shows its promising potential. The experiments also demonstrate that the DSM information greatly enhances the classification accuracy. In the best case in our experiment, it raises the classification accuracy from 63.93% (spectral information only) to 94.48% (the proposed method).
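    The core building block of the profile described above, a top-hat by reconstruction computed at several granularities, can be sketched as follows (a simplified illustration using scikit-image; the published DMTHP additionally combines dual profiles from the image and the DSM with adaptive scale selection, which is omitted here, and the radii are illustrative):

      import numpy as np
      from skimage.morphology import disk, erosion, reconstruction

      def tophat_profile(img, radii=(3, 7, 15)):
          img = img.astype(float)
          layers = []
          for r in radii:
              marker = erosion(img, disk(r))                            # suppress bright structures smaller than r
              opened = reconstruction(marker, img, method="dilation")   # opening by reconstruction
              layers.append(img - opened)                               # bright top-hat at this granularity
          return np.stack(layers, axis=-1)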

  7. Pu`ukohola Heiau National Historic Site Vegetation Mapping Project - Field Plots, Observation and Accuracy Assessment Points

    Data.gov (United States)

    National Park Service, Department of the Interior — This metadata is for the 2008 vegetation (classification) field plots (spatial database) and 2010 accuracy assessment points (spatial database) created from the...

  8. Incrementally Exploiting Sentential Association for Email Classification

    Institute of Scientific and Technical Information of China (English)

    Li Qu; He Yu; Feng Jianlin; Feng Yucai

    2006-01-01

    A novel association-based algorithm, EmailInClass, is proposed for incremental Email classification. In view of the fact that the basic semantic unit in an Email is actually a sentence, and the words within the same sentence are typically more semantically related than words that merely appear in the same Email, EmailInClass views a sentence rather than an Email as a transaction. Extensive experiments conducted on the benchmark Enron corpus reveal that the effectiveness of EmailInClass is superior to that of non-incremental alternatives such as NaiveBayes and SAT-MOD. In addition, the classification rules generated by EmailInClass are human readable and revisable.

  9. PERFORMANCE EVALUATION OF DISTANCE MEASURES IN PROPOSED FUZZY TEXTURE MODEL FOR LAND COVER CLASSIFICATION OF REMOTELY SENSED IMAGE

    Directory of Open Access Journals (Sweden)

    S. Jenicka

    2014-04-01

    Full Text Available Land cover classification is a vital application area in the satellite image processing domain. Texture is a useful feature in land cover classification, and the classification accuracy obtained always depends on the effectiveness of the texture model, distance measure and classification algorithm used. In this work, texture features are extracted using the proposed multivariate descriptor MFTM/MVAR, a Multivariate Fuzzy Texture Model (MFTM) supplemented with Multivariate Variance (MVAR). The K-Nearest Neighbour (KNN) algorithm is used for classification due to its simplicity coupled with efficiency. Distance measures such as log likelihood, Manhattan, Chi squared, Kullback-Leibler and Bhattacharyya were used, and the experiments were conducted on IRS P6 LISS-IV data. The classified images were evaluated based on the error matrix, classification accuracy and Kappa statistics. From the experiments, it is found that the log likelihood distance with the MFTM/MVAR descriptor and the KNN classifier gives 95.29% classification accuracy.
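    To make the role of the distance measure concrete, the sketch below trains a KNN classifier with two of the measures listed above on synthetic, histogram-like features (scikit-learn accepts a user-defined metric as a Python callable; the data and parameter values are placeholders, not the LISS-IV experiment):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.metrics import accuracy_score

      def chi_squared(u, v, eps=1e-10):
          return 0.5 * np.sum((u - v) ** 2 / (u + v + eps))

      X, y = make_classification(n_samples=300, n_features=16, n_informative=8,
                                 n_classes=3, random_state=0)
      X = np.abs(X)   # chi-squared assumes non-negative, histogram-like features
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      for name, metric in [("Manhattan", "manhattan"), ("Chi squared", chi_squared)]:
          knn = KNeighborsClassifier(n_neighbors=5, metric=metric).fit(X_tr, y_tr)
          print(name, accuracy_score(y_te, knn.predict(X_te)))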

  10. 3D Land Cover Classification Based on Multispectral LIDAR Point Clouds

    Science.gov (United States)

    Zou, Xiaoliang; Zhao, Guihua; Li, Jonathan; Yang, Yuanxi; Fang, Yong

    2016-06-01

    A multispectral lidar system can emit simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the receiver of the sensor, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are combined with GNSS/IMU data in further post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne lidar sensor, the Optech Titan system is capable of collecting point cloud data from three channels: at 532 nm visible (green), at 1064 nm near infrared (NIR) and at 1550 nm intermediate infrared (IR). It has become a new data source for 3D land cover classification. This paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories by using customized classification-index features and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types. An overall accuracy of over 90% is achieved using multispectral lidar point clouds for 3D land cover classification.

  11. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end results of data mining are learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets from which features were extracted. Classification models were constructed by manipulating the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiment (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
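    A minimal scikit-learn sketch of the comparison described above (bagging and boosting wrapped around a CART base learner) is given below; the tweet features are replaced by synthetic data, and the `estimator` keyword assumes scikit-learn >= 1.2 (older versions call it `base_estimator`):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      models = {
          "CART": DecisionTreeClassifier(random_state=0),
          "bagging + CART": BaggingClassifier(estimator=DecisionTreeClassifier(random_state=0),
                                              n_estimators=50, random_state=0),
          "boosting + CART": AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                                                n_estimators=50, random_state=0),
      }
      for name, clf in models.items():
          print(name, cross_val_score(clf, X, y, cv=5).mean())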

  12. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2008-11-01

    Full Text Available Abstract: This paper presents a new vehicle classification method and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is to measure the dynamic strain caused by vehicles crossing the pavement to obtain the corresponding vehicle parameters – wheelbase and number of axles – and then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. From the special arrangement of the sensors and the different times at which a vehicle arrives at the sensors, the vehicle’s speed can be estimated accurately, along with the corresponding wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is a lot of overlap between vehicle wheelbase patterns, so directly setting up a fixed threshold for vehicle classification often leads to low-accuracy results. Machine-learning pattern recognition is believed to be one of the most effective tools for dealing with this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors and automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments are introduced in this paper. Two support vector machine classification algorithms (one-against-all, one-against-one) are used to classify single-sensor data and multiple-sensor combination data. Comparison of the results of the two classification methods shows that the classification accuracy is very close whether single or multiple sensor data are used. Our results indicate that multiclass SVM-based fusion of multiple sensor data significantly improves
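    The two multiclass strategies mentioned above can be reproduced with scikit-learn's meta-estimators; the sketch below uses synthetic five-class data in place of the strain-sensor features, and the SVM parameters are illustrative:

      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                                 n_classes=5, random_state=0)
      for name, clf in [("one-against-one", OneVsOneClassifier(SVC(kernel="rbf", C=10))),
                        ("one-against-all", OneVsRestClassifier(SVC(kernel="rbf", C=10)))]:
          print(name, cross_val_score(clf, X, y, cv=5).mean())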

  13. Multiple Kernel Sparse Representation-Based Classification

    Institute of Scientific and Technical Information of China (English)

    陈思宝; 许立仙; 罗斌

    2014-01-01

    Sparse representation based classification (SRC) and kernel methods have been successfully applied in many pattern recognition problems. In order to improve the classification accuracy, we propose multiple kernel sparse representation based classification (MKSRC). A fast optimization iteration method for solving the sparse coefficients is proposed, together with a proof of its convergence to the global optimal solution. Two automatic updating schemes for the multiple-kernel weights are given, analysed and compared. Classification experiments on three face image databases show the superiority of the proposed multiple kernel sparse representation based classification.
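    For orientation, a plain (single-kernel) sparse representation-based classifier, the starting point that MKSRC extends, can be sketched as below: each test sample is sparsely coded over the dictionary of training samples and assigned to the class with the smallest reconstruction residual. The Lasso coder and its penalty are our illustrative choices, not the optimization scheme of the paper:

      import numpy as np
      from sklearn.linear_model import Lasso

      def src_predict(X_train, y_train, X_test, alpha=0.01):
          # Dictionary columns = l2-normalized training samples.
          D = X_train.T / (np.linalg.norm(X_train, axis=1) + 1e-9)
          classes = np.unique(y_train)
          preds = []
          for x in X_test:
              a = Lasso(alpha=alpha, max_iter=5000).fit(D, x).coef_   # sparse code of x
              residuals = [np.linalg.norm(x - D[:, y_train == c] @ a[y_train == c])
                           for c in classes]                          # class-wise reconstruction error
              preds.append(classes[int(np.argmin(residuals))])
          return np.array(preds)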

  14. Automatic classification of athletes with residual functional deficits following concussion by means of EEG signal using support vector machine.

    Science.gov (United States)

    Cao, Cheng; Tutwiler, Richard Laurence; Slobounov, Semyon

    2008-08-01

    There is a growing body of knowledge indicating long-lasting residual electroencephalography (EEG) abnormalities in concussed athletes that may persist up to 10-year postinjury. Most often, these abnormalities are initially overlooked using traditional concussion assessment tools. Accordingly, premature return to sport participation may lead to recurrent episodes of concussion, increasing the risk of recurrent concussions with more severe consequences. Sixty-one athletes at high risk for concussion (i.e., collegiate rugby and football players) were recruited and underwent EEG baseline assessment. Thirty of these athletes suffered from concussion and were retested at day 30 postinjury. A number of task-related EEG recordings were conducted. A novel classification algorithm, the support vector machine (SVM), was applied as a classifier to identify residual functional abnormalities in athletes suffering from concussion using a multichannel EEG data set. The total accuracy of the classifier using the 10 features was 77.1%. The classifier has a high sensitivity of 96.7% (linear SVM), 80.0% (nonlinear SVM), and a relatively lower but acceptable selectivity of 69.1% (linear SVM) and 75.0% (nonlinear SVM). The major findings of this report are as follows: 1) discriminative features were observed at theta, alpha, and beta frequency bands, 2) the minimal redundancy relevance method was identified as being superior to the univariate t -test method in selecting features for the model calculation, 3) the EEG features selected for the classification model are linked to temporal and occipital areas, and 4) postural parameters influence EEG data set and can be used as discriminative features for the classification model. Overall, this report provides sufficient evidence that 10 EEG features selected for final analysis and SVM may be potentially used in clinical practice for automatic classification of athletes with residual brain functional abnormalities following a concussion

  15. Classification of Scenes into Indoor/Outdoor

    Directory of Open Access Journals (Sweden)

    R. Raja

    2014-12-01

    Full Text Available An effective model for scene classification is essential for accessing the desired images from large scale databases. This study presents an efficient scene classification approach that integrates low level features to reduce the semantic gap between the visual features and the richness of human perception. The objective of the study is to categorize an image into an indoor or outdoor scene using relevant low level features such as color and texture. The color feature from the HSV color model, the texture feature through GLCM and the entropy computed from the UV color space form the feature vector. To support automatic scene classification, a Support Vector Machine (SVM) is applied to the low level features to categorize a scene as indoor or outdoor. Since the combination of these image features exhibits a distinctive disparity between images containing indoor and outdoor scenes, the proposed method achieves good performance, with a classification accuracy of about 92.44%. The proposed method has been evaluated on IITM-SCID2 (Scene Classification Image Database) and a dataset of 3442 images collected from the web.

  16. Test Expectancy Affects Metacomprehension Accuracy

    Science.gov (United States)

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  17. Classification of right-hand grasp movement based on EMOTIV Epoc+

    Science.gov (United States)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCT elements for right-hand grasp movement have been obtained, providing the average value of their classification accuracy. The aim of this study is to find a suitable combination giving the best classification accuracy of right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish relaxation, imagery, and movement states from each other. The combinations of elements are the usage of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ± 83% for training and ± 57% for testing. To give a better understanding of the signal quality recorded by EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping movement EEG signals (provided by PhysioNet) is also given, i.e. ± 85% for training and ± 70% for testing. A comparison of the accuracy values from each combination, experimental condition, and the external EEG data is provided for the analysis of classification accuracy.

  18. Library Classification 2020

    Science.gov (United States)

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  19. Multiple sparse representations classification

    NARCIS (Netherlands)

    E. Plenge (Esben); S.K. Klein (Stefan); W.J. Niessen (Wiro); E. Meijering (Erik)

    2015-01-01

    textabstractSparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In t

  1. Multi-source remotely sensed data fusion for improving land cover classification

    Science.gov (United States)

    Chen, Bin; Huang, Bo; Xu, Bing

    2017-02-01

    Although many advances have been made in past decades, land cover classification of fine-resolution remotely sensed (RS) data integrating multiple temporal, angular, and spectral features remains limited, and the contribution of different RS features to land cover classification accuracy remains uncertain. We proposed to improve land cover classification accuracy by integrating multi-source RS features through data fusion. We further investigated the effect of different RS features on classification performance. The results of fusing Landsat-8 Operational Land Imager (OLI) data with Moderate Resolution Imaging Spectroradiometer (MODIS), China Environment 1A series (HJ-1A), and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) digital elevation model (DEM) data showed that the fused data integrating temporal, spectral, angular, and topographic features achieved better land cover classification accuracy than the original RS data. Compared with the topographic feature, the temporal and angular features extracted from the fused data played more important roles in classification performance, especially those temporal features containing abundant vegetation growth information, which markedly increased the overall classification accuracy. In addition, the multispectral and hyperspectral fusion successfully discriminated detailed forest types. Our study provides a straightforward strategy for hierarchical land cover classification by making full use of available RS data. All of these methods and findings could be useful for land cover classification at both regional and global scales.

  2. Land cover classification using random forest with genetic algorithm-based parameter optimization

    Science.gov (United States)

    Ming, Dongping; Zhou, Tianning; Wang, Min; Tan, Tian

    2016-07-01

    Land cover classification based on remote sensing imagery is an important means to monitor, evaluate, and manage land resources. However, it requires robust classification methods that allow accurate mapping of complex land cover categories. Random forest (RF) is a powerful machine-learning classifier that can be used in land remote sensing. However, two important parameters of RF classification, namely, the number of trees and the number of variables tried at each split, affect classification accuracy. Thus, optimal parameter selection is an inevitable problem in RF-based image classification. This study uses the genetic algorithm (GA) to optimize the two parameters of RF to produce optimal land cover classification accuracy. HJ-1B CCD2 image data are used to classify six different land cover categories in Changping, Beijing, China. Experimental results show that GA-RF can avoid arbitrariness in the selection of parameters. The experiments also compare land cover classification results obtained with the GA-RF method, the traditional RF method (with default parameters), and a support vector machine method. Compared with the traditional RF and SVM methods, the GA-RF method improved classification accuracy by 1.02% and 6.64%, respectively. The comparison results show that GA-RF is a feasible solution for land cover classification without compromising accuracy or incurring excessive time.
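    The optimization loop can be illustrated with a deliberately small genetic algorithm over the two parameters named above (number of trees, variables tried per split), with cross-validated accuracy as the fitness; the population size, operators and dataset here are placeholders rather than the settings of the study:

      import random
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                                 n_classes=4, random_state=0)

      def fitness(n_trees, max_feats):
          rf = RandomForestClassifier(n_estimators=n_trees, max_features=max_feats,
                                      random_state=0, n_jobs=-1)
          return cross_val_score(rf, X, y, cv=3).mean()

      random.seed(0)
      pop = [(random.randint(10, 300), random.randint(1, X.shape[1])) for _ in range(8)]
      for generation in range(5):
          pop = sorted(pop, key=lambda ind: fitness(*ind), reverse=True)
          parents = pop[:4]                                   # truncation selection
          children = []
          while len(parents) + len(children) < 8:
              a, b = random.sample(parents, 2)
              child = (a[0], b[1])                            # crossover of the two genes
              if random.random() < 0.3:                       # mutation
                  child = (max(10, child[0] + random.randint(-30, 30)),
                           random.randint(1, X.shape[1]))
              children.append(child)
          pop = parents + children
      best = max(pop, key=lambda ind: fitness(*ind))
      print("best (n_estimators, max_features):", best)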

  3. Validation and Classification of Web Services using Equalization Validation Classification

    Directory of Open Access Journals (Sweden)

    ALAMELU MUTHUKRISHNAN

    2012-12-01

    Full Text Available In the business process world, web services provide managed middleware to connect a huge number of services. A web service transaction is a mechanism to compose services with their desired quality parameters. If enormous numbers of transactions occur, the provider cannot acquire accurate data at the correct time, so it is necessary to reduce the overburden of web service transactions. In order to reduce the excess of transactions from customers to providers, this paper proposes a new method called Equalization Validation Classification. This method introduces a new weight-reducing algorithm called the Efficient Trim Down (ETD) algorithm to reduce the overburden of incoming client requests. When the proposed algorithm is compared with decision tree algorithms (J48, Random Tree, Random Forest, AD Tree), it produces better accuracy and validation than the existing algorithms. The proposed trimming method was analyzed against the decision tree algorithms, and the implementation results show that the ETD algorithm provides better performance in terms of improved accuracy with effective validation. Therefore, the proposed method provides a good gateway to reduce the overburden of client requests in web services. Moreover, analyzing the requests arriving from a vast number of clients and preventing illegitimate requests saves the service provider time

  4. Research on Optimization of GLCM Parameter in Cell Classification

    Science.gov (United States)

    Zhang, Xi-Kun; Hou, Jie; Hu, Xin-Hua

    2016-05-01

    Real-time classification of biological cells according to their 3D morphology is highly desired in a flow cytometer setting. A gray level co-occurrence matrix (GLCM) algorithm has been developed to extract feature parameters from measured diffraction images, but its large amount of computation makes it difficult to integrate into a real-time system. An optimization of the GLCM algorithm is provided based on correlation analysis of the GLCM parameters. The results of GLCM analysis and subsequent classification demonstrate that the optimized method can lower the time complexity significantly without loss of classification accuracy.

  5. Classification using least squares support vector machine for reliability analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-wei GUO; Guang-chen BAI

    2009-01-01

    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a system of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
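    The key computational point, replacing the SVM quadratic program by a single linear system, is easy to see in a minimal binary LSSVM sketch (a Suykens-style formulation with an RBF kernel; labels are assumed to be in {-1, +1} and the regularization and kernel parameters are illustrative):

      import numpy as np

      def rbf(A, B, gamma=0.5):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def lssvm_fit(X, y, reg=1.0, gamma=0.5):
          n = len(y)
          Omega = np.outer(y, y) * rbf(X, X, gamma) + np.eye(n) / reg
          M = np.zeros((n + 1, n + 1))
          M[0, 1:] = y
          M[1:, 0] = y
          M[1:, 1:] = Omega
          rhs = np.concatenate([[0.0], np.ones(n)])
          sol = np.linalg.solve(M, rhs)        # one linear solve instead of a QP
          return sol[1:], sol[0]               # (alpha, b)

      def lssvm_predict(X_train, y_train, alpha, b, X_new, gamma=0.5):
          return np.sign(rbf(X_new, X_train, gamma) @ (alpha * y_train) + b)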

  6. Scalable classification by clustering: Hybrid can be better than Pure

    Institute of Scientific and Technical Information of China (English)

    Deng Shengchun; He Zengyou; Xu Xiaofei

    2007-01-01

    The problem of scalable classification by clustering in large databases is discussed. Clustering-based classification methods first generate clusters using clustering algorithms. To classify newly arriving data points, the k nearest clusters of a data point are found as its neighbors, and the data point is assigned to the dominant class of these neighbors. Existing algorithms incorporate class information when making clustering decisions and produce pure clusters (each cluster associated with only one class). We present hybrid cluster-based algorithms, which produce clusters by unsupervised clustering and allow each cluster to be associated with multiple classes. Experimental results show that hybrid cluster-based algorithms outperform pure ones in both classification accuracy and training speed.
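    The hybrid idea, unsupervised clusters that each keep a class histogram, with voting over the k nearest clusters at prediction time, can be sketched as follows (k-means and the parameter values are illustrative stand-ins, not the algorithms of the paper):

      import numpy as np
      from sklearn.cluster import KMeans

      class HybridClusterClassifier:
          def __init__(self, n_clusters=20, k_neighbors=3):
              self.n_clusters, self.k = n_clusters, k_neighbors

          def fit(self, X, y):
              y = np.asarray(y)
              self.km = KMeans(n_clusters=self.n_clusters, n_init=10, random_state=0).fit(X)
              self.classes_ = np.unique(y)
              # Class histogram per cluster: clusters may legitimately mix classes.
              self.hist = np.zeros((self.n_clusters, len(self.classes_)))
              for ci, c in enumerate(self.classes_):
                  for cl in range(self.n_clusters):
                      self.hist[cl, ci] = np.sum((self.km.labels_ == cl) & (y == c))
              return self

          def predict(self, X):
              d = ((X[:, None, :] - self.km.cluster_centers_[None, :, :]) ** 2).sum(-1)
              nearest = np.argsort(d, axis=1)[:, :self.k]       # k nearest clusters per point
              votes = self.hist[nearest].sum(axis=1)            # pooled class histogram
              return self.classes_[votes.argmax(axis=1)]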

  7. Brain tumour classification using Gaussian decomposition and neural networks.

    Science.gov (United States)

    Arizmendi, Carlos; Sierra, Daniel A; Vellido, Alfredo; Romero, Enrique

    2011-01-01

    The development, implementation and use of computer-based medical decision support systems (MDSS) based on pattern recognition techniques holds the promise of substantially improving the quality of medical practice in diagnostic and prognostic tasks. In this study, the core of a decision support system for brain tumour classification from magnetic resonance spectroscopy (MRS) data is presented. It combines data pre-processing using Gaussian decomposition, dimensionality reduction using moving window with variance analysis, and classification using artificial neural networks (ANN). This combination of techniques is shown to yield high diagnostic classification accuracy in problems concerning diverse brain tumour pathologies, some of which have received little attention in the literature.

  8. Cancer classification based on gene expression using neural networks.

    Science.gov (United States)

    Hu, H P; Niu, Z J; Bai, Y P; Tan, X H

    2015-12-21

    Based on gene expression, we have classified 53 colon cancer patients with UICC stage II disease into two groups: relapse and no relapse. Samples were taken from each patient, and gene information was extracted. Of the 53 samples examined, 500 genes were considered informative through analyses by S-Kohonen, BP, and SVM neural networks. The classification accuracy obtained by the S-Kohonen neural network reaches 91%, which is more accurate than classification by the BP and SVM neural networks. The results show that the S-Kohonen neural network is better suited for this classification task and demonstrates feasibility and validity compared with the BP and SVM neural networks.

  9. Comparison of Neural Networks and Tabular Nearest Neighbor Encoding for Hyperspectral Signature Classification in Unresolved Object Detection

    Science.gov (United States)

    Schmalz, M.; Ritter, G.; Key, R.

    Accurate and computationally efficient spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications using linear mixing models, signature classification accuracy depends on accurate spectral endmember discrimination [1]. If the endmembers cannot be classified correctly, then the signatures cannot be classified correctly, and object recognition from hyperspectral data will be inaccurate. In practice, the number of endmembers accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present a comparison of emerging technologies for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3,4] and a neural network technology called Morphological Neural Networks (MNNs) [5]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. The open architecture and programmability of TNE's agreement map processing allows a TNE programmer or user to determine classification accuracy, as well as characterize in detail the signatures for which TNE did not obtain classification matches, and why such mis-matches occurred. In this study, we will compare TNE and MNN based endmember classification, using performance metrics such as probability of correct classification (Pd) and rate of false

  10. Towards Multi Label Text Classification through Label Propagation

    Directory of Open Access Journals (Sweden)

    Shweta C. Dharmadhikari

    2012-06-01

    Full Text Available Classifying text data has been an active area of research for a long time. A text document is a multifaceted object and often inherently ambiguous by nature. Multi-label learning deals with such ambiguous objects. Classification of such ambiguous text objects often makes the task of the classifier difficult when assigning relevant classes to an input document. Traditional single-label and multi-class text classification paradigms cannot efficiently classify such a multifaceted text corpus. In this paper we propose a novel label propagation approach based on semi-supervised learning for multi-label text classification. Our proposed approach models the relationships between class labels and also effectively represents input text documents. We use a semi-supervised learning technique for effective utilization of labeled and unlabeled data for classification. Our proposed approach promises better classification accuracy and handling of complexity, and is evaluated on standard datasets such as Enron, Slashdot and Bibtex.
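    A minimal rendering of the semi-supervised ingredient is given below: one scikit-learn LabelPropagation model per label (plain binary relevance), with -1 marking unlabeled documents. The paper's approach additionally models relationships between the labels themselves, which this sketch deliberately leaves out, and the kernel parameter is illustrative:

      import numpy as np
      from sklearn.semi_supervised import LabelPropagation

      def multilabel_propagate(X, Y_partial, gamma=0.5):
          """X: (n_docs, n_feats) dense features; Y_partial: (n_docs, n_labels) in {0, 1, -1}."""
          Y_pred = np.zeros_like(Y_partial)
          for j in range(Y_partial.shape[1]):
              lp = LabelPropagation(kernel="rbf", gamma=gamma)
              lp.fit(X, Y_partial[:, j])        # entries equal to -1 are treated as unlabeled
              Y_pred[:, j] = lp.transduction_
          return Y_pred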

  11. A New Classification Method to Overcome Over-Branching

    Institute of Scientific and Technical Information of China (English)

    ZHOU Aoying(周傲英); QIAN Weining(钱卫宁); QIAN Hailei(钱海蕾); JIN Wen(金文)

    2002-01-01

    Classification is an important technique in data mining. The decision trees built by most of the existing classification algorithms commonly feature over-branching, which leads to poor efficiency in the subsequent classification period. In this paper, we present a new value-oriented classification method, which aims at building accurately proper-sized decision trees while reducing over-branching as much as possible, based on the concepts of frequent-pattern-node and exceptive-child-node. The experiments show that, using relevance analysis as pre-processing, our classification method can greatly eliminate over-branching in decision trees, more effectively and efficiently than other algorithms, without loss of accuracy.

  12. An Efficient Audio Classification Approach Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lhoucine Bahatti

    2016-05-01

    Full Text Available In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal using a constant window, this paper deals with a new audio classification method which improves feature extraction according to the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. The enhancement provided by this work also lies in the proposal of an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

  13. A Method of Soil Salinization Information Extraction with SVM Classification Based on ICA and Texture Features

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fei; TASHPOLAT Tiyip; KUNG Hsiang-te; DING Jian-li; MAMAT.Sawut; VERNER Johnson; HAN Gui-hong; GUI Dong-wei

    2011-01-01

    Classification of salt-affected soils using remotely sensed images is one of the most common applications in remote sensing, and many algorithms have been developed and applied for this purpose in the literature. This study takes the Delta Oasis of the Weigan and Kuqa Rivers as a study area and discusses the prediction of soil salinization from Landsat ETM+ data. It reports a Support Vector Machine (SVM) classification method based on Independent Component Analysis (ICA) and texture features. The paper introduces the fundamental theory of the SVM algorithm and ICA, and then incorporates ICA and texture features. The classification result is compared qualitatively and quantitatively with ICA-SVM classification, single-data-source SVM classification, maximum likelihood classification (MLC) and neural network classification. The results show that this method can effectively solve the problems of low accuracy and fragmented classification results in single-data-source classification, and it scales well to higher-dimensional input. The overall accuracy is 98.64%, an increase of 10.2% over maximum likelihood classification and of 12.94% over neural network classification, and thus the method achieves good effectiveness. Therefore, the classification method based on SVM and incorporating ICA and texture features can be adapted to RS image classification and monitoring of soil salinization.

  14. Tree Crown Delineation on Vhr Aerial Imagery with Svm Classification Technique Optimized by Taguchi Method: a Case Study in Zagros Woodlands

    Science.gov (United States)

    Erfanifard, Y.; Behnia, N.; Moosavi, V.

    2013-09-01

    The Support Vector Machine (SVM) is a theoretically superior machine learning methodology with great results in the classification of remotely sensed datasets. The determination of the optimal parameters of an SVM, however, remains unclear to many practitioners. In this research, it is suggested to use the Taguchi method to optimize these parameters. The objective of this study was to detect tree crowns on very high resolution (VHR) aerial imagery in Zagros woodlands with an SVM optimized by the Taguchi method. A 30 ha plot of Persian oak (Quercus persica) coppice trees was selected in the Zagros woodlands, Iran. The VHR aerial imagery of the plot, with 0.06 m spatial resolution, was obtained from the National Geographic Organization (NGO), Iran, to extract the crowns of Persian oak trees in this study. The SVM parameters were optimized by the Taguchi method and, thereafter, the imagery was classified by the SVM with the optimal parameters. The results showed that the Taguchi method is a very useful approach to optimize the combination of SVM parameters. It was also concluded that the SVM method could detect the tree crowns with a KHAT coefficient of 0.961, showing strong agreement with the observed samples, and an overall accuracy of 97.7% for the final map. Finally, the authors suggest applying this method to optimize the parameters of classification techniques like the SVM.

  15. Classification of finger movements for the dexterous hand prosthesis control with surface electromyography.

    Science.gov (United States)

    Al-Timemy, Ali H; Bugmann, Guido; Escudero, Javier; Outram, Nicholas

    2013-05-01

    A method for the classification of finger movements for dexterous control of prosthetic hands is proposed. Previous research was mainly devoted to identifying hand movements, as these actions generate strong electromyography (EMG) signals recorded from the forearm. In contrast, in this paper, we assess the use of multichannel surface electromyography (sEMG) to classify individual and combined finger movements for dexterous prosthetic control. sEMG channels were recorded from ten intact-limbed and six below-elbow amputee persons. Offline processing was used to evaluate the classification performance. The results show that high classification accuracies can be achieved with a processing chain consisting of time domain-autoregression feature extraction, orthogonal fuzzy neighborhood discriminant analysis for feature reduction, and linear discriminant analysis for classification. We show that finger and thumb movements can be decoded with high accuracy at latencies as short as 200 ms. Thumb abduction was decoded successfully with high accuracy for six amputee persons for the first time. We also found that subsets of six EMG channels provide accuracy values similar to those computed with the full set of EMG channels (98% accuracy over ten intact-limbed subjects for the classification of 15 classes of different finger movements and 90% accuracy over six amputee persons for the classification of 12 classes of individual finger movements). These accuracy values are higher than in previous studies, while we typically employed half the number of EMG channels per identified movement.

  16. Flying insect detection and classification with inexpensive sensors.

    Science.gov (United States)

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-10-15

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models that are very robust to over-fitting to be learned efficiently; and that a general classification framework allows an arbitrary number of features to be incorporated easily. We demonstrate the findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.

  17. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. A multi-tier higher order Conditional Random Field for land cover classification of multi-temporal multi-spectral Landsat imagery

    CSIR Research Space (South Africa)

    Salmon, BP

    2015-07-01

    Full Text Available accuracy while keeping the computational costs tractable. We also expand the typical 1-tier protograph used in conventional CRFs to a 2-tier graph to encapsulate the temporal dimension. This further improves the classification accuracy by modeling...

  19. Meditation Experience Predicts Introspective Accuracy

    Science.gov (United States)

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  20. Kappa Coefficients for Circular Classifications

    NARCIS (Netherlands)

    Warrens, Matthijs J.; Pratiwi, Bunga C.

    2016-01-01

    Circular classifications are classification scales with categories that exhibit a certain periodicity. Since linear scales have endpoints, the standard weighted kappas used for linear scales are not appropriate for analyzing agreement between two circular classifications. A family of kappa coefficie

  1. Classification of Motor Imagery EEG Signals with Support Vector Machines and Particle Swarm Optimization

    Science.gov (United States)

    Ma, Yuliang; Ding, Xiaohui; She, Qingshan; Luo, Zhizeng; Potter, Thomas; Zhang, Yingchun

    2016-01-01

    Support vector machines are powerful tools used to solve the small sample and nonlinear classification problems, but their ultimate classification performance depends heavily upon the selection of appropriate kernel and penalty parameters. In this study, we propose using a particle swarm optimization algorithm to optimize the selection of both the kernel and penalty parameters in order to improve the classification performance of support vector machines. The performance of the optimized classifier was evaluated with motor imagery EEG signals in terms of both classification and prediction. Results show that the optimized classifier can significantly improve the classification accuracy of motor imagery EEG signals. PMID:27313656
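    The optimization described above can be illustrated with a small particle swarm over (log10 C, log10 gamma) using cross-validated accuracy as the objective; the swarm size, inertia and acceleration constants below are generic textbook values rather than the study's settings, and synthetic data stands in for the motor imagery features:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score
      from sklearn.datasets import make_classification

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=10, n_informative=6, random_state=0)

      def fitness(p):                      # p = (log10 C, log10 gamma)
          clf = SVC(C=10 ** p[0], gamma=10 ** p[1], kernel="rbf")
          return cross_val_score(clf, X, y, cv=3).mean()

      n_particles, dims = 12, 2
      pos = rng.uniform([-1, -4], [3, 1], size=(n_particles, dims))   # search box in log space
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmax()].copy()

      for it in range(15):
          r1, r2 = rng.random((n_particles, dims)), rng.random((n_particles, dims))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, [-1, -4], [3, 1])
          vals = np.array([fitness(p) for p in pos])
          improved = vals > pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[pbest_val.argmax()].copy()

      print("best C=%.3g, gamma=%.3g" % (10 ** gbest[0], 10 ** gbest[1]))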

  2. Land-Use and Land-Cover Mapping Using a Gradable Classification Method

    Directory of Open Access Journals (Sweden)

    Keigo Kitada

    2012-05-01

    Full Text Available Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that the land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs the original spectral data and the lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as the “gradable classification method” is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
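    For reference, the gliding-box lacunarity used as a texture index above has a compact definition: the ratio of the second moment to the squared first moment of the box masses at a given box size. A plain (and deliberately unoptimized) sketch:

      import numpy as np

      def lacunarity(img, r):
          """Gliding-box lacunarity of a 2D array for box size r."""
          h, w = img.shape
          masses = np.array([img[i:i + r, j:j + r].sum()
                             for i in range(h - r + 1)
                             for j in range(w - r + 1)], dtype=float)
          # Lambda(r) = E[M^2] / E[M]^2 = var/mean^2 + 1
          return masses.var() / masses.mean() ** 2 + 1.0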

  3. Detection and classification of different liver lesions: Comparison of Gd-EOB-DTPA-enhanced MRI versus multiphasic spiral CT in a clinical single centre investigation

    Energy Technology Data Exchange (ETDEWEB)

    Böttcher, Joachim [Institute of Diagnostic and Interventional Radiology, SRH Clinic Gera, Str. des Friedens 122, 07548 Gera (Germany); Hansch, Andreas [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University, Jena University Hospital, Erlanger Allee 101, 07740 Jena (Germany); Pfeil, Alexander [Department of Internal Medicine III, Friedrich-Schiller-University, Jena University Hospital, Erlanger Allee 101, 07740 Jena (Germany); Schmidt, Peter [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University, Jena University Hospital, Erlanger Allee 101, 07740 Jena (Germany); Malich, Ansgar [Institute of Diagnostic Radiology, Suedharz Clinic Nordhausen, Dr. Robert-Koch-Str. 39, 99734 Nordhausen (Germany); Schneeweiss, Albrecht [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University, Jena University Hospital, Erlanger Allee 101, 07740 Jena (Germany); Maurer, Martin H.; Streitparth, Florian [Department of Radiology, Charité University Medicine Berlin, Campus Virchow Clinic, Augustenburger Platz 1, 13353 Berlin (Germany); Teichgräber, Ulf K. [Institute of Diagnostic and Interventional Radiology, Friedrich-Schiller-University, Jena University Hospital, Erlanger Allee 101, 07740 Jena (Germany); Renz, Diane M., E-mail: diane.renz@charite.de [Department of Radiology, Charité University Medicine Berlin, Campus Virchow Clinic, Augustenburger Platz 1, 13353 Berlin (Germany)

    2013-11-01

    Objective: To compare the diagnostic efficacy of Gd-EOB-DTPA-enhanced magnetic resonance imaging (MRI) vs. multidetector computed tomography (MDCT) for the detection and classification of focal liver lesions, differentiated also for lesion entity and size; a separate analysis of pre- and postcontrast images as well as T2-weighted MRI sequences of focal and exclusively solid lesions was integrated. Methods: Twenty-nine patients with 130 focal liver lesions underwent MDCT (64-detector-row; contrast medium iopromide; native, arterial, portalvenous, venous phase) and MRI (1.5-T; dynamic and tissue-specific phase 20 min after application of Gd-EOB-DTPA). Hepatic lesions were verified against a standard of reference (SOR). CT and MR images were independently analysed by four blinded radiologists on an ordinal 6-point-scale, determining lesion classification and diagnostic confidence. Results: Among 130 lesions, 68 were classified as malignant and 62 as benign by SOR. The detection of malignant and benign lesions differed significantly between combined and postcontrast MRI vs. MDCT; overall detection rate was 91.5% for combined MRI and 80.4% for combined MDCT (p < 0.05). Considering all four readers together, combined MDCT achieved sensitivity of 66.2%, specificity of 79.0%, and diagnostic accuracy of 72.3%; combined MRI reached superior diagnostic efficacy: sensitivity 86.8%, specificity 94.4%, accuracy 90.4% (p < 0.05). Differentiated for lesion size, in particular lesions <20 mm revealed diagnostic benefit by MRI. Postcontrast MRI also achieved higher overall sensitivity, specificity, and accuracy compared to postcontrast MDCT for focal and exclusively solid liver lesions (p < 0.05). Conclusion: Combined and postcontrast Gd-EOB-DTPA-enhanced MRI provided significantly higher overall detection rate and diagnostic accuracy, including low inter-observer variability, compared to MDCT in a single centre study.

  4. Intelligence system based classification approach for medical disease diagnosis

    Science.gov (United States)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease depends more on the physician's intuition, experience and skill in comparing current indicators with previous ones than on knowledge-rich data hidden in a database. This is a crucial and challenging task. The goal is to predict the patient's condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework is described for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, least-squares estimation combined with modified Levenberg-Marquardt and with gradient descent, that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets from the mammographic mass and Haberman's survival datasets obtained from the benchmark datasets of the University of California at Irvine (UCI) machine learning repository. The robustness of the performance, measured by total accuracy, sensitivity and specificity, is examined. In comparison, the proposed method achieves superior performance when compared to a conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor running at 2.80 GHz with 2.0 GB of RAM.

  5. Integrating genetic algorithm method with neural network for land use classification using SZ-3 CMODIS data

    Institute of Scientific and Technical Information of China (English)

    WANG Changyao; LUO Chengfeng; LIU Zhengjun

    2005-01-01

    This paper presents a methodology for land use mapping using CMODIS (Chinese Moderate Resolution Imaging Spectroradiometer) data on board the SZ-3 (Shenzhou 3) spacecraft. The integrated method is composed of a genetic algorithm (GA) for feature extraction and a neural network classifier for land use classification. In the data preprocessing, a moment matching method was adopted, and the feature subset used for classification was obtained with the GA. To generate a land use map, a three-layer back propagation neural network classifier is used for training on the samples and for classification. Compared with the Maximum Likelihood classification algorithm, the results show that the accuracy of land use classification is obviously improved by using our proposed method, the number of bands selected in the classification process is reduced, and the computational performance for training and classification is improved. The results also show that CMODIS data can be effectively used for land use/land cover classification and change monitoring at regional and global scales.

  6. Superior-subordinate relations as organizational processes

    DEFF Research Database (Denmark)

    Asmuss, Birte; Aggerholm, Helle Kryger; Oshima, Sae

    Since the emergence of the practice turn in social sciences (Golsorkhi et al. 2010), studies have shown a number of institutionally relevant aspects as achievements across time and by means of various resources (human and non-human) (Taylor & van Every 2000, Cooren et al. 2006). Such a process view...... superior-subordinate relations in a specific institutionalized setting: performance appraisal interviews (PAIs). While one main task of PAIs is to manage and integrate organizational and employee performance (Fletcher, 2001:473), PAIs are also organizational practices where superior-subordinate relations...... are shaped, (re)confirmed and re-evaluated. This paper pursues a better understanding of the latter aspect by looking at one substantial and recurrent activity in PAIs: the evaluation of employee performance. One resource for doing the evaluation work is making assessments (e.g. Goodwin & Goodwin, 1987...

  7. Exploring the word superiority effect using TVA

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold...... for perception of words, or a higher speed of processing for words than letters. We have investigated the WSE using methods based on a Theory of Visual Attention. In an experiment using single stimuli (words or letters) presented centrally, we show that the classical WSE is specifically reflected in perceptual...... processing speed: words are simply processed faster than single letters. It is also clear from this experiment, that the word superiority effect can be observed at a large range of exposure durations, from the perceptual threshold to ceiling performance. Intriguingly, when multiple stimuli are presented...

  8. Resolution of superior oblique myokymia with memantine.

    Science.gov (United States)

    Jain, Saurabh; Farooq, Shegufta J; Gottlob, Irene

    2008-02-01

    We describe a novel treatment of superior oblique myokymia. A 40-year-old woman was treated with gabapentin for this disorder with partial success and reported significant side effects including loss of libido and weight gain. After a drug holiday, memantine therapy was initiated resulting in a substantial improvement in her symptoms with far fewer side effects and stability on long-term maintenance therapy.

  9. Reperfusion hemorrhage following superior mesenteric artery stenting.

    LENUS (Irish Health Repository)

    Moore, Michael

    2012-02-03

    Percutaneous transluminal angioplasty and stent placement is now an established treatment option for chronic mesenteric ischemia and is associated with low mortality and morbidity rates. We present a case of reperfusion hemorrhage complicating endovascular repair of superior mesenteric artery stenosis. Although a recognized complication following repair of carotid stenosis, hemorrhage has not previously been reported following mesenteric endovascular reperfusion. We describe both spontaneous cessation of bleeding and treatment with coil embolization.

  10. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets.

    Science.gov (United States)

    Giuly, Richard J; Martone, Maryann E; Ellisman, Mark H

    2012-02-09

    While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be possible to replace this with
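
    The first step of the pipeline, a random-forest classifier applied directly to 2D image patches, can be sketched as follows. The image, mask, patch size and forest settings below are synthetic placeholders chosen only to make the example self-contained, not the Cytoseg implementation.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    image = rng.normal(size=(64, 64))
    mask = np.zeros((64, 64), dtype=int)
    mask[20:40, 25:45] = 1                 # pretend this region is a mitochondrion
    image[mask == 1] += 1.5                # give it slightly different intensities

    def patches_and_labels(img, lab, size=5):
        """Flatten every size x size patch; the label is that of the centre pixel."""
        half = size // 2
        X, y = [], []
        for i in range(half, img.shape[0] - half):
            for j in range(half, img.shape[1] - half):
                X.append(img[i - half:i + half + 1, j - half:j + half + 1].ravel())
                y.append(lab[i, j])
        return np.array(X), np.array(y)

    X, y = patches_and_labels(image, mask)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    prob = clf.predict_proba(X)[:, 1]      # per-pixel "mitochondria" probability map
    print("fraction of pixels called positive:", (prob > 0.5).mean())
    # In the full pipeline this probability map would feed the contour-pair
    # classification and the automatically seeded level-set steps.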

  11. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    Directory of Open Access Journals (Sweden)

    Giuly Richard J

    2012-02-01

    Full Text Available Abstract Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to

  12. [Mitral surgery by superior biatrial septotomy].

    Science.gov (United States)

    Saade, A; Delepine, G; Lemaitre, C; Baehrel, B

    1995-01-01

    The superior biatrial septotomy approach consists of two semicircular right atrial and septal incisions joined at the superior end of the interatrial septum and extended across the dome of the left atrium, allowing exposure of the mitral valve by reflecting the ventricular side using stay sutures. From 1991 to 1993, 81 patients underwent mitral valve surgery by this technique. Mitral valve operation was combined with other cardiac procedures in 30 patients (37%) and was performed as a second operation in 21 patients (25.9%). Duration of cardiopulmonary bypass and aortic occlusion was not significantly different from that of patients operated via a conventional left atrial approach. The five hospital deaths (6.2%) were not related to this operative approach. Only 2 patients (3.3%) who were in sinus rhythm preoperatively were discharged in atrial fibrillation after operation. In one patient (1.6%), atrioventricular block appeared at late follow-up. There were no cases of bleeding, atrioventricular nodal dysfunction or intra-atrial shunting related to the approach. This approach provides excellent exposure of the mitral valve even in unfavorable situations such as a small left atrium, dense adhesions from previous procedures or a previously implanted aortic prosthesis, without damage to various cardiac structures due to excessive traction. No retractor or vena cava repair is required. These data support a wide application of the superior biatrial septotomy approach in mitral valve surgery.

  13. Classification of Herbaceous Vegetation Using Airborne Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Péter Burai

    2015-02-01

    Full Text Available Alkali landscapes hold an extremely fine-scale mosaic of several vegetation types, thus it seems challenging to separate these classes by remote sensing. Our aim was to test the applicability of different image classification methods of hyperspectral data in this complex situation. To reach the highest classification accuracy, we tested traditional image classifiers (maximum likelihood classifier, MLC), machine learning algorithms (support vector machine, SVM; random forest, RF) and feature extraction (minimum noise fraction, MNF, transformation) on training datasets of different sizes. Digital images were acquired from an AISA EAGLE II hyperspectral sensor of 128 contiguous bands (400–1000 nm), a spectral sampling of 5 nm bandwidth and a ground pixel size of 1 m. For the classification, we established twenty vegetation classes based on the dominant species, canopy height, and total vegetation cover. Image classification was applied to the original and MNF (minimum noise fraction) transformed dataset with various training sample sizes between 10 and 30 pixels. In order to select the optimal number of the transformed features, we applied SVM, RF and MLC classification to 2–15 MNF transformed bands. In the case of the original bands, SVM and RF classifiers provided high accuracy irrespective of the number of the training pixels. We found that SVM and RF produced the best accuracy when using the first nine MNF transformed bands; involving further features did not increase classification accuracy. SVM and RF provided high accuracies with the transformed bands, especially in the case of the aggregated groups. Even MLC provided high accuracy with 30 training pixels (80.78%), but the use of a smaller training dataset (10 training pixels) significantly reduced the accuracy of classification (52.56%). Our results suggest that in alkali landscapes, the application of SVM is a feasible solution, as it provided the highest accuracies compared to RF and MLC
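
    A minimal sketch of the comparison set-up is given below, with two stand-ins clearly noted: PCA replaces the MNF transform (MNF is not available in scikit-learn), and quadratic discriminant analysis plays the role of a Gaussian maximum-likelihood classifier. The data and training-set sizes are synthetic illustration values, not the study's imagery.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Synthetic "hyperspectral" pixels: 128 bands, 5 classes.
    X, y = make_classification(n_samples=2000, n_features=128, n_informative=20,
                               n_classes=5, n_clusters_per_class=1, random_state=0)
    X9 = PCA(n_components=9, random_state=0).fit_transform(X)   # stand-in for 9 MNF bands

    classifiers = {
        "SVM": SVC(kernel="rbf", gamma="scale"),
        "RF": RandomForestClassifier(n_estimators=200, random_state=0),
        "MLC-like (QDA)": QuadraticDiscriminantAnalysis(reg_param=0.1),
    }
    for n_per_class in (10, 30):                                # small training sets
        Xtr, Xte, ytr, yte = train_test_split(X9, y, train_size=n_per_class * 5,
                                              stratify=y, random_state=0)
        for name, clf in classifiers.items():
            acc = accuracy_score(yte, clf.fit(Xtr, ytr).predict(Xte))
            print(f"{n_per_class:>3} px/class  {name:<15} accuracy = {acc:.3f}")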

  14. Urban Tree Classification Using Full-Waveform Airborne Laser Scanning

    Science.gov (United States)

    Koma, Zs.; Koenig, K.; Höfle, B.

    2016-06-01

    Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which allows single trees to be classified into different taxa. Until now, research dealing with tree classification has focused on forest environments. This study investigates the object-based classification of urban trees at the taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars, which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with an accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and at the same time show its limitations due to anthropogenic influences.

  15. URBAN TREE CLASSIFICATION USING FULL-WAVEFORM AIRBORNE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    Zs. Koma

    2016-06-01

    Full Text Available Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which allows to classify single trees into different taxa. Until now, research dealing with tree classification focused on forest environments. This study investigates the object-based classification of urban trees at taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria. The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars which are typical in urban areas on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and show its limitations due to anthropogenic influences at the same time.

  16. AVNM: A Voting based Novel Mathematical Rule for Image Classification.

    Science.gov (United States)

    Vidyarthi, Ankit; Mittal, Namita

    2016-12-01

    In machine learning, the accuracy of the system depends upon classification result. Classification accuracy plays an imperative role in various domains. Non-parametric classifier like K-Nearest Neighbor (KNN) is the most widely used classifier for pattern analysis. Besides its easiness, simplicity and effectiveness characteristics, the main problem associated with KNN classifier is the selection of a number of nearest neighbors i.e. "k" for computation. At present, it is hard to find the optimal value of "k" using any statistical algorithm, which gives perfect accuracy in terms of low misclassification error rate. Motivated by the prescribed problem, a new sample space reduction weighted voting mathematical rule (AVNM) is proposed for classification in machine learning. The proposed AVNM rule is also non-parametric in nature like KNN. AVNM uses the weighted voting mechanism with sample space reduction to learn and examine the predicted class label for unidentified sample. AVNM is free from any initial selection of predefined variable and neighbor selection as found in KNN algorithm. The proposed classifier also reduces the effect of outliers. To verify the performance of the proposed AVNM classifier, experiments are made on 10 standard datasets taken from UCI database and one manually created dataset. The experimental result shows that the proposed AVNM rule outperforms the KNN classifier and its variants. Experimentation results based on confusion matrix accuracy parameter proves higher accuracy value with AVNM rule. The proposed AVNM rule is based on sample space reduction mechanism for identification of an optimal number of nearest neighbor selections. AVNM results in better classification accuracy and minimum error rate as compared with the state-of-art algorithm, KNN, and its variants. The proposed rule automates the selection of nearest neighbor selection and improves classification rate for UCI dataset and manually created dataset. Copyright © 2016 Elsevier
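
    The AVNM rule itself (sample-space reduction with weighted voting and no fixed k) is not reproduced here, but the following sketch shows the kind of k-NN baseline it is compared against, contrasting uniform and distance-weighted voting on a small UCI-style dataset.

    from sklearn.datasets import load_wine
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_wine(return_X_y=True)
    for k in (1, 5, 15):
        for weights in ("uniform", "distance"):
            clf = make_pipeline(StandardScaler(),
                                KNeighborsClassifier(n_neighbors=k, weights=weights))
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"k={k:<3} weights={weights:<9} accuracy={acc:.3f}")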

  17. Process Analysis Via Accuracy Control

    Science.gov (United States)

    1982-02-01

    The National Shipbuilding Research Program, February 1982. U.S. Department of Transportation, Maritime Administration. Report: Process Analysis Via Accuracy Control. Examples are contained in Appendix C, including examples of how "A/C" (accuracy control) process analysis leads to design improvement and how a change in sequence can...

  18. A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms.

    Science.gov (United States)

    Şen, Baha; Peker, Musa; Çavuşoğlu, Abdullah; Çelebi, Fatih V

    2014-03-01

    Sleep scoring is one of the most important diagnostic methods in psychiatry and neurology. Sleep staging is a time-consuming and difficult task undertaken by sleep experts. This study aims to identify a method which would classify sleep stages automatically and with a high degree of accuracy and, in this manner, assist sleep experts. This study consists of three stages: feature extraction from EEG signals, feature selection, and classification of these signals. In the feature extraction stage, 20 attribute algorithms in four categories are used, and 41 feature parameters are obtained from these algorithms. Feature selection is important for the elimination of irrelevant and redundant features; in this manner prediction accuracy is improved and computational overhead in classification is reduced. Effective feature selection algorithms such as minimum redundancy maximum relevance (mRMR), fast correlation based feature selection (FCBF), ReliefF, t-test, and Fisher score algorithms are preferred at the feature selection stage for selecting a set of features which best represent EEG signals. The features obtained are used as input parameters for the classification algorithms. At the classification stage, five different classification algorithms (random forest (RF), feed-forward neural network (FFNN), decision tree (DT), support vector machine (SVM), and radial basis function neural network (RBF)) are applied to the problem. The results, obtained from the different classification algorithms, are provided so that a comparison can be made between computation times and accuracy rates. Finally, a classification accuracy of 97.03% is obtained using the proposed method. The results show that the proposed method demonstrates the ability to design a new intelligent sleep-scoring assistance system.

  19. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier.

    Science.gov (United States)

    Li, Qiang; Gu, Yu; Jia, Jing

    2017-01-30

    Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3%) showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization) performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.
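
    A rough sketch of the MDS + SVM idea on synthetic e-nose responses is shown below. Note one simplification: scikit-learn's MDS has no out-of-sample transform, so the embedding is computed on all samples before the SVM is cross-validated, which is not the authors' evaluation protocol; the brand counts and sensor responses are also fabricated placeholders.

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_brands, n_per_brand, n_sensors = 10, 20, 8
    y = np.repeat(np.arange(n_brands), n_per_brand)
    centers = rng.normal(scale=3.0, size=(n_brands, n_sensors))
    X = centers[y] + rng.normal(scale=0.5, size=(len(y), n_sensors))  # fake QCM responses

    X_mds = MDS(n_components=3, random_state=0).fit_transform(X)      # low-dimensional embedding
    acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X_mds, y, cv=5).mean()
    print(f"MDS + SVM cross-validated accuracy: {acc:.3f}")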

  20. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Ki Wan Kim

    2017-06-01

    Full Text Available The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.

  1. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2017-01-01

    Full Text Available Chinese liquors are internationally well-known fermentative alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is of positive significance. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS and support vector machine (SVM algorithms in a quartz crystal microbalance (QCM-based electronic nose (e-nose we designed. We evaluated the comprehensive performance of the MDS-SVM classifier that predicted all ten brands of Chinese liquors individually. The prediction accuracy (98.3% showed superior performance of the MDS-SVM classifier over the back-propagation artificial neural network (BP-ANN classifier (93.3% and moving average-linear discriminant analysis (MA-LDA classifier (87.6%. The MDS-SVM classifier has reasonable reliability, good fitting and prediction (generalization performance in classification of the Chinese liquors. Taking both application of the e-nose and validation of the MDS-SVM classifier into account, we have thus created a useful method for the classification of multiple Chinese liquors.

  2. Evaluation of Digital Classification of Polarimetric SAR Data for Iron-Mineralized Laterites Mapping in the Amazon Region

    Directory of Open Access Journals (Sweden)

    Cleber G. Oliveira

    2013-06-01

    Full Text Available This study evaluates the potential of C- and L-band polarimetric SAR data for the discrimination of iron-mineralized laterites in the Brazilian Amazon region. The study area is the N1 plateau located on the northern border of the Carajás Mineral Province, the most important Brazilian mineral province which has numerous mineral deposits, particularly the world’s largest iron deposits. The plateau is covered by low-density savanna-type vegetation (campus rupestres which contrasts visibly with the dense equatorial forest. The laterites are subdivided into three units: chemical crust, iron-ore duricrust, and hematite, of which only the latter two are of economic interest. Full polarimetric data from the airborne R99B sensor of the SIVAM/CENSIPAM (L-band system and the RADARSAT-2 satellite (C-band were evaluated. The study focused on an assessment of distinct schemes for digital classification based on decomposition theory and hybrid approach, which incorporates statistical analysis as input data derived from the target decomposition modeling. The results indicated that the polarimetric classifications presented a poor performance, with global Kappa values below 0.20. The accuracy for the identification of units of economic interest varied from 55% to 89%, albeit with high commission error values. In addition, the results using L-band were considered superior compared to C-band, which suggest that the roughness scale for laterite discrimination in the area is nearer to L than to C-band.

  3. Breast cancer detection and classification in digital mammography based on Non-Subsampled Contourlet Transform (NSCT) and Super Resolution.

    Science.gov (United States)

    Pak, Fatemeh; Kanan, Hamidreza Rashidy; Alikhassi, Afsaneh

    2015-11-01

    Breast cancer is one of the most perilous diseases among women. Breast screening is a method of detecting breast cancer at a very early stage which can reduce the mortality rate. Mammography is a standard method for the early diagnosis of breast cancer. In this paper, a new algorithm is proposed for breast cancer detection and classification in digital mammography based on Non-Subsampled Contourlet Transform (NSCT) and Super Resolution (SR). The presented algorithm includes three main parts including pre-processing, feature extraction and classification. In the pre-processing stage, after determining the region of interest (ROI) by an automatic technique, the quality of image is improved using NSCT and SR algorithm. In the feature extraction part, several features of the image components are extracted and skewness of each feature is calculated. Finally, AdaBoost algorithm is used to classify and determine the probability of benign and malign disease. The obtained results on Mammographic Image Analysis Society (MIAS) database indicate the significant performance and superiority of the proposed method in comparison with the state of the art approaches. According to the obtained results, the proposed technique achieves 91.43% and 6.42% as a mean accuracy and FPR, respectively.

  4. a Two-Step Decision Fusion Strategy: Application to Hyperspectral and Multispectral Images for Urban Classification

    Science.gov (United States)

    Ouerghemmi, W.; Le Bris, A.; Chehata, N.; Mallet, C.

    2017-05-01

    Very high spatial resolution multispectral images and lower spatial resolution hyperspectral images are complementary sources for urban object classification. The first enables a fine delineation of objects, while the second can better discriminate classes and consider richer land cover semantics. This paper presents a decision fusion scheme taking advantage of both sources' classification maps to produce a better classification map. The proposed method aims at dealing with both semantic and spatial uncertainties and consists of two steps. First, class membership maps are merged at the pixel level. Several fusion rules are considered and compared in this study. Secondly, the classification is obtained from a global regularization of a graphical model, involving a fit-to-data term related to class membership measures and an image-based contrast-sensitive regularization term. Results are presented on three datasets. The classification accuracy is improved by up to 5% compared with the best single-source classification accuracy.
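
    The pixel-level fusion step can be illustrated with a small NumPy sketch: two sources each provide a class membership map, the maps are merged with a simple rule (product, mean or min), and the label is taken as the argmax. The graphical-model regularization step of the paper is not reproduced, and the membership maps here are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    h, w, n_classes = 4, 4, 3
    p_hs = rng.dirichlet(np.ones(n_classes), size=(h, w))   # hyperspectral memberships
    p_ms = rng.dirichlet(np.ones(n_classes), size=(h, w))   # multispectral memberships

    def fuse(p1, p2, rule="product"):
        """Merge two per-pixel class membership maps with a simple fusion rule."""
        if rule == "product":
            merged = p1 * p2                 # Bayesian-style product rule
        elif rule == "mean":
            merged = 0.5 * (p1 + p2)         # simple averaging
        else:
            merged = np.minimum(p1, p2)      # conservative min rule
        return merged / merged.sum(axis=-1, keepdims=True)

    for rule in ("product", "mean", "min"):
        labels = fuse(p_hs, p_ms, rule).argmax(axis=-1)
        print(rule, "label map:\n", labels)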

  5. Signal classification method based on data mining for multi-mode radar

    Institute of Scientific and Technical Information of China (English)

    Qiang Guo; Pulong Nan; Jian Wan

    2016-01-01

    For multi-mode radar working in the modern electronic battlefield, different working states of one single radar are prone to being classified as multiple emitters when adopting traditional classification methods to process intercepted signals, which has a negative effect on signal classification. A classification method based on spatial data mining is presented to address the above challenge. Inspired by the idea of spatial data mining, the classification method applies nuclear field to depicting the distribution information of pulse samples in feature space, and digs out the hidden cluster information by analyzing distribution characteristics. In addition, a membership-degree criterion to quantify the correlation among all classes is established, which ensures classification accuracy of signal samples. Numerical experiments show that the presented method can effectively prevent different working states of a multi-mode emitter from being classified as several emitters, and achieves higher classification accuracy.

  6. Social Power Increases Interoceptive Accuracy

    Directory of Open Access Journals (Sweden)

    Mehrad Moeini-Jazani

    2017-08-01

    Full Text Available Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants’ physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy is dependent on individuals’ chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals’ chronic sense of power also predicts interoceptive accuracy similar to, and independent of, how their situationally induced feeling of power does. We therefore provide further support on the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective–a psychophysiological account–on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research.

  7. Classification of Brain Signals in Normal Subjects and Patients with Epilepsy Using Mixture of Experts

    Directory of Open Access Journals (Sweden)

    S. Amoozegar

    2013-06-01

    Full Text Available EEG is one of the most important and common sources for the study of brain function and neurological disorders. Automated systems for detecting EEG changes have been under study for many years. Because making the correct decision is so important, we are looking for better classification methods for EEG signals. In this paper a smart compound system is used for classifying EEG signals into different groups. Since the accuracy of each classification decision is very important, in this study we look for methods to improve the accuracy of EEG signal classification. This paper shows the use of a Mixture of Experts (ME) for improving the classification of EEG signals from normal subjects and patients with epilepsy, and evaluates the classification accuracy. Decision making was performed in two stages: (1) feature extraction with different eigenvector-based methods and (2) classification using a classifier trained on the extracted features. The inputs of this smart system are formed from composite features selected to suit the network structure. In this study three eigenvector-based methods (Minimum Norm, MUSIC, Pisarenko) are chosen for the estimation of Power Spectral Density (PSD). After implementing the ME and training it on the composite features, we show that this technique can reach high classification accuracy. Hence, classification of EEG signals from epilepsy patients in different situations and from control subjects becomes feasible. In this study, the Mixture of Experts structure was used for EEG signal classification. Proper performance of a neural network depends on the size of the training and test data. Combining multiple neural networks, even without a probabilistic structure for obtaining the weights, can produce high accuracy in less time, which is important and valuable from the classification point of view.

  8. Texture feature based liver lesion classification

    Science.gov (United States)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features for a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result of 91% accuracy was obtained with Gabor filtering and SVM classification. Combination of Gabor, LBP and Intensity features improved the results to a final accuracy of 97%.
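
    A hedged sketch of the texture-feature pipeline is given below, using GLCM statistics and an LBP histogram from scikit-image (function names graycomatrix/graycoprops as in skimage 0.19+) and an SVM with cross-validation. The Gabor features of the study are omitted, and the ROIs are synthetic stand-ins for lesion patches rather than CT data.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def texture_features(roi):
        """GLCM contrast/homogeneity/energy plus an LBP histogram for one ROI."""
        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        glcm_feats = [graycoprops(glcm, p).mean()
                      for p in ("contrast", "homogeneity", "energy")]
        lbp = local_binary_pattern(roi, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        return np.concatenate([glcm_feats, hist])

    # Two synthetic "lesion" classes with different texture statistics.
    rois, labels = [], []
    for cls in (0, 1):
        for _ in range(40):
            noise = rng.normal(scale=10 + 25 * cls, size=(32, 32))
            rois.append(np.clip(128 + noise, 0, 255).astype(np.uint8))
            labels.append(cls)

    X = np.array([texture_features(r) for r in rois])
    acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, labels, cv=5).mean()
    print(f"texture features + SVM cross-validated accuracy: {acc:.3f}")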

  9. Multiple Structure-View Learning for Graph Classification.

    Science.gov (United States)

    Wu, Jia; Pan, Shirui; Zhu, Xingquan; Zhang, Chengqi; Yu, Philip S

    2017-09-20

    Many applications involve objects containing structure and rich content information, each describing different feature aspects of the object. Graph learning and classification is a common tool for handling such objects. To date, existing graph classification has been limited to the single-graph setting with each object being represented as one graph from a single structure-view. This inherently limits its use to the classification of complicated objects containing complex structures and uncertain labels. In this paper, we advance graph classification to handle multigraph learning for complicated objects from multiple structure views, where each object is represented as a bag containing several graphs and the label is only available for each graph bag but not individual graphs inside the bag. To learn such graph classification models, we propose a multistructure-view bag constrained learning (MSVBL) algorithm, which aims to explore substructure features across multiple structure views for learning. By enabling joint regularization across multiple structure views and enforcing labeling constraints at the bag and graph levels, MSVBL is able to discover the most effective substructure features across all structure views. Experiments and comparisons on real-world data sets validate and demonstrate the superior performance of MSVBL in representing complicated objects as multigraph for classification, e.g., MSVBL outperforms the state-of-the-art multiview graph classification and multiview multi-instance learning approaches.

  10. Superior mesenteric artery compression syndrome - case report

    Directory of Open Access Journals (Sweden)

    Paulo Rocha França Neto

    2011-12-01

    Full Text Available Superior mesenteric artery syndrome is an entity generally caused by the loss of the intervening mesenteric fat pad, resulting in compression of the third portion of the duodenum by the superior mesenteric artery. This article reports the case of a patient with unresectable metastatic adenocarcinoma of the sigmoid colon, who evolved with intractable vomiting. An intestinal transit study was carried out, which showed important gastric dilation extending to the third portion of the duodenum, compatible with superior mesenteric artery syndrome. Considering the patient's nutritional condition, the medical team opted for conservative treatment. Four months after the surgery and conservative measures, the patient did not present vomiting after eating and maintained her previous weight. Superior mesenteric artery syndrome is uncommon and can have unspecific symptoms. Thus, high suspicion is required for the appropriate clinical adjustment. A barium examination is required to make the diagnosis. The treatment can initially require gastric decompression and hydration, besides reversal of weight loss through adequate nutrition. Surgery should be adopted only in case of clinical treatment failure.

  11. [Integration of soft and hard classifications using linear spectral mixture model and support vector machines].

    Science.gov (United States)

    Hu, Tan-Gao; Pan, Yao-Zhong; Zhang, Jin-Shui; Li, Ling-Ling; Le, Li

    2011-02-01

    This paper presents a new integrated soft and hard classification method. By analyzing the distribution of target objects in the image and calculating an adaptive threshold automatically, the image is divided into three regions: pure regions, non-target object regions and mixed regions. For pure regions and non-target object regions, a hard classification method (support vector machine) is used to quickly extract classified results; for mixed regions, a soft classification method (selective endmember linear spectral mixture model) is used to extract the abundance of target objects. Finally, an integrated soft and hard classification map is generated. In order to evaluate the accuracy of this new method, it is compared with SVM and LSMM using an ALOS image. The RMSE value of the new method is 0.203, and the total accuracy is 95.48%. Both overall accuracy and RMSE show that the integration of hard and soft classification has higher accuracy than single hard or soft classification. Experimental results prove that the new method can effectively solve the problem of mixed pixels and can markedly improve image classification accuracy.

  12. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    Agricultural Marketing Service, USDA. ACTION: Proposed... for the addition of an optional cotton futures classification procedure, identified and known... When verified by a futures classification, Smith-Doxey data serves as... (process in March 2012, 77 FR 5379).

  13. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a softmax output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic......, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s....

  14. Learning Apache Mahout classification

    CERN Document Server

    Gupta, Ashish

    2015-01-01

    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  15. Update on diabetes classification.

    Science.gov (United States)

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal.

  16. [Classification of cardiomyopathy].

    Science.gov (United States)

    Asakura, Masanori; Kitakaze, Masafumi

    2014-01-01

    Cardiomyopathy is a group of cardiovascular diseases with poor prognosis. Some patients with dilated cardiomyopathy need heart transplantations due to severe heart failure. Some patients with hypertrophic cardiomyopathy die unexpectedly due to malignant ventricular arrhythmias. Various phenotypes of cardiomyopathies are due to the heterogeneous group of diseases. The classification of cardiomyopathies is important and indispensable in the clinical situation. However, their classification has not been established, because the causes of cardiomyopathies have not been fully elucidated. We usually use definition and classification offered by WHO/ISFC task force in 1995. Recently, several new definitions and classifications of the cardiomyopathies have been published by American Heart Association, European Society of Cardiology and Japanese Circulation Society.

  17. Carbohydrate terminology and classification

    National Research Council Canada - National Science Library

    Cummings, J H; Stephen, A M

    2007-01-01

    ...) and polysaccharides (DP ≥ 10). Within this classification, a number of terms are used such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc...

  18. Optimization of Agricultural Crop Identification in SLAR Images: Hierarchic Classification and Texture Analysis

    NARCIS (Netherlands)

    Hoogeboom, P.

    1985-01-01

    In 1980 a large SLAR flight program was carried out over an agricultural area in The Netherlands. A classification study on this multitemporal dataset (Ref. 1) showed that high accuracies are obtained from a simultaneous classification of 3 flights. In this paper the results of a follow-on study will ...

  19. Multiscale modeling for classification of SAR imagery using hybrid EM algorithm and genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    Xianbin Wen; Hua Zhang; Jianguang Zhang; Xu Jiao; Lei Wang

    2009-01-01

    A novel method that hybridizes the genetic algorithm (GA) and the expectation maximization (EM) algorithm for the classification of synthetic aperture radar (SAR) imagery is proposed, based on the finite Gaussian mixture model (GMM) and the multiscale autoregressive (MAR) model. This algorithm is capable of improving the global optimality and consistency of the classification performance. The experiments on the SAR images show that the proposed algorithm outperforms the standard EM method significantly in classification accuracy.

  20. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity

    OpenAIRE

    Hussain, Shaista; Basu, Arindam

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  1. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses using Structural Plasticity

    OpenAIRE

    Shaista eHussain; Arindam eBasu

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  2. Completion of the classification

    CERN Document Server

    Strade, Helmut

    2012-01-01

    This is the last of three volumes about "Simple Lie Algebras over Fields of Positive Characteristic" by Helmut Strade, presenting the state of the art of the structure and classification of Lie algebras over fields of positive characteristic. In this monograph the proof of the Classification Theorem presented in the first volume is concluded. It collects all the important results on the topic which can so far be found only in scattered scientific literature.

  3. Twitter content classification

    OpenAIRE

    2010-01-01

    This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial and phatic communications with a split-level classification scheme that offers broad categorization and specific subcategories for deeper insight into the real-world application of the service.

  4. Image Reconstruction Using Pixel Wise Support Vector Machine SVM Classification.

    Directory of Open Access Journals (Sweden)

    Mohammad Mahmudul Alam Mia

    2015-02-01

    Full Text Available Abstract Image reconstruction using support vector machine (SVM) classification has been one of the major parts of image processing. The exactness of a supervised image classification is a function of the training data used in its generation. In this paper we studied the classification aspects of the support vector machine and reconstructed an image using SVM classification. First, the values of randomly selected pixels are used for the SVM classifier. Then the SVM classifier is trained by using those values of the random pixels. Finally, the image is reconstructed after cross-validation with the trained SVM classifier. MATLAB results show that training with a support vector machine produces better results and great computational efficiency, with only a few minutes of runtime necessary for training. Support vector machines have high classification accuracy and much faster convergence. Overall classification accuracy is 99.5%. From our experiment it can be seen that classification accuracy mostly depends on the choice of the kernel function, and the best estimation of kernel parameters is critical for a given image.
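
    The basic idea can be sketched in a few lines: sample random pixels, treat their coordinates as inputs and their quantized gray level as the class, train an SVM, and predict every pixel to reconstruct the image. The synthetic gradient image, kernel and parameter values below are illustrative assumptions, not those of the paper.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    h, w = 64, 64
    rows, cols = np.mgrid[0:h, 0:w]
    image = ((rows + cols) / (h + w - 2) * 15).astype(int)    # 16 gray-level classes

    coords = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    values = image.ravel()
    idx = rng.choice(len(values), size=1000, replace=False)   # random training pixels

    clf = SVC(kernel="rbf", gamma="scale", C=10.0)
    clf.fit(coords[idx], values[idx])
    reconstruction = clf.predict(coords).reshape(h, w)
    accuracy = (reconstruction == image).mean()
    print(f"pixel-wise classification accuracy: {accuracy:.3f}")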

  5. Visual traffic surveillance framework: classification to event detection

    Science.gov (United States)

    Ambardekar, Amol; Nicolescu, Mircea; Bebis, George; Nicolescu, Monica

    2013-10-01

    Visual traffic surveillance using computer vision techniques can be noninvasive, automated, and cost effective. Traffic surveillance systems with the ability to detect, count, and classify vehicles can be employed in gathering traffic statistics and achieving better traffic control in intelligent transportation systems. However, vehicle classification poses a difficult problem as vehicles have high intraclass variation and relatively low interclass variation. Five different object recognition techniques are investigated: principal component analysis (PCA)+difference from vehicle space, PCA+difference in vehicle space, PCA+support vector machine, linear discriminant analysis, and constellation-based modeling applied to the problem of vehicle classification. Three of the techniques that performed well were incorporated into a unified traffic surveillance system for online classification of vehicles, which uses tracking results to improve the classification accuracy. To evaluate the accuracy of the system, 31 min of traffic video containing multilane traffic intersection was processed. It was possible to achieve classification accuracy as high as 90.49% while classifying correctly tracked vehicles into four classes: cars, SUVs/vans, pickup trucks, and buses/semis. While processing a video, our system also recorded important traffic parameters such as the appearance, speed, trajectory of a vehicle, etc. This information was later used in a search assistant tool to find interesting traffic events.

  6. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Directory of Open Access Journals (Sweden)

    Muhsin Hassan

    2013-08-01

    Full Text Available The aim of this work is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive White Gaussian Noise has been added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM and a hybrid of Kalman Filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combination in a single system produced higher accuracy compared to the other two techniques.
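
    The hybrid idea can be illustrated with a scalar Kalman filter used to denoise each noisy signal before SVM classification. The process and measurement variances and the synthetic "corrosion depth" signals below are made-up placeholders, and the DWT + SVM variant from the study is not included.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def kalman_1d(z, q=1e-3, r=0.5):
        """Filter a 1D measurement sequence z with a constant-state Kalman filter."""
        x, p = z[0], 1.0
        out = np.empty_like(z)
        for k, zk in enumerate(z):
            p = p + q                     # predict
            g = p / (p + r)               # Kalman gain
            x = x + g * (zk - x)          # update with measurement zk
            p = (1 - g) * p
            out[k] = x
        return out

    # Two defect classes, each sample a 50-point signal with additive white noise.
    n, length = 120, 50
    y = rng.integers(0, 2, size=n)
    clean = np.where(y[:, None] == 0, np.linspace(0, 1, length), np.linspace(1, 0, length))
    noisy = clean + rng.normal(scale=0.4, size=(n, length))
    filtered = np.array([kalman_1d(s) for s in noisy])

    for name, X in (("raw SVM", noisy), ("Kalman + SVM", filtered)):
        acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5).mean()
        print(f"{name:<13} accuracy = {acc:.3f}")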

  7. Refinement of Hyperspectral Image Classification with Segment-Tree Filtering

    Directory of Open Access Journals (Sweden)

    Lu Li

    2017-01-01

    Full Text Available This paper proposes a novel method of segment-tree filtering to improve the classification accuracy of hyperspectral image (HSI. Segment-tree filtering is a versatile method that incorporates spatial information and has been widely applied in image preprocessing. However, to use this powerful framework in hyperspectral image classification, we must reduce the original feature dimensionality to avoid the Hughes problem; otherwise, the computational costs are high and the classification accuracy by original bands in the HSI is unsatisfactory. Therefore, feature extraction is adopted to produce new salient features. In this paper, the Semi-supervised Local Fisher (SELF method of discriminant analysis is used to reduce HSI dimensionality. Then, a tree-structure filter that adaptively incorporates contextual information is constructed. Additionally, an initial classification map is generated using multi-class support vector machines (SVMs, and segment-tree filtering is conducted using this map. Finally, a simple Winner-Take-All (WTA rule is applied to determine the class of each pixel in an HSI based on the maximum probability. The experimental results demonstrate that the proposed method can improve HSI classification accuracy significantly. Furthermore, a comparison between the proposed method and the current state-of-the-art methods, such as Extended Morphological Profiles (EMPs, Guided Filtering (GF, and Markov Random Fields (MRFs, suggests that our method is both competitive and robust.

  8. Land Cover Heterogeneity Effects on Sub-Pixel and Per-Pixel Classifications

    Directory of Open Access Journals (Sweden)

    Trung V. Tran

    2014-04-01

    Full Text Available Per-pixel and sub-pixel are two common classification methods in land cover studies. The characteristics of a landscape, particularly the land cover itself, can affect the accuracies of both methods. The objectives of this study were to: (1 compare the performance of sub-pixel vs. per-pixel classification methods for a broad heterogeneous region; and (2 analyze the impact of land cover heterogeneity (i.e., the number of land cover classes per pixel on both classification methods. The results demonstrated that the accuracy of both per-pixel and sub-pixel classification methods were generally reduced by increasing land cover heterogeneity. Urban areas, for example, were found to have the lowest accuracy for the per-pixel method, because they had the highest heterogeneity. Conversely, rural areas dominated by cropland and grassland had low heterogeneity and high accuracy. When a sub-pixel method was used, the producer’s accuracy for artificial surfaces was increased by more than 20%. For all other land cover classes, sub-pixel and per-pixel classification methods performed similarly. Thus, the sub-pixel classification was only advantageous for heterogeneous urban landscapes. Both creators and users of land cover datasets should be aware of the inherent landscape heterogeneity and its potential effect on map accuracy.

  9. An Object-Based Method for Chinese Landform Types Classification

    Science.gov (United States)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example for landscape evaluation, erosion studies, hazard prediction, et al. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on 1km DEM of China, the combination of the terrain factors extracted from DEM are selected by correlation analysis and Sheffield's entropy method. Random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is conducted for the knowledge base of classification. The classification result was checked by using the 1:4,000,000 Chinese Geomorphological Map as reference. And the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification, and 15.7% higher than the traditional object-based classification method.

  10. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example for landscape evaluation, erosion studies, hazard prediction, et al. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM. In this research, based on 1km DEM of China, the combination of the terrain factors extracted from DEM are selected by correlation analysis and Sheffield's entropy method. Random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is conducted for the knowledge base of classification. The classification result was checked by using the 1:4,000,000 Chinese Geomorphological Map as reference. And the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification, and 15.7% higher than the traditional object-based classification method.

  11. Comparison research on iot oriented image classification algorithms

    Directory of Open Access Journals (Sweden)

    Du Ke

    2016-01-01

    Full Text Available Image classification belongs to the machine learning and computer vision fields; it aims to recognize and classify objects in image content. How to apply image classification algorithms to large-scale data in the IoT framework is the focus of current research. Based on Anaconda, this article implements k-NN, SVM, Softmax and Neural Network algorithms in Python, performs data normalization, random search, HOG and colour histogram feature extraction to enhance the algorithms, experiments on them with the CIFAR-10 dataset, and then conducts a comparison from three aspects: training time, test time and classification accuracy. The experimental results show that: the vectorized implementation of the algorithms is more efficient than the loop implementation; the training time of k-NN is the shortest, SVM and Softmax spend more time, and the training time of the Neural Network is the longest; the test times of SVM, Softmax and the Neural Network are much shorter than that of k-NN; the Neural Network gets the highest classification accuracy, SVM and Softmax get lower and approximately equal accuracies, and k-NN gets the lowest accuracy. The effects of the three algorithm improvement methods are obvious.
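
    The comparison set-up can be sketched with scikit-learn on a small stand-in dataset (digits instead of CIFAR-10), timing training and testing for k-NN, a linear SVM, softmax (multinomial logistic regression) and a small neural network. Parameters are illustrative defaults rather than the article's tuned values.

    import time
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

    models = {
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "SVM": LinearSVC(max_iter=5000),
        "Softmax": LogisticRegression(max_iter=1000),
        "Neural Net": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    }
    for name, model in models.items():
        t0 = time.perf_counter()
        model.fit(Xtr, ytr)
        t_train = time.perf_counter() - t0
        t0 = time.perf_counter()
        acc = model.score(Xte, yte)
        t_test = time.perf_counter() - t0
        print(f"{name:<10} train {t_train:6.3f}s  test {t_test:6.3f}s  accuracy {acc:.3f}")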

  12. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

    Classification models are becoming increasing popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting...

  13. Tree Classification with Fused Mobile Laser Scanning and Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Juha Hyyppä

    2011-05-01

    Full Text Available Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification succeeded with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile collected fused data outperformed single-sensor data in both classification tests and by a significant margin.

  14. Supervised Classification in the Presence of Misclassified Training Data: A Monte Carlo Simulation Study in the Three Group Case

    Directory of Open Access Journals (Sweden)

    Jocelyn E Bolin

    2014-02-01

    Full Text Available Statistical classification of phenomena into observed groups is very common in the social and behavioral sciences. Statistical classification methods, however, are affected by the characteristics of the data under study. Statistical classification can be further complicated by initial misclassification of the observed groups. The purpose of this study is to investigate the impact of initial training data misclassification on several statistical classification and data mining techniques. Misclassification conditions in the three-group case are simulated and results are presented in terms of overall as well as subgroup classification accuracy. Results show decreased classification accuracy as sample size, group separation and group size ratio decrease and as the misclassification percentage increases, with random forests demonstrating the highest accuracy across conditions.
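
    A minimal sketch of the simulation idea, under assumed settings: a chosen percentage of training labels is flipped at random in a three-class problem and the effect on random forest test accuracy is measured.

      # Inject label misclassification into training data and measure the accuracy impact.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=1500, n_classes=3, n_informative=5, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      for noise in (0.0, 0.1, 0.2, 0.3):                      # misclassification percentage
          y_noisy = y_tr.copy()
          flip = rng.random(len(y_noisy)) < noise
          y_noisy[flip] = rng.integers(0, 3, size=flip.sum()) # relabel a fraction at random
          acc = RandomForestClassifier(random_state=0).fit(X_tr, y_noisy).score(X_te, y_te)
          print(f"noise={noise:.0%} accuracy={acc:.3f}")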

  15. Classification of Sporting Activities Using Smartphone Accelerometers

    Directory of Open Access Journals (Sweden)

    Noel E. O'Connor

    2013-04-01

    Full Text Available In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today’s society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset, comprised of soccer and field-hockey activities. An average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
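
    A hedged sketch of the feature pipeline: DWT coefficients of stand-in accelerometer windows are summarized into energy and spread features and fed to a fusion (soft-voting) of classifiers. Wavelet, window length, decomposition level and the classifier mix are illustrative choices, not the paper's exact configuration.

      # DWT features from accelerometer windows, classified by a fusion of classifiers.
      import numpy as np
      import pywt
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier

      def dwt_features(window, wavelet="db4", level=3):
          # energy and standard deviation of each DWT sub-band of one windowed axis
          coeffs = pywt.wavedec(window, wavelet, level=level)
          return np.array([v for c in coeffs for v in (np.sum(c ** 2), np.std(c))])

      rng = np.random.default_rng(0)
      windows = rng.standard_normal((200, 256))      # stand-in accelerometer windows
      labels = rng.integers(0, 2, size=200)          # e.g. soccer vs. field-hockey activity
      X = np.array([dwt_features(w) for w in windows])

      fusion = VotingClassifier([
          ("svm", SVC(probability=True)),
          ("rf", RandomForestClassifier(random_state=0)),
          ("knn", KNeighborsClassifier()),
      ], voting="soft").fit(X, labels)
      print("training accuracy:", fusion.score(X, labels))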

  16. Hierarchical Maximum Margin Learning for Multi-Class Classification

    CERN Document Server

    Yang, Jian-Bo

    2012-01-01

    Due to myriads of classes, designing accurate and efficient classifiers becomes very challenging for multi-class classification. Recent research has shown that class structure learning can greatly facilitate multi-class learning. In this paper, we propose a novel method to learn the class structure for multi-class classification problems. The class structure is assumed to be a binary hierarchical tree. To learn such a tree, we propose a maximum separating margin method to determine the child nodes of any internal node. The proposed method ensures that the two class groups represented by any two sibling nodes are maximally separable. In the experiments, we evaluate the accuracy and efficiency of the proposed method against other multi-class classification methods on real-world large-scale problems. The results show that the proposed method outperforms benchmark methods in terms of accuracy for most datasets and performs comparably with other class structure learning methods in terms of efficiency for all datasets.

  17. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Full Text Available Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some Local Binary Pattern (LBP) drawbacks. The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) is proposed to be more robust to noise than LBP, the latter's weaknesses may appear with the LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves an impressive classification accuracy compared to the CLBP and CLBC descriptors.
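
    A simplified sketch of the underlying local ternary pattern coding (not the full CLTP descriptor): each neighbour is coded -1/0/+1 relative to the centre pixel within a threshold t, and the ternary code is split into upper and lower binary pattern maps whose histograms form a texture feature.

      # Basic local ternary pattern (LTP) coding over the 8-neighbourhood of each pixel.
      import numpy as np

      def ltp_codes(img, t=5):
          # returns upper (+1 states) and lower (-1 states) LTP code maps
          img = img.astype(np.int32)
          centre = img[1:-1, 1:-1]
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
          upper = np.zeros_like(centre)
          lower = np.zeros_like(centre)
          for bit, (dy, dx) in enumerate(offsets):
              neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
              diff = neigh - centre
              upper += (diff >= t).astype(np.int32) << bit
              lower += (diff <= -t).astype(np.int32) << bit
          return upper, lower

      img = np.random.default_rng(0).integers(0, 256, size=(64, 64))   # stand-in texture patch
      u, l = ltp_codes(img)
      hist = np.concatenate([np.bincount(u.ravel(), minlength=256),
                             np.bincount(l.ravel(), minlength=256)])   # texture feature vector
      print("feature length:", hist.size)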

  18. Link prediction boosted psychiatry disorder classification for functional connectivity network

    Science.gov (United States)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    Functional connectivity network (FCN) is an effective tool in psychiatric disorder classification, and represents the cross-correlation of the regional blood oxygenation level dependent signal. However, an FCN is often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders and healthy controls with incomplete FCNs, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's Disease, Schizophrenia and Attention Deficit Hyperactivity Disorder. The experimental results show that our method not only significantly improves the classification accuracy, but also efficiently reconstructs the incomplete FCN.
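
    A loose sketch of the pipeline with simplifications: a common-neighbour link-prediction step 'repairs' each stand-in connectivity network, per-node clustering coefficients are extracted as features, and a boosting ensemble of weak learners is trained. The repair rule, feature set and boosting setup are assumptions for illustration, not the authors' exact method.

      # Repair networks via simple link prediction, extract clustering coefficients, boost.
      import numpy as np
      import networkx as nx
      from sklearn.ensemble import AdaBoostClassifier

      def repair_and_features(adjacency, n_new_edges=5):
          g = nx.from_numpy_array(adjacency)
          # link prediction: add the highest-scoring missing edges (common-neighbour count)
          scores = sorted(((u, v, len(list(nx.common_neighbors(g, u, v))))
                           for u, v in nx.non_edges(g)), key=lambda t: t[2], reverse=True)
          g.add_edges_from((u, v) for u, v, _ in scores[:n_new_edges])
          return np.array(list(nx.clustering(g).values()))   # per-node clustering coefficients

      rng = np.random.default_rng(0)
      networks = [(rng.random((30, 30)) > 0.8).astype(int) for _ in range(60)]
      networks = [np.triu(a, 1) + np.triu(a, 1).T for a in networks]   # symmetric, no self-loops
      labels = rng.integers(0, 2, size=60)                              # patient vs. control

      X = np.array([repair_and_features(a) for a in networks])
      clf = AdaBoostClassifier(n_estimators=100).fit(X, labels)         # default weak learners are stumps
      print("training accuracy:", clf.score(X, labels))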

  19. Word pair classification during imagined speech using direct brain recordings

    Science.gov (United States)

    Martin, Stephanie; Brunner, Peter; Iturrate, Iñaki; Millán, José Del R.; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.

    2016-05-01

    People that cannot communicate due to neurological disorders would benefit from an internal speech decoder. Here, we showed the ability to classify individual words during imagined speech from electrocorticographic signals. In a word imagery task, we used high gamma (70–150 Hz) time features with a support vector machine model to classify individual words from a pair of words. To account for temporal irregularities during speech production, we introduced a non-linear time alignment into the SVM kernel. Classification accuracy reached 88% in a two-class classification framework (50% chance level), and average classification accuracy across fifteen word-pairs was significant across five subjects (mean = 58% p perception and production. These data represent a proof of concept study for basic decoding of speech imagery, and delineate a number of key challenges to usage of speech imagery neural representations for clinical applications.
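
    A hedged sketch of the idea of building non-linear time alignment into an SVM kernel: a dynamic time warping distance between stand-in per-trial feature time courses is converted into a similarity and passed to an SVM as a precomputed kernel. Such kernels are not guaranteed positive semi-definite, and this illustrates the concept only, not the study's method.

      # DTW-based similarity used as a precomputed SVM kernel.
      import numpy as np
      from sklearn.svm import SVC

      def dtw(a, b):
          # classic O(len(a)*len(b)) dynamic time warping distance
          D = np.full((len(a) + 1, len(b) + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[-1, -1]

      rng = np.random.default_rng(0)
      signals = rng.standard_normal((30, 50))   # stand-in per-trial feature time courses
      labels = rng.integers(0, 2, size=30)      # word 1 vs. word 2 in a pair

      gamma = 0.05
      K = np.array([[np.exp(-gamma * dtw(a, b)) for b in signals] for a in signals])
      clf = SVC(kernel="precomputed").fit(K, labels)
      print("training accuracy:", clf.score(K, labels))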

  20. Structure and context in prostatic gland segmentation and classification.

    Science.gov (United States)

    Nguyen, Kien; Sarkar, Anindya; Jain, Anil K

    2012-01-01

    A novel gland segmentation and classification scheme applied to an H&E histology image of the prostate tissue is proposed. For gland segmentation, we associate appropriate nuclei objects with each lumen object to create a gland segment. We further extract 22 features to describe the structural information and contextual information for each segment. These features are used to classify a gland segment into one of the three classes: artifact, normal gland and cancer gland. On a dataset of 48 images at 5x magnification (which includes 525 artifacts, 931 normal glands and 1,375 cancer glands), we achieved the following classification accuracies: 93% for artifacts v. true glands; 79% for normal v. cancer glands, and 77% for discriminating all three classes. The proposed method outperforms state of the art methods in terms of segmentation and classification accuracies and computational efficiency.

  1. A Hybrid Sensing Approach for Pure and Adulterated Honey Classification

    Directory of Open Access Journals (Sweden)

    Ammar Zakaria

    2012-10-01

    Full Text Available This paper presents a comparison between data from single modality and fusion methods to classify Tualang honey as pure or adulterated using Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA) statistical classification approaches. Ten different brands of certified pure Tualang honey were obtained throughout peninsular Malaysia and Sumatera, Indonesia. Various concentrations of two types of sugar solution (beet and cane sugar) were used in this investigation to create honey samples at 20%, 40%, 60% and 80% adulteration concentrations. Honey data extracted from an electronic nose (e-nose) and Fourier Transform Infrared Spectroscopy (FTIR) were gathered, analyzed and compared based on fusion methods. Visual observation of the classification plots revealed that the PCA approach was able to distinguish pure and adulterated honey samples better than the LDA technique. Overall, the validated classification results based on FTIR data (88.0%) gave higher classification accuracy than e-nose data (76.5%) using the LDA technique. Honey classification based on normalized low-level and intermediate-level FTIR and e-nose fusion data scored classification accuracies of 92.2% and 88.7%, respectively, using the Stepwise LDA method. The results suggest that pure and adulterated honey samples are better classified using FTIR and e-nose fusion data than single modality data.
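
    A small illustrative sketch of low-level data fusion: normalized e-nose and FTIR feature blocks (random stand-ins here) are concatenated and classified with LDA under cross-validation. Feature dimensions and settings are assumptions.

      # Low-level fusion: concatenate normalized sensor blocks, then classify with LDA.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      enose = rng.random((100, 32))            # stand-in e-nose sensor responses per sample
      ftir = rng.random((100, 200))            # stand-in FTIR spectra per sample
      labels = rng.integers(0, 2, size=100)    # pure vs. adulterated

      fused = np.hstack([StandardScaler().fit_transform(enose),
                         StandardScaler().fit_transform(ftir)])   # low-level fusion
      print("CV accuracy:",
            cross_val_score(LinearDiscriminantAnalysis(), fused, labels, cv=5).mean())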

  2. Design of a robust EMG sensing interface for pattern classification.

    Science.gov (United States)

    Huang, He; Zhang, Fan; Sun, Yan L; He, Haibo

    2010-10-01

    Electromyographic (EMG) pattern classification has been widely investigated for neural control of external devices in order to assist with movements of patients with motor deficits. Classification performance deteriorates due to inevitable disturbances to the sensor interface, which significantly challenges the clinical value of this technique. This study aimed to design a sensor fault detection (SFD) module in the sensor interface to provide reliable EMG pattern classification. This module monitored the recorded signals from individual EMG electrodes and performed a self-recovery strategy to recover the classification performance when one or more sensors were disturbed. To evaluate this design, we applied synthetic disturbances to EMG signals collected from leg muscles of able-bodied subjects and a subject with a transfemoral amputation and compared the accuracies for classifying transitions between different locomotion modes with and without the SFD module. The results showed that the SFD module maintained classification performance when one signal was distorted and recovered about 20% of classification accuracy when four signals were distorted simultaneously. The method was simple to implement. Additionally, these outcomes were observed for all subjects, including the leg amputee, which implies the promise of the designed sensor interface for providing a reliable neural-machine interface for artificial legs.
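
    A hedged sketch of the sensor fault detection idea: channels whose RMS drifts far from a calibration baseline are flagged, and classification falls back to a model trained without the flagged channels. The threshold, features and fallback strategy are illustrative simplifications, not the authors' design.

      # Flag disturbed EMG channels by RMS deviation, then classify on healthy channels only.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n_channels, n_samples = 8, 400
      train_windows = rng.standard_normal((n_samples, n_channels, 100))  # stand-in EMG windows
      labels = rng.integers(0, 4, size=n_samples)                        # locomotion modes

      rms = np.sqrt((train_windows ** 2).mean(axis=2))        # per-window, per-channel RMS features
      baseline_mean, baseline_std = rms.mean(axis=0), rms.std(axis=0)
      clf_all = LinearDiscriminantAnalysis().fit(rms, labels)

      def classify(window):
          feats = np.sqrt((window ** 2).mean(axis=1))
          faulty = np.abs(feats - baseline_mean) > 4 * baseline_std     # sensor fault detection
          if not faulty.any():
              return clf_all.predict([feats])[0]
          ok = ~faulty                                                  # self-recovery: use healthy channels
          clf_ok = LinearDiscriminantAnalysis().fit(rms[:, ok], labels)
          return clf_ok.predict([feats[ok]])[0]

      print(classify(train_windows[0]))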

  3. Classification of fricative consonants for speech enhancement in hearing devices.

    Directory of Open Access Journals (Sweden)

    Ying-Yee Kong

    Full Text Available OBJECTIVE: To investigate a set of acoustic features and classification methods for the classification of three groups of fricative consonants differing in place of articulation. METHOD: A support vector machine (SVM) algorithm was used to classify the fricatives extracted from the TIMIT database in quiet and also in speech babble noise at various signal-to-noise ratios (SNRs). Spectral features including four spectral moments, peak, slope, Mel-frequency cepstral coefficients (MFCC), Gammatone filter outputs, and magnitudes of the fast Fourier Transform (FFT) spectrum were used for the classification. The analysis frame was restricted to only 8 msec. In addition, commonly-used linear and nonlinear principal component analysis dimensionality reduction techniques that project a high-dimensional feature vector onto a lower dimensional space were examined. RESULTS: With 13 MFCC coefficients and 14 or 24 Gammatone filter outputs, classification performance was greater than or equal to 85% in quiet and at +10 dB SNR. Using 14 Gammatone filter outputs above 1 kHz, classification accuracy remained high (greater than 80%) for a wide range of SNRs from +20 to +5 dB SNR. CONCLUSIONS: High levels of classification accuracy for fricative consonants in quiet and in noise could be achieved using only spectral features extracted from a short time window. The results of this work have a direct impact on the development of speech enhancement algorithms for hearing devices.
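
    An illustrative sketch with stand-in data: MFCC features computed over short 8 ms frames (the frame length follows the abstract) are fed to an SVM; all other settings are assumptions.

      # MFCC features from 8 ms frames classified with an SVM (stand-in data).
      import numpy as np
      import librosa
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      sr = 16000
      frame_len = int(0.008 * sr)              # 8 ms analysis frames
      rng = np.random.default_rng(0)

      def frame_features(frame):
          mfcc = librosa.feature.mfcc(y=frame, sr=sr, n_mfcc=13,
                                      n_fft=frame_len, hop_length=frame_len, n_mels=40)
          return mfcc.mean(axis=1)             # average the 13 coefficients over the frame

      frames = rng.standard_normal((300, frame_len)).astype(np.float32)  # stand-in fricative frames
      labels = rng.integers(0, 3, size=300)    # three places of articulation
      X = np.array([frame_features(f) for f in frames])
      print("CV accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())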

  5. Fuzzy Aspect Based Opinion Classification System for Mining Tourist Reviews

    Directory of Open Access Journals (Sweden)

    Muhammad Afzaal

    2016-01-01

    Full Text Available Due to the large amount of opinions available on websites, tourists are often overwhelmed with information and find it extremely difficult to use the available information to make a decision about the tourist places to visit. A number of opinion mining methods have been proposed in the past to identify and classify an opinion as positive or negative. Recently, aspect based opinion mining has been introduced, which targets the various aspects present in the opinion text. A number of existing aspect based opinion classification methods are available in the literature, but very limited research work has targeted the automatic identification and extraction of implicit, infrequent and coreferential aspects. Aspect based classification suffers from the presence of irrelevant sentences in a typical user review. Such sentences make the data noisy and degrade the classification accuracy of machine learning algorithms. This paper presents a fuzzy aspect based opinion classification system that efficiently extracts aspects from user opinions and performs near-accurate classification. We conducted experiments on real world datasets to evaluate the effectiveness of the proposed system. Experimental results show that the proposed system is not only effective in aspect extraction but also improves classification accuracy.

  6. Accuracy of rainfall measurement for scales of hydrological interest

    Directory of Open Access Journals (Sweden)

    S. J. Wood

    2000-01-01

    Full Text Available The dense network of 49 raingauges over the 135 km2 Brue catchment in Somerset, England is used to examine the accuracy of rainfall estimates obtained from raingauges and from weather radar. Methods for data quality control and classification of precipitation types are first described. A super-dense network comprising eight gauges within a 2 km grid square is employed to obtain a 'true value' of rainfall against which the 2 km radar grid and a single 'typical gauge' estimate can be compared. Accuracy is assessed as a function of rainfall intensity, for different periods of time-integration (15 minutes, 1 hour and 1 day) and for two 8-gauge networks in areas of low and high relief. In a similar way, the catchment gauge network is used to provide the 'true catchment rainfall', and the accuracy of a radar estimate (an area-weighted average of radar pixel values) and a single 'typical gauge' estimate of catchment rainfall is evaluated as a function of rainfall intensity. A single gauge gives a standard error of estimate for rainfall in a 2 km square and over the catchment of 33% and 65% respectively, at rain rates of 4 mm in 15 minutes. Radar data at 2 km resolution give corresponding errors of 50% and 55%. This illustrates the benefit of using radar when estimating catchment scale rainfall. A companion paper (Wood et al., 2000) considers the accuracy of rainfall estimates obtained using raingauge and radar in combination. Keywords: rainfall, accuracy, raingauge, radar

  7. Investigation of the Accuracy of Google Earth Elevation Data

    Science.gov (United States)

    El-Ashmawy, Khalid L. A.

    2016-09-01

    Digital Elevation Models (DEMs) comprise a valuable source of elevation data required for many engineering applications. Contour lines and slope-aspect maps are among their many uses. Moreover, DEMs are often used in geographic information systems (GIS) and are the most common basis for digitally-produced relief maps. This paper proposes a method of generating a DEM using Google Earth elevation data, which is easier to obtain and free. The case study consisted of three different small regions on the northern beach of Egypt. The accuracy of the Google Earth derived elevation data is reported using the root mean square error (RMSE), mean error (ME) and maximum absolute error (MAE). All these accuracy statistics were computed using the ground coordinates of 200 reference points for each region of the case study. The reference data were collected with a total station survey. The results showed that the accuracies of the prepared DEMs are suitable for certain engineering applications but inadequate to meet the standard required for fine/small scale DEMs for very precise engineering studies. The accuracies obtained for terrain with small height differences can be used for preparing large area cadastral, city planning or land classification maps. In general, Google Earth elevation data can be used only for investigation and preliminary studies at low cost. It is strongly concluded that users of Google Earth have to test the accuracy of the elevation data by comparing it with reference data before using it.
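
    A minimal sketch of the reported accuracy statistics, computed from hypothetical reference (total station) elevations and Google Earth derived elevations at the same points; note that MAE here denotes the maximum absolute error, as in the abstract.

      # RMSE, mean error and maximum absolute error between reference and derived elevations.
      import numpy as np

      reference = np.array([12.4, 15.1, 9.8, 20.3])        # stand-in reference elevations (m)
      google_earth = np.array([13.0, 14.2, 10.5, 21.1])    # stand-in Google Earth elevations (m)

      err = google_earth - reference
      rmse = np.sqrt(np.mean(err ** 2))                    # root mean square error
      me = np.mean(err)                                    # mean error (bias)
      mae = np.max(np.abs(err))                            # maximum absolute error
      print(f"RMSE={rmse:.2f} m  ME={me:.2f} m  MAE={mae:.2f} m")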

  8. Exploring the word superiority effect using TVA

    DEFF Research Database (Denmark)

    Starrfelt, Randi

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. It is unclear, however, if this is due to a lower threshold for perc...... simultaneously we find a different pattern: In a whole report experiment with six stimuli (letters or words), letters are perceived more easily than words, and this is reflected both in perceptual processing speed and short term memory capacity....

  9. de educación media superior

    Directory of Open Access Journals (Sweden)

    Enrique Cerón Ferrer

    2007-01-01

    Full Text Available This paper presents the results obtained regarding the mathematical knowledge and skills of upper secondary education students at the Centros de Estudios Tecnológicos y de Servicios of the Distrito Federal, across the different programs taught at these schools, during 2005. The analysis instrument is a questionnaire answered by the students; the methodology used is longitudinal and comparative.

  10. Research advances in the study of Pistacia chinensis Bunge, a superior tree species for biomass energy

    Institute of Scientific and Technical Information of China (English)

    Li Hong-lin; Zhang Zhi-xiang; Lin Shan-zhi; Li Xiao-xu

    2007-01-01

    As a renewable energy source, biomass energy has attracted wide attention and its study has become a hot topic throughout the world. Pistacia chinensis Bunge (Anacardiaceae) is a superior species for biomass energy, with a high oil content in its seeds and a wide geographic distribution. It is a dioecious, deciduous tree, flowering from March to April and bearing fruit from September to November. The classification, regional distribution and biological characteristics of P. chinensis are described in this paper; then, research advances in the growth, breeding and physiology of this species are summarized. Problems in the present studies are discussed. Finally, a direction for future research is proposed.

  11. Classification via Clustering for Anonymization Data

    Directory of Open Access Journals (Sweden)

    Sridhar Mandapati

    2014-02-01

    Full Text Available The exponential growth of hardware technology, particularly in the field of electronic data storage media and the processing of such data, has raised serious ethical, philosophical and legal issues related to the protection of individual privacy. Data mining techniques are employed to ensure privacy. Privacy Preserving Data Mining (PPDM) techniques aim at protecting sensitive data and mining results. In this study, different clustering techniques via classification, with and without anonymization of the data, using the mining tool WEKA are presented. The aim of this study is to investigate the performance of different clustering methods on a diabetic data set and to compare the efficiency of privacy preserving mining. The accuracy of classification via clustering is evaluated using K-means, Expectation-Maximization (EM) and density based clustering methods.
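
    A hedged sketch of classification via clustering in the spirit of WEKA's classes-to-clusters evaluation: the data are clustered without labels, each cluster is assigned the majority class of its members, and the induced labeling is scored. A public two-class dataset stands in for the diabetic data.

      # Classification via clustering: map clusters to majority classes and score the result.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import load_breast_cancer
      from sklearn.metrics import accuracy_score
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)
      X = StandardScaler().fit_transform(X)

      clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      cluster_to_class = {c: np.bincount(y[clusters == c]).argmax() for c in np.unique(clusters)}
      y_pred = np.array([cluster_to_class[c] for c in clusters])
      print("classification-via-clustering accuracy:", accuracy_score(y, y_pred))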

  12. Generating Best Features for Web Page Classification

    Directory of Open Access Journals (Sweden)

    K. Selvakuberan

    2008-03-01

    Full Text Available As the Internet provides millions of web pages for each and every search term, getting interesting and required results quickly from the Web becomes very difficult. Automatic classification of web pages into relevant categories is a current research topic which helps the search engine to return relevant results. As web pages contain many irrelevant, infrequent and stop words that reduce the performance of the classifier, extracting or selecting representative features from the web page is an essential pre-processing step. The goal of this paper is to find a minimum number of highly qualitative features by integrating feature selection techniques. We conducted experiments with various numbers of features selected by different feature selection algorithms on a well-defined initial set of features and show that the cfssubset evaluator combined with the term frequency method gives a minimal set of qualitative features sufficient to attain considerable classification accuracy.
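
    A loose scikit-learn analogue of the feature-selection step (the cfssubset evaluator is a WEKA component; a chi-squared filter over term-frequency features stands in for it here). Pages, labels and the selected feature count are toy assumptions.

      # Term-frequency features, filter-based feature selection, then a simple classifier.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      pages = ["cheap flights and hotel deals", "latest football scores and results",
               "hotel booking discount travel", "league table goals and fixtures"]
      labels = ["travel", "sport", "travel", "sport"]

      clf = make_pipeline(CountVectorizer(stop_words="english"),   # term-frequency features
                          SelectKBest(chi2, k=5),                  # keep a small qualitative subset
                          MultinomialNB())
      clf.fit(pages, labels)
      print(clf.predict(["discount hotel travel deals"]))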

  13. Semisupervised Particle Swarm Optimization for Classification

    Directory of Open Access Journals (Sweden)

    Xiangrong Zhang

    2014-01-01

    Full Text Available A semisupervised classification method based on particle swarm optimization (PSO) is proposed. The semisupervised PSO simultaneously uses limited labeled samples and large amounts of unlabeled samples to find a collection of prototypes (or centroids) that are considered to precisely represent the patterns of the whole data; then, based on the principle of the “nearest neighbor,” the unlabeled data can be classified with the obtained prototypes. In order to validate the performance of the proposed method, we compare the classification accuracy of the PSO classifier, the k-nearest neighbor algorithm and the support vector machine on six UCI datasets, four typical artificial datasets and the USPS handwritten dataset. Experimental results demonstrate that the proposed method has good performance even with very limited labeled samples, due to the use of both the discriminant information provided by labeled samples and the structure information provided by unlabeled samples.
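
    A heavily simplified sketch of the idea: particle swarm optimization searches for a set of prototypes using a fitness that combines agreement with the few labeled samples and compactness over all (mostly unlabeled) samples, and points are then assigned to the nearest prototype. The fitness definition and all constants are illustrative assumptions, not the paper's formulation.

      # Toy PSO search for class prototypes, followed by nearest-prototype classification.
      import numpy as np
      from sklearn.datasets import make_blobs

      rng = np.random.default_rng(0)
      X, y = make_blobs(n_samples=300, centers=3, random_state=0)
      labeled = rng.choice(len(X), size=15, replace=False)        # only a few labeled samples
      n_classes, dim = 3, X.shape[1]

      def fitness(prototypes):
          d = np.linalg.norm(X[:, None, :] - prototypes[None], axis=2)
          pred = d.argmin(axis=1)
          acc = (pred[labeled] == y[labeled]).mean()              # agreement on labeled data
          compact = d.min(axis=1).mean()                          # structure of all data
          return acc - 0.05 * compact

      n_particles, n_iter = 20, 60
      pos = rng.uniform(X.min(0), X.max(0), size=(n_particles, n_classes, dim))
      vel = np.zeros_like(pos)
      pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_fit.argmax()].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos += vel
          fit = np.array([fitness(p) for p in pos])
          improved = fit > pbest_fit
          pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
          gbest = pbest[pbest_fit.argmax()].copy()

      pred = np.linalg.norm(X[:, None, :] - gbest[None], axis=2).argmin(axis=1)
      print("agreement with true labels:", (pred == y).mean())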

  14. Automatic lexical classification: bridging research and practice.

    Science.gov (United States)

    Korhonen, Anna

    2010-08-13

    Natural language processing (NLP)--the automatic analysis, understanding and generation of human language by computers--is vitally dependent on accurate knowledge about words. Because words change their behaviour between text types, domains and sub-languages, a fully accurate static lexical resource (e.g. a dictionary, word classification) is unattainable. Researchers are now developing techniques that could be used to automatically acquire or update lexical resources from textual data. If successful, the automatic approach could considerably enhance the accuracy and portability of language technologies, such as machine translation, text mining and summarization. This paper reviews the recent and on-going research in automatic lexical acquisition. Focusing on lexical classification, it discusses the many challenges that still need to be met before the approach can benefit NLP on a large scale.

  15. Automated spectral classification using template matching

    Institute of Scientific and Technical Information of China (English)

    Fu-Qing Duan; Rong Liu; Ping Guo; Ming-Quan Zhou; Fu-Chao Wu

    2009-01-01

    An automated spectral classification technique for large sky surveys is proposed. We firstly perform spectral line matching to determine redshift candidates for an observed spectrum, and then estimate the spectral class by measuring the similarity between the observed spectrum and the shifted templates for each redshift candidate. As a byproduct of this approach, the spectral redshift can also be obtained with high accuracy. Compared with some approaches based on computerized learning methods in the literature, the proposed approach needs no training, which is time-consuming and sensitive to selection of the training set. Both simulated data and observed spectra are used to test the approach; the results show that the proposed method is efficient, and it can achieve a correct classification rate as high as 92.9%, 97.9% and 98.8% for stars, galaxies and quasars, respectively.
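
    A simplified sketch of template matching for spectral classification: each synthetic template is shifted to a candidate redshift, resampled onto the observed wavelength grid, and the best-correlating (class, redshift) pair is selected. Templates, line lists and the similarity measure are toy assumptions for illustration only.

      # Template matching over redshift candidates for toy star/galaxy/quasar templates.
      import numpy as np

      wave = np.linspace(4000, 8000, 2000)                     # observed wavelength grid (Angstrom)

      def template(kind, grid):
          flux = np.ones_like(grid)
          lines = {"star": [6563.0], "galaxy": [4861.0, 6563.0], "quasar": [4861.0]}[kind]
          for line in lines:                                   # toy emission lines
              flux += np.exp(-0.5 * ((grid - line) / 5.0) ** 2)
          return flux

      observed = template("galaxy", wave / (1 + 0.03))         # a "galaxy" at z = 0.03, plus noise
      observed += np.random.default_rng(0).normal(0, 0.05, wave.size)

      best = None
      for kind in ("star", "galaxy", "quasar"):
          for z in np.linspace(0.0, 0.1, 101):                 # redshift candidates
              shifted = template(kind, wave / (1 + z))         # template shifted to candidate z
              score = np.corrcoef(observed, shifted)[0, 1]     # similarity measure
              if best is None or score > best[0]:
                  best = (score, kind, z)
      print("best class=%s  z=%.3f  corr=%.3f" % (best[1], best[2], best[0]))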

  16. Classification using Hierarchical Naive Bayes models

    DEFF Research Database (Denmark)

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, and yet most consistently well-performing set of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe...... an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice) it can reduce classification accuracy due to “information double-counting” and interaction omission. In this paper we focus on a relatively new set of models......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...

  17. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify the cardiovascular ultrasound echocardiographic images using Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on tightly-coupled model is used to extract features. Three algorithms are proposed for classification namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Out of the three proposed algorithms, a high classification accuracy of 96.59% was achieved from NBCC which is better than the earlier methods.

  18. Optic nerve head analysis of superior segmental optic hypoplasia using Heidelberg retina tomography

    Directory of Open Access Journals (Sweden)

    Atsushi Miki

    2010-10-01

    Full Text Available Purpose: To evaluate the optic disc characteristics of eyes with superior segmental optic hypoplasia (SSOH) using the Heidelberg retina tomograph (HRT). Patients and methods: Thirteen eyes of 13 Japanese patients with SSOH were studied with the HRT (software version: 3.0). The group included six males and seven females, with a mean age of 34.7 years. Six optic disc parameters in the six sectors derived from the patients with SSOH were compared with those of 13 eyes of 13 normal controls. In addition, the diagnostic classification performance of the Frederick S Mikelberg (FSM) discriminant function, glaucoma probability score (GPS), and Moorfields regression analysis (MRA) were assessed. Results: When compared with normal subjects, many of the optic disc parameters were significantly altered in SSOH in the superior sectors. The area under the curve (AUC) for the receiver operating characteristic was 0.932 for the rim area, 0.926 for the cup-to-disc area ratio, and 0.882 for the cup shape measure. Among the HRT parameters, the largest AUC (0.988) was found for the cup shape measure in the nasal superior segment. The proportion classified as outside normal limits by the FSM discriminant function was 92.3% (12 eyes). For GPS, six eyes (46.2%) were classified as outside normal limits. For MRA, when borderline cases were considered test-negative or test-positive, 10 eyes (76.9%) or 11 eyes (84.6%) were classified as outside normal limits, respectively. The AUCs were 0.976 for the FSM discriminant function, 0.914 for the MRA overall classification, and 0.710 for the GPS overall classification. Conclusions: In eyes with SSOH, there is a significant thinning of the rim

  19. Diagnostic accuracy in virtual dermatopathology

    DEFF Research Database (Denmark)

    Mooney, E.; Kempf, W.; Jemec, G.B.E.;

    2012-01-01

    Background Virtual microscopy is used for teaching medical students and residents and for in-training and certification examinations in the United States. However, no existing studies compare diagnostic accuracy using virtual slides and photomicrographs. The objective of this study was to compare...... slides and photomicrographs with corresponding clinical photographs and information in a self-assessment examination format. Descriptive data analysis and comparison of groups were performed using a chi-square test. Results Diagnostic accuracy in dermatopathology using virtual dermatopathology...... represented a useful tool for learning; 90% felt that virtual dermatopathology is useful tool for teaching dermatopathology. Conclusion No significant difference was observed in diagnostic accuracy using virtual dermatopathology compared to photomicrographs. Most participants felt virtual dermatopathology...

  20. Decision theory for discrimination-aware classification

    KAUST Repository

    Kamiran, Faisal

    2012-12-01

    Social discrimination (e.g., against females) arising from data mining techniques is a growing concern worldwide. In recent years, several methods have been proposed for making classifiers learned over discriminatory data discrimination-aware. However, these methods suffer from two major shortcomings: (1) They require either modifying the discriminatory data or tweaking a specific classification algorithm and (2) They are not flexible w.r.t. discrimination control and multiple sensitive attribute handling. In this paper, we present two solutions for discrimination-aware classification that neither require data modification nor classifier tweaking. Our first and second solutions exploit, respectively, the reject option of probabilistic classifier(s) and the disagreement region of general classifier ensembles to reduce discrimination. We relate both solutions with decision theory for better understanding of the process. Our experiments using real-world datasets demonstrate that our solutions outperform existing state-of-the-art methods, especially at low discrimination which is a significant advantage. The superior performance coupled with flexible control over discrimination and easy applicability to multiple sensitive attributes makes our solutions an important step forward in practical discrimination-aware classification. © 2012 IEEE.