WorldWideScience

Sample records for classified information

  1. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security......

  2. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  3. 75 FR 707 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009--Original Classification Authority #0... 13526 of December 29, 2009 Classified National Security Information This order prescribes a...

  4. Searching and Classifying non-textual information

    OpenAIRE

    Arentz, Will Archer

    2004-01-01

    This dissertation contains a set of contributions that deal with search or classification of non-textual information. Each contribution can be considered a solution to a specific problem, in an attempt to map out a common ground. The problems cover a wide range of research fields, including search in music, classifying digitally sampled music, visualization and navigation in search results, and classifying images and Internet sites. On classification of digitally sampled music, a method for ex...

  5. Comparing cosmic web classifiers using information theory

    Science.gov (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  6. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  7. 5 CFR 1312.23 - Access to classified information.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Access to classified information. 1312.23... Classified Information § 1312.23 Access to classified information. Classified information may be made... “need to know” and the access is essential to the accomplishment of official government duties....

  8. 6 CFR 7.23 - Emergency release of classified information.

    Science.gov (United States)

    2010-01-01

    ... Classified Information Non-disclosure Form. In emergency situations requiring immediate verbal release of... information through approved communication channels by the most secure and expeditious method possible, or...

  9. What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    CERN Document Server

    Hu, Bao-Gang

    2011-01-01

    In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of "non-consistency" for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-info...
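
    As a concrete illustration of the contrast drawn in this record, the hedged numpy/scipy sketch below compares Chow's rule (a Bayesian classifier with a reject option) against a decision threshold chosen to maximize empirical mutual information on a toy, heavily class-imbalanced 1-D problem. The reject cost, priors, and thresholds are illustrative assumptions and do not reproduce the paper's derivations.

      # Toy, self-contained comparison of a Bayesian classifier with a reject option
      # (Chow's rule) and a mutual-information-maximizing threshold on an imbalanced
      # 1-D problem. Costs, priors and thresholds are illustrative assumptions only.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      p1 = 0.05                                   # rare positive class
      y = (rng.random(20000) < p1).astype(int)    # true labels
      x = np.where(y == 1, rng.normal(2, 1, y.size), rng.normal(0, 1, y.size))

      # Posterior P(y=1 | x) under the known generative model
      post1 = p1 * norm.pdf(x, 2, 1) / (p1 * norm.pdf(x, 2, 1) + (1 - p1) * norm.pdf(x, 0, 1))

      # Bayesian decision with rejection: abstain when max posterior < 1 - reject_cost
      reject_cost = 0.2                           # ad-hoc cost of abstaining
      bayes = np.where(np.maximum(post1, 1 - post1) < 1 - reject_cost,
                       -1, (post1 > 0.5).astype(int))

      # Mutual-information classifier (simplified): choose the accept threshold that
      # maximizes the empirical mutual information between true label and decision.
      def mutual_information(a, b):
          joint = np.histogram2d(a, b, bins=[2, 2])[0] / a.size
          pa, pb = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
          nz = joint > 0
          return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

      ts = np.linspace(0.05, 0.95, 19)
      best_t = max(ts, key=lambda t: mutual_information(y, (post1 > t).astype(int)))
      print(f"Bayes rule rejects {np.mean(bayes == -1):.1%} of samples")
      print(f"MI-optimal threshold: {best_t:.2f} (Bayes uses 0.5)")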

  10. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

    ... Special access programs. 2001.50 Telecommunications, automated information systems, and network security... Telecommunications, automated 4.1, 4.2 information systems, and network security. 2001.51 Technical security 4.1 2001... and Records Administration Information Security Oversight Office 32 CFR Parts 2001 and 2003...

  11. 21 CFR 1402.4 - Information classified by another agency.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Information classified by another agency. 1402.4 Section 1402.4 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY MANDATORY DECLASSIFICATION REVIEW § 1402.4 Information classified by another agency. When a request is received for information that...

  12. 18 CFR 3a.12 - Authority to classify official information.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Authority to classify official information. 3a.12 Section 3a.12 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Classification §...

  13. 3 CFR - Classified Information and Controlled Unclassified Information

    Science.gov (United States)

    2010-01-01

    ... had been known collectively as “Sensitive But Unclassified” (SBU) information. Sensitive But... protections for categorizing and handling SBU, there are more than 107 unique markings and over 130 different labeling or handling processes and procedures for SBU information. A Presidential Memorandum of December...

  14. 75 FR 733 - Implementation of the Executive Order, ``Classified National Security Information''

    Science.gov (United States)

    2010-01-05

    ... December 29, 2009 Implementation of the Executive Order, ``Classified National Security Information... entitled, ``Classified National Security Information'' (the ``order''), which substantially advances my... Information Security Oversight Office (ISOO) a copy of the department or agency regulations implementing...

  15. 15 CFR 4a.8 - Access to classified information by individuals outside the Government.

    Science.gov (United States)

    2010-01-01

    ... Access to classified information by individuals outside the Government. (a) Industrial, Educational, and... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Access to classified information by... information: (1) Determines in writing that: (i) Access is consistent with national security; (ii)...

  16. Classifying and Designing the Educational Methods with Information Communications Technologies

    Directory of Open Access Journals (Sweden)

    I. N. Semenova

    2013-01-01

    Full Text Available The article describes the conceptual apparatus for implementing the Information Communications Technologies (ICT) in education. The authors suggest the classification variants of the related teaching methods according to the following component combinations: types of students work with information, goals of ICT incorporation into the training process, individualization degrees, contingent involvement, activity levels and pedagogical field targets, ideology of informational didactics, etc. Each classification can solve the educational tasks in the context of the partial paradigm of modern didactics; any kind of methods implies the particular combination of activities in educational environment. The whole spectrum of classifications provides the informational functional basis for the adequate selection of necessary teaching methods in accordance with the specified goals and planned results. The potential variants of ICT implementation methods are given for different teaching models.

  17. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
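
    The record does not spell out how information gain modulates mutation, so the sketch below is only one plausible reading: per-gene mutation rates shrink as a feature's normalized gain grows, with scikit-learn's mutual_info_classif standing in for information gain. The scaling factors and dataset are assumptions for illustration.

      # Minimal sketch of information-gain-guided mutation for GA feature selection.
      # The original paper's exact mapping from gain to mutation probability is not
      # reproduced; this illustrative variant makes low-gain features more likely to
      # be flipped and high-gain features more likely to be preserved.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import mutual_info_classif   # stand-in for information gain

      X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
      gain = mutual_info_classif(X, y, random_state=0)
      gain = gain / gain.max()                          # normalize to [0, 1]

      rng = np.random.default_rng(0)
      pop = rng.integers(0, 2, size=(20, X.shape[1]))   # 20 chromosomes, one bit per feature

      def mutate(chromosome, base_rate=0.05):
          """Flip bits with a per-gene rate that shrinks as the feature's gain grows."""
          per_gene_rate = base_rate * (1.0 - 0.9 * gain)    # assumption: linear scaling
          flips = rng.random(chromosome.size) < per_gene_rate
          return np.where(flips, 1 - chromosome, chromosome)

      pop = np.array([mutate(c) for c in pop])
      print("average number of selected features:", pop.sum(axis=1).mean())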

  18. 10 CFR 95.35 - Access to matter classified as National Security Information and Restricted Data.

    Science.gov (United States)

    2010-01-01

    ... SECURITY CLEARANCE AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION AND RESTRICTED DATA Control of Information § 95.35 Access to matter classified as National Security Information and Restricted Data. (a... have access to matter revealing Secret or Confidential National Security Information or Restricted...

  19. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    Science.gov (United States)

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannon and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
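
    For reference, the information criteria named in this record can be computed from a fitted model's maximized log-likelihood, parameter count, and sample size using the usual textbook formulas. The small sketch below uses those standard definitions and hypothetical numbers; it does not reproduce the study's simulation design.

      # Standard definitions of the information criteria compared in the record above,
      # computed from a model's maximized log-likelihood (ll), parameter count (k) and
      # sample size (n). Formulas are the common textbook ones, not taken from the paper.
      import math

      def information_criteria(ll, k, n):
          return {
              "AIC":  2 * k - 2 * ll,
              "AICC": 2 * k - 2 * ll + 2 * k * (k + 1) / (n - k - 1),
              "CAIC": k * (math.log(n) + 1) - 2 * ll,
              "HQIC": 2 * k * math.log(math.log(n)) - 2 * ll,
              "BIC":  k * math.log(n) - 2 * ll,
          }

      # Example: compare two hypothetical cross-classified models fit to the same data.
      print(information_criteria(ll=-1523.4, k=6, n=800))
      print(information_criteria(ll=-1519.8, k=9, n=800))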

  20. 18 CFR 3a.61 - Storage and custody of classified information.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Storage and custody of classified information. 3a.61 Section 3a.61 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Storage and...

  1. Classifying cognitive profiles using machine learning with privileged information in Mild Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Hanin Hamdan Alahmadi

    2016-11-01

    Full Text Available Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalised Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "Learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls based on the learning performance and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on the learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition became also relevant for the task. Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when overall fMRI signal for structured stimuli is

  2. Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment

    Science.gov (United States)

    Alahmadi, Hanin H.; Shen, Yuan; Fouad, Shereen; Luft, Caroline Di B.; Bentham, Peter; Kourtzi, Zoe; Tino, Peter

    2016-01-01

    Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalized Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a “Learning with privileged information” approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on a probabilistic sequence learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition became also relevant for the task. Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when overall fMRI signal is used as inputs to the classifier, the post

  3. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Science.gov (United States)

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders.
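
    A hedged scikit-learn sketch of this kind of decoder comparison is given below, cross-validating a regularized linear model, a non-linear neural network, naive Bayes, and an RBF SVM on synthetic spike-count-like data. The paper's reservoir network and exact estimators are not reproduced, and the data generator is purely illustrative.

      # Cross-validated comparison of several decoder families named in the record
      # (regularized linear, neural-network, naive Bayes, SVM) on synthetic
      # spike-count-like data; not a reproduction of the published analysis.
      import numpy as np
      from sklearn.linear_model import RidgeClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_neurons, n_positions = 400, 60, 8
      labels = rng.integers(0, n_positions, n_trials)          # e.g. cue position
      tuning = rng.gamma(2.0, 2.0, size=(n_positions, n_neurons))
      counts = rng.poisson(tuning[labels])                     # trial-by-neuron spike counts

      decoders = {
          "ridge (regularized linear)": RidgeClassifier(alpha=1.0),
          "MLP (non-linear ANN)":       MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
          "Gaussian naive Bayes":       GaussianNB(),
          "SVM (RBF kernel)":           SVC(kernel="rbf", C=1.0),
      }
      for name, clf in decoders.items():
          score = cross_val_score(clf, counts, labels, cv=5).mean()
          print(f"{name:28s} accuracy = {score:.2f}")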

  4. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Directory of Open Access Journals (Sweden)

    Elaine Astrand

    Full Text Available Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders.

  5. Insider Threats: DOD Should Strengthen Management and Guidance to Protect Classified Information and Systems

    Science.gov (United States)

    2015-06-01

    examination reports, facility access records, security violation files, travel records, foreign contact reports, and financial disclosure filings. b...suffered grave damage to national security and an increased risk to the lives of U.S. personnel due to unauthorized disclosures of classified information...because this work may contain copyrighted images or other material , permission from the copyright holder may be necessary if you wish to reproduce this

  6. 22 CFR 9.12 - Access to classified information by historical researchers and certain former government personnel.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Access to classified information by historical... GENERAL SECURITY INFORMATION REGULATIONS § 9.12 Access to classified information by historical researchers and certain former government personnel. For Department procedures regarding the access to...

  7. A criterion based on an information theoretic measure for goodness of fit between classifier and data base

    Science.gov (United States)

    Eigen, D. J.; Davida, G. I.; Northouse, R. A.

    1973-01-01

    A criterion for characterizing an iteratively trained classifier is presented. The criterion is based on an information theoretic measure that is developed from modeling classifier training iterations as a set of cascaded channels. The criterion is formulated as a figure of merit and as a performance index to check the appropriateness of application of the characterized classifier to an unknown data base and for implementing classifier updates and data selection respectively.

  8. A criterion based on an information theoretic measure for goodness of fit between classifier and data base. [in pattern recognition

    Science.gov (United States)

    Eigen, D. J.; Davida, G. I.; Northouse, R. A.

    1974-01-01

    A criterion for characterizing an iteratively trained classifier is presented. The criterion is based on an information theoretic measure that is developed from modeling classifier training iterations as a set of cascaded channels. The criterion is formulated as a figure of merit and as a performance index to check the appropriateness of application of the characterized classifier to an unknown data base and for implementing classifier updates and data selection, respectively.

  9. New approach to information fusion for Lipschitz classifiers ensembles: Application in multi-channel C-OTDR-monitoring systems

    Science.gov (United States)

    Timofeev, Andrey V.; Egorov, Dmitry V.

    2016-06-01

    This paper presents new results concerning selection of an optimal information fusion formula for an ensemble of Lipschitz classifiers. The goal of information fusion is to create an integral classifier which could provide better generalization ability of the ensemble while achieving a practically acceptable level of effectiveness. The problem of information fusion is very relevant for data processing in multi-channel C-OTDR-monitoring systems. In this case we have to effectively classify targeted events which appear in the vicinity of the monitored object. Solution of this problem is based on usage of an ensemble of Lipschitz classifiers each of which corresponds to a respective channel. We suggest a brand new method for information fusion in case of ensemble of Lipschitz classifiers. This method is called "The Weighing of Inversely as Lipschitz Constants" (WILC). Results of WILC-method practical usage in multichannel C-OTDR monitoring systems are presented.
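
    The abstract names the WILC rule but not its formula. One plausible reading, sketched below under that assumption, is a convex combination of per-channel classifier scores with weights proportional to the inverse of each channel classifier's estimated Lipschitz constant; the actual formula in the paper may differ.

      # Hypothetical reading of WILC fusion: combine per-channel classifier scores with
      # weights proportional to the inverse of each channel classifier's Lipschitz
      # constant (smoother classifiers get more weight). Values are illustrative.
      import numpy as np

      def wilc_fuse(channel_scores, lipschitz_constants):
          """channel_scores: (n_channels, n_classes) decision scores for one event."""
          w = 1.0 / np.asarray(lipschitz_constants, dtype=float)
          w /= w.sum()
          return w @ np.asarray(channel_scores)

      scores = np.array([[0.2, 0.8],     # channel 1: leans towards class 1
                         [0.6, 0.4],     # channel 2: noisy, leans towards class 0
                         [0.3, 0.7]])    # channel 3
      L = [1.5, 8.0, 2.0]                # estimated Lipschitz constants per channel
      fused = wilc_fuse(scores, L)
      print("fused scores:", fused, "-> class", int(fused.argmax()))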

  10. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor

    Science.gov (United States)

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider’s hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse’s gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider’s motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country’s top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  11. 48 CFR 53.204-1 - Safeguarding classified information within industry (DD Form 254, DD Form 441).

    Science.gov (United States)

    2010-10-01

    ... information within industry (DD Form 254, DD Form 441). 53.204-1 Section 53.204-1 Federal Acquisition....204-1 Safeguarding classified information within industry (DD Form 254, DD Form 441). The following... specified in subpart 4.4 and the clause at 52.204-2: (a) DD Form 254 (Department of Defense (DOD)),...

  12. Towards Evidence-based Precision Medicine: Extracting Population Information from Biomedical Text using Binary Classifiers and Syntactic Patterns

    Science.gov (United States)

    Raja, Kalpana; Dasot, Naman; Goyal, Pawan; Jonnalagadda, Siddhartha R

    2016-01-01

    Precision Medicine is an emerging approach for prevention and treatment of disease that considers individual variability in genes, environment, and lifestyle for each person. The dissemination of individualized evidence by automatically identifying population information in literature is a key for evidence-based precision medicine at the point-of-care. We propose a hybrid approach using natural language processing techniques to automatically extract the population information from biomedical literature. Our approach first implements a binary classifier to classify sentences with or without population information. A rule-based system based on syntactic-tree regular expressions is then applied to sentences containing population information to extract the population named entities. The proposed two-stage approach achieved an F-score of 0.81 using a MaxEnt classifier and the rule-based system, and an F-score of 0.87 using a Naïve Bayes classifier and the rule-based system, and performed relatively well compared to many existing systems. The system and evaluation dataset is being released as open source. PMID:27570671
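
    A minimal sketch of the two-stage idea follows: a bag-of-words Naive Bayes classifier flags sentences containing population information, and a rule then extracts the population phrase. The paper's syntactic-tree regular expressions are replaced here by a plain surface regex, and the tiny training set is invented purely for illustration.

      # Two-stage sketch: (1) a bag-of-words Naive Bayes classifier flags sentences
      # that mention a study population, then (2) a rule extracts the population
      # phrase. The surface regex below is only a stand-in for the paper's
      # syntactic-tree regular expressions.
      import re
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      train_sentences = [
          "We enrolled 120 patients with type 2 diabetes.",
          "A total of 45 children with asthma were randomized.",
          "The assay was repeated three times for reproducibility.",
          "Statistical analysis was performed with R version 3.2.",
      ]
      has_population = [1, 1, 0, 0]

      sentence_clf = make_pipeline(CountVectorizer(), MultinomialNB())
      sentence_clf.fit(train_sentences, has_population)

      population_rule = re.compile(r"(\d+\s+(?:patients|children|adults|participants)[^.]*)", re.I)

      def extract_population(sentence):
          if sentence_clf.predict([sentence])[0] == 1:
              match = population_rule.search(sentence)
              return match.group(1) if match else None
          return None

      print(extract_population("We recruited 37 adults with mild cognitive impairment."))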

  13. Evaluation of information security classified protection in universities

    Institute of Scientific and Technical Information of China (English)

    刘玉燕

    2011-01-01

    Classified protection of information security, risk assessment, and system security evaluation are important components of building the national information security assurance system. Classified protection is the standard; assessment and evaluation are the means. This paper discusses issues related to the classified protection of information security. Implementing classified protection of information security promotes the establishment and improvement of network security service mechanisms, supports the adoption of systematic, standardized, and scientific management and technical safeguard measures to raise the level of information security protection, and ensures that the business systems of all departments operate efficiently and securely.

  14. 19 CFR 351.105 - Public, business proprietary, privileged, and classified information.

    Science.gov (United States)

    2010-04-01

    ... proprietary information was obtained; (10) The position of a domestic producer or workers regarding a petition... not properly designated as business proprietary; (4) Publicly available laws, regulations, decrees... consider information privileged if, based on principles of law concerning privileged information,...

  15. 3 CFR 13526 - Executive Order 13526 of December 29, 2009. Classified National Security Information

    Science.gov (United States)

    2010-01-01

    .... Nevertheless, throughout our history, the national defense has required that certain information be maintained... the time frame established in paragraph (b) of this section. (b) If the original classification... plans that remain in effect, or reveal operational or tactical elements of prior plans that...

  16. 75 FR 51609 - Classified National Security Information Program for State, Local, Tribal, and Private Sector...

    Science.gov (United States)

    2010-08-23

    ... security information shared by the Federal Government with State, local, tribal, and private sector (SLTPS... entity'' as defined in section 2 of the Homeland Security Act of 2002 (6 U.S.C. 101(11)). (g) ``Private... President...

  17. Scientometric Indicators as a Way to Classify Brands for Customer’s Information

    Directory of Open Access Journals (Sweden)

    Mihaela Paun

    2015-10-01

    Full Text Available The paper proposes a novel approach for classification of different brands that commercialize similar products, for customer information. The approach is tested on electronic shopping records found on Amazon.com, by quantifying customer behavior and comparing the results with classifications of the same brands found online through search engines. The indicators proposed for the classification are currently used scientometric measures that can be easily applied to marketing classification.

  18. Classification of traumatic brain injury severity using informed data reduction in a series of binary classifier algorithms.

    Science.gov (United States)

    Prichep, Leslie S; Jacquin, Arnaud; Filipenko, Julie; Dastidar, Samanwoy Ghosh; Zabele, Stephen; Vodencarević, Asmir; Rothman, Neil S

    2012-11-01

    Assessment of medical disorders is often aided by objective diagnostic tests which can lead to early intervention and appropriate treatment. In the case of brain dysfunction caused by head injury, there is an urgent need for quantitative evaluation methods to aid in acute triage of those subjects who have sustained traumatic brain injury (TBI). Current clinical tools to detect mild TBI (mTBI/concussion) are limited to subjective reports of symptoms and short neurocognitive batteries, offering little objective evidence for clinical decisions; or computed tomography (CT) scans, with radiation-risk, that are most often negative in mTBI. This paper describes a novel methodology for the development of algorithms to provide multi-class classification in a substantial population of brain injured subjects, across a broad age range and representative subpopulations. The method is based on age-regressed quantitative features (linear and nonlinear) extracted from brain electrical activity recorded from a limited montage of scalp electrodes. These features are used as input to a unique "informed data reduction" method, maximizing confidence of prospective validation and minimizing over-fitting. A training set for supervised learning was used, including: "normal control," "concussed," and "structural injury/CT positive (CT+)." The classifier function separating CT+ from the other groups demonstrated a sensitivity of 96% and specificity of 78%; the classifier separating "normal controls" from the other groups demonstrated a sensitivity of 81% and specificity of 74%, suggesting high utility of such classifiers in acute clinical settings. The use of a sequence of classifiers where the desired risk can be stratified further supports clinical utility.
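
    The cascade idea described above can be illustrated with a hedged two-stage sketch: first separate the CT-positive group from everyone else, then separate concussed from normal controls among the remainder. Logistic regressions and random feature vectors stand in for the paper's age-regressed QEEG features and discriminant functions; nothing here reproduces the published algorithm.

      # Illustrative cascade of binary classifiers in the spirit of the record above:
      # stage 1 separates "structural injury / CT+" from everyone else, stage 2
      # separates "concussed" from "normal control" among the remainder.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n, d = 600, 20
      labels = rng.integers(0, 3, n)                        # 0 = normal, 1 = concussed, 2 = CT+
      X = rng.normal(size=(n, d)) + labels[:, None] * 0.6   # toy class separation, not QEEG data

      stage1 = LogisticRegression(max_iter=1000).fit(X, labels == 2)               # CT+ vs rest
      rest = labels != 2
      stage2 = LogisticRegression(max_iter=1000).fit(X[rest], labels[rest] == 1)   # concussed vs normal

      def triage(x):
          x = x.reshape(1, -1)
          if stage1.predict(x)[0]:
              return "structural injury (CT+)"
          return "concussed" if stage2.predict(x)[0] else "normal control"

      print(triage(X[0]), "| true label:", labels[0])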

  19. Supervised Feature Subset Selection based on Modified Fuzzy Relative Information Measure for classifier Cart

    Directory of Open Access Journals (Sweden)

    K.SAROJINI,

    2010-06-01

    Full Text Available Feature subset selection is an essential task in data mining. This paper presents a new method for dealing with supervised feature subset selection based on Modified Fuzzy Relative Information Measure (MFRIM. First, Discretization algorithm is applied to discretize numeric features to construct the membership functions of each fuzzy sets of a feature. Then the proposed MFRIM is applied to select the feature subset focusing on boundary samples. The proposed method can select feature subset with minimum number of features, which are relevant to get higher average classification accuracy for datasets. The experimental results with UCI datasets show that the proposed algorithm is effective and efficient in selecting subset with minimum number of features getting higher average classification accuracy than the consistency based feature subset selection method.

  20. Characterizing, Classifying, and Understanding Information Security Laws and Regulations: Considerations for Policymakers and Organizations Protecting Sensitive Information Assets

    Science.gov (United States)

    Thaw, David Bernard

    2011-01-01

    Current scholarly understanding of information security regulation in the United States is limited. Several competing mechanisms exist, many of which are untested in the courts and before state regulators, and new mechanisms are being proposed on a regular basis. Perhaps of even greater concern, the pace at which technology and threats change far…

  1. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics... and integration possible, the field of molecular biology seems to be overwhelmingly homogeneous, and in need of heterogeneity and conflict to add drive and momentum to the work being carried out. The paper is based on observations of daily life in a molecular microbiology laboratory at the Technical University...

  2. An Advancement To The Security Level Through Galois Field In The Existing Password Based Technique Of Hiding Classified Information In Images

    Directory of Open Access Journals (Sweden)

    Mita Kosode

    2015-06-01

    Full Text Available In this paper we use the existing passcode-based approach of hiding classified information in images with the addition of Galois field theory, which advances the security level and makes this combined method extremely difficult to intercept and useful for open-channel communication while maintaining low losses and high-speed transmission.

  3. A Learning Outcome-Oriented Approach towards Classifying Pervasive Games for Learning Using Game Design Patterns and Contextual Information

    Science.gov (United States)

    Schmitz, Birgit; Klemke, Roland; Specht, Marcus

    2013-01-01

    Mobile and in particular pervasive games are a strong component of future scenarios for teaching and learning. Based on results from a previous review of practical papers, this work explores the educational potential of pervasive games for learning by analysing underlying game mechanisms. In order to determine and classify cognitive and affective…

  4. Multi-source Fuzzy Information Fusion Method Based on Bayesian Optimal Classifier

    Institute of Scientific and Technical Information of China (English)

    苏宏升

    2008-01-01

    To make conventional Bayesian optimal classifier possess the abilities of disposing fuzzy information and realizing the automation of reasoning process, a new Bayesian optimal classifier is proposed with fuzzy information embedded. It can not only dispose fuzzy information effectively, but also retain learning properties of Bayesian optimal classifier. In addition, according to the evolution of fuzzy set theory, vague set is also imbedded into it to generate vague Bayesian optimal classifier. It can simultaneously simulate the twofold characteristics of fuzzy information from the positive and reverse directions. Further, a set pair Bayesian optimal classifier is also proposed considering the threefold characteristics of fuzzy information from the positive, reverse, and indeterminate sides. In the end, a knowledge-based artificial neural network (KBANN) is presented to realize automatic reasoning of Bayesian optimal classifier. It not only reduces the computational cost of Bayesian optimal classifier but also improves its classification learning quality.

  5. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    How does a corporation know it emits carbon? Acquiring such knowledge starts with the classification of environmentally relevant consumption information. This paper visits the corporate location at which this underlying element for their knowledge is assembled to give rise to carbon emissions. Us...

  6. Classifying Pediatric Central Nervous System Tumors through near Optimal Feature Selection and Mutual Information: A Single Center Cohort

    Directory of Open Access Journals (Sweden)

    Mohammad Faranoush

    2013-10-01

    Full Text Available Background: Labeling, gathering mutual information, clustering and classification of central nervous system tumors may assist in predicting not only distinct diagnoses based on tumor-specific features but also prognosis. This study evaluates the epidemiological features of central nervous system tumors in children who referred to Mahak's Pediatric Cancer Treatment and Research Center in Tehran, Iran. Methods: This cohort (convenience sample) study comprised 198 children (≤15 years old) with central nervous system tumors who referred to Mahak's Pediatric Cancer Treatment and Research Center from 2007 to 2010. In addition to the descriptive analyses on epidemiological features and mutual information, we used the Least Squares Support Vector Machines method in MATLAB software to propose a preliminary predictive model of pediatric central nervous system tumor feature-label analysis. Results: Of patients, there were 63.1% males and 36.9% females. Patients' mean±SD age was 6.11±3.65 years. Tumor location was as follows: supra-tentorial (30.3%), infra-tentorial (67.7%) and spinal (2%). The most frequent tumors registered were: high-grade glioma (supra-tentorial) in 36 (59.99%) patients and medulloblastoma (infra-tentorial) in 65 (48.51%) patients. The most prevalent clinical findings included vomiting, headache and impaired vision. Gender, age, ethnicity, tumor stage and the presence of metastasis were the features predictive of supra-tentorial tumor histology. Conclusion: Our data agreed with previous reports on the epidemiology of central nervous system tumors. Our feature-label analysis has shown how presenting features may partially predict diagnosis. Timely diagnosis and management of central nervous system tumors can lead to decreased disease burden and improved survival. This may be further facilitated through development of partitioning, risk prediction and prognostic models.

  7. Information security management system and management baseline for classified protection

    Institute of Scientific and Technical Information of China (English)

    马遥; 黄俊强

    2012-01-01

    This article describes the main contents and development of information security management systems. Drawing on the information security management system standard and the classified protection standards, it compares the security management requirements of the information security management system with the baseline requirements for classified protection, identifies their common points and differences, analyzes how the two relate in practical application, and applies both to the actual management process.

  8. An Informal Summary of a New Formalism for Classifying Spin-Orbit Systems Using Tools Distilled from the Theory of Bundles

    CERN Document Server

    Heinemann, Klaus; Ellison, James A; Vogt, Mathias

    2015-01-01

    We give an informal summary of ongoing work which uses tools distilled from the theory of fibre bundles to classify and connect invariant fields associated with spin motion in storage rings. We mention four major theorems. One ties invariant fields with the notion of normal form, the second allows comparison of different invariant fields and the two others tie the existence of invariant fields to the existence of certain invariant sets. We explain how the theorems apply to the spin dynamics of spin-$1/2$ and spin-$1$ particles. Our approach elegantly unifies the spin-vector dynamics from the T-BMT equation with the spin-tensor dynamics and other dynamics and suggests an avenue for addressing the question of the existence of the invariant spin field.

  9. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen's case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF...
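
    A hedged sketch of the texture pipeline follows: eight GLCM measures from a 13 × 13 window at 45°, fed to a random forest. It assumes scikit-image ≥ 0.19 (graycomatrix/graycoprops; earlier releases spell these greycomatrix/greycoprops), computes mean, variance, and entropy by hand since graycoprops does not provide them, and substitutes random windows for the IKONOS data.

      # Sketch of the texture pipeline described above: GLCM features from a 13x13
      # window at 45 degrees, fed to a random forest. Window handling is simplified
      # and the toy windows stand in for real IKONOS NIR-band data.
      import numpy as np
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.ensemble import RandomForestClassifier

      def glcm_features(window_8bit, angle=np.pi / 4, levels=64):
          """Eight GLCM textures for one quantized image window."""
          q = (window_8bit / 256 * levels).astype(np.uint8)        # quantize grey levels
          glcm = graycomatrix(q, distances=[1], angles=[angle],
                              levels=levels, symmetric=True, normed=True)
          p = glcm[:, :, 0, 0]
          i = np.arange(levels)
          mean = float((i * p.sum(axis=1)).sum())
          variance = float(((i - mean) ** 2 * p.sum(axis=1)).sum())
          entropy = float(-(p[p > 0] * np.log(p[p > 0])).sum())
          props = [float(graycoprops(glcm, prop)[0, 0])
                   for prop in ("homogeneity", "dissimilarity", "contrast", "ASM", "correlation")]
          return [mean, variance, entropy] + props

      rng = np.random.default_rng(0)
      windows = rng.integers(0, 256, size=(60, 13, 13))            # toy 13x13 windows
      labels = rng.integers(0, 3, 60)                              # three forest-health classes
      X = np.array([glcm_features(w) for w in windows])
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
      print("training accuracy (toy data):", clf.score(X, labels))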

  10. Study on the security control scheme for classified information systems

    Institute of Scientific and Technical Information of China (English)

    丁笑迎; 赵云; 黄攀

    2016-01-01

    The development of information technology has brought enormous challenges to national secrecy work. The state has recognized how serious confidentiality work is under information technology conditions and increasingly emphasizes the necessity of doing this work well. This article explores a security control scheme for classified information systems and explains it fully at the physical, network, system, application, and data layers.

  11. Uncertainty and Climate Change and its effect on Generalization and Prediction abilities by creating Diverse Classifiers and Feature Selection Methods using Information Fusion

    Directory of Open Access Journals (Sweden)

    Y. P. Kosta

    2010-11-01

    Full Text Available The model forecast suggests a deterministic approach. Forecasting was traditionally done by a single model (deterministic prediction); recent years have witnessed drastic changes. Today, with Information Fusion (the ensemble technique) it is possible to improve the generalization ability of classifiers with high levels of reliability. Through Information Fusion it is easily possible to combine diverse and independent outcomes for decision-making. This approach adopts the idea of combining the results of multiple methods (two-way interactions between them) using an appropriate model on the test set. Although uncertainties are often very significant, for the purpose of a single prediction, especially at the initial stage, one does not consider uncertainties in the model, the initial conditions, or the very nature of the climate (environment or atmosphere) itself when using a single model. If we make small changes in the initial parameter setting, it will result in changes in the predictive accuracy of the model. Similarly, uncertainty in model physics can result in large forecast differences and errors. So, instead of running one prediction, run a collection/package/bundle (ensemble) of predictions, each one kick-starting from a different initial state or with different conditions and sequentially executing the next. The variations resulting from execution of the different prediction packages/models can then be used (independently, combined or aggregated) to estimate the uncertainty of the prediction, giving us better accuracy and reliability. In this paper the authors propose to use an information fusion technique that provides insight into the probable key parameters necessary to purposefully evaluate the success of a new generation of products and services, improving forecasting. Ensembles can be creatively applied to provide insight into new-generation products, yielding higher probabilities of success. Ensembles will yield critical features of the products and also provide insight to...

  12. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  13. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  14. Classifying TDSS Stellar Variables

    Science.gov (United States)

    Amaro, Rachael Christina; Green, Paul J.; TDSS Collaboration

    2017-01-01

    The Time Domain Spectroscopic Survey (TDSS), a subprogram of SDSS-IV eBOSS, obtains classification/discovery spectra of point-source photometric variables selected from PanSTARRS and SDSS multi-color light curves regardless of object color or lightcurve shape. Tens of thousands of TDSS spectra are already available and have been spectroscopically classified both via pipeline and by visual inspection. About half of these spectra are quasars, half are stars. Our goal is to classify the stars with their correct variability types. We do this by acquiring public multi-epoch light curves for brighter stars (r …), using a program for analyzing astronomical time-series data, to constrain variable type both for broad statistics relevant to future surveys like the Transiting Exoplanet Survey Satellite (TESS) and the Large Synoptic Survey Telescope (LSST), and to find the inevitable exotic oddballs that warrant further follow-up. Specifically, the Lomb-Scargle Periodogram and the Box-Least Squares Method are being implemented and tested against their known variable classifications and parameters in the Catalina Surveys Periodic Variable Star Catalog. Variable star classifications include RR Lyr, close eclipsing binaries, CVs, pulsating white dwarfs, and other exotic systems. The key difference between our catalog and others is that along with the light curves, we will be using TDSS spectra to help in the classification of variable type, as spectra are rich with information allowing estimation of physical parameters like temperature, metallicity, gravity, etc. This work was supported by the SDSS Research Experience for Undergraduates program, which is funded by a grant from the Sloan Foundation to the Astrophysical Research Consortium.
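
    A minimal period-finding sketch with the two tools named above (astropy's LombScargle and BoxLeastSquares) applied to a synthetic light curve is shown below; the TDSS selection, photometry, and catalog matching are not reproduced, and all numbers are illustrative.

      # Minimal period-finding sketch for a synthetic light curve using the two
      # methods named in the record; not a reproduction of the TDSS pipeline.
      import numpy as np
      from astropy.timeseries import LombScargle, BoxLeastSquares

      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0, 200, 300))                  # irregular epochs (days)
      true_period = 0.62                                     # e.g. an RR Lyrae-like period
      mag = 15.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

      # Lomb-Scargle periodogram for (quasi-)sinusoidal variables
      frequency, power = LombScargle(t, mag).autopower(minimum_frequency=0.05,
                                                       maximum_frequency=10)
      print("Lomb-Scargle best period:", 1 / frequency[np.argmax(power)], "days")

      # Box Least Squares for eclipse/transit-like (box-shaped) variability
      bls = BoxLeastSquares(t, mag)
      periods = np.linspace(0.3, 5.0, 5000)
      result = bls.power(periods, duration=0.1)
      print("BLS best period:", periods[np.argmax(result.power)], "days")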

  15. Dynamic system classifier

    Science.gov (United States)

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A.

    2016-07-01

    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω (t ) and damping factor γ (t ) . Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
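
    The kind of process the DSC abstracts systems into can be simulated directly. The hedged sketch below integrates a stochastically driven oscillator with time-dependent ω(t) and γ(t) by Euler-Maruyama; the Bayesian reconstruction of those coefficient time lines, which is the heart of the DSC, is not shown.

      # Euler-Maruyama simulation of a stochastically driven oscillator with
      # time-dependent frequency omega(t) and damping gamma(t), the abstraction the
      # DSC builds on. Inference of omega/gamma is not reproduced here.
      import numpy as np

      def simulate_oscillator(omega, gamma, noise=0.3, dt=1e-3, T=20.0, seed=0):
          """x'' + gamma(t) x' + omega(t)^2 x = xi(t), integrated with Euler-Maruyama."""
          rng = np.random.default_rng(seed)
          n = int(T / dt)
          t = np.arange(n) * dt
          x, v = np.empty(n), np.empty(n)
          x[0], v[0] = 1.0, 0.0
          for k in range(n - 1):
              a = -gamma(t[k]) * v[k] - omega(t[k]) ** 2 * x[k]
              v[k + 1] = v[k] + a * dt + noise * np.sqrt(dt) * rng.normal()
              x[k + 1] = x[k] + v[k] * dt
          return t, x

      # Example: a chirping, slowly decaying oscillation
      t, x = simulate_oscillator(omega=lambda s: 2.0 + 0.1 * s, gamma=lambda s: 0.2)
      print("simulated", x.size, "samples; late-time amplitude ~", round(abs(x[-5000:]).max(), 3))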

  16. Dynamic system classifier

    CERN Document Server

    Pumpe, Daniel; Müller, Ewald; Enßlin, Torsten A

    2016-01-01

    Stochastic differential equations describe well many physical, biological and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of DSC to oscillation processes with a time dependent frequency {\\omega}(t) and damping factor {\\gamma}(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The {\\omega} and {\\gamma} timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiment...

  17. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  18. Research on Information Security Protection Model for Smart Grid Based on Classified Protection

    Institute of Scientific and Technical Information of China (English)

    蒋诚智; 张涛; 余勇

    2012-01-01

    Analyzing the construction status of security classified protection for electric power information systems and the security demands of the smart grid, this paper proposes an information security protection model for the smart grid based on classified protection. According to the characteristics of the smart grid, the presented model describes possible information security protections covering physical, terminal, boundary, network, host, application, and data security. Aimed at enhancing the information security defense capability of the smart grid, the proposed model can be useful and practical during the construction of smart grid information security facilities.

  19. Classifier in Age classification

    Directory of Open Access Journals (Sweden)

    B. Santhi

    2012-12-01

    Full Text Available The face is an important feature of human beings, and many properties of a person can be derived by analyzing the face. The objective of this study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely Child and Old. This study addresses the limitations in the existing classifiers, as it uses the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification. This improves the accuracy of the classification, as it outperforms the existing methods.
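
    A minimal sketch of the pipeline described above, assuming grayscale face images are already available: GLCM texture properties (scikit-image >= 0.19 naming) feed a two-class SVM. The property set, kernel, and the hypothetical image/label placeholders are illustrative choices, not the study's exact configuration.

        # Hedged sketch: GLCM texture features followed by an SVM with two class
        # labels (Child vs. Old). Image loading and the dataset are placeholders.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        def glcm_features(gray_image_uint8):
            """Contrast, correlation, energy and homogeneity from a GLCM."""
            glcm = graycomatrix(gray_image_uint8, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "correlation", "energy", "homogeneity")
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        # images: list of 2-D uint8 face images; labels: 0 = Child, 1 = Old (hypothetical)
        # X = np.vstack([glcm_features(img) for img in images])
        # clf = SVC(kernel="rbf", C=1.0)
        # print(cross_val_score(clf, X, labels, cv=5).mean())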

  20. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  1. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  2. Generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network. They are the input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network introduces gradient descent based optimization of the smoothing parameter and adds a diverge effect term to the calculation. The diverge effect term is an improvement on the summation layer calculation that supplies additional separation ability and flexibility. Performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance of up to 89% is observed. The improved classification performance demonstrates the effectiveness of the proposed neural network.

  3. ANALYSIS OF BAYESIAN CLASSIFIER ACCURACY

    Directory of Open Access Journals (Sweden)

    Felipe Schneider Costa

    2013-01-01

    Full Text Available The naïve Bayes classifier is considered one of the most effective classification algorithms today, competing with more modern and sophisticated classifiers. Despite being based on the unrealistic (naïve) assumption that all variables are independent given the output class, the classifier provides proper results. However, depending on the scenario (network structure, number of samples or training cases, number of variables), the network may not provide appropriate results. This study uses a variable selection process, based on the chi-squared test, to verify the existence of dependence between variables in the data model in order to identify the reasons which prevent a Bayesian network from providing good performance. A detailed analysis of the data is also proposed, unlike other existing work, as well as adjustments in the case of limit values between two adjacent classes. Furthermore, variable weights are used in the calculation of the a posteriori probabilities, calculated with the mutual information function. Tests were applied to both a naïve Bayesian network and a hierarchical Bayesian network. After testing, a significant reduction in error rate was observed. The naïve Bayesian network showed a drop in error rate from twenty five percent to five percent relative to the initial results of the classification process. In the hierarchical network, not only did the fifteen percent error rate drop, but the final result reached zero.
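
    The following is a hedged sketch, not the authors' exact procedure: chi-squared feature selection removes variables with little dependence on the class before a naive Bayes classifier is fit. The synthetic dataset and the choice of k are placeholders.

        # Illustrative sketch: chi-squared variable selection followed by naive Bayes.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=500, n_features=30, n_informative=5,
                                   random_state=0)
        X = X - X.min()                      # chi2 requires non-negative features

        pipe = make_pipeline(SelectKBest(chi2, k=10), GaussianNB())
        print("accuracy: %.3f" % cross_val_score(pipe, X, y, cv=5).mean())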

  4. A Sequential Algorithm for Training Text Classifiers

    CERN Document Server

    Lewis, D D; Lewis, David D.; Gale, William A.

    1994-01-01

    The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
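
    A minimal sketch of uncertainty sampling as described above: in each round the learner is fit on the labeled pool, and labels are requested only for the unlabeled examples it is least confident about. The data, batch size and number of rounds are illustrative assumptions.

        # Sketch of pool-based uncertainty sampling with a least-confident criterion.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        labeled = list(range(20))                     # small seed set of labeled examples
        pool = [i for i in range(len(y)) if i not in labeled]

        clf = LogisticRegression(max_iter=1000)
        for _ in range(10):                           # 10 rounds of active learning
            clf.fit(X[labeled], y[labeled])
            proba = clf.predict_proba(X[pool])
            uncertainty = 1.0 - proba.max(axis=1)     # least-confident criterion
            query = [pool[i] for i in np.argsort(uncertainty)[-10:]]  # 10 most uncertain
            labeled.extend(query)                     # the "oracle" provides y for these
            pool = [i for i in pool if i not in query]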

  5. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; safayani, Meharn

    2010-01-01

    The decision boundaries of the Bayes classifier are optimal because they lead to the maximum probability of correct decision. It means that if we knew the prior probabilities and the class-conditional densities, we could design a classifier which gives the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as window width. Moreover, these methods suffer from the curse of dimensionality of the feature space and the small sample size problem, which severely restricts their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...

  6. Classifying Failing States

    Science.gov (United States)

    2007-03-01

    [Abstract not available. The record text contains only acknowledgments and a fragment of the study's country classification table, in which states such as Poland, Denmark, and Finland are classified as Core.]

  7. Emergent behaviors of classifier systems

    Energy Technology Data Exchange (ETDEWEB)

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems emerge, to study how such behaviors might emerge over time, and to make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  8. A Geographic Information Systems approach for classifying and mapping forest management category in Baihe Forestry Bureau, Northeast China

    Institute of Scientific and Technical Information of China (English)

    王顺忠; 邵国凡; 谷会岩; 王庆礼; 代力民

    2006-01-01

    This paper demonstrates a Geographic Information Systems (GIS) procedure for classifying and mapping forest management categories in Baihe Forestry Bureau, Jilin Province, China. Within the study area, Baihe Forestry Bureau land was classified into a two-hierarchy system. The top-level classes were non-forest and forest. Over 96% of the land area in the study area is forest, which was further divided into key ecological service forest (KES), general ecological service forest (GES), and commodity forest (COM). COM covered 45.0% of the total land area and was the major forest management type in Baihe Forestry Bureau. KES and GES accounted for 21.2% and 29.9% of the total land area, respectively. The forest management zones designed with GIS in this study were then compared with the forest management zones drawn by hand by the local agency. There were obvious differences between the two products, which had to do with the data sources, basic units and mapping procedures. The results suggest that GIS is a useful tool for integrating forest inventory data and other data for classifying and mapping forest zones to meet the needs of the classified forest management system.

  9. Anomaly Detection of Ranging Information of Airborne TACAN Based on One-Class SVM Classifier

    Institute of Scientific and Technical Information of China (English)

    李城梁

    2015-01-01

    To address the problem of navigation sensor data assurance in multi-source navigation information fusion systems, this article proposes an anomaly detection method for airborne TACAN ranging information based on a One-Class SVM. First, the feature space is built by extracting time-domain features of the airborne TACAN ranging information. Then, a model of the normal state is obtained by training the One-Class SVM classifier, and anomalies are detected as samples that depart from this normal state. The method is validated on simulated airborne TACAN ranging data; the experimental results show that it performs well in anomaly detection, is somewhat robust to noise in the ranging information, and can meet the needs of practical applications.
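
    A hedged sketch of the approach, using scikit-learn's OneClassSVM: the model is fit on time-domain features of normal ranging windows and then flags departures as anomalies. The feature set, the synthetic "ranging" windows and the nu parameter are placeholders, not the authors' actual configuration.

        # Sketch: One-Class SVM trained on features of normal data, anomalies flagged as -1.
        import numpy as np
        from sklearn.svm import OneClassSVM
        from sklearn.preprocessing import StandardScaler

        def time_domain_features(window):
            """Simple time-domain statistics of one ranging-data window (illustrative)."""
            return [window.mean(), window.std(), window.max() - window.min(),
                    np.mean(np.abs(np.diff(window)))]

        rng = np.random.default_rng(0)
        normal = np.array([time_domain_features(rng.normal(100, 1, 64)) for _ in range(500)])
        faulty = np.array([time_domain_features(rng.normal(100, 1, 64) + rng.normal(0, 8, 64))
                           for _ in range(20)])

        scaler = StandardScaler().fit(normal)
        ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(normal))
        print(ocsvm.predict(scaler.transform(faulty)))   # -1 marks detected anomalies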

  10. Feature Selection and Effective Classifiers.

    Science.gov (United States)

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  11. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier without it having to be fully retrained is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. In this paper there is a comparison between Learn++, which is one of the most recent incremental learning algorithms, and the new proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with an overall accuracy of 93% and 94% respectively showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  12. Multi-iPPseEvo: A Multi-label Classifier for Identifying Human Phosphorylated Proteins by Incorporating Evolutionary Information into Chou's General PseAAC via Grey System Theory.

    Science.gov (United States)

    Qiu, Wang-Ren; Zheng, Quan-Shu; Sun, Bi-Qian; Xiao, Xuan

    2017-03-01

    Predicting phosphorylated proteins is a challenging problem, particularly when query proteins have multi-label features, meaning that they may be phosphorylated at two or more different types of amino acids. In fact, human proteins are usually phosphorylated at serine, threonine and tyrosine. By introducing the "multi-label learning" approach, a novel predictor has been developed that can deal with systems containing both single- and multi-label phosphorylation proteins. Here we proposed a predictor called Multi-iPPseEvo by (1) incorporating the protein sequence evolutionary information into the general pseudo amino acid composition (PseAAC) via the grey system theory, (2) balancing out the skewed training datasets by the asymmetric bootstrap approach, and (3) constructing an ensemble predictor by fusing an array of individual random forest classifiers through a voting system. Rigorous cross-validations via a set of multi-label metrics indicate that the multi-label phosphorylation predictor is very promising and encouraging. The current approach represents a new strategy to deal with multi-label biological problems, and the software is freely available for academic use at http://www.jci-bioinfo.cn/Multi-iPPseEvo.

  13. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.

  14. Sampling Based Average Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Jian Hou

    2014-01-01

    Although many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison, and little has been done on exploring the potential of average fusion and proposing a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. As a result, we find that, by proper sampling of soft labels and classifiers, the average fusion performance can be evidently improved. This result presents sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
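
    An illustrative sketch of the idea, under the assumption that sampling means averaging over a randomly chosen subset of the base classifiers' soft labels rather than over all of them; the base learners and the sampling rule below are stand-ins for whatever a given study would use.

        # Sketch: plain average fusion vs. averaging over a sampled subset of classifiers.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        classifiers = [LogisticRegression(max_iter=1000), GaussianNB(),
                       DecisionTreeClassifier(max_depth=5), KNeighborsClassifier()]
        probas = np.stack([c.fit(Xtr, ytr).predict_proba(Xte) for c in classifiers])

        plain = probas.mean(axis=0)                                   # plain average fusion
        rng = np.random.default_rng(0)
        subset = rng.choice(len(classifiers), size=3, replace=False)  # sampled classifiers
        sampled = probas[subset].mean(axis=0)                         # sampling-based average

        for name, p in [("average", plain), ("sampled average", sampled)]:
            print(name, "accuracy:", (p.argmax(axis=1) == yte).mean())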

  15. Visual Classifier Training for Text Document Retrieval.

    Science.gov (United States)

    Heimerl, F; Koch, S; Bosch, H; Ertl, T

    2012-12-01

    Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.

  16. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had a negative impact on CERN’s reputation --- i...

  17. Optimally Training a Cascade Classifier

    CERN Document Server

    Shen, Chunhua; Hengel, Anton van den

    2010-01-01

    Cascade classifiers are widely used in real-time object detection. Different from conventional classifiers that are designed for a low overall classification error rate, a classifier in each node of the cascade is required to achieve an extremely high detection rate and a moderate false positive rate. Although there are a few reported methods addressing this requirement in the context of object detection, there is no principled feature selection method that explicitly takes into account this asymmetric node learning objective. We provide such an algorithm here. We show that a special case of the biased minimax probability machine has the same formulation as the linear asymmetric classifier (LAC) of Wu et al. (2005). We then design a new boosting algorithm that directly optimizes the cost function of LAC. The resulting totally-corrective boosting algorithm is implemented by the column generation technique in convex optimization. Experimental results on object detection verify the effectiveness of the proposed bo...

  18. Combining different types of classifiers

    OpenAIRE

    Gatnar, Eugeniusz

    2008-01-01

    Model fusion has proved to be a very successful strategy for obtaining accurate models in classification and regression. The key issue, however, is the diversity of the component classifiers because classification error of an ensemble depends on the correlation between its members. The majority of existing ensemble methods combine the same type of models, e.g. trees. In order to promote the diversity of the ensemble members, we propose to aggregate classifiers of different t...

  19. Optimal weighted nearest neighbour classifiers

    CERN Document Server

    Samworth, Richard J

    2011-01-01

    We derive an asymptotic expansion for the excess risk (regret) of a weighted nearest-neighbour classifier. This allows us to find the asymptotically optimal vector of non-negative weights, which has a rather simple form. We show that the ratio of the regret of this classifier to that of an unweighted $k$-nearest neighbour classifier depends asymptotically only on the dimension $d$ of the feature vectors, and not on the underlying population densities. The improvement is greatest when $d=4$, but thereafter decreases as $d \\rightarrow \\infty$. The popular bagged nearest neighbour classifier can also be regarded as a weighted nearest neighbour classifier, and we show that its corresponding weights are somewhat suboptimal when $d$ is small (in particular, worse than those of the unweighted $k$-nearest neighbour classifier when $d=1$), but are close to optimal when $d$ is large. Finally, we argue that improvements in the rate of convergence are possible under stronger smoothness assumptions, provided we allow nega...
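
    For illustration only, a small weighted nearest-neighbour classifier in which closer-ranked neighbours receive larger non-negative weights; the linearly decaying weight scheme below is an arbitrary choice and is not the asymptotically optimal weight vector derived in the paper.

        # Sketch of a weighted k-nearest-neighbour prediction with rank-based weights.
        import numpy as np

        def weighted_knn_predict(X_train, y_train, x, k=10):
            dists = np.linalg.norm(X_train - x, axis=1)
            order = np.argsort(dists)[:k]                 # indices of the k nearest neighbours
            weights = np.arange(k, 0, -1, dtype=float)    # decreasing, non-negative weights
            weights /= weights.sum()
            votes = {}
            for w, idx in zip(weights, order):
                votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w
            return max(votes, key=votes.get)              # class with the largest weighted vote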

  20. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. To help readers clearly grasp hybridization, the book primarily focuses on introducing the different levels of hybridization and illuminating the problems faced when dealing with such projects. The data and knowledge incorporated in hybridization are addressed first, and then a still-growing area of classifier systems known as combined classifiers is considered. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  1. MScanner: a classifier for retrieving Medline citations

    Directory of Open Access Journals (Sweden)

    Altman Russ B

    2008-02-01

    Full Text Available Abstract Background Keyword searching through PubMed and other systems is the standard means of retrieving information from Medline. However, ad-hoc retrieval systems do not meet all of the needs of databases that curate information from literature, or of text miners developing a corpus on a topic that has many terms indicative of relevance. Several databases have developed supervised learning methods that operate on a filtered subset of Medline, to classify Medline records so that fewer articles have to be manually reviewed for relevance. A few studies have considered generalisation of Medline classification to operate on the entire Medline database in a non-domain-specific manner, but existing applications lack speed, available implementations, or a means to measure performance in new domains. Results MScanner is an implementation of a Bayesian classifier that provides a simple web interface for submitting a corpus of relevant training examples in the form of PubMed IDs and returning results ranked by decreasing probability of relevance. For maximum speed it uses the Medical Subject Headings (MeSH) and journal of publication as a concise document representation, and takes roughly 90 seconds to return results against the 16 million records in Medline. The web interface provides interactive exploration of the results, and cross-validated performance evaluation on the relevant input against a random subset of Medline. We describe the classifier implementation, cross-validate it on three domain-specific topics, and compare its performance to that of an expert PubMed query for a complex topic. In cross-validation on the three sample topics against 100,000 random articles, the classifier achieved excellent separation of relevant and irrelevant article score distributions, ROC areas between 0.97 and 0.99, and averaged precision between 0.69 and 0.92. Conclusion MScanner is an effective non-domain-specific classifier that operates on the entire Medline

  2. Adaptively robust filtering with classified adaptive factors

    Institute of Scientific and Technical Information of China (English)

    CUI Xianqiang; YANG Yuanxi

    2006-01-01

    The key problems in applying the adaptively robust filtering to navigation are to establish an equivalent weight matrix for the measurements and a suitable adaptive factor for balancing the contributions of the measurements and the predicted state information to the state parameter estimates. In this paper, an adaptively robust filtering with classified adaptive factors was proposed, based on the principles of the adaptively robust filtering and bi-factor robust estimation for correlated observations. According to the constant velocity model of Kalman filtering, the state parameter vector was divided into two groups, namely position and velocity. The estimator of the adaptively robust filtering with classified adaptive factors was derived, and the calculation expressions of the classified adaptive factors were presented. Test results show that the adaptively robust filtering with classified adaptive factors is not only robust in controlling the measurement outliers and the kinematic state disturbances but also reasonable in balancing the contributions of the predicted position and velocity, respectively, and its filtering accuracy is superior to the adaptively robust filter with a single adaptive factor based on the discrepancy of the predicted position or the predicted velocity.

  3. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  4. Classifying Cereal Data (Earlier Methods)

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  5. Maximum margin Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  6. Semantic Features for Classifying Referring Search Terms

    Energy Technology Data Exchange (ETDEWEB)

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to the obfuscation of dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.
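
    A hedged baseline sketch of classification over referring search terms: a bag-of-words model of the query text predicting a country-of-origin label. The queries and labels are invented placeholders, and the semantic features the paper adds on top are not included here.

        # Sketch: TF-IDF over query text plus a linear classifier for country of origin.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        queries = ["cheap flights to paris", "vols pas chers paris", "wetter berlin morgen",
                   "best bbq in texas", "bundesliga ergebnisse heute", "route 66 road trip"]
        countries = ["US", "FR", "DE", "US", "DE", "US"]   # hypothetical origin labels

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(queries, countries)
        print(clf.predict(["oktoberfest tickets münchen"]))   # toy prediction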

  7. Classifying objects in LWIR imagery via CNNs

    Science.gov (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system, utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. While the highly accurate CNN struggles to recognise long range target classes, due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.
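
    As a rough illustration of a CNN classifier for single-channel thermal chips, the sketch below defines a small PyTorch network with six output classes; the architecture, the 64x64 chip size and the class count are assumptions for the example, not the network used in the paper.

        # Minimal CNN sketch for 1-channel (LWIR) image chips with 6 output classes.
        import torch
        import torch.nn as nn

        class ThermalCNN(nn.Module):
            def __init__(self, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
                )
                self.classifier = nn.Linear(64 * 8 * 8, n_classes)

            def forward(self, x):                      # x: (batch, 1, 64, 64)
                return self.classifier(self.features(x).flatten(1))

        model = ThermalCNN()
        logits = model(torch.randn(4, 1, 64, 64))      # 4 random chips -> (4, 6) class scores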

  8. Classifying self-gravitating radiations

    CERN Document Server

    Kim, Hyeong-Chan

    2016-01-01

    We study static systems of self-gravitating radiations confined in a sphere by using numerical and analytic calculations. We classify and analyze the solutions systematically. Due to the scaling symmetry, any solution can be represented as a segment of a solution curve on a plane of two-dimensional scale invariant variables. We find that a system can be conveniently parametrized by three parameters representing the solution curve, the scaling, and the system size, instead of the parameters defined at the outer boundary. The solution curves are classified into three types, representing regular solutions, and conically singular solutions with and without an object which resembles an event horizon up to causal disconnectedness. For the last type, the behavior of a self-gravitating system is simple enough to allow analytic calculations.

  9. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  10. Intelligent neural network classifier for automatic testing

    Science.gov (United States)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with an application of a multilayer feedforward neural network to the vision-based inspection of industrial pictures, and introduces a high-performance image processing and recognition system which can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of the artificial neural network, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, this system uses a multilayer perceptron as a regular detector to extract features of the images to be inspected and classify them.

  11. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam;

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...

  12. Adaptive classifier for steel strip surface defects

    Science.gov (United States)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li

    2017-01-01

    Surface defect detection systems have been receiving increased attention for their precision, speed and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples so that it adapts to accuracy deterioration. Firstly, abundant features are introduced to cover a large amount of information about the defects. Secondly, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.

  13. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows designing novel classifier models that are hardware-efficiently implementable. The performances of novel classifiers using substitutes of MFPC's geometric mean aggregator are benchmarked in the scope of an image processing application against the MFPC to reveal classification improvement potentials for obtaining higher classification rates.

  14. Classifying and ranking DMUs in interval DEA

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-peng; WU Yu-hua; LI Wen-hua

    2005-01-01

    During efficiency evaluation by DEA, the inputs and outputs of DMUs may be intervals because of insufficient information or measurement error. For this reason, interval DEA is proposed. To make the efficiency scores more discriminative, this paper builds an Interval Modified DEA (IMDEA) model based on MDEA. Furthermore, models for obtaining upper and lower bounds of the efficiency scores for each DMU are set up. Based on this, the DMUs are classified into three types. Next, a new order relation between intervals which can express the DM's preference among the three types is proposed. As a result, a full and more convincing ranking is made of all the DMUs. Finally an example is given.

  15. Segmentation of Fingerprint Images Using Linear Classifier

    Directory of Open Access Journals (Sweden)

    Xinjian Chen

    2004-04-01

    Full Text Available An algorithm for the segmentation of fingerprints and a criterion for evaluating the block feature are presented. The segmentation uses three block features: the block cluster degree, the block mean information, and the block variance. An optimal linear classifier has been trained for the per-block classification, and the criterion of a minimal number of misclassified samples is used. Morphology has been applied as postprocessing to reduce the number of classification errors. The algorithm is tested on the FVC2002 database; only 2.45% of the blocks are misclassified, while the postprocessing further reduces this ratio. Experiments have shown that the proposed segmentation method performs very well in rejecting false fingerprint features from the noisy background.

  16. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino

    2012-12-01

    Full Text Available An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool), is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  17. Building an automated SOAP classifier for emergency department reports.

    Science.gov (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W

    2012-02-01

    Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance with F(1) scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks.

  18. A Film Classifier Based on Low-level Visual Features

    Directory of Open Access Journals (Sweden)

    Hui-Yu Huang

    2008-07-01

    Full Text Available We propose an approach to classifying films into genres using low-level and visual features. Our current domain of study is the movie preview. A movie preview often emphasizes the theme of a film and hence provides suitable information for the classification process. In our approach, we categorize films into three broad categories: action, drama, and thriller films. Four computable video features (average shot length, color variance, motion content and lighting key) and visual features (slow and fast moving effects) are combined in our approach to provide the information needed to determine the movie category. The experimental results show that visual features are useful for film classification. On the other hand, our approach can also be extended to other potential applications, including the browsing and retrieval of videos on the internet, video-on-demand, and video libraries.

  19. Classifying supernovae using only galaxy data

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Mandel, Kaisey [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-12-01

    We present a new method for probabilistically classifying supernovae (SNe) without using SN spectral or photometric data. Unlike all previous studies to classify SNe without spectra, this technique does not use any SN photometry. Instead, the method relies on host-galaxy data. We build upon the well-known correlations between SN classes and host-galaxy properties, specifically that core-collapse SNe rarely occur in red, luminous, or early-type galaxies. Using the nearly spectroscopically complete Lick Observatory Supernova Search sample of SNe, we determine SN fractions as a function of host-galaxy properties. Using these data as inputs, we construct a Bayesian method for determining the probability that an SN is of a particular class. This method improves a common classification figure of merit by a factor of >2, comparable to the best light-curve classification techniques. Of the galaxy properties examined, morphology provides the most discriminating information. We further validate this method using SN samples from the Sloan Digital Sky Survey and the Palomar Transient Factory. We demonstrate that this method has wide-ranging applications, including separating different subclasses of SNe and determining the probability that an SN is of a particular class before photometry or even spectra can be obtained. Since this method uses completely independent data from light-curve techniques, there is potential to further improve the overall purity and completeness of SN samples and to test systematic biases of the light-curve techniques. Further enhancements to the host-galaxy method, including additional host-galaxy properties, combination with light-curve methods, and hybrid methods, should further improve the quality of SN samples from past, current, and future transient surveys.

  20. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas

    2009-01-01

    Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of the datasets are used as input to a neural network, which classifies them into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...
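
    An illustrative sketch of the feature construction described above: a joint 2-D histogram of voxel intensity and gradient magnitude, flattened into a fixed-length vector and fed to a small neural network. The volumes, labels and network size are hypothetical; the original work uses its own network rather than scikit-learn's MLP.

        # Sketch: 2-D intensity/gradient-magnitude histogram as a volume signature.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def volume_signature(volume, bins=32):
            """Flattened 2-D intensity/gradient-magnitude histogram of a 3-D array."""
            gx, gy, gz = np.gradient(volume.astype(float))
            grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
            hist, _, _ = np.histogram2d(volume.ravel(), grad_mag.ravel(),
                                        bins=bins, density=True)
            return hist.ravel()

        # volumes: list of 3-D arrays; labels: dataset category per volume (hypothetical)
        # X = np.vstack([volume_signature(v) for v in volumes])
        # clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)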

  1. Combining classifiers for robust PICO element detection

    Directory of Open Access Journals (Sweden)

    Grad Roland

    2010-05-01

    Full Text Available Abstract Background Formulating a clinical information need in terms of the four atomic parts which are Population/Problem, Intervention, Comparison and Outcome (known as PICO elements) facilitates searching for a precise answer within a large medical citation database. However, using PICO-defined items in the information retrieval process requires a search engine to be able to detect and index PICO elements in the collection in order for the system to retrieve relevant documents. Methods In this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. Using the structural descriptors that are embedded in some medical abstracts, we have automatically gathered large training/testing data sets for each PICO element. Results Combining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results with an f-measure score of 86.3% for P, 67% for I and 56.6% for O. Conclusions Our experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that this task can be achieved with a high accuracy for the P element but lower ones for I and O elements.
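
    A small sketch of the combination rule reported above: a weighted linear combination of each base classifier's prediction score. The base classifiers, the weight vector and the synthetic data are placeholders, not the PICO feature extraction or the weights used in the study.

        # Sketch: weighted linear combination of per-classifier prediction scores.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=800, n_features=40, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        members = [LogisticRegression(max_iter=1000), GaussianNB(), SVC(probability=True)]
        weights = np.array([0.5, 0.2, 0.3])            # e.g. tuned on held-out data (assumed)

        scores = np.stack([m.fit(Xtr, ytr).predict_proba(Xte)[:, 1] for m in members])
        combined = weights @ scores                    # weighted linear combination
        pred = (combined >= 0.5).astype(int)
        print("accuracy:", (pred == yte).mean())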

  2. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

    Thirty-five percent of all ischaemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes. It was tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between Jan-Jun 2007. Data was collected on 90 patients. The male to female ratio was 39:51 with an age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  3. 22 CFR 125.3 - Exports of classified technical data and classified defense articles.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Exports of classified technical data and... IN ARMS REGULATIONS LICENSES FOR THE EXPORT OF TECHNICAL DATA AND CLASSIFIED DEFENSE ARTICLES § 125.3 Exports of classified technical data and classified defense articles. (a) A request for authority...

  4. Pavement Crack Classifiers: A Comparative Study

    Directory of Open Access Journals (Sweden)

    S. Siddharth

    2012-12-01

    Full Text Available Non Destructive Testing (NDT) is an analysis technique used to inspect metal sheets and components without harming the product. NDT does not cause any change to the part after inspection; the technique saves money and time in product evaluation, research and troubleshooting. In this study the objective is to perform NDT using soft computing techniques. Digital images are taken; the Gray Level Co-occurrence Matrix (GLCM) extracts features from these images. Extracted features are then fed into the classifiers, which classify the images into those with and without cracks. Three major classifiers, neural networks, Support Vector Machines (SVM) and linear classifiers, are used for the classification. Performances of these classifiers are assessed and the best classifier for the given data is chosen.

  5. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  6. 18 CFR 3a.71 - Accountability for classified material.

    Science.gov (United States)

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Accountability for classified material. 3a.71 Section 3a.71 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY COMMISSION, DEPARTMENT OF ENERGY GENERAL RULES NATIONAL SECURITY INFORMATION Accountability for...

  7. Classifying VAT Legislation for Automation

    DEFF Research Database (Denmark)

    Sudzina, Frantisek; Nielsen, Morten Ib; Simonsen, Jakob Grue;

    The paper offers a framework for partitioning articles in legal documents pertaining to value added tax (VAT) into categories suitable for subsequent integration in computerized systems for automatically deriving VAT rates. The importance of an enterprise resource planning (ERP) system supporting VAT is not that it is required by a definition but because information technology in general increasingly supports everyday activities, so users expect more even from ERP systems. As an extended example, the classification of all articles of the European Council directive 2006/112/EC of 28 November 2006 on the common system of value added tax is presented. The classification of VAT articles is important in order to allow for easier VAT modeling for ERP systems. Better VAT modeling should eventually lead to lower cost of implementing changes in VAT legislation.

  8. Comparing different classifiers for automatic age estimation.

    Science.gov (United States)

    Lanitis, Andreas; Draganova, Chrisina; Christodoulou, Chris

    2004-02-01

    We describe a quantitative evaluation of the performance of different classifiers in the task of automatic age estimation. In this context, we generate a statistical model of facial appearance, which is subsequently used as the basis for obtaining a compact parametric description of face images. The aim of our work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image. For this application, we have tested different classifiers: a classifier based on the use of quadratic functions for modeling the relationship between face model parameters and age, a shortest distance classifier, and artificial neural network based classifiers. We also describe variations to the basic method where we use age-specific and/or appearance specific age estimation methods. In this context, we use age estimation classifiers for each age group and/or classifiers for different clusters of subjects within our training set. In those cases, part of the classification procedure is devoted to choosing the most appropriate classifier for the subject/age range in question, so that more accurate age estimates can be obtained. We also present comparative results concerning the performance of humans and computers in the task of age estimation. Our results indicate that machines can estimate the age of a person almost as reliably as humans.

  9. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R

    2008-12-01

    Full Text Available Abstract Background Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results The classification accuracy of two fingerprint-based classifiers, a decision tree (DT algorithm and a medoid classification algorithm (MCA, are examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or less, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no

  10. A review of learning vector quantization classifiers

    CERN Document Server

    Nova, David

    2015-01-01

    In this work we present a review of the state of the art of Learning Vector Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.
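
    For readers unfamiliar with the family being reviewed, the following is a minimal sketch of the classical LVQ1 update rule on synthetic data; it illustrates the basic idea only, not any of the eleven classifiers compared in the review.

```python
# Minimal LVQ1 sketch: the best-matching prototype is moved towards a sample of
# its own class and away from a sample of a different class. Data are synthetic.
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            i = np.argmin(np.linalg.norm(P - x, axis=1))      # best-matching prototype
            sign = 1.0 if proto_labels[i] == label else -1.0
            P[i] += sign * lr * (x - P[i])
    return P

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
prototypes = lvq1_train(X, y, np.array([[0.5, 0.5], [2.5, 2.5]]), np.array([0, 1]))
print("trained prototypes:\n", prototypes)
```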

  11. Fuzzy Wavenet (FWN classifier for medical images

    Directory of Open Access Journals (Sweden)

    Entather Mahos

    2005-01-01

    Full Text Available The combination of wavelet theory and neural networks has led to the development of wavelet networks. Wavelet networks are feed-forward neural networks that use wavelets as activation functions, and they have been applied to classification and identification problems with some success. In this work we propose a fuzzy wavenet network (FWN), trained with the common back-propagation algorithm, to classify medical images. The library of medical images is analyzed first. Second, two experimental rule tables provide an excellent opportunity to test the ability of the fuzzy wavenet network, given the high level of information variability often experienced with this type of image. The wavelet transform is known to be more accurate for low-dimensional problems, whereas image processing is a high-dimensional problem, which motivates the use of a neural network. Results are presented for the application of the three-layer fuzzy wavenet to a vision system. They demonstrate a considerable improvement in performance obtained by the proposed two rule tables for fuzzy and deterministic dilation and translation in the wavelet transform.

  12. Classifying Unidentified Gamma-ray Sources

    CERN Document Server

    Salvetti, David

    2016-01-01

    During its first 2 years of mission, the Fermi-LAT instrument discovered more than 1,800 gamma-ray sources in the 100 MeV to 100 GeV range. Despite the application of advanced techniques to identify and associate the Fermi-LAT sources with counterparts at other wavelengths, about 40% of the LAT sources have no clear identification and remain "unassociated". The purpose of my Ph.D. work has been to pursue a statistical approach to identify the nature of each Fermi-LAT unassociated source. To this aim, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma-ray information about location, energy spectrum and time variability. These analyses have been used for selecting targets for AGN and pulsar searches and planning multi-wavelength follow-up observations. In particular, we have focused our attention on the search of possible radio-quiet millisecond pulsar (MSP) candidates in the sample of...
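
    The kind of supervised classification described can be sketched as follows; the feature names (variability index, spectral curvature) and the synthetic two-class data are hypothetical stand-ins, not Fermi-LAT catalog quantities.

```python
# Illustrative sketch of this style of classification: logistic regression
# separating two source classes from gamma-ray features. The feature names
# and the synthetic data are hypothetical stand-ins, not Fermi-LAT quantities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
variability = np.concatenate([rng.normal(2.0, 0.5, n // 2), rng.normal(0.5, 0.3, n // 2)])
curvature = np.concatenate([rng.normal(0.2, 0.1, n // 2), rng.normal(0.8, 0.2, n // 2)])
X = np.column_stack([variability, curvature])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # e.g. 0 = AGN-like, 1 = pulsar-like

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```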

  13. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  14. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used in the generalized classifier neural network, the proposed logarithmic approach and its derivative take continuous values. This makes it possible to exploit the fast convergence of the logarithmic cost in the proposed learning method. Due to the fast convergence of the logarithmic cost function, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution to the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement of the generalized classifier neural network.
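
    The following toy comparison (not the paper's exact GCNN cost) illustrates why a logarithmic cost can converge faster than squared error: for a probabilistic output it penalizes confident mistakes much more steeply.

```python
# Toy illustration (not the paper's exact GCNN formulation): for a probabilistic
# output p and target t, a logarithmic cost grows much faster than squared error
# as the prediction becomes confidently wrong, which drives faster convergence.
import numpy as np

def squared_error(p, t):
    return 0.5 * (p - t) ** 2

def log_cost(p, t, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(t * np.log(p) + (1 - t) * np.log(1 - p))

for p in (0.9, 0.5, 0.1):   # target t = 1
    print(f"p={p:.1f}  squared={squared_error(p, 1.0):.3f}  logarithmic={log_cost(p, 1.0):.3f}")
```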

  15. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    Full Text Available The ever increasing data generation confronts us with the problem of handling online massive amounts of information. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed to address this context must be efficient regarding memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for the task of pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent in data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
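
    A generic sliding-window wrapper (a sketch of the windowing and forgetting idea only, not the DS-Gamma classifier itself) might look like the following; the base learner and the synthetic stream are placeholders.

```python
# Generic sliding-window sketch (not the DS-Gamma classifier itself): only the
# most recent `window` labelled examples are kept and the model is refitted,
# so older concepts are gradually forgotten as the stream drifts.
from collections import deque
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

class SlidingWindowClassifier:
    def __init__(self, window=100, k=3):
        self.buffer = deque(maxlen=window)                # forgetting mechanism
        self.model = KNeighborsClassifier(n_neighbors=k)

    def update(self, x, y):
        self.buffer.append((tuple(x), y))
        X, labels = zip(*self.buffer)
        if len(self.buffer) >= self.model.n_neighbors and len(set(labels)) > 1:
            self.model.fit(list(X), list(labels))

    def predict(self, x):
        return self.model.predict([list(x)])[0]

# Usage on a synthetic drifting stream.
rng = np.random.default_rng(5)
clf = SlidingWindowClassifier(window=100, k=3)
for t in range(1000):
    drift = t / 1000.0                                    # class means slowly drift
    y_t = int(rng.integers(0, 2))
    x_t = rng.normal(loc=y_t + drift, size=2)
    clf.update(x_t, y_t)
print("prediction for a recent-looking sample:", clf.predict(np.array([1.9, 1.9])))
```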

  16. A Multiple Classifier Fusion Algorithm Using Weighted Decision Templates

    Directory of Open Access Journals (Sweden)

    Aizhong Mi

    2016-01-01

    Full Text Available Fusing classifiers' decisions can improve the performance of a pattern recognition system. Many application areas have adopted the methods of multiple classifier fusion to increase the classification accuracy in the recognition process. By fully considering the classifier performance differences and the training sample information, a multiple classifier fusion algorithm using weighted decision templates is proposed in this paper. The algorithm uses a statistical vector to measure each classifier's performance and applies a weighted transform to each classifier according to the reliability of its output. To make a decision, the information in the training samples around an input sample is used by the k-nearest-neighbor rule if the algorithm evaluates the sample as being highly likely to be misclassified. An experimental comparison was performed on 15 data sets from the KDD'99, UCI, and ELENA databases. The experimental results indicate that the algorithm can achieve better classification performance. Next, the algorithm was applied to cataract grading in the cataract ultrasonic phacoemulsification operation. The application result indicates that the proposed algorithm is effective and can meet the practical requirements of the operation.
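
    For context, the sketch below implements classical, unweighted decision-template fusion, the scheme the proposed weighted variant builds on; the base classifiers and data are illustrative.

```python
# Sketch of classical (unweighted) decision-template fusion: each class gets a
# template equal to the mean "decision profile" of its training samples; a test
# sample is assigned to the class whose template is closest to its own profile.
# Data and base classifiers are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def decision_profile(classifiers, x):
    # rows: classifiers, columns: per-class support (predicted probabilities)
    return np.vstack([clf.predict_proba([x])[0] for clf in classifiers])

def fit_templates(classifiers, X, y, n_classes):
    return {c: np.mean([decision_profile(classifiers, x) for x in X[y == c]], axis=0)
            for c in range(n_classes)}

def dt_predict(classifiers, templates, x):
    dp = decision_profile(classifiers, x)
    return min(templates, key=lambda c: np.linalg.norm(dp - templates[c]))

X, y = make_classification(n_samples=300, n_classes=3, n_informative=4, random_state=0)
base = [GaussianNB().fit(X, y), DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)]
templates = fit_templates(base, X, y, n_classes=3)
print("predicted class of first sample:", dt_predict(base, templates, X[0]))
```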

  17. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on dynamic programming approach we design algorithms for sequential optimization of exact and approximate decision rules relative to the length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification-exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  18. An Efficient and Effective Immune Based Classifier

    Directory of Open Access Journals (Sweden)

    Shahram Golzari

    2011-01-01

    Full Text Available Problem statement: The Artificial Immune Recognition System (AIRS) is the most popular and effective immune-inspired classifier. Resource competition is one stage of AIRS and is performed based on the number of allocated resources. AIRS uses a linear method to allocate resources, and this linear resource allocation increases the training time of the classifier. Approach: In this study, a new nonlinear resource allocation method is proposed to make AIRS more efficient. The new algorithm, AIRS with the proposed nonlinear method, is tested on benchmark datasets from the UCI machine learning repository. Results: Based on the results of the experiments, using the proposed nonlinear resource allocation method decreases the training time and the number of memory cells and does not reduce the accuracy of AIRS. Conclusion: The proposed classifier is an efficient and effective classifier.

  19. Classifying Genomic Sequences by Sequence Feature Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Liu; Dian Jiao; Xiao Sun

    2005-01-01

    Traditional sequence analysis depends on sequence alignment. In this study, we analyzed various functional regions of the human genome based on sequence features, including word frequency, dinucleotide relative abundance, and base-base correlation. We analyzed human chromosome 22 and classified the upstream, exon, intron, downstream, and intergenic regions by principal component analysis and discriminant analysis of these features. The results show that the functional regions of the genome can be classified based on sequence features and discriminant analysis.
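
    A minimal sketch of this style of analysis is given below: dinucleotide (word) frequency vectors are fed to a linear discriminant classifier; the toy sequences are synthetic stand-ins for genomic regions, not chromosome 22 data.

```python
# Minimal sketch: dinucleotide (word) frequency vectors fed to a linear
# discriminant classifier. The toy sequences are synthetic stand-ins for
# genomic regions, not real chromosome 22 data.
import numpy as np
from itertools import product
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]    # all 16 dinucleotide words

def kmer_freq(seq):
    counts = np.array([seq.count(k) for k in KMERS], dtype=float)
    return counts / max(counts.sum(), 1.0)

rng = np.random.default_rng(4)
def random_seq(gc):   # GC content distinguishes the two synthetic "region types"
    return "".join(rng.choice(list("ACGT"), size=300, p=[(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]))

seqs = [random_seq(0.6) for _ in range(40)] + [random_seq(0.4) for _ in range(40)]
y = np.array([0] * 40 + [1] * 40)
X = np.array([kmer_freq(s) for s in seqs])

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```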

  20. Classifying the Quantum Phases of Matter

    Science.gov (United States)

    2015-01-01

    [Only report front matter and reference fragments are available for this record.] Final technical report, "Classifying the Quantum Phases of Matter," California Institute of Technology, January 2015; dates covered January 2012 – August 2014; contract number FA8750-12-2 (truncated). Surviving reference fragments cite arXiv:1305.2176; J. Haah, "Lattice quantum codes and exotic topological phases of matter," arXiv:1305.6973; and a paper by M. Hastings and S...

  1. Criminal Prohibitions on the Publication of Classified Defense Information

    Science.gov (United States)

    2013-09-09

    [Only fragmentary footnote text is available for this record.] The fragments quote an official identified as Elizabeth King explaining that "State Department cables by their nature contain everyday analysis and candid assessments that any government..."; cite the NY Times article "WikiLeaks Shine Light Into Secret Diplomatic Channels"; note The Guardian's statement that the earliest of the cables is from 1966; and refer to the contractor Stephen Kim, who was at the time of the disclosure a senior adviser for intelligence detailed to the State Department's arms...

  2. COMBINING CLASSIFIERS FOR CREDIT RISK PREDICTION

    Institute of Scientific and Technical Information of China (English)

    Bhekisipho TWALA

    2009-01-01

    Credit risk prediction models seek to predict quality factors such as whether an individual will default on a loan (bad applicant) or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensembles of individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predicted behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy can be improved by using pairs of classifier ensembles. Benchmarking results on five credit datasets and a comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy.
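
    The basic ensemble idea, several base classifiers voting for the most popular class, can be sketched as follows; the synthetic data stand in for the credit datasets and the choice of base classifiers is illustrative.

```python
# Sketch of the basic ensemble idea: heterogeneous classifiers vote for the most
# popular class. The synthetic data stand in for credit datasets and the choice
# of base classifiers is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)  # good/bad applicants
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))],
    voting="hard")                                        # majority vote
print("ensemble accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```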

  3. A multi-class large margin classifier

    Institute of Scientific and Technical Information of China (English)

    Liang TANG; Qi XUAN; Rong XIONG; Tie-jun WU; Jian CHU

    2009-01-01

    Currently there are two approaches for a multi-class support vector classifier (SVC). One is to construct and combine several binary classifiers while the other is to directly consider all classes of data in one optimization formulation. For a K-class problem (K>2), the first approach has to construct at least K classifiers, and the second approach has to solve a much larger optimization problem proportional to K by the algorithms developed so far. In this paper, following the second approach, we present a novel multi-class large margin classifier (MLMC). This new machine can solve K-class problems in one optimization formulation without increasing the size of the quadratic programming (QP) problem proportional to K. This property allows us to construct just one classifier with as few variables in the QP problem as possible to classify multi-class data, and we can gain the advantage of speed from it especially when K is large. Our experiments indicate that MLMC almost works as well as (sometimes better than) many other multi-class SVCs for some benchmark data classification problems, and obtains a reasonable performance in face recognition application on the AR face database.

  4. Mathematical Modeling and Analysis of Classified Marketing of Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    Fengying; WANG

    2014-01-01

    Classified marketing of agricultural products was analyzed using the Logistic Regression Model. This method can take full advantage of the information in an agricultural product database to find the factors influencing the best-selling degree of agricultural products, and to make a quantitative analysis accordingly. Using this model, it is also possible to predict sales of agricultural products and to provide a reference for mapping out individualized sales strategies for popularizing agricultural products.

  5. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understand the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles as PPI-related or not automatically. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological background based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from unlabeled data to improve the performance further. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.
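
    The generic self-training loop underlying such systems (a sketch only, not the EST_IACer feature weighting or instance selection) can be written as follows; the data and confidence threshold are illustrative.

```python
# Generic self-training sketch (not the EST_IACer feature weighting or instance
# selection): a base classifier trained on few labelled examples repeatedly adds
# the unlabelled examples it labels most confidently. Data and threshold are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[:50] = True                          # only 50 labelled "articles"

clf = LogisticRegression(max_iter=1000)
for _ in range(10):                          # self-training rounds
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) > 0.95
    if not confident.any():
        break
    idx = np.flatnonzero(~labeled)[confident]
    y[idx] = clf.predict(X[idx])             # pseudo-labels for confident examples
    labeled[idx] = True
print("examples labelled after self-training:", int(labeled.sum()))
```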

  6. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    Science.gov (United States)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are verified by performing an empirical analysis. For the empirical analysis, a random forest is generated using each of the five selected split measures, chosen one at a time (i.e., random forest using information gain, random forest using gain ratio, etc.). Next, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the Random Forest are generated using different split measures. This model is augmented by weighted voting based on the strength of the individual trees. The new approach has shown a notable increase in the accuracy of the random forest.
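
    A rough sketch of the hybrid idea, trees grown with different split measures and combined by strength-weighted voting, is given below; the data, number of trees, and the use of a validation set to estimate tree strength are illustrative assumptions, not the authors' exact procedure.

```python
# Rough sketch: trees grown with different split measures (sklearn's "gini" and
# "entropy"/information gain) whose votes are weighted by each tree's strength,
# here estimated on a held-out set. All settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
trees, weights = [], []
for i in range(20):
    crit = "gini" if i % 2 == 0 else "entropy"          # alternate split measures
    boot = rng.integers(0, len(X_tr), len(X_tr))        # bootstrap sample
    tree = DecisionTreeClassifier(criterion=crit, max_features="sqrt", random_state=i)
    tree.fit(X_tr[boot], y_tr[boot])
    trees.append(tree)
    weights.append(tree.score(X_val, y_val))            # tree strength used as its vote weight

def weighted_vote(x):
    votes = np.zeros(2)
    for t, w in zip(trees, weights):
        votes[int(t.predict([x])[0])] += w
    return int(np.argmax(votes))

acc = np.mean([weighted_vote(x) == label for x, label in zip(X_val, y_val)])
print("weighted-vote accuracy on the validation set:", acc)
```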

  7. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  8. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available One of the widely used methods for classification, which is a decision-making process, is artificial immune systems. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method.

  9. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...

  10. 3 CFR - Implementation of the Executive Order, “Classified National Security Information”

    Science.gov (United States)

    2010-01-01

    ... 3 The President 1 2010-01-01 2010-01-01 false Implementation of the Executive Order, “Classified National Security Information” Presidential Documents Other Presidential Documents Memorandum of December 29, 2009 Implementation of the Executive Order, “Classified National Security Information” Memorandum for the Heads of Executive Departments...

  11. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Science.gov (United States)

    2011-10-13

    ... Documents Executive Order 13587 of October 7, 2011 Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information By the authority... to ensure the responsible sharing and safeguarding of classified national security...

  12. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts restricted to one or more domains is necessary. We have developed a customizable text classifier for users to mine such a collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the standard by which to choose the texts, based on demand and the abundance of materials. The performance of the classifier varies with the user's choice.

  13. A survey of decision tree classifier methodology

    Science.gov (United States)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods for DTC design and the various existing issues is presented. After considering the potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  14. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality as state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  15. Practice of Classified-protection-based Oracle Database Security Reinforcement in Electric Power Information Systems

    Institute of Scientific and Technical Information of China (English)

    杨大哲; 孙瑞浩

    2015-01-01

    For the Oracle database management system, and in accordance with the requirements of information security classified protection, this study uses the database's own security mechanisms to carry out a security reinforcement practice covering authentication, access control, intrusion prevention, and security auditing. As a result, the data leak prevention and anti-tampering capabilities are strengthened and the security protection level of electric power information systems is improved.

  16. Classifying Finitely Generated Indecomposable RA Loops

    CERN Document Server

    Cornelissen, Mariana

    2012-01-01

    In 1995, E. Jespers, G. Leal and C. Polcino Milies classified all finite ring alternative loops (RA loops for short) which are not direct products of proper subloops. In this paper we extend this result to finitely generated RA loops and provide an explicit description of all such loops.

  17. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual fea

  18. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan;

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction...

  19. Large margin classifier-based ensemble tracking

    Science.gov (United States)

    Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng

    2016-07-01

    In recent years, many studies have treated visual tracking as a two-class classification problem. The key problem is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalization ability in handling new frames. However, variable tracking conditions challenge the existing methods. The difficulty mainly comes from the confused boundary between the foreground and background. This paper handles this difficulty by generalizing the classifier's learning step. By introducing the distribution of the samples, the classifier learns more essential characteristics for discriminating the two classes. Specifically, the samples are represented in a multiscale visual model. For features with different scales, several large margin distribution machines (LDMs) with adaptive kernels are combined in a Bayesian way as a strong classifier, where, in order to improve the accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences; through parameter analysis and field comparison, the proposed LDM-combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalization ability in handling various typical tracking difficulties.

  20. Design and evaluation of neural classifiers

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Pedersen, Morten With; Hansen, Lars Kai;

    1996-01-01

    In this paper we propose a method for the design of feedforward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme we derive a modified form of the entropy error measure and an algebraic estimate of the test error. In conjunction...

  1. Face detection by aggregated Bayesian network classifiers

    NARCIS (Netherlands)

    Pham, T.V.; Worring, M.; Smeulders, A.W.M.

    2002-01-01

    A face detection system is presented. A new classification method using forest-structured Bayesian networks is used. The method is used in an aggregated classifier to discriminate face from non-face patterns. The process of generating non-face patterns is integrated with the construction of the aggr

  2. Using Syntactic-Based Kernels for Classifying Temporal Relations

    Institute of Scientific and Technical Information of China (English)

    Seyed Abolghasem Mirroshandel; Gholamreza Ghassem-Sani; Mahdy Khayyamian

    2011-01-01

    Temporal relation classification is one of the demanding tasks of contemporary natural language processing. This task can be used in various applications such as question answering, summarization, and language-specific information retrieval. In this paper, we propose an improved algorithm for classifying temporal relations, between events or between events and time, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method aims at exploiting some useful automatically generated syntactic features to improve the accuracy of classification. Accordingly, a number of novel kernel functions are introduced and evaluated. Our evaluations clearly demonstrate that adding syntactic features results in a considerable improvement over the state-of-the-art method of classifying temporal relations.

  3. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    Science.gov (United States)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-12-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. An ensemble-based system can theoretically achieve better classification results than its individual counterparts, depending on the algorithm used to generate the individual classifiers and the procedure used to combine their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
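
    A minimal bagged-LDA sketch of the kind of ensemble evaluated is shown below; the synthetic features stand in for real BCI feature vectors, and the comparison is illustrative rather than a reproduction of the study.

```python
# Minimal bagged-LDA sketch: several LDA classifiers trained on bootstrap
# resamples and combined by voting. Synthetic features stand in for BCI data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
single = LinearDiscriminantAnalysis()
bagged = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=3, random_state=0)

print("single LDA accuracy :", cross_val_score(single, X, y, cv=5).mean())
print("bagged LDAs accuracy:", cross_val_score(bagged, X, y, cv=5).mean())
```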

  4. Improving 2D Boosted Classifiers Using Depth LDA Classifier for Robust Face Detection

    Directory of Open Access Journals (Sweden)

    Mahmood Rahat

    2012-05-01

    Full Text Available Face detection plays an important role in Human Robot Interaction. Many services provided by robots depend on face detection. This paper presents a novel face detection algorithm which uses depth data to improve the efficiency of a boosted classifier on 2D data and to reduce false positive alarms. The proposed method uses two levels of cascade classifiers. The classifiers of the first level deal with 2D data and the classifiers of the second level use depth data captured by a stereo camera. The first level employs a conventional cascade of boosted classifiers which eliminates many of the non-face sub-windows. The remaining sub-windows are used as input to the second level. After calculating the corresponding depth model of the sub-windows, a heuristic classifier along with a Linear Discriminant Analysis (LDA) classifier is applied to the depth data to reject the remaining non-face sub-windows. The experimental results of the proposed method using a Bumblebee-2 stereo vision system on a mobile platform for real-time detection of human faces in natural cluttered environments reveal a significant reduction in the false positive alarms of the 2D face detector.

  5. Max-margin based Bayesian classifier

    Institute of Scientific and Technical Information of China (English)

    Tao-cheng HU‡; Jin-hui YU

    2016-01-01

    There is a tradeoff between generalization capability and computational overhead in multi-class learning. We propose a generative probabilistic multi-class classifier, considering both the generalization capability and the learning/prediction rate. We show that the classifier has a max-margin property. Thus, prediction on future unseen data can nearly achieve the same performance as in the training stage. In addition, local variables are eliminated, which greatly simplifies the optimization problem. By convex and probabilistic analysis, an efficient online learning algorithm is developed. The algorithm aggregates rather than averages dualities, which is different from the classical situations. Empirical results indicate that our method has a good generalization capability and coverage rate.

  6. Classifying bed inclination using pressure images.

    Science.gov (United States)

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M

    2014-01-01

    Pressure ulcers are one of the most prevalent problems for bed-bound patients in hospitals and nursing homes. Pressure ulcers are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate bed angle is estimated and classified. The proposed algorithm was tested on 15 subjects of various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.

  7. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

    An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen. This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three-dimensional axes, plus the length of the acceleration vector) are selected for each activity. Each time series is modeled using a Dynamic Linear Model with cyclic components. The classification method, based on a Multi-Process Kalman Filter (MPKF), is applied to a total of 15 time series of 120 observations...

  8. Combining supervised classifiers with unlabeled data

    Institute of Scientific and Technical Information of China (English)

    刘雪艳; 张雪英; 李凤莲; 黄丽霞

    2016-01-01

    Ensemble learning is a widely studied issue. Traditional ensemble techniques are adopted to seek better results with labeled data and base classifiers, but they fail to address the ensemble task where only unlabeled data are available. A label propagation based ensemble (LPBE) approach is proposed to further combine base classification results with unlabeled data. First, a graph is constructed by taking unlabeled data as vertexes, and the weights in the graph are calculated by a correntropy function. Average prediction results are obtained from the base classifiers and then propagated under a regularization framework and adaptively enhanced over the graph. The proposed approach is further enriched when a small amount of labeled data is available. The proposed algorithms are evaluated on several UCI benchmark data sets. Simulation results show that the proposed algorithms achieve satisfactory performance compared with existing ensemble methods.

  9. Letter identification and the neural image classifier.

    Science.gov (United States)

    Watson, Andrew B; Ahumada, Albert J

    2015-02-12

    Letter identification is an important visual task for both practical and theoretical reasons. To extend and test existing models, we have reviewed published data for contrast sensitivity for letter identification as a function of size and have also collected new data. Contrast sensitivity increases rapidly from the acuity limit but slows and asymptotes at a symbol size of about 1 degree. We recast these data in terms of contrast difference energy: the average of the squared distances between the letter images and the average letter image. In terms of sensitivity to contrast difference energy, and thus visual efficiency, there is a peak around ¼ degree, followed by a marked decline at larger sizes. These results are explained by a Neural Image Classifier model that includes optical filtering and retinal neural filtering, sampling, and noise, followed by an optimal classifier. As letters are enlarged, sensitivity declines because of the increasing size and spacing of the midget retinal ganglion cell receptive fields in the periphery.

  10. Classification Studies in an Advanced Air Classifier

    Science.gov (United States)

    Routray, Sunita; Bhima Rao, R.

    2016-10-01

    In the present paper, experiments are carried out using a VSK separator, which is an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments, the cage wheel speed and the feed rate are set and the material is fed to the air cyclone and split into fine and coarse particles, which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier. The objective of the present model is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data with all the variables studied in this investigation are fitted to several models, and it is found that the data are fitted well by the logistic model.

  11. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We suggest to adapt the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...

  12. Statistical Mechanics of Soft Margin Classifiers

    OpenAIRE

    Risau-Gusman, Sebastian; Gordon, Mirta B.

    2001-01-01

    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patte...

  13. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song

    2001-01-01

    This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. Compared with traditional methods for deterministic pattern classifiers, this method applies to both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the result reveals that this system is effective and practical.

  14. Image Classifying Registration and Dynamic Region Merging

    Directory of Open Access Journals (Sweden)

    Himadri Nath Moulick

    2013-07-01

    Full Text Available In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images modifies locally intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a Maximum A Posteriori (MAP) estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated at the same time, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into

  15. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol synthetic aperture radar (PolSAR data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering. We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  16. Comparison of artificial intelligence classifiers for SIP attack data

    Science.gov (United States)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server. This server classifies all attack data by a neural network algorithm. The paper describes optimizations of a neural network classifier which lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.

  17. Analysis of classifiers performance for classification of potential microcalcification

    Science.gov (United States)

    M. N., Arun K.; Sheshadri, H. S.

    2013-07-01

    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine checks, as it is difficult for radiologists to make an accurate analysis due to the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny, vary in size, shape, and distribution, and may be closely connected to surrounding tissues. Classifying individual potential MCs with traditional classifiers is a major challenge, as the processing of mammograms at the appropriate stage generates data sets with an unequal amount of information for the two classes (i.e., MC and Not-MC). Most of the existing state-of-the-art classification approaches are developed by assuming the underlying training set is evenly distributed. However, they are faced with a severe bias problem when the training set is highly imbalanced in distribution. This paper addresses this issue by using classifiers which handle imbalanced data sets. In this paper, we also compare the performance of classifiers used in the classification of potential MCs.

  18. Image Classifying Registration for Gaussian & Bayesian Techniques: A Review

    Directory of Open Access Journals (Sweden)

    Rahul Godghate,

    2014-04-01

    Full Text Available A Bayesian technique for image classifying registration performs image registration and pixel classification simultaneously. Medical image registration is critical for the fusion of complementary information about patient anatomy and physiology, for the longitudinal study of a human organ over time and the monitoring of disease development or treatment effect, for the statistical analysis of a population variation in comparison to a so-called digital atlas, for image-guided therapy, etc. A Bayesian technique for image classifying registration is well-suited to deal with image pairs that contain two classes of pixels with different inter-image intensity relationships. We will show through different experiments that the model can be applied in many different ways. For instance, if the class map is known, then it can be used for template-based segmentation. If the full model is used, then it can be applied to lesion detection by image comparison. Experiments have been conducted on both real and simulated data. They show that in the presence of an extra class, the classifying registration improves both the registration and the detection, especially when the deformations are small. The proposed model is defined using only two classes but it is straightforward to extend it to an arbitrary number of classes.

  19. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications.

    Science.gov (United States)

    Wiggins, M; Saad, A; Litt, B; Vachtsevanos, G

    2008-01-01

    OBJECTIVE: To classify patients by age based upon information extracted from their electro-cardiograms (ECGs). To develop and compare the performance of Bayesian classifiers. METHODS AND MATERIAL: We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of conditional probability tables for the classifier, and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first using a greedy hill-climb search and the second employed evolutionary computing using a genetic algorithm (GA). RESULTS AND CONCLUSIONS: The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general thereby identifying the dependencies among the variables of interest. Those dependencies are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used for medical applications for diagnosis and prediction purposes.
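
    The discretization step described, thresholding continuous features into symbols before Bayesian classification, can be sketched as follows; the features are synthetic stand-ins for the ECG-derived statistics, and the naive Bayes model is a simplification of the evolved network structure.

```python
# Sketch of the discretization step described: continuous features are
# thresholded into a small set of symbols before a (naive) Bayesian classifier
# is applied. Features are synthetic stand-ins for ECG-derived statistics, and
# naive Bayes is a simplification of the evolved network structure.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import KBinsDiscretizer

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
discretizer = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
X_symbolic = discretizer.fit_transform(X).astype(int)    # each feature becomes a symbol in {0,1,2}

nb = CategoricalNB(min_categories=3)
print("naive Bayes accuracy on discretized features:", cross_val_score(nb, X_symbolic, y, cv=5).mean())
```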

  20. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    Science.gov (United States)

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  1. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account and therefore often give poor results. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering based on a second-order conditional random field. Moreover, globally optimal inference of each pixel's posterior class probability can be obtained using loopy belief propagation. The experiment shows that the proposed algorithm can effectively maintain the shape features of objects, and the classification accuracy is higher than that of traditional algorithms.

  2. Integrating language models into classifiers for BCI communication: a review

    Science.gov (United States)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.
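
    The core idea of combining a signal classifier with a language model can be sketched as a Bayesian fusion of two probability vectors; the character set and probabilities below are made up for illustration, and real BCI spellers use n-gram or neural language models over a full alphabet.

```python
# Hedged sketch: fuse an EEG-based character likelihood with a language-model prior.
import numpy as np

chars = np.array(list("abc_"))
p_eeg = np.array([0.30, 0.25, 0.25, 0.20])   # P(EEG evidence | character), toy values
p_lm  = np.array([0.60, 0.10, 0.10, 0.20])   # P(character | typed history), toy LM prior

posterior = p_eeg * p_lm
posterior /= posterior.sum()                 # P(character | EEG evidence, history)

print(dict(zip(chars, posterior.round(3))))
print("selected character:", chars[posterior.argmax()])
```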

  3. Colorfulness Enhancement Using Image Classifier Based on Chroma-histogram

    Institute of Scientific and Technical Information of China (English)

    Moon-cheol KIM; Kyoung-won LIM

    2010-01-01

    The paper proposes a colorfulness enhancement of pictorial images using an image classifier based on a chroma histogram. This approach first estimates the strength of colorfulness of images and their types. With this information, the algorithm automatically adjusts image colorfulness for a more natural image look. With the help of an additional detection of skin colors and pixel-chroma-adaptive local processing, the algorithm produces a more natural image look. The algorithm's performance was tested in an image quality judgment experiment with 20 persons. The experimental results indicate a better image preference.

  4. Reconfiguration-based implementation of SVM classifier on FPGA for Classifying Microarray data.

    Science.gov (United States)

    Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin

    2013-01-01

    Classifying Microarray data, which are high-dimensional in nature, requires high computational power. The Support Vector Machine (SVM) is among the most common and successful classifiers used in the analysis of Microarray data, but it also requires high computational power due to its complex mathematical architecture. Implementing SVM on hardware exploits the parallelism available within the algorithm kernels to accelerate the classification of Microarray data. In this work, a flexible, dynamically and partially reconfigurable implementation of the SVM classifier on a Field Programmable Gate Array (FPGA) is presented. The SVM architecture achieved up to 85× speed-up over an equivalent general purpose processor (GPP), showing the capability of FPGAs to enhance the performance of SVM-based analysis of Microarray data as well as future bioinformatics applications.

  5. 36 CFR 1260.22 - Who is responsible for the declassification of classified national security White House...

    Science.gov (United States)

    2010-07-01

    ... declassification of classified national security White House originated information in NARA's holdings? 1260.22... for the declassification of classified national security White House originated information in NARA's... was originated by: (1) The President; (2) The White House staff; (3) Committees, commissions,...

  6. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya

    2009-01-01

    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinem

  7. Learning Rates for -Regularized Kernel Classifiers

    Directory of Open Access Journals (Sweden)

    Hongzhi Tong

    2013-01-01

    Full Text Available We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.

  8. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  9. Cubical sets as a classifying topos

    DEFF Research Database (Denmark)

    Spitters, Bas

    Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation...

  10. A recursive kinematic random forest and alpha beta filter classifier for 2D radar tracks

    Science.gov (United States)

    Jochumsen, Lars W.; Østergaard, Jan; Jensen, Søren H.; Clemente, Carmine; Ø. Pedersen, Morten

    2016-12-01

    In this work, we show that by using a recursive random forest together with an alpha beta filter classifier, it is possible to classify radar tracks from the tracks' kinematic data. The kinematic data is from a 2D scanning radar without Doppler or height information. We use random forest as this classifier implicitly handles the uncertainty in the position measurements. As stationary targets can have an apparently high speed because of the measurement uncertainty, we use an alpha beta filter classifier to classify stationary targets from moving targets. We show an overall classification rate from simulated data of 82.6% and from real-world data of 79.7%. In addition to the confusion matrix, we also show recordings of real-world data.
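
    The alpha-beta filter component can be sketched in a few lines: it smooths a noisy position track and yields a velocity estimate on which a stationary/moving decision could be based; the gains, noise level, and speed threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of an alpha-beta filter producing a velocity estimate from noisy positions.
import numpy as np

def alpha_beta_filter(measurements, dt=1.0, alpha=0.85, beta=0.005):
    x_est, v_est = measurements[0], 0.0
    velocities = []
    for z in measurements[1:]:
        x_pred = x_est + dt * v_est          # predict position
        r = z - x_pred                       # measurement residual
        x_est = x_pred + alpha * r           # corrected position
        v_est = v_est + (beta / dt) * r      # corrected velocity
        velocities.append(v_est)
    return np.array(velocities)

rng = np.random.default_rng(1)
stationary_track = rng.normal(0.0, 50.0, size=60)   # stationary target, position noise only
v = alpha_beta_filter(stationary_track)
print("estimated speed:", abs(v[-1]))
print("classified as moving:", abs(v[-1]) > 5.0)    # assumed speed threshold
```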

  11. Rotational Study of Ambiguous Taxonomic Classified Asteroids

    Science.gov (United States)

    Linder, Tyler R.; Sanchez, Rick; Wuerker, Wolfgang; Clayson, Timothy; Giles, Tucker

    2017-01-01

    The Sloan Digital Sky Survey (SDSS) moving object catalog (MOC4) provided the largest ever catalog of asteroid spectrophotometry observations. Carvano et al. (2010), while analyzing MOC4, discovered that individual observations of asteroids which were observed multiple times did not classify into the same photometry-based taxonomic class. A small subset of those asteroids were classified as having both the presence and absence of a 1 μm silicate absorption feature. If these variations are linked to differences in surface mineralogy, the prevailing assumption that an asteroid’s surface composition is predominantly homogeneous would need to be reexamined. Furthermore, our understanding of the evolution of the asteroid belt, as well as the linkage between certain asteroids and meteorite types, may need to be modified. This research is an investigation to determine the rotational rates of these taxonomically ambiguous asteroids. Initial questions to be answered: Do these asteroids have unique or nonstandard rotational rates? Is there any evidence in their light curves to suggest an abnormality? Observations were taken using PROMPT6, a 0.41-m telescope that is part of the SKYNET network at Cerro Tololo Inter-American Observatory (CTIO). Observations were calibrated and analyzed using Canopus software. Initial results will be presented at AAS.

  12. Objectively classifying Southern Hemisphere extratropical cyclones

    Science.gov (United States)

    Catto, Jennifer

    2016-04-01

    There has been a long tradition in attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected), has been used to objectively classify identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000) and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  13. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets, and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
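
    Two of the ingredients named above, the in-frame stop-codon frequency and the purine bias across codon positions, can be computed with a short Python sketch; the example sequence is arbitrary, and the actual UFM scoring weights and decision rule are not reproduced here.

```python
# Hedged sketch: in-frame stop-codon frequency and per-position purine (A/G) frequencies.
STOPS = {"TAA", "TAG", "TGA"}
PURINES = set("AG")

def codon_stats(seq):
    seq = seq.upper()
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    stop_freq = sum(c in STOPS for c in codons) / len(codons)
    purine_by_pos = [
        sum(c[pos] in PURINES for c in codons) / len(codons) for pos in range(3)
    ]
    return stop_freq, purine_by_pos

stop_freq, purine_by_pos = codon_stats("ATGGCTGAAAGGCTTGGAGCTAAATGA")
print("in-frame stop codon frequency:", round(stop_freq, 3))
print("purine frequency by codon position:", [round(p, 3) for p in purine_by_pos])
```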

  14. Classifying anatomical subtypes of subjective memory impairment.

    Science.gov (United States)

    Jung, Na-Yeon; Seo, Sang Won; Yoo, Heejin; Yang, Jin-Ju; Park, Seongbeom; Kim, Yeo Jin; Lee, Juyoun; Lee, Jin San; Jang, Young Kyoung; Lee, Jong Min; Kim, Sung Tae; Kim, Seonwoo; Kim, Eun-Joo; Na, Duk L; Kim, Hee Jin

    2016-12-01

    We aimed to categorize subjective memory impairment (SMI) individuals based on their patterns of cortical thickness and to propose simple models that can classify each subtype. We recruited 613 SMI individuals and 613 age- and gender-matched normal controls. Using hierarchical agglomerative cluster analysis, SMI individuals were divided into 3 subtypes: temporal atrophy (12.9%), minimal atrophy (52.4%), and diffuse atrophy (34.6%). Individuals in the temporal atrophy (Alzheimer's disease-like atrophy) subtype were older, had more vascular risk factors, and scored the lowest on neuropsychological tests. Combination of these factors classified the temporal atrophy subtype with 73.2% accuracy. On the other hand, individuals with the minimal atrophy (non-neurodegenerative) subtype were younger, were more likely to be female, and had depression. Combination of these factors discriminated the minimal atrophy subtype with 76.0% accuracy. We suggest that SMI can be largely categorized into 3 anatomical subtypes that have distinct clinical features. Our models may help physicians decide next steps when encountering SMI patients and may also be used in clinical trials.

  15. Comparison of Current Frame-Based Phoneme Classifiers

    Directory of Open Access Journals (Sweden)

    Vaclav Pfeifer

    2011-01-01

    Full Text Available This paper compares today’s most common frame-based classifiers. These classifiers can be divided into two main groups: generative classifiers, which create the most probable model based on the training data (for example, GMM), and discriminative classifiers, which focus on creating a decision hyperplane. A lot of research has been done on GMM classifiers, and therefore this paper is mainly focused on discriminative frame-based classifiers. Two discriminative classifiers are presented. These classifiers implement a hierarchical tree structure over the input phoneme group, which is shown to be effective. Based on these classifiers, two efficient training algorithms are presented. We demonstrate the advantages of our training algorithms by evaluating all classifiers on the TIMIT speech corpus.

  16. Classifying prion and prion-like phenomena.

    Science.gov (United States)

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  17. Learning Vector Quantization for Classifying Astronomical Objects

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The sizes of astronomical surveys in different wavebands are increasing rapidly. Therefore, automatic classification of objects is becoming ever more important. We explore the performance of learning vector quantization (LVQ) in classifying multi-wavelength data. Our analysis concentrates on separating active sources from non-active ones. Different classes of X-ray emitters populate distinct regions of a multidimensional parameter space. In order to explore the distribution of various objects in a multidimensional parameter space, we positionally cross-correlate the data of quasars, BL Lacs, active galaxies, stars and normal galaxies in the optical, X-ray and infrared bands. We then apply LVQ to classify them with the obtained data. Our results show that LVQ is an effective method for separating AGNs from stars and normal galaxies with multi-wavelength data.
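
    For readers unfamiliar with LVQ, the LVQ1 update rule can be sketched on toy two-class data as follows; the prototype count, learning-rate schedule, and synthetic data are illustrative assumptions, not the survey setup used in the paper.

```python
# Hedged sketch of the LVQ1 update rule: the winning prototype is attracted to correctly
# labeled samples and repelled from incorrectly labeled ones.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])   # one prototype per class
proto_labels = np.array([0, 1])

lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - xi, axis=1))    # winning prototype
        sign = 1.0 if proto_labels[j] == yi else -1.0         # attract or repel
        protos[j] += sign * lr * (xi - protos[j])
    lr *= 0.95                                                # decay the learning rate

pred = proto_labels[np.argmin(np.linalg.norm(X[:, None] - protos, axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```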

  18. A cognitive approach to classifying perceived behaviors

    Science.gov (United States)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  19. Speech Emotion Recognition Using Fuzzy Logic Classifier

    Directory of Open Access Journals (Sweden)

    Daniar aghsanavard

    2016-01-01

    Full Text Available Over the last two decades, the recognition of emotions from speech and the associated signal processing have been among the most significant research issues, and various techniques have been adopted to detect emotions. Each method has advantages and disadvantages. This paper proposes fuzzy speech emotion recognition based on the classification of speech signals, aiming at better recognition together with higher speed. The system uses a five-layer fuzzy logic system that combines a progressive neural network with firefly algorithm optimization: speech samples are first fed to the input of the fuzzy system, and the signals are then analyzed and given a preliminary classification within a fuzzy framework. In this model, a signal pattern is created for each class of signals, which reduces the dimensionality of the signal data and makes speech recognition easier. The experimental results show that the proposed method (classification guided by the firefly algorithm) improves the recognition of utterances.

  20. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    Science.gov (United States)

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to choice of parameters thus translates to variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  1. Classifying antiarrhythmic actions: by facts or speculation.

    Science.gov (United States)

    Vaughan Williams, E M

    1992-11-01

    Classification of antiarrhythmic actions is reviewed in the context of the results of the Cardiac Arrhythmia Suppression Trials, CAST 1 and 2. Six criticisms of the classification recently published (The Sicilian Gambit) are discussed in detail. The alternative classification, when stripped of speculative elements, is shown to be similar to the original classification. Claims that the classification failed to predict the efficacy of antiarrhythmic drugs for the selection of appropriate therapy have been tested by an example. The antiarrhythmic actions of cibenzoline were classified in 1980. A detailed review of confirmatory experiments and clinical trials during the past decade shows that predictions made at the time agree with subsequent results. Classification of the effects drugs actually have on functioning cardiac tissues provides a rational basis for finding the preferred treatment for a particular arrhythmia in accordance with the diagnosis.

  2. A Spiking Neural Learning Classifier System

    CERN Document Server

    Howard, Gerard; Lanzi, Pier-Luca

    2012-01-01

    Learning Classifier Systems (LCS) are population-based reinforcement learners used in a wide variety of applications. This paper presents a LCS where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. We employ a constructivist model of growth of both neurons and dendrites that realise flexible learning by evolving structures of sufficient complexity to solve a well-known problem involving continuous, real-valued inputs. Additionally, we extend the system to enable temporal state decomposition. By allowing our LCS to chain together sequences of heterogeneous actions into macro-actions, it is shown to perform optimally in a problem where traditional methods can fail to find a solution in a reasonable amount of time. Our final system is tested on a simulated robotics platform.

  3. 78 FR 5116 - NASA Information Security Protection

    Science.gov (United States)

    2013-01-24

    ... SPACE ADMINISTRATION 14 CFR Part 1203 RIN 2700-AD61 NASA Information Security Protection AGENCY..., Classified National Security Information, and appropriately to correspond with NASA's internal requirements, NPR 1600.2, Classified National Security Information, that establishes the Agency's requirements...

  4. Classifying user states in next generation networks

    Science.gov (United States)

    He, Y.; Bilgic, A.

    2011-08-01

    In this paper we apply a classification method to learn geographic regions using Location Based Services (LBS) in the IP Multimedia Subsystem (IMS). We assume that the information in the Local Network (cellular network) can be freely exchanged with the Global IP Network (IMS) and that the information can be gathered in a database. LBS in the IMS also provide location information for the data sets. Statistical classification methods are applied to the data sets in the database. Depending on the information provided by the users, they are divided into different user groups (event classes) using Type Filters (TF). Then discriminant analysis is applied to the position information offered by LBS in IMS to determine the geographic regions of the different classes. The learned geographic regions can be used to inform the users in these or other regions over IMS. This kind of service can be used for any location-based events.

  5. Information

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    There are unstructured abstracts (no more than 256 words) and structured abstracts (no more than 480 words). The specific requirements for structured abstracts are as follows: An informative, structured abstract of no more than 480 words should accompany each manuscript. Abstracts for original contributions should be structured into the following sections. AIM (no more than 20 words): Only the purpose should be included. Please write the aim in the form "To investigate/study/..."; MATERIALS AND METHODS (no more than 140 words); RESULTS (no more than 294 words): You should present P values where appropriate and must provide relevant data to illustrate how they were obtained, e.g. 6.92 ± 3.86 vs 3.61 ± 1.67, P < 0.001; CONCLUSION (no more than 26 words).

  6. A HYBRID CLASSIFICATION ALGORITHM TO CLASSIFY ENGINEERING STUDENTS’ PROBLEMS AND PERKS

    Directory of Open Access Journals (Sweden)

    Mitali Desai

    2016-03-01

    Full Text Available Social networking sites have brought a new horizon for expressing the views and opinions of individuals. Moreover, they provide a medium for students to share their sentiments, including struggles and joy, during the learning process. Such informal information is a valuable resource for decision making. The large and growing scale of this information requires automatic classification techniques. Sentiment analysis is one of the automated techniques for classifying large data. Existing predictive sentiment analysis techniques are widely used to classify reviews on e-commerce sites to provide business intelligence. However, they are not very useful for drawing decisions in the education system, since they classify sentiments into merely three pre-set categories: positive, negative and neutral. Moreover, classifying students’ sentiments into positive or negative categories does not provide deeper insight into their problems and perks. In this paper, we propose a novel Hybrid Classification Algorithm to classify engineering students’ sentiments. Unlike traditional predictive sentiment analysis techniques, the proposed algorithm makes the sentiment analysis process descriptive. Moreover, it classifies engineering students’ perks, in addition to problems, into several categories to help future students and the education system in decision making.

  7. Classified Ads Harvesting Agent and Notification System

    CERN Document Server

    Doomun, Razvi; Nadeem, Auleear; Aukin, Mozafar

    2010-01-01

    The shift from an information society to a knowledge society requires rapid information harvesting, reliable search and instantaneous on-demand delivery. Information extraction agents are used to explore and collect data available on the Web, in order to effectively exploit such data for business purposes such as automatic news filtering, advertisement or product searching, and price comparison. In this paper, we develop a real-time automatic harvesting agent for adverts posted on the Servihoo web portal and an SMS-based notification system. It uses the URL of the web portal and the object model, i.e., the fields of interest and a set of rules written using HTML parsing functions, to extract the latest advert information. The extraction engine executes the extraction rules and stores the information in a database to be processed for automatic notification. This intelligent system saves a tremendous amount of time. It also enables users or potential product buyers to react more quickly to changes and newly posted sales...

  8. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

    Full Text Available Search engines are used for retrieving information from the web. Most of the time, the importance is laid on the top 10 results, which sometimes shrink to the top 5, because of time constraints and reliance on the search engines. Users believe that the top 10 or 5 of the total results are the most relevant. Here comes the problem of spamdexing: a method to deceive search result quality. Falsified metrics, such as inserting enormous numbers of keywords or links in a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on REPTree (a regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are used in the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which more precisely classifies the web spam entries. Results are given. Regression tree classification is shown through experiments to perform well.

  9. Colorization by classifying the prior knowledge

    Institute of Scientific and Technical Information of China (English)

    DU Weiwei

    2011-01-01

    When the one-dimensional luminance scalar of every pixel of a monochrome image is replaced by a multi-dimensional color vector, the process is called colorization. However, colorization is under-constrained; therefore, prior knowledge is considered and given to the monochrome image. Colorization using an optimization algorithm is an effective approach to this problem. However, it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed which can automatically generate the prior knowledge. The idea is that the prior knowledge is first crystallized into prior-knowledge points, which are automatically extracted by a downsampling and upsampling method, and these points are then classified and given corresponding colors. Lastly, the color image is obtained from the colored prior-knowledge points. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the requirements of the user.

  10. Fault diagnosis with the Aladdin transient classifier

    Science.gov (United States)

    Roverso, Davide

    2003-08-01

    The purpose of Aladdin is to assist plant operators in the early detection and diagnosis of faults and anomalies in the plant that either have an impact on the plant performance, or that could lead to a plant shutdown or component damage if allowed to go unnoticed. The kind of early fault detection and diagnosis performed by Aladdin is aimed at allowing more time for decision making, increasing the operator awareness, reducing component damage, and supporting improved plant availability and reliability. In this paper we describe in broad lines the Aladdin transient classifier, which combines techniques such as recurrent neural network ensembles, Wavelet On-Line Pre-processing (WOLP), and Autonomous Recursive Task Decomposition (ARTD), in an attempt to improve the practical applicability and scalability of this type of systems to real processes and machinery. The paper focuses then on describing an application of Aladdin to a Nuclear Power Plant (NPP) through the use of the HAMBO experimental simulator of the Forsmark 3 boiling water reactor NPP in Sweden. It should be pointed out that Aladdin is not necessarily restricted to applications in NPPs. Other types of power plants, or even other types of processes, can also benefit from the diagnostic capabilities of Aladdin.

  11. Optimized Radial Basis Function Classifier for Multi Modal Biometrics

    Directory of Open Access Journals (Sweden)

    Anand Viswanathan

    2014-07-01

    Full Text Available Biometric systems can be used for the identification or verification of humans based on their physiological or behavioral features. In these systems, biometric characteristics such as fingerprints, palm prints, iris patterns or speech can be recorded and compared with stored samples for identification or verification. Multimodal biometrics is more accurate and more resistant to spoofing attacks than single-modal biometric systems. In this study, a multimodal biometric system using fingerprint images and finger-vein patterns is proposed, along with an optimized Radial Basis Function (RBF) kernel classifier to identify authorized users. The features extracted from these modalities are selected by PCA and kernel PCA and combined for classification by the RBF classifier. The parameters of the RBF classifier are optimized using the BAT algorithm with local search. The performance of the proposed classifier is compared with the KNN classifier, naïve Bayesian classifier and non-optimized RBF classifier.

  12. MISR Level 2 TOA/Cloud Classifier parameters V003

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Regional Cloud Classifiers, Cloud Shadow Mask, and...

  13. Predict or classify: The deceptive role of time-locking in brain signal classification

    CERN Document Server

    Rusconi, Marco

    2016-01-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predicting the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate...

  14. Feature Fusion Based SVM Classifier for Protein Subcellular Localization Prediction.

    Science.gov (United States)

    Rahman, Julia; Mondal, Md Nazrul Islam; Islam, Md Khaled Ben; Hasan, Md Al Mehedi

    2016-12-18

    Because of the importance of protein subcellular localization in different branches of life science and drug discovery, researchers have focused their attention on protein subcellular localization prediction. Effective representation of features from protein sequences plays a vital role in protein subcellular localization prediction, especially for machine learning techniques. Single feature representations like pseudo amino acid composition (PseAAC), physicochemical property models (PPM), and amino acid index distribution (AAID) contain insufficient information from protein sequences. To deal with this problem, we have proposed two fused feature representations, AAIDPAAC and PPMPAAC, for use with Support Vector Machine classifiers, which fuse PseAAC with AAID and with PPM, respectively. We have evaluated the performance of both single and fused feature representations on a Gram-negative bacterial dataset. We obtained at least 3% higher actual accuracy with AAIDPAAC and 2% higher locative accuracy with PPMPAAC than with single feature representations.
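
    The fusion step itself is simply feature concatenation before the SVM; the following Python sketch uses synthetic stand-ins for the PseAAC and AAID blocks and a default scikit-learn SVM, so the numbers it prints say nothing about the paper's reported accuracies.

```python
# Hedged sketch of feature fusion (concatenation) feeding an SVM classifier.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 2, size=n)                          # toy localization labels
pseaac = rng.normal(size=(n, 40)) + y[:, None] * 0.3    # stand-in for PseAAC features
aaid = rng.normal(size=(n, 20)) + y[:, None] * 0.2      # stand-in for AAID features

fused = np.hstack([pseaac, aaid])                       # AAIDPAAC-style fused representation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, feats in [("PseAAC only", pseaac), ("fused", fused)]:
    print(name, cross_val_score(clf, feats, y, cv=5).mean().round(3))
```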

  15. Performance evaluation of artificial intelligence classifiers for the medical domain.

    Science.gov (United States)

    Smith, A E; Nugent, C D; McClean, S I

    2002-01-01

    The application of artificial intelligence systems is still not widespread in the medical field; however, there is an increasing necessity for such systems to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for the evaluation of these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper shows evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously regarding classification precision. A taxonomy of the types of evaluation tests that can be carried out to gauge the inherent performance of the outputs of intelligent systems has been assembled, and the results are presented in a clear and concise form, which should be applicable to all intelligent classifiers for medicine.

  16. Performance Evaluation of Bagged RBF Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2013-11-01

    Full Text Available Data mining is the use of algorithms to extract information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and the benefits of the proposed approaches are demonstrated by means of data mining applications such as intrusion detection, direct marketing, and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets for intrusion detection, direct marketing, and signature verification, in conjunction with a radial basis function classifier as the base learner. The proposed bagged radial basis function classifier is superior to the individual approach for data mining applications in terms of classification accuracy.
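
    Bagging with a radial basis function base learner can be sketched with scikit-learn; since scikit-learn ships no RBF network classifier, an RBF-kernel SVM stands in for the paper's base model, and the synthetic dataset and ensemble size are illustrative assumptions.

```python
# Hedged sketch of bagging an RBF-kernel base learner and comparing it to a single learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

base = SVC(kernel="rbf", gamma="scale")
bagged = BaggingClassifier(base, n_estimators=25, max_samples=0.8, random_state=0)

print("single RBF learner :", cross_val_score(base, X, y, cv=5).mean().round(3))
print("bagged RBF learners:", cross_val_score(bagged, X, y, cv=5).mean().round(3))
```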

  17. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova

    2012-01-01

    Full Text Available The paper deals with the main trends of higher school management strategy formation. The author specifies the educational changes in the modern information society that determine the strategy options. For each professional training level, the author denotes the set of strategic factors affecting the consumers of educational services and, therefore, the effectiveness of higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational services (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and general methods of scientific analysis, synthesis, induction, deduction, comparison, and classification. The author is convinced that the university management should develop the necessary prerequisites for raising the graduates’ competitiveness in the labor market, and stimulate active marketing policies of the relevant subdivisions and departments. In the author’s opinion, the above classification of marketing strategy factors can be used as a system of values for educational service providers.

  18. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    2016-10-18

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  19. Application of Multidimensional Chain classifiers to Eddy Current Images for Defect Characterization

    Directory of Open Access Journals (Sweden)

    S. Shuaib Ahmed

    2012-12-01

    Full Text Available The multidimensional learning problem deals with learning a function that maps a vector of input features to a vector of class labels. Dependency between the classes is not taken into account when independent classifiers are constructed for each component class of the vector. To counteract this limitation, a Chain Classifier (CC) approach for multidimensional learning is proposed in this study. In this approach, the information on class dependency is passed along a chain. Radial Basis Functions (RBF) and Support Vector Machines (SVM) are used as the core of the CC. Studies on a multidimensional dataset of images obtained from simulated eddy current non-destructive evaluation of a stainless steel plate with sub-surface defects clearly indicate that the performance of the chain classifier is superior to that of independent classifiers.
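
    The chaining idea, in which each label's prediction augments the features of the next model, can be sketched with scikit-learn's ClassifierChain; the multilabel toy data and the SVM base model are assumptions standing in for the eddy current defect labels and the paper's RBF/SVM cores.

```python
# Hedged sketch of a classifier chain over a toy multidimensional (multilabel) problem.
from sklearn.datasets import make_multilabel_classification
from sklearn.multioutput import ClassifierChain
from sklearn.svm import SVC

X, Y = make_multilabel_classification(n_samples=300, n_features=10,
                                      n_classes=3, random_state=0)

# Each label is predicted in order [0, 1, 2]; earlier predictions are appended as features.
chain = ClassifierChain(SVC(kernel="rbf", probability=True),
                        order=[0, 1, 2], random_state=0)
chain.fit(X, Y)
print("per-label training accuracy:", (chain.predict(X) == Y).mean(axis=0))
```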

  20. Development of a combined GIS, neural network and Bayesian classifier methodology for classifying remotely sensed data

    Science.gov (United States)

    Schneider, Claudio Albert

    This research is aimed at the solution of two common but still largely unsolved problems in the classification of remotely sensed data: (1) Classification accuracy of remotely sensed data decreases significantly in mountainous terrain, where topography strongly influences the spectral response of the features on the ground; and (2) when attempting to obtain more detailed classifications, e.g. forest cover types or species, rather than just broad categories of forest such as coniferous or deciduous, the accuracy of the classification generally decreases significantly. The main objective of the study was to develop a widely applicable and efficient classification procedure for mapping forest and other cover types in mountainous terrain, using an integrated GIS/neural network/Bayesian classification approach. The performance of this new technique was compared to a standard supervised Maximum Likelihood classification technique, a "conventional" Bayesian/Maximum Likelihood classification, and to a "conventional" neural network classifier. Results indicate a considerable improvement of the new technique over the standard Maximum Likelihood classification technique, as well as a better accuracy than the "conventional" Bayesian/Maximum Likelihood classifier (13.08 percent improvement in overall accuracy), but the "conventional" neural network classifiers outperformed all the techniques compared in this study, with an overall accuracy improvement of 15.94 percent as compared to the standard Maximum Likelihood classifier (from 46.77 percent to 62.71 percent). However, the overall accuracies of all the classification techniques compared in this study were relatively low. It is believed that this was caused by problems related to the inadequacy of the reference data. On the other hand, the results also indicate the need to develop a different sampling design to more effectively cover the variability across all the parameters needed by the neural network classification technique

  1. Counting, Measuring And The Semantics Of Classifiers

    Directory of Open Access Journals (Sweden)

    Susan Rothstein

    2010-12-01

    Full Text Available This paper makes two central claims. The first is that there is an intimate and non-trivial relation between the mass/count distinction on the one hand and the measure/individuation distinction on the other: a (if not the) defining property of mass nouns is that they denote sets of entities which can be measured, while count nouns denote sets of entities which can be counted. Crucially, this is a difference in grammatical perspective and not in ontological status. The second claim is that the mass/count distinction between two types of nominals has its direct correlate at the level of classifier phrases: classifier phrases like two bottles of wine are ambiguous between a counting, or individuating, reading and a measure reading. On the counting reading, this phrase has count semantics; on the measure reading it has mass semantics.

  2. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available Abstract This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
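
    The two information sources the framework combines, Gabor magnitude and Gabor phase, can both be obtained from the complex Gabor filter response; the sketch below uses scikit-image's built-in test image, a small filter bank, and crude pooled statistics purely as an assumed illustration of that extraction step.

```python
# Hedged sketch: extract Gabor magnitude and phase responses from an image.
import numpy as np
from skimage import data
from skimage.filters import gabor

image = data.camera().astype(float)          # stand-in image; a face crop in practice

features = []
for frequency in (0.1, 0.2):
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)     # Gabor magnitude response
        phase = np.arctan2(imag, real)       # Gabor phase response
        features.extend([magnitude.mean(), magnitude.std(), phase.std()])

print("pooled Gabor feature vector length:", len(features))
```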

  3. Classifying controllers by activities : An exploratory study

    NARCIS (Netherlands)

    Verstegen, B.; De Loo, I.G.M.; Mol, P.; Slagter, K.; Geerkens, H.

    2007-01-01

    The goal of this paper is to discern variables (triggers) that affect a controller’s role in an organisation. Using survey data, groups of controllers are distinguished based on coherent combinations of activities. We find that controllers either operate as so-called ‘information adapters’ or ‘watch

  4. A new approach to classifier fusion based on upper integral.

    Science.gov (United States)

    Wang, Xi-Zhao; Wang, Ran; Feng, Hui-Min; Wang, Hua-Chao

    2014-05-01

    Fusing a number of classifiers can generally improve the performance of the individual classifiers, and the fuzzy integral, which can clearly express the interaction among the individual classifiers, has been acknowledged as an effective fusion tool. In order to make the best use of the individual classifiers and their combinations, we propose in this paper a new scheme of classifier fusion based on upper integrals, which differs from all existing models. Instead of acting as a fusion operator, the upper integral is used to reasonably allocate finite resources and thus to maximize classification efficiency. By solving an optimization problem over upper integrals, we obtain a scheme for assigning proportions of examples to different individual classifiers and their combinations. According to these proportions, new examples are classified by different individual classifiers and their combinations, and the combination of classifiers to which specific examples should be submitted depends on their performance. The definition of the upper integral theoretically guarantees that the classification efficiency of the fused classifier is not less than that of any individual classifier. Furthermore, numerical simulations demonstrate that most existing fusion methodologies, such as bagging and boosting, can be improved by our upper integral model.

  5. Phenotype Recognition with Combined Features and Random Subspace Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Pham Tuan D

    2011-04-01

    Full Text Available Abstract Background Automated, image based high-content screening is a fundamental tool for discovery in biological science. Modern robotic fluorescence microscopes are able to capture thousands of images from massively parallel experiments such as RNA interference (RNAi) or small-molecule screens. As such, efficient computational methods are required for automatic cellular phenotype identification capable of dealing with large image data sets. In this paper we investigated an efficient method for the extraction of quantitative features from images by combining second order statistics, or Haralick features, with the curvelet transform. A random subspace based classifier ensemble with a multi-layer perceptron (MLP) as the base classifier was then exploited for classification. Haralick features estimate image properties related to second-order statistics based on the grey level co-occurrence matrix (GLCM), which has been extensively used for various image processing applications. The curvelet transform has a sparser representation of the image than the wavelet transform, thus offering a description with higher time-frequency resolution and a high degree of directionality and anisotropy, which is particularly appropriate for many images rich in edges and curves. A combined feature description from Haralick features and the curvelet transform can further increase the accuracy of classification by exploiting their complementary information. We then investigate the applicability of the random subspace (RS) ensemble method for phenotype classification based on microscopy images. A base classifier is trained with an RS-sampled subset of the original feature set and the ensemble assigns a class label by majority voting. Results Experimental results on phenotype recognition from three benchmark image sets including HeLa, CHO and RNAi show the effectiveness of the proposed approach. The combined feature is better than any individual one in classification accuracy.
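
    One of the building blocks above, Haralick-style texture features from a grey level co-occurrence matrix, can be sketched with scikit-image (version 0.19 or later spelling shown); the random stand-in image, quantization to 32 levels, and the chosen offsets and properties are assumptions for illustration only.

```python
# Hedged sketch: GLCM-based Haralick-style texture features for one image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # stand-in cell image patch

glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)

features = np.concatenate([
    graycoprops(glcm, prop).ravel()
    for prop in ("contrast", "correlation", "energy", "homogeneity")
])
print("texture feature vector length:", features.size)   # 4 properties x 2 distances x 2 angles
```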

  6. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    Moslem Mohammadi Jenghara

    2015-03-01

    Full Text Available In value estimation, the average of inexperienced people's estimates is a good approximation to the true value, provided that the answers of these individuals are independent. Classifier ensembles are the implementation of this principle in classification tasks and are investigated in two respects. In the first, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel and made equally experienced in some way to achieve a group consensus. In this paper, a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifiers' outputs, a symmetric pairwise weighted feature space is used, and the outputs of the classifiers trained over the weighted feature spaces are combined to infer the final result. In this paper, MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  7. Intelligent query by humming system based on score level fusion of multiple classifiers

    Science.gov (United States)

    Pyo Nam, Gi; Thu Trang Luong, Thi; Ha Nam, Hyun; Ryoung Park, Kang; Park, Sung-Joo

    2011-12-01

    Recently, the necessity for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on the score level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of single classifier and other fusion methods.
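
    Score-level fusion by the PRODUCT rule amounts to normalizing each classifier's matching scores and multiplying them per candidate; the candidate songs and scores in the sketch below are invented for illustration and do not come from the MIREX/MIR-QBSH experiments.

```python
# Hedged sketch of PRODUCT-rule score-level fusion over toy per-classifier matching scores.
import numpy as np

def minmax(scores):
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

candidates = ["song A", "song B", "song C", "song D"]
scores_qb_ls  = [0.72, 0.55, 0.91, 0.40]   # quantized-binary code based linear scaling
scores_dtw    = [0.65, 0.60, 0.88, 0.35]   # pitch-based dynamic time warping
scores_global = [0.70, 0.52, 0.80, 0.45]   # global pitch-distribution classifier

fused = minmax(scores_qb_ls) * minmax(scores_dtw) * minmax(scores_global)
ranking = [candidates[i] for i in np.argsort(fused)[::-1]]
print("fused ranking:", ranking)
```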

  8. Intelligent query by humming system based on score level fusion of multiple classifiers

    Directory of Open Access Journals (Sweden)

    Park Sung-Joo

    2011-01-01

    Full Text Available Abstract Recently, the necessity for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on score level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of single classifiers and other fusion methods.

  9. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers

    Directory of Open Access Journals (Sweden)

    Laurel Ballanti

    2016-05-01

    Full Text Available The identification of tree species can provide a useful and efficient tool for forest managers for planning and monitoring purposes. Hyperspectral data provide sufficient spectral information to classify individual tree species. Two non-parametric classifiers, support vector machines (SVM and random forest (RF, have resulted in high accuracies in previous classification studies. This research takes a comparative classification approach to examine the SVM and RF classifiers in the complex and heterogeneous forests of Muir Woods National Monument and Kent Creek Canyon in Marin County, California. The influence of object- or pixel-based training samples and segmentation size on the object-oriented classification is also explored. To reduce the data dimensionality, a minimum noise fraction transform was applied to the mosaicked hyperspectral image, resulting in the selection of 27 bands for the final classification. Each classifier was also assessed individually to identify any advantage related to an increase in training sample size or an increase in object segmentation size. All classifications resulted in overall accuracies above 90%. No difference was found between classifiers when using object-based training samples. SVM outperformed RF when additional training samples were used. An increase in training samples was also found to improve the individual performance of the SVM classifier.

  10. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images in urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to poor classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatial-related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results show a significant increase in classification accuracy, especially for urban areas.
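
    The pyramid step of the MCNN idea can be illustrated with a short sketch. The snippet below only builds the multiscale input structure by block averaging (the convolutional branches that would consume each level are omitted), and the patch size and band count are hypothetical.

        import numpy as np

        def image_pyramid(img, n_levels=3):
            """Build a simple multiscale pyramid by 2x2 block averaging.

            img: (H, W, bands) array; H and W are assumed divisible by 2**(n_levels - 1).
            In the MCNN idea each level would feed a convolutional branch so that spatial
            context is captured at several scales (the CNN itself is omitted here).
            """
            levels = [img.astype(float)]
            for _ in range(n_levels - 1):
                prev = levels[-1]
                h, w, b = prev.shape
                pooled = prev.reshape(h // 2, 2, w // 2, 2, b).mean(axis=(1, 3))
                levels.append(pooled)
            return levels

        patch = np.random.rand(32, 32, 103)   # hypothetical 103-band hyperspectral patch
        for lvl, p in enumerate(image_pyramid(patch)):
            print(lvl, p.shape)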

  11. Proposing an adaptive mutation to improve XCSF performance to classify ADHD and BMD patients

    Science.gov (United States)

    Sadatnezhad, Khadijeh; Boostani, Reza; Ghanizadeh, Ahmad

    2010-12-01

    There is extensive overlap of clinical symptoms observed among children with bipolar mood disorder (BMD) and those with attention deficit hyperactivity disorder (ADHD). Thus, diagnosis according to clinical symptoms cannot be very accurate. It is therefore desirable to develop quantitative criteria for automatic discrimination between these disorders. This study is aimed at designing an efficient decision maker to accurately classify ADHD and BMD patients by analyzing their electroencephalogram (EEG) signals. In this study, 22 channels of EEGs have been recorded from 21 subjects with ADHD and 22 individuals with BMD. Several informative features, such as fractal dimension, band power and autoregressive coefficients, were extracted from the recorded signals. Considering the multimodal overlapping distribution of the obtained features, linear discriminant analysis (LDA) was used to reduce the input dimension in a more separable space to make it more appropriate for the proposed classifier. A piecewise linear classifier based on the extended classifier system for function approximation (XCSF) was modified by developing an adaptive mutation rate, which was proportional to the genotypic content of best individuals and their fitness in each generation. The proposed operator controlled the trade-off between exploration and exploitation while maintaining the diversity in the classifier's population to avoid premature convergence. To assess the effectiveness of the proposed scheme, the extracted features were applied to support vector machine, LDA, nearest neighbor and XCSF classifiers. To evaluate the method, a noisy environment was simulated with different noise amplitudes. It is shown that the results of the proposed technique are more robust as compared to conventional classifiers. Statistical tests demonstrate that the proposed classifier is a promising method for discriminating between ADHD and BMD patients.

  12. Bayesian classifiers applied to the Tennessee Eastman process.

    Science.gov (United States)

    Dos Santos, Edimilson Batista; Ebecken, Nelson F F; Hruschka, Estevam R; Elkamel, Ali; Madhuranthakam, Chandra M R

    2014-03-01

    Fault diagnosis includes the main task of classification. Bayesian networks (BNs) present several advantages in the classification task, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, the use of a Bayesian method called dynamic Markov blanket classifier that has as its main goal the induction of accurate Bayesian classifiers having dependable probability estimates and revealing actual relationships among the most relevant variables. In addition, a new method, named variable ordering multiple offspring sampling capable of inducing a BN to be used as a classifier, is presented. The performance of these methods is assessed on the data of a benchmark problem known as the Tennessee Eastman process. The obtained results are compared with naive Bayes and tree augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracies as well as knowledge about relevant variables.

  13. Stochastic margin-based structure learning of Bayesian network classifiers.

    Science.gov (United States)

    Pernkopf, Franz; Wohlmayr, Michael

    2013-02-01

    The margin criterion for parameter learning in graphical models has gained significant impact in recent years. We use the maximum margin score for discriminatively optimizing the structure of Bayesian network classifiers. Furthermore, greedy hill-climbing and simulated annealing search heuristics are applied to determine the classifier structures. In the experiments, we demonstrate the advantages of maximum margin optimized Bayesian network structures in terms of classification performance compared to traditionally used discriminative structure learning methods. Stochastic simulated annealing requires fewer score evaluations than greedy heuristics. Additionally, we compare generative and discriminative parameter learning on both generatively and discriminatively structured Bayesian network classifiers. Margin-optimized Bayesian network classifiers achieve classification performance similar to support vector machines. Moreover, missing feature values during classification can be handled by discriminatively optimized Bayesian network classifiers, a case where purely discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  14. OMC automatic variable star classification

    Science.gov (United States)

    López del Fresno, M.; Sarro Baro, L. M.; Solano Márquez, E.

    2013-05-01

    The Optical Monitoring Camera (OMC), on board the ESA mission INTEGRAL, has stored more than 190,000 light curves over almost 10 years. Among the targets included in its input catalogue there is a significant number of variable stars. In many cases, OMC has gathered photometric information of sufficient quality to enable a stellar variability analysis of those stars. In this contribution we show the full pipeline of our classification system, from the period calculation to the final membership classification. We also describe relevant aspects of the system, such as the parameters used for filtering good-quality light curves and specific optimizations applied to the classification algorithms, and give an overview of the results obtained.

  15. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    OpenAIRE

    2005-01-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on...

  16. Construction of unsupervised sentiment classifier on idioms resources

    Institute of Scientific and Technical Information of China (English)

    谢松县; 王挺

    2014-01-01

    Sentiment analysis is the computational study of how opinions, attitudes, emotions, and perspectives are expressed in language, and has become an important task in natural language processing. Sentiment analysis is highly valuable for both research and practical applications. This work focuses on the difficulty that sentiment classifiers normally require large amounts of labeled in-domain training data, and proposes a novel unsupervised framework that makes use of Chinese idiom resources to develop a general sentiment classifier. Furthermore, the domain adaptation of the general sentiment classifier is improved by taking the general classifier as the base of a self-training procedure to obtain a domain self-training sentiment classifier. To validate the effect of the unsupervised framework, several experiments were carried out on a publicly available dataset of Chinese online reviews. The experiments show that the proposed framework is effective and achieves encouraging results. Specifically, the general classifier outperforms two baselines (a naïve 50% baseline and a cross-domain classifier), and the bootstrapping self-training classifier approaches the upper-bound domain-specific classifier, with a lowest accuracy of 81.5%, while its performance is more stable and the framework needs no labeled training dataset.
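
    The self-training stage can be sketched as a simple bootstrapping loop. In the sketch below, a TF-IDF logistic-regression model stands in for the idiom-based general classifier, and the confidence threshold and toy reviews are assumptions.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        def self_train(texts, labels, unlabeled, rounds=3, threshold=0.9):
            """Bootstrapping self-training: the general classifier labels in-domain reviews
            and only its confident predictions are added back to the training set."""
            texts, labels, pool = list(texts), list(labels), list(unlabeled)
            for _ in range(rounds):
                vec = TfidfVectorizer()
                clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)
                if not pool:
                    break
                proba = clf.predict_proba(vec.transform(pool))
                confident = np.where(proba.max(axis=1) >= threshold)[0]
                if len(confident) == 0:
                    break
                for i in confident:
                    texts.append(pool[i])
                    labels.append(clf.classes_[proba[i].argmax()])  # pseudo-label
                pool = [t for j, t in enumerate(pool) if j not in set(confident)]
            return clf, vec

        # Toy seed data; a real run would start from the idiom-labelled general classifier
        clf, vec = self_train(["great phone", "love it", "terrible quality", "waste of money"],
                              [1, 1, 0, 0],
                              ["really love this phone", "broken on arrival, waste of money"])
        print(clf.predict(vec.transform(["love the screen"])))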

  17. Multimodal fusion of polynomial classifiers for automatic person recognition

    Science.gov (United States)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we have demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as to improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late
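
    The polynomial-classifier idea for the visual modality can be approximated with a polynomial feature expansion followed by a linear model trained in closed form (no iterative training). The sketch below uses synthetic data in place of the geometric lip features, and the degree and model choice are assumptions.

        from sklearn.datasets import make_classification
        from sklearn.linear_model import RidgeClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler

        # Synthetic stand-in for geometric lip features and a handful of enrolled persons
        X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                                   n_classes=4, random_state=0)

        # Second-degree polynomial expansion + ridge classifier, solved in closed form
        poly_clf = make_pipeline(StandardScaler(), PolynomialFeatures(degree=2), RidgeClassifier())
        print("mean CV accuracy:", cross_val_score(poly_clf, X, y, cv=5).mean())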

  18. Mesh Learning for Classifying Cognitive Processes

    CERN Document Server

    Ozay, Mete; Öztekin, Uygar; Vural, Fatos T Yarman

    2012-01-01

    The major goal of this study is to model the encoding and retrieval operations of the brain during memory processing, using statistical learning tools. The suggested method assumes that the memory encoding and retrieval processes can be represented by a supervised learning system, which is trained by the brain data collected from the functional Magnetic Resonance (fMRI) measurements, during the encoding stage. Then, the system outputs the same class labels as that of the fMRI data collected during the retrieval stage. The most challenging problem of modeling such a learning system is the design of the interactions among the voxels to extract the information about the underlying patterns of brain activity. In this study, we suggest a new method called Mesh Learning, which represents each voxel by a mesh of voxels in a neighborhood system. The nodes of the mesh are a set of neighboring voxels, whereas the arc weights are estimated by a linear regression model. The estimated arc weights are used to form Local Re...

  19. Multi-input distributed classifiers for synthetic genetic circuits.

    Directory of Open Access Journals (Sweden)

    Oleg Kanakov

    Full Text Available For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using soft learning strategy, characterized by probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design contributes to the development of robust and predictable synthetic biosensors, which have the potential to affect applications in a lot of fields, including that of

  20. Multi-input distributed classifiers for synthetic genetic circuits.

    Science.gov (United States)

    Kanakov, Oleg; Kotelnikov, Roman; Alsaedi, Ahmed; Tsimring, Lev; Huerta, Ramón; Zaikin, Alexey; Ivanchenko, Mikhail

    2015-01-01

    For practical construction of complex synthetic genetic networks able to perform elaborate functions it is important to have a pool of relatively simple modules with different functionality which can be compounded together. To complement engineering of very different existing synthetic genetic devices such as switches, oscillators or logical gates, we propose and develop here a design of synthetic multi-input classifier based on a recently introduced distributed classifier concept. A heterogeneous population of cells acts as a single classifier, whose output is obtained by summarizing the outputs of individual cells. The learning ability is achieved by pruning the population, instead of tuning parameters of an individual cell. The present paper is focused on evaluating two possible schemes of multi-input gene classifier circuits. We demonstrate their suitability for implementing a multi-input distributed classifier capable of separating data which are inseparable for single-input classifiers, and characterize performance of the classifiers by analytical and numerical results. The simpler scheme implements a linear classifier in a single cell and is targeted at separable classification problems with simple class borders. A hard learning strategy is used to train a distributed classifier by removing from the population any cell answering incorrectly to at least one training example. The other scheme implements a circuit with a bell-shaped response in a single cell to allow potentially arbitrary shape of the classification border in the input space of a distributed classifier. Inseparable classification problems are addressed using soft learning strategy, characterized by probabilistic decision to keep or discard a cell at each training iteration. We expect that our classifier design contributes to the development of robust and predictable synthetic biosensors, which have the potential to affect applications in a lot of fields, including that of medicine and industry.
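
    The hard learning strategy described above can be sketched as a simple pruning loop over a population of single-cell responses. The linear threshold cells, their random parameters, and the voting threshold below are hypothetical stand-ins for the genetic circuits, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(0)

        def hard_learning(population, training_inputs, training_labels):
            """Hard learning: drop every cell whose output disagrees with at least one
            training example; the surviving cells form the distributed classifier."""
            kept = []
            for cell in population:
                if all(cell(x) == y for x, y in zip(training_inputs, training_labels)):
                    kept.append(cell)
            return kept

        def classify(population, x, threshold=0.5):
            """Population output = fraction of cells answering 1 (summed cell outputs)."""
            votes = np.mean([cell(x) for cell in population])
            return int(votes >= threshold)

        # Hypothetical linear single-cell classifiers with randomly distributed parameters
        population = [(lambda x, w=rng.normal(size=2), b=rng.normal(): int(x @ w > b))
                      for _ in range(200)]
        X_train = [np.array([0.2, 0.1]), np.array([0.9, 0.8])]
        y_train = [0, 1]
        survivors = hard_learning(population, X_train, y_train)
        print(len(survivors), classify(survivors, np.array([0.85, 0.9])))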

  1. Quantum classifying spaces and universal quantum characteristic classes

    CERN Document Server

    Durdevic, M

    1996-01-01

    A construction of the noncommutative-geometric counterparts of classical classifying spaces is presented, for general compact matrix quantum structure groups. A quantum analogue of the classical concept of the classifying map is introduced and analyzed. Interrelations with the abstract algebraic theory of quantum characteristic classes are discussed. Various non-equivalent approaches to defining universal characteristic classes are outlined.

  2. Classifying queries submitted to a vertical search engine

    NARCIS (Netherlands)

    Berendsen, R.; Kovachev, B.; Meij, E.; de Rijke, M.; Weerkamp, W.

    2011-01-01

    We propose and motivate a scheme for classifying queries submitted to a people search engine. We specify a number of features for automatically classifying people queries into the proposed classes and examine the effectiveness of these features. Our main finding is that classification is feasible and that

  3. 16 CFR 1610.4 - Requirements for classifying textiles.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Requirements for classifying textiles. 1610... REGULATIONS STANDARD FOR THE FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.4 Requirements for classifying textiles. (a) Class 1, Normal Flammability. Class 1 textiles exhibit normal flammability and...

  4. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  5. 25 CFR 304.3 - Classifying and marking of silver.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Classifying and marking of silver. 304.3 Section 304.3 Indians INDIAN ARTS AND CRAFTS BOARD, DEPARTMENT OF THE INTERIOR NAVAJO, PUEBLO, AND HOPI SILVER, USE OF GOVERNMENT MARK § 304.3 Classifying and marking of silver. For the present the Indian Arts and Crafts...

  6. Classifying and comparing fundraising performance for nonprofit hospitals.

    Science.gov (United States)

    Erwin, Cathleen O

    2013-01-01

    Charitable contributions are becoming increasingly important to nonprofit hospitals, yet fundraising can sometimes be one of the more troublesome aspects of management for nonprofit organizations. This study utilizes an organizational effectiveness and performance framework to identify groups of nonprofit organizations as a method of classifying organizations for performance evaluation and benchmarking that may be more informative than commonly used characteristics such as organizational age and size. Cluster analysis, ANOVA and chi-square analysis are used to study 401 organizations, which includes hospital foundations as well as nonprofit hospitals directly engaged in fundraising. Three distinct clusters of organizations are identified based on performance measures of productivity, efficiency, and complexity. A general profile is developed for each cluster based upon the cluster analysis variables and subsequent analysis of variance on measures of structure, maturity, and legitimacy as well as selected institutional characteristics. This is one of only a few studies to examine fundraising performance in hospitals and hospital foundations, and is the first to utilize data from an industry survey conducted by the leading general professional association for healthcare philanthropy. It has methodological implications for the study of fundraising as well as practical implications for the strategic management of fundraising for nonprofit hospital and hospital foundations.

  7. Analysis of Sequence Based Classifier Prediction for HIV Subtypes

    Directory of Open Access Journals (Sweden)

    S. Santhosh Kumar

    2012-10-01

    Full Text Available Human immunodeficiency virus (HIV) is a lentivirus that causes acquired immunodeficiency syndrome (AIDS). The main drawback in the HIV treatment process is subtype prediction. The subtype and group classification of HIV is based on its genetic variability and location. HIV can be divided into two major types, HIV type 1 (HIV-1) and HIV type 2 (HIV-2). Many classifier approaches have been used to classify HIV subtypes based on their group, but in some cases two groups occur in one, and the classification then becomes more complex. The methodology used in this paper is based on the HIV sequences. In this work several classifier approaches are used to classify HIV-1 and HIV-2. For the implementation of the work, a real-time patient database is used; the patient records are experimented on, and the best classifier, with the quickest response time and lowest error rate, is identified.

  8. 76 FR 40296 - Declassification of National Security Information

    Science.gov (United States)

    2011-07-08

    ... RECORDS ADMINISTRATION 36 CFR Part 1260 RIN 3095-AB64 Declassification of National Security Information... would update NARA's regulations related to declassification of classified national security information... of Executive Order 13526, Classified National Security Information, and its Implementing...

  9. Improving Accelerometer-Based Activity Recognition by Using Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Tahani Daghistani

    2016-05-01

    Full Text Available In line with the increasing use of sensors and health applications, there are huge efforts on processing the collected data, such as accelerometer data, to extract valuable information. This study proposes an activity recognition model that aims to detect activities by employing ensemble-of-classifiers techniques on the Wireless Sensor Data Mining (WISDM) dataset. The model recognizes six activities, namely walking, jogging, upstairs, downstairs, sitting, and standing. Many experiments were conducted to determine the best classifier combination for activity recognition. An improvement is observed in the performance when the classifiers are combined rather than used individually. An ensemble model is built using AdaBoost in combination with the decision tree algorithm C4.5. The model effectively enhances the performance with an accuracy level of 94.04%.
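
    A minimal sketch of the AdaBoost-plus-decision-tree ensemble on synthetic data standing in for the WISDM accelerometer features; scikit-learn provides no C4.5, so a depth-limited CART tree is used as the base learner, and all parameters are assumptions.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for accelerometer features with six activity classes
        X, y = make_classification(n_samples=1200, n_features=40, n_informative=20,
                                   n_classes=6, random_state=0)

        # First positional argument is the base learner (named `estimator` in
        # scikit-learn >= 1.2, `base_estimator` in older versions)
        base = DecisionTreeClassifier(max_depth=3, random_state=0)
        model = AdaBoostClassifier(base, n_estimators=100, random_state=0)
        print("mean CV accuracy: %.3f" % cross_val_score(model, X, y, cv=5).mean())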

  10. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    Science.gov (United States)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  11. Web Page Classification using an ensemble of support vector machine classifiers

    Directory of Open Access Journals (Sweden)

    Shaobo Zhong

    2011-11-01

    Full Text Available Web Page Classification (WPC) is both an important and challenging topic in data mining. Knowledge of WPC can help users to obtain usable information from the huge internet dataset automatically and efficiently. Many efforts have been devoted to WPC. However, there is still room for improvement of current approaches. One particular challenge in training classifiers comes from the fact that the available dataset is usually unbalanced. Standard machine learning algorithms tend to be overwhelmed by the major class and ignore the minor one, and thus lead to a high false negative rate. In this paper, a novel approach for Web page classification is proposed to address this problem by using an ensemble of support vector machine classifiers. Principal Component Analysis (PCA) is used for feature reduction and Independent Component Analysis (ICA) for feature selection. The experimental results indicate that the proposed approach outperforms other existing classifiers widely used in WPC.
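
    A rough sketch of the PCA + ICA + SVM-ensemble recipe on a synthetic imbalanced dataset; the bagging ensemble, component counts, and class weighting below are assumptions rather than the paper's exact configuration.

        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA, FastICA
        from sklearn.ensemble import BaggingClassifier
        from sklearn.metrics import classification_report
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Imbalanced synthetic stand-in for web-page feature vectors (90% / 10% classes)
        X, y = make_classification(n_samples=2000, n_features=300, n_informative=30,
                                   weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        model = make_pipeline(
            StandardScaler(),
            PCA(n_components=50),                       # feature reduction
            FastICA(n_components=20, random_state=0),   # feature selection/transformation
            BaggingClassifier(SVC(kernel="rbf", class_weight="balanced"),
                              n_estimators=10, random_state=0),  # ensemble of SVMs
        )
        model.fit(X_tr, y_tr)
        print(classification_report(y_te, model.predict(X_te)))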

  12. 78 FR 962 - Agency Information Collection Activities

    Science.gov (United States)

    2013-01-07

    ... NATIONAL INTELLIGENCE Agency Information Collection Activities AGENCY: Office of the Director of National... Information Security Oversight Office for the maintenance of Standard Form 312: Classified Information... updated as described in the Supplementary Information. FOR FURTHER INFORMATION CONTACT: Requests...

  13. On the generalizability of resting-state fMRI machine learning classifiers.

    Science.gov (United States)

    Huf, Wolfgang; Kalcher, Klaudius; Boubela, Roland N; Rath, Georg; Vecsei, Andreas; Filzmoser, Peter; Moser, Ewald

    2014-01-01

    Machine learning classifiers have become increasingly popular tools to generate single-subject inferences from fMRI data. With this transition from the traditional group level difference investigations to single-subject inference, the application of machine learning methods can be seen as a considerable step forward. Existing studies, however, have given scarce or no information on the generalizability to other subject samples, limiting the use of such published classifiers in other research projects. We conducted a simulation study using publicly available resting-state fMRI data from the 1000 Functional Connectomes and COBRE projects to examine the generalizability of classifiers based on regional homogeneity of resting-state time series. While classification accuracies of up to 0.8 (using sex as the target variable) could be achieved on test datasets drawn from the same study as the training dataset, the generalizability of classifiers to different study samples proved to be limited albeit above chance. This shows that on the one hand a certain amount of generalizability can robustly be expected, but on the other hand this generalizability should not be overestimated. Indeed, this study substantiates the need to include data from several sites in a study investigating machine learning classifiers with the aim of generalizability.
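
    The generalizability concern above is essentially about evaluating across sites rather than within one sample. A minimal sketch of such a protocol with leave-whole-sites-out cross-validation is shown below; the features, site labels, and classifier are synthetic placeholders.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GroupKFold, cross_val_score

        # Synthetic stand-in for regional-homogeneity features pooled from several sites
        X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                                   random_state=0)
        sites = np.repeat(np.arange(6), 100)   # hypothetical site labels, 100 subjects each

        clf = LogisticRegression(max_iter=1000)
        # Each fold holds out whole sites, so accuracy reflects transfer to unseen samples
        acc = cross_val_score(clf, X, y, groups=sites, cv=GroupKFold(n_splits=6))
        print("per-held-out-site accuracies:", np.round(acc, 2))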

  14. Intelligent and Effective Heart Disease Prediction System using Weighted Associative Classifiers

    Directory of Open Access Journals (Sweden)

    Jyoti soni,

    2011-06-01

    Full Text Available The healthcare environment is still ‘information rich’ but ‘knowledge poor’. There is a wealth of data available within the health care systems. However, there is a lack of effective analysis tools to discover hidden relationships in the data. The aim of this work is to design a GUI-based interface to enter patient records and predict whether the patient has heart disease or not using a Weighted Association rule based Classifier. The prediction is performed by mining the patient’s historical data or data repository. In a Weighted Associative Classifier (WAC), different weights are assigned to different attributes according to their predicting capability. It has already been shown that Associative Classifiers perform better than traditional classifier approaches such as decision trees and rule induction. Further, from experimental results it has been found that WAC provides improved accuracy compared to other existing Associative Classifiers. Hence the system uses WAC as a data mining technique to generate the rule base. The system has been implemented on the Java platform and trained using benchmark data from the UCI machine learning repository. The system is expandable for new datasets.

  15. PERFORMANCE EVALUATION OF VARIOUS STATISTICAL CLASSIFIERS IN DETECTING THE DISEASED CITRUS LEAVES

    Directory of Open Access Journals (Sweden)

    SUDHEER REDDY BANDI

    2013-02-01

    Full Text Available Citrus fruits are in high demand because humans consume them daily. This research aims to improve citrus production, which currently suffers from low yield and is complex to measure. Nowadays citrus plants face several diseases, and insect damage is one of the major ones. Insecticides have not always proved effective, because they may be toxic to some kinds of birds. Farmers have great difficulty detecting the diseases with the naked eye, and doing so is also quite expensive. Machine vision and image processing techniques help in detecting the disease marks in citrus leaves. In this research, citrus leaves of four classes, namely Normal, Greasy spot, Melanose and Scab, are collected and investigated using texture analysis based on the Color Co-occurrence Method (CCM) to extract Hue, Saturation and Intensity (HSI) features. In the classification stage, the features are categorised for all leaf conditions using the k-Nearest Neighbor (kNN), Naive Bayes classifier (NBC), Linear Discriminant Analysis (LDA) classifier and Random Forest Tree algorithm classifier (RFT). The experimental results show that the proposed approach achieves 98.75% accuracy in the automated detection of normal and affected leaves using the texture-analysis-based CCM method with the LDA algorithm. Finally, all the classifiers are compared using Receiver Operating Characteristic curves and the performance of all the classifiers is analyzed.

  16. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2008-05-01

    Full Text Available Background An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web-server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive-feature-elimination is able to uncover from the large initial data sets those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and

  17. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available The breast cancer detection and diagnosis is a critical and complex procedure that demands high degree of accuracy. In computer aided diagnostic systems, the breast cancer detection is a two stage procedure. First, to classify the malignant and benign mammograms, while in second stage, the type of abnormality is detected. In this paper, we have developed a novel architecture to enhance the classification of malignant and benign mammograms using multi-classification of malignant mammograms into six abnormality classes. DWT (Discrete Wavelet Transformation) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, results generated by various classifiers are ensembled. The genetic algorithm is used to find optimal weights rather than assigning weights to the results of classifiers on the basis of heuristics. The mammograms declared as malignant by ensemble classifiers are divided into six classes. The ensemble classifiers are further used for multiclassification using one-against-all technique for classification. The output of all ensemble classifiers is combined by product, median and mean rule. It has been observed that the accuracy of classification of abnormalities is more than 97% in case of mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.

  18. Faint spatial object classifier construction based on data mining technology

    Science.gov (United States)

    Lou, Xin; Zhao, Yang; Liao, Yurong; Nie, Yong-ming

    2016-11-01

    Data mining can effectively obtain a faint spatial object's patterns and characteristics, universal relations and other implicit data characteristics; the key step is classifier construction. Faint spatial object classifier construction with spatial data mining technology for faint spatial target detection is proposed, based on a detailed theoretical analysis of design procedures and guidelines. To address the one-sidedness weakness of this method in dealing with fuzziness and randomness, a cloud model classifier is proposed. Simulation analysis results indicate that this method can realize classification quickly through feature combination and effectively resolve the one-sidedness weakness problem.

  19. Representation of classifier distributions in terms of hypergeometric functions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper derives alternative analytical expressions for classifier product distributions in terms of Gauss hypergeometric function, 2F1, by considering feed distribution defined in terms of Gates-Gaudin-Schumann function and efficiency curve defined in terms of a logistic function. It is shown that classifier distributions under dispersed conditions of classification pivot at a common size and the distributions are difference similar. The paper also addresses an inverse problem of classifier distributions wherein the feed distribution and efficiency curve are identified from the measured product distributions without needing to know the solid flow split of particles to any of the product streams.

  20. Alternative Methods of Classifying Eating Disorders: Models Incorporating Comorbid Psychopathology and Associated Features

    Science.gov (United States)

    Wildes, Jennifer E.; Marcus, Marsha D.

    2013-01-01

    There is increasing recognition of the limitations of current approaches to psychiatric classification. Nowhere is this more apparent than in the eating disorders (EDs). Several alternative methods of classifying EDs have been proposed, which can be divided into two major groups: 1) those that have classified individuals on the basis of disordered eating symptoms; and, 2) those that have classified individuals on the basis of comorbid psychopathology and associated features. Several reviews have addressed symptom-based approaches to ED classification, but we are aware of no paper that has critically examined comorbidity-based systems. Thus, in this paper, we review models of classifying EDs that incorporate information about comorbid psychopathology and associated features. Early approaches are described first, followed by more recent scholarly contributions to comorbidity-based ED classification. Importantly, several areas of overlap among the classification schemes are identified that may have implications for future research. In particular, we note similarities between early models and newer studies in the salience of impulsivity, compulsivity, distress, and inhibition versus risk taking. Finally, we close with directions for future work, with an emphasis on neurobiologically-informed research to elucidate basic behavioral and neuropsychological correlates of comorbidity-based ED classes, as well as implications for treatment. PMID:23416343

  1. A multiple classifier system based on Ant-Colony Optimization for Hyperspectral image classification

    Science.gov (United States)

    Tang, Ke; Xie, Li; Li, Guangyao

    2017-01-01

    Hyperspectral images, which hold a large quantity of land information, enable image classification. Traditional classification methods usually work on multispectral images. However, the high dimensionality of the feature space reduces accuracy when these classification algorithms, such as statistical classifiers or decision trees, are used. This paper proposes a multiple classifier system (MCS) based on the ant colony optimization (ACO) algorithm to improve classification ability. The ACO method has been applied to multispectral images in previous research, but seldom to hyperspectral images. In order to overcome the limitation of the ACO method in dealing with high dimensionality, an MCS is introduced to combine the outputs of each single ACO classifier based on the credibility of rules. Mutual information is applied to discretize features from the data set and provides the criterion for the band selection and band grouping algorithms. The performance of the proposed method is validated on the ROSIS Pavia data set and compared to the k-nearest neighbour (KNN) algorithm. Experimental results show that the proposed method is feasible for classifying hyperspectral images.
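
    A rough sketch of the mutual-information-driven band grouping and multiple-classifier combination; a decision tree stands in for the ACO rule learner and majority voting stands in for credibility-based fusion, so this is an analogy of the pipeline, not the proposed method itself.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for a hyperspectral cube flattened to (pixels, bands)
        X, y = make_classification(n_samples=1000, n_features=100, n_informative=25,
                                   n_classes=5, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        # Rank bands by mutual information with the labels and split them into groups
        mi = mutual_info_classif(X_tr, y_tr, random_state=0)
        order = np.argsort(mi)[::-1]
        groups = np.array_split(order[:60], 3)     # three groups of informative bands

        # One member classifier per band group; fuse by simple majority voting
        members = [DecisionTreeClassifier(random_state=0).fit(X_tr[:, g], y_tr)
                   for g in groups]
        preds = np.stack([m.predict(X_te[:, g]) for m, g in zip(members, groups)])
        fused = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, preds)
        print("MCS accuracy:", (fused == y_te).mean())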

  2. Parallel implementation of a hyperspectral image linear SVM classifier using RVC-CAL

    Science.gov (United States)

    Madroñal, D.; Fabelo, H.; Lazcano, R.; Callicó, G. M.; Juárez, E.; Sanz, C.

    2016-10-01

    Hyperspectral Imaging (HI) collects high resolution spectral information consisting of hundreds of bands across the electromagnetic spectrum, from the ultraviolet to the infrared range. Thanks to this huge amount of information, an identification of the different elements that compose the hyperspectral image is feasible. Initially, HI was developed for remote sensing applications and, nowadays, its use has spread to research fields such as security and medicine. In all of them, new applications that demand real-time processing have appeared. In order to fulfill this requirement, the intrinsic parallelism of the algorithms needs to be explicitly exploited. In this paper, a Support Vector Machine (SVM) classifier with a linear kernel has been implemented using a dataflow language called RVC-CAL. Specifically, RVC-CAL allows the scheduling of functional actors onto the target platform cores. Once the parallelism of the classifier has been extracted, a comparison of the SVM classifier implementation using LibSVM (a specific library for SVM applications) and RVC-CAL has been performed. The speedup results obtained for the image classifier depend on the number of blocks into which the image is divided; concretely, when 3 image blocks are processed in parallel, an average speedup above 2.50, with regard to the RVC-CAL sequential version, is achieved.

  3. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1989-09-01

    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  4. A NON-PARAMETER BAYESIAN CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde

    2003-01-01

    A non-parameter Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in formulation. The class conditional density is estimated by KDE and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers, which are widely used in face recognition systems. The experiments are performed on two benchmarks and the experimental results show that the KDE outperforms the PRM, NC and NN classifiers.

  5. NUMERICAL SIMULATION OF PARTICLE MOTION IN TURBO CLASSIFIER

    Institute of Scientific and Technical Information of China (English)

    Ning Xu; Guohua Li; Zhichu Huang

    2005-01-01

    Research on the flow field inside a turbo classifier is complicated though important. According to the stochastic trajectory model of particles in gas-solid two-phase flow, and adopting the PHOENICS code, a numerical simulation is carried out on the flow field, including particle trajectories, in the inner cavity of a turbo classifier, using both straight and backward-crooked elbow blades. Computation results show that when the backward-crooked elbow blades are used, the mixed stream that passes through the two blades produces a vortex in the positive direction which counteracts the attached vortex in the opposite direction due to the high-speed turbo rotation, making the flow steadier, thus improving both the grade efficiency and the precision of the turbo classifier. This research provides positive theoretical evidence for designing sub-micron particle classifiers with high efficiency and accuracy.

  6. 42 CFR 37.50 - Interpreting and classifying chest roentgenograms.

    Science.gov (United States)

    2010-10-01

    ... interpreted and classified in accordance with the ILO Classification system and recorded on a Roentgenographic... under the Act, shall have immediately available for reference a complete set of the ILO...

  7. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K

    2011-01-01

    We develop a novel statistical approach for classifying generalists and specialists in two distinct habitats. Using a multinomial model based on estimated species relative abundance in two habitats, our method minimizes bias due to differences in sampling intensities between two habitat types...... as well as bias due to insufficient sampling within each habitat. The method permits a robust statistical classification of habitat specialists and generalists, without excluding rare species a priori. Based on a user-defined specialization threshold, the model classifies species into one of four groups...... fraction (57.7%) of bird species with statistical confidence. Based on a conservative specialization threshold and adjustment for multiple comparisons, 64.4% of tree species in the full sample were too rare to classify with confidence. Among the species classified, OG specialists constituted the largest...

  8. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a gradient descent based optimized smoothing parameter value to provide efficient classification. However, the optimization consumes quite a long time and can be a drawback. In this work, one pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method utilizes the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the proposed method handles these differences by defining two functions for the smoothing parameter calculation; thresholding is applied to determine which function will be used. One of these functions is defined for datasets whose values span different ranges: it provides balanced smoothing parameters for these datasets through a logarithmic function and by shifting the operation range to the lower boundary. The other function calculates the smoothing parameter value for classes having a standard deviation smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of one pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One pass learning generalized classifier neural network provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural network. Due to its classification accuracy and speed, the one pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase the classification performance.
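
    The one-pass smoothing-parameter idea can be sketched directly from per-class standard deviations. The two functional forms and the threshold below are illustrative assumptions; the paper's exact formulas are not reproduced.

        import numpy as np

        def one_pass_smoothing(X, y, threshold=1.0):
            """One-pass estimate of per-class smoothing parameters from class standard deviations.

            Two rules are used, selected by a threshold on the class standard deviation:
            a logarithmic mapping for classes with a wide spread of values and the raw
            standard deviation otherwise (both rules are assumptions for illustration).
            """
            sigmas = {}
            for c in np.unique(y):
                s = X[y == c].std()
                sigmas[c] = np.log1p(s) if s >= threshold else s
            return sigmas

        # Two toy classes with very different spreads
        X = np.vstack([np.random.normal(0, 0.3, (50, 4)), np.random.normal(5, 4.0, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)
        print(one_pass_smoothing(X, y))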

  9. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional ' Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A, E-mail: alelunaav@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: jdurodola@brookes.ac.uk

    2011-07-19

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced in PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. One half of them were plain and the other half had a notch (3 mm x 4 mm) which is close to the defect area (19 mm x 16 mm). This analysis was done with an Artificial Neural Network (ANN) and its optimization was done with an Adaptive Neuro Fuzzy Procedure (ANFIS). A hybrid data base was developed with numerical and experimental results. Synthetic data was generated with the finite element method using the SOLID95 element of the ANSYS code. A parametric analysis was carried out. Only one defect in such bars was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated. Half of them were plain and the other half had a notch. All the input data was classified in two groups. Each one has 230 cases and corresponds to one of the two sorts of voids mentioned above. On the other hand, experimental analysis was carried out with PMMA specimens of the same size. The first two natural frequencies of 40 cases were obtained with one void. The other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced in the synthetic data base. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm. The accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the performance of localization and classification of defects was reduced as notches were introduced in such bars. In the case of this paper, improved results were obtained when a hybrid data base was used.

  10. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Background Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  11. LESS: a model-based classifier for sparse subspaces.

    Science.gov (United States)

    Veenman, Cor J; Tax, David M J

    2005-09-01

    In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.

  12. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hannan, M.A., E-mail: hannan@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Basri, Hassan [Dept. of Civil and Structural Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hussain, Aini; Arebey, Maher [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia)

    2014-02-15

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • Gabor wavelet filter is used to extract the solid waste image features. • Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-frequency Identification (RFID), or sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image, it is challenging to position the camera so that the bin area is centralized in the image. As yet, there is no ideal system which can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area and the Gabor wavelet (GW) was introduced for feature extraction of the waste bin image. Image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this developed system are comparable to previous image processing based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
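
    A minimal sketch of the classification and evaluation stage (MLP classifier plus ROC analysis) on synthetic features standing in for the Gabor-wavelet descriptors; the DTW bin-cropping step is omitted and all parameters are assumptions.

        from sklearn.datasets import make_classification
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for Gabor features of bin images; binary "full / not full" level
        X, y = make_classification(n_samples=800, n_features=64, n_informative=12,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        mlp = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                          random_state=0))
        mlp.fit(X_tr, y_tr)
        print("ROC AUC:", roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1]))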

  13. Decision Tree Classifiers for Star/Galaxy Separation

    Science.gov (United States)

    Vasconcellos, E. C.; de Carvalho, R. R.; Gal, R. R.; LaBarbera, F. L.; Capelato, H. V.; Frago Campos Velho, H.; Trevisan, M.; Ruiz, R. S. R.

    2011-06-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals, the fainter of which (r > 19) reaches 82.1%. We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.

  14. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theoretical or application motivations, and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for a better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired within the framework of RVMs. For example, a robust pattern classification method called the discriminant vector machine (DVM) is motivated by RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.

  15. A GIS semiautomatic tool for classifying and mapping wetland soils

    Science.gov (United States)

    Moreno-Ramón, Héctor; Marqués-Mateu, Angel; Ibáñez-Asensio, Sara

    2016-04-01

    generated a set of layers with the geographical information, each of which corresponded to one diagnostic criterion. Finally, the superposition of the layers generated the different homogeneous soil units where the soil scientist should locate the soil profiles. Historically, the Albufera of Valencia has been classified as a single homogeneous soil unit, but applying the methodology and the GIS tool demonstrated that there are six homogeneous units. In that regard, the outcome reveals that it would have been necessary to open only six profiles, compared with the 19 profiles opened when the original study was carried out. In conclusion, the methodology and the GIS tool can be employed in areas where the soil-forming factors cannot be distinguished. The application of rapid measurement methods together with this methodology could make the delineation of homogeneous units more economical.

  16. ECLogger: Cross-Project Catch-Block Logging Prediction Using Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Sangeeta Lal

    2017-01-01

    Full Text Available Background: Software developers insert log statements in the source code to record program execution information. However, optimizing the number of log statements in the source code is challenging. Machine learning based within-project logging prediction tools, proposed in previous studies, may not be suitable for new or small software projects. For such software projects, we can use cross-project logging prediction. Aim: The aim of the study presented here is to investigate cross-project logging prediction methods and techniques. Method: The proposed method is ECLogger, a novel, ensemble-based, cross-project, catch-block logging prediction model. Nine base classifiers were used and combined using ensemble techniques. The performance of ECLogger was evaluated on three open-source Java projects: Tomcat, CloudStack and Hadoop. Results: ECLogger Bagging, ECLogger AverageVote, and ECLogger MajorityVote show a considerable improvement in the average Logged F-measure (LF) on 3, 5, and 4 source -> target project pairs, respectively, compared to the baseline classifiers. ECLogger AverageVote performs best and shows improvements of 3.12% (average LF) and 6.08% (average ACC, i.e., accuracy). Conclusion: The classifiers based on ensemble techniques, such as bagging, average vote, and majority vote, outperform the baseline classifiers. Overall, the ECLogger AverageVote model performs best. The results show that the CloudStack project is more generalizable than the other projects.
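
    As a rough, hedged illustration of the average-vote and majority-vote style of combination (not ECLogger itself, and not its nine base learners or the Java project data), scikit-learn's VotingClassifier can combine a few base classifiers by hard (majority) or soft (probability-averaging) voting:

```python
# Majority-vote and average-vote ensembles over three generic base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0))]

majority = VotingClassifier(base, voting="hard")   # majority vote
average = VotingClassifier(base, voting="soft")    # average of predicted probabilities

for name, ens in [("majority vote", majority), ("average vote", average)]:
    ens.fit(X_tr, y_tr)
    print(name, "accuracy:", round(ens.score(X_te, y_te), 3))
```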

  17. Dynamic weighted voting for multiple classifier fusion: a generalized rough set method

    Institute of Scientific and Technical Information of China (English)

    Sun Liang; Han Chongzhao

    2006-01-01

    To improve the performance of multiple classifier systems, a knowledge discovery based dynamic weighted voting (KD-DWV) method is proposed. In the method, all base classifiers may be allowed to operate in different measurement/feature spaces to make the most of diverse classification information. The weights assigned to each output of a base classifier are estimated from the separability of the training sample sets in the relevant feature space. For this purpose, decision tables (DTs) are established in terms of the diverse feature sets. Then the uncertainty measures of the separability are induced, in the form of mass functions in Dempster-Shafer theory (DST), from each DT based on a generalized rough set model. From the mass functions, all the weights are calculated by a modified heuristic fusion function and assigned dynamically to each classifier, varying with its output. A comparison experiment is performed on hyperspectral remote sensing images. The experimental results show that classification performance can be improved by using the proposed method compared with plurality voting (PV).

  18. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

    Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location for a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, further research is needed to improve the accuracy of prediction. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the prediction accuracy for classes with few training sequences and is therefore useful for datasets with an imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and, as the empirical results indicate, competitive with previously reported approaches in terms of prediction accuracy. The code for the software is available upon request.
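
    For illustration, a minimal sketch of a single Bayesian classifier built on first-order Markov chain models over sequences, the kind of base component that HensBC combines hierarchically; the alphabet handling is generic, and the training sequences and class names below are invented rather than taken from the study's datasets.

```python
# One Bayesian classifier with per-class first-order Markov chain sequence models.
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # 20 amino acids
IDX = {a: i for i, a in enumerate(ALPHABET)}

def fit_markov(seqs, alpha=1.0):
    """Laplace-smoothed initial and transition log-probabilities for one class."""
    k = len(ALPHABET)
    init = np.full(k, alpha)
    trans = np.full((k, k), alpha)
    for s in seqs:
        init[IDX[s[0]]] += 1
        for a, b in zip(s, s[1:]):
            trans[IDX[a], IDX[b]] += 1
    return np.log(init / init.sum()), np.log(trans / trans.sum(axis=1, keepdims=True))

def log_likelihood(seq, model):
    init, trans = model
    ll = init[IDX[seq[0]]]
    for a, b in zip(seq, seq[1:]):
        ll += trans[IDX[a], IDX[b]]
    return ll

# Hypothetical training sequences for two subcellular locations (equal priors assumed).
train = {"cytoplasm": ["MKKLLPT", "MKKAVLL", "MKKLLSA"],
         "membrane":  ["MLLVVVA", "MLLVIVV", "MLVVLLA"]}
models = {c: fit_markov(seqs) for c, seqs in train.items()}

query = "MKKLVLT"
pred = max(models, key=lambda c: log_likelihood(query, models[c]))
print("predicted location:", pred)
```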

  19. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
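
    A conceptual, toy-scale sketch of a TFIDF document-similarity classifier for HTTP requests; the actual training corpus, tokenization, and the Tilera/FPGA streaming implementations are not reproduced here, and the request strings below are invented.

```python
# TFIDF + cosine-similarity classification of HTTP requests (benign vs. attack).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical labelled HTTP request strings.
train_docs = ["GET /index.html HTTP/1.1",
              "GET /images/logo.png HTTP/1.1",
              "GET /login.php?user=admin'--&pass=x HTTP/1.1",
              "GET /search?q=<script>alert(1)</script> HTTP/1.1"]
train_labels = np.array([0, 0, 1, 1])        # 0 = benign, 1 = attack

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3))
X = vec.fit_transform(train_docs)

# Class centroids in TFIDF space; a new request is labelled by the most
# similar centroid (cosine similarity).
centroids = np.vstack([np.asarray(X[train_labels == c].mean(axis=0)).ravel()
                       for c in (0, 1)])

def classify(request):
    q = vec.transform([request])
    sims = cosine_similarity(q, centroids)[0]
    return int(sims.argmax())

print(classify("GET /products.html HTTP/1.1"))           # expected 0
print(classify("GET /item?id=1 UNION SELECT password"))  # expected 1
```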

  20. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    2016-01-01

    Full Text Available The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross verification experiments are also taken to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and point out that it can assist in supporting for correct medical diagnoses associated with multiple categories.

  1. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    Science.gov (United States)

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross verification experiments are also taken to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and point out that it can assist in supporting for correct medical diagnoses associated with multiple categories.

  2. [Horticultural plant diseases multispectral classification using combined classified methods].

    Science.gov (United States)

    Feng, Jie; Li, Hong-Ning; Yang, Wei-Ping; Hou, De-Dong; Liao, Ning-Fang

    2010-02-01

    Research on multispectral data processing is receiving more and more attention with the development of multispectral techniques, data-capturing capability, and the application of multispectral techniques in agricultural practice. In the present paper, the familiar diseases of the cultivated cucumber plant (Trichothecium roseum, Sphaerotheca fuliginea, Cladosporium cucumerinum, Corynespora cassiicola, Pseudoperonospora cubensis) are the research objects. Multispectral images of cucumber leaves in 14 visible-light channels, a near-infrared channel, and a panchromatic channel were captured using a narrow-band multispectral imaging system under a standard observation and illumination environment, and 210 multispectral data samples, consisting of the 16-band spectral reflectance of the different cucumber diseases, were obtained. The 210 samples were classified by distance, correlation, and BP neural network classifiers to investigate effective combinations of classification methods for making a diagnosis. The results show that the combination of the distance and BP neural network classifiers performs better than either method alone and makes full use of the advantages of each method. A workflow for recognizing horticultural plant diseases using combined classification methods is also presented.

  3. A Rules-Based Approach for Configuring Chains of Classifiers in Real-Time Stream Mining Systems

    Directory of Open Access Journals (Sweden)

    Brian Foo

    2009-01-01

    Full Text Available Networks of classifiers can offer improved accuracy and scalability over single classifiers by utilizing distributed processing resources and analytics. However, they also pose a unique combination of challenges. First, classifiers may be located across different sites that are willing to cooperate to provide services, but are unwilling to reveal proprietary information about their analytics, or are unable to exchange their analytics due to the high transmission overheads involved. Furthermore, processing of voluminous stream data across sites often requires load shedding approaches, which can lead to suboptimal classification performance. Finally, real stream mining systems often exhibit dynamic behavior and thus necessitate frequent reconfiguration of classifier elements to ensure acceptable end-to-end performance and delay under resource constraints. Under such informational constraints, resource constraints, and unpredictable dynamics, utilizing a single, fixed algorithm for reconfiguring classifiers can often lead to poor performance. In this paper, we propose a new optimization framework aimed at developing rules for choosing algorithms to reconfigure the classifier system under such conditions. We provide an adaptive, Markov model-based solution for learning the optimal rule when stream dynamics are initially unknown. Furthermore, we discuss how rules can be decomposed across multiple sites and propose a method for evolving new rules from a set of existing rules. Simulation results are presented for a speech classification system to highlight the advantages of using the rules-based framework to cope with stream dynamics.

  4. Classifying genes to the correct Gene Ontology Slim term in Saccharomyces cerevisiae using neighbouring genes with classification learning

    Directory of Open Access Journals (Sweden)

    Tsatsoulis Costas

    2010-05-01

    Full Text Available Abstract Background There is increasing evidence that gene location and surrounding genes influence the functionality of genes in the eukaryotic genome. Knowing the Gene Ontology Slim terms associated with a gene gives us insight into a gene's functionality by informing us how its gene product behaves in a cellular context using three different ontologies: molecular function, biological process, and cellular component. In this study, we analyzed whether we could classify a gene in Saccharomyces cerevisiae to its correct Gene Ontology Slim term using information about its location in the genome and information from its nearest-neighbouring genes using classification learning. Results We performed experiments to establish that the MultiBoostAB algorithm using the J48 classifier could correctly classify Gene Ontology Slim terms of a gene given information regarding the gene's location and information from its nearest-neighbouring genes for training. Different neighbourhood sizes were examined to determine how many nearest neighbours should be included around each gene to provide better classification rules. Our results show that by just incorporating neighbour information from each gene's two nearest neighbours, the percentage of genes correctly classified to their correct Gene Ontology Slim term for each ontology reaches over 80%, with high accuracy of the classification rules produced (reflected in F-measures over 0.80). Conclusions We confirmed that in classifying genes to their correct Gene Ontology Slim term, the inclusion of neighbour information from those genes is beneficial. Knowing the location of a gene and the Gene Ontology Slim information from neighbouring genes gives us insight into that gene's functionality. This benefit is seen by just including information from a gene's two nearest neighbouring genes.

  5. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. The feature selection step used eight feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, and the dimension of the feature space was reduced to 12. Classification of the Chinese liquors was performed using back propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than those of LDA and BP-ANN. Finally, the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier; the classification rates were 98.75% and 100%, respectively.

  6. WORD SENSE DISAMBIGUATION BASED ON IMPROVED BAYESIAN CLASSIFIERS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a few ambiguous words as test samples, which limits their practical applicability. In this paper, we perform a WSD study based on a large-scale real-world corpus using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifier reduces the context window size of ambiguous words with a close-distance feature extraction method and decreases the interference of useless features, thus clearly improving the accuracy, which reaches 83.18% in the open test. The DG-improved classifier more effectively overcomes the noise effects present in the naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, with the open test achieving an accuracy of 86.27%.

  7. Iris Recognition Based on LBP and Combined LVQ Classifier

    CERN Document Server

    Shams, M Y; Nomir, O; El-Awady, R M; 10.5121/ijcsit.2011.3506

    2011-01-01

    Iris recognition is considered one of the best biometric methods used for human identification and verification because of its unique features, which differ from one person to another, and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Patterns and histogram properties as statistical approaches for feature extraction, and a combined Learning Vector Quantization classifier as a neural network approach for classification, in order to build a hybrid model that depends on both. Localization and segmentation are performed using Canny edge detection and the circular Hough transform in order to isolate the iris from the whole eye image and to detect noise. The feature vectors resulting from LBP are applied to a combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris da...

  8. Optimal threshold estimation for binary classifiers using game theory.

    Science.gov (United States)

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
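
    A small illustrative sketch of locating this operating point from scored predictions: find the point where the ROC curve meets the descending diagonal, i.e. where sensitivity (TPR) equals specificity (1 - FPR). The labels and scores below are simulated rather than taken from any bioinformatics classifier.

```python
# Threshold at the intersection of the ROC curve and the descending diagonal.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = np.r_[np.zeros(500), np.ones(500)].astype(int)
scores = np.r_[rng.normal(0, 1, 500), rng.normal(1.2, 1, 500)]   # classifier output

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Intersection with the descending diagonal: TPR = 1 - FPR.
i = np.argmin(np.abs(tpr - (1 - fpr)))
print(f"threshold {thresholds[i]:.3f}: sensitivity {tpr[i]:.3f}, "
      f"specificity {1 - fpr[i]:.3f}, minimax accuracy ~ {1 - fpr[i]:.3f}")
```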

  9. An n-ary λ-averaging based similarity classifier

    Directory of Open Access Journals (Sweden)

    Kurama Onesfole

    2016-06-01

    Full Text Available We introduce a new n-ary λ similarity classifier that is based on a new n-ary λ-averaging operator in the aggregation of similarities. This work is a natural extension of earlier research on similarity based classification in which aggregation is commonly performed by using the OWA-operator. So far λ-averaging has been used only in binary aggregation. Here the λ-averaging operator is extended to the n-ary aggregation case by using t-norms and t-conorms. We examine four different n-ary norms and test the new similarity classifier with five medical data sets. The new method seems to perform well when compared with the similarity classifier.

  10. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    2001-01-30

    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  11. A native Bayesian classifier based routing protocol for VANETS

    Science.gov (United States)

    Bao, Zhenshan; Zhou, Keqin; Zhang, Wenbo; Gong, Xiaolei

    2016-12-01

    Geographic routing protocols are one of the hottest research areas in VANETs (Vehicular Ad-hoc Networks). However, few routing protocols can take both transmission efficiency and usage ratio into account. As we have noticed, different messages in a VANET may require different qualities of service. We therefore propose a naive Bayesian classifier based routing protocol (Naive Bayesian Classifier-Greedy, NBC-Greedy), which can classify and transmit different messages according to their degree of urgency. As a result, we can balance transmission efficiency and usage ratio with this protocol. Based on Matlab simulations, we conclude that NBC-Greedy is more efficient and stable than LR-Greedy and GPSR.

  12. Automatically Classifying the Role of Citations in Biomedical Articles

    Science.gov (United States)

    Agarwal, Shashank; Choubey, Lisha; Yu, Hong

    2010-01-01

    Citations are widely used in scientific literature. The traditional model of referencing considers all citations to be the same; however, semantically, citations play different roles. By studying the context in which citations appear, it is possible to determine the role that they play. Here, we report on the development of an eight-category classification scheme, annotation using that scheme, and development and evaluation of supervised machine-learning classifiers using the annotated data. We annotated 1,710 sentences using the annotation schema and our trained classifier obtained an average F1-score of 76.5%. The classifier is available for free as a Java API from http://citation.askhermes.org. PMID:21346931

  13. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

    We address the problem of modeling and classifying American Football offense teams' plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  14. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization, and thinning are done, assuming that the images have almost no noise. Further, an affine-invariant-moments-based model is used for feature extraction, and finally Support Vector Machine (SVM) and fuzzy classifiers are used for numeral classification. A comparison of the SVM and fuzzy classifiers is made, and it can be seen that SVM procured better results than the fuzzy classifier.

  15. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports.

    Science.gov (United States)

    López Pineda, Arturo; Ye, Ye; Visweswaran, Shyam; Cooper, Gregory F; Wagner, Michael M; Tsui, Fuchiang Rich

    2015-12-01

    Influenza is a yearly recurrent disease that has the potential to become a pandemic. An effective biosurveillance system is required for early detection of the disease. In our previous studies, we have shown that electronic Emergency Department (ED) free-text reports can be of value to improve influenza detection in real time. This paper studies seven machine learning (ML) classifiers for influenza detection, compares their diagnostic capabilities against an expert-built influenza Bayesian classifier, and evaluates different ways of handling missing clinical information from the free-text reports. We identified 31,268 ED reports from 4 hospitals between 2008 and 2011 to form two different datasets: training (468 cases, 29,004 controls), and test (176 cases and 1620 controls). We employed Topaz, a natural language processing (NLP) tool, to extract influenza-related findings and to encode them into one of three values: Acute, Non-acute, and Missing. Results show that all ML classifiers had areas under the ROC curve (AUC) ranging from 0.88 to 0.93, and performed significantly better than the expert-built Bayesian model. Missing clinical information marked as a value of missing (not missing at random) had a consistently improved performance among 3 (out of 4) ML classifiers when it was compared with the configuration of not assigning a value of missing (missing completely at random). The case/control ratios did not affect the classification performance given the large number of training cases. Our study demonstrates that ED reports, in conjunction with ML and NLP and the handling of missing-value information, have great potential for the detection of infectious diseases.
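
    As a hedged, toy-scale sketch of the missing-information idea (treating 'Missing' as an explicit third value for each finding rather than discarding it), using synthetic findings and outcomes in place of the Topaz-extracted ED report data:

```python
# Encode each clinical finding as acute / non-acute / missing and evaluate by AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n, n_findings = 2000, 8
states = np.array(["acute", "non-acute", "missing"])
X_raw = states[rng.integers(0, 3, size=(n, n_findings))]
# Synthetic outcome loosely driven by the number of acute findings.
y = ((X_raw == "acute").sum(axis=1) + rng.normal(0, 1, n) > 3).astype(int)

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(X_raw)               # 'missing' becomes its own category

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print("AUC with 'Missing' encoded as its own category:", round(auc, 3))
```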

  16. Silicon nanowire arrays as learning chemical vapour classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Niskanen, A O; Colli, A; White, R; Li, H W; Spigone, E; Kivioja, J M, E-mail: antti.niskanen@nokia.com [Nokia Research Center, Broers Building, 21 JJ Thomson Avenue, Cambridge CB3 0FA (United Kingdom)

    2011-07-22

    Nanowire field-effect transistors are a promising class of devices for various sensing applications. Apart from detecting individual chemical or biological analytes, it is especially interesting to use multiple selective sensors to look at their collective response in order to perform classification into predetermined categories. We show that non-functionalised silicon nanowire arrays can be used to robustly classify different chemical vapours using simple statistical machine learning methods. We were able to distinguish between acetone, ethanol and water with 100% accuracy while methanol, ethanol and 2-propanol were classified with 96% accuracy in ambient conditions.

  17. Text Classification: Classifying Plain Source Files with Neural Network

    Directory of Open Access Journals (Sweden)

    Jaromir Veber

    2010-10-01

    Full Text Available The automated categorization of text files has an important place in computer engineering, particularly in the process called data management automation. A lot has been written about text classification, and the methods allowing classification of these files are well known. Unfortunately, most studies are theoretical, and more research is needed for practical implementation. I decided to contribute with research focused on creating a classifier for different kinds of programs (source files, scripts, ...). This paper describes a practical implementation of the classifier for text files based on file content.

  18. Classifying depth of anesthesia using EEG features, a comparison.

    Science.gov (United States)

    Esmaeili, Vahid; Shamsollahi, Mohammad Bagher; Arefian, Noor Mohammad; Assareh, Amin

    2007-01-01

    Various EEG features have been used in depth of anesthesia (DOA) studies. The objective of this study was to find the best features, or combinations of features, that can discriminate between different anesthesia states. Conducting a clinical study on 22 patients, we could define 4 distinct anesthetic states: awake, moderate, general anesthesia, and isoelectric. We examined features that have been used in earlier studies using single-channel EEG signal processing methods. The maximum accuracy (99.02%) was achieved using approximate entropy as the feature. Some other features could discriminate a particular state of anesthesia well. We could completely classify the patterns by means of 3 features and a Bayesian classifier.
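
    A small NumPy sketch of approximate entropy, the single feature the study reports as most discriminative; the choices of m, r, and the test signals below are illustrative, and the data is simulated rather than clinical EEG.

```python
# Approximate entropy (ApEn): lower for regular signals, higher for irregular ones.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """ApEn(m, r) with r = r_factor * std(x), Chebyshev distance between templates."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * x.std()

    def phi(m):
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Pairwise Chebyshev distances between templates (self-matches included).
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        C = (d <= r).mean(axis=1)           # fraction of matching templates
        return np.log(C).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 1000))   # highly regular signal
irregular = rng.normal(size=1000)                    # irregular signal
print("ApEn regular  :", round(approximate_entropy(regular), 3))
print("ApEn irregular:", round(approximate_entropy(irregular), 3))
```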

  19. Online classifier adaptation for cost-sensitive learning

    OpenAIRE

    Zhang, Junlin; Garcia, Jose

    2015-01-01

    In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume we have a base classifier for a cost-sensitive classification problem, but it is trained with respect to a cost setting different from the desired one. Moreover, we also have some training data samples streaming to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. ...

  20. Learning Continuous Time Bayesian Network Classifiers Using MapReduce

    Directory of Open Access Journals (Sweden)

    Simone Villa

    2014-12-01

    Full Text Available Parameter and structural learning on continuous time Bayesian network classifiers are challenging tasks when you are dealing with big data. This paper describes an efficient scalable parallel algorithm for parameter and structural learning in the case of complete data using the MapReduce framework. Two popular instances of classifiers are analyzed, namely the continuous time naive Bayes and the continuous time tree augmented naive Bayes. Details of the proposed algorithm are presented using Hadoop, an open-source implementation of a distributed file system and the MapReduce framework for distributed data processing. Performance evaluation of the designed algorithm shows a robust parallel scaling.

  1. Face recognition using composite classifier with 2DPCA

    Science.gov (United States)

    Li, Jia; Yan, Ding

    2017-01-01

    In conventional face recognition, most researchers have focused on enhancing precision when the input data already belongs to the database. However, they have paid less attention to confirming whether the input data belongs to the database at all. This paper proposes an approach to face recognition using two-dimensional principal component analysis (2DPCA). It designs a novel composite classifier founded on statistical techniques. Moreover, this paper utilizes the advantages of SVM and logistic regression in the field of classification, and therefore improves accuracy considerably. To test the performance of the composite classifier, experiments were implemented on the ORL and FERET databases, and the results are shown and evaluated.
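
    A rough sketch of the 2DPCA feature-extraction step (image covariance matrix, top eigenvectors, row-wise projection); the paper's composite SVM/logistic-regression classifier is replaced here by a plain nearest-neighbour match, and the "face" images are random stand-ins rather than ORL or FERET data.

```python
# 2DPCA: project images onto the top eigenvectors of the image covariance matrix.
import numpy as np

def fit_2dpca(images, n_components=5):
    """Eigenvectors of G = mean over images of (A - Abar)^T (A - Abar)."""
    A = np.asarray(images, dtype=float)             # shape (N, h, w)
    mean = A.mean(axis=0)
    centered = A - mean
    G = np.einsum("nij,nik->jk", centered, centered) / len(A)   # (w, w)
    eigvals, eigvecs = np.linalg.eigh(G)
    proj = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]  # top components
    return mean, proj

def features_2dpca(images, mean, proj):
    return (np.asarray(images, dtype=float) - mean) @ proj       # (N, h, d)

rng = np.random.default_rng(0)
# Hypothetical 20x20 "face" images for two subjects.
gallery = rng.normal(size=(10, 20, 20)) + np.arange(10)[:, None, None] * 0.1
labels = np.arange(10) % 2

mean, proj = fit_2dpca(gallery, n_components=4)
F = features_2dpca(gallery, mean, proj).reshape(len(gallery), -1)

probe = gallery[3] + rng.normal(scale=0.05, size=(20, 20))       # noisy copy of image 3
f = features_2dpca(probe[None], mean, proj).reshape(1, -1)
nearest = np.argmin(np.linalg.norm(F - f, axis=1))
print("predicted subject:", labels[nearest])                     # expected labels[3]
```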

  2. Classifiers in Japanese-to-English Machine Translation

    CERN Document Server

    Bond, F; Ikehara, S; Bond, Francis; Ogura, Kentaro; Ikehara, Satoru

    1996-01-01

    This paper proposes an analysis of classifiers into four major types: UNIT, METRIC, GROUP and SPECIES, based on properties of both Japanese and English. The analysis makes possible a uniform and straightforward treatment of noun phrases headed by classifiers in Japanese-to-English machine translation, and has been implemented in the MT system ALT-J/E. Although the analysis is based on the characteristics of, and differences between, Japanese and English, it is shown to be also applicable to the unrelated language Thai.

  3. Airborne Dual-Wavelength LiDAR Data for Classifying Land Cover

    Directory of Open Access Journals (Sweden)

    Cheng-Kai Wang

    2014-01-01

    Full Text Available This study demonstrated the potential of using dual-wavelength airborne light detection and ranging (LiDAR data to classify land cover. Dual-wavelength LiDAR data were acquired from two airborne LiDAR systems that emitted pulses of light in near-infrared (NIR and middle-infrared (MIR lasers. The major features of the LiDAR data, such as surface height, echo width, and dual-wavelength amplitude, were used to represent the characteristics of land cover. Based on the major features of land cover, a support vector machine was used to classify six types of suburban land cover: road and gravel, bare soil, low vegetation, high vegetation, roofs, and water bodies. Results show that using dual-wavelength LiDAR-derived information (e.g., amplitudes at NIR and MIR wavelengths could compensate for the limitations of using single-wavelength LiDAR information (i.e., poor discrimination of low vegetation when classifying land cover.

  4. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

    Science.gov (United States)

    Fong, Simon; Song, Wei; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K. L.

    2017-01-01

    In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called ‘shadow features’ are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research. PMID:28264470
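
    An illustrative sketch of augmenting skeletal position features with simple dynamic features (here, frame-to-frame velocities); the paper's exact shadow-feature definition may differ, and the skeleton recording below is randomly generated rather than Kinect or wearable data.

```python
# Append velocity ("shadow"-style dynamic) features to skeletal position features.
import numpy as np

def add_dynamic_features(frames):
    """frames: array (T, J*3) of x, y, z joint coordinates over T time steps."""
    velocity = np.diff(frames, axis=0, prepend=frames[:1])   # per-frame displacement
    return np.hstack([frames, velocity])                     # positions + dynamics

# Hypothetical recording: 100 frames, 20 joints with (x, y, z) each.
rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 20 * 3))

augmented = add_dynamic_features(frames)
print(frames.shape, "->", augmented.shape)    # (100, 60) -> (100, 120)
```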

  5. Training Classifiers with Shadow Features for Sensor-Based Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2017-02-01

    Full Text Available In this paper, a novel training/testing process for building/using a classification model based on human activity recognition (HAR) is proposed. Traditionally, HAR has been accomplished by a classifier that learns the activities of a person by training with skeletal data obtained from a motion sensor, such as Microsoft Kinect. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series, temporal records of movement sequences that can be used for training a classifier. In addition to the spatial features that describe current positions in the skeletal data, new features called ‘shadow features’ are used to improve the supervised learning efficacy of the classifier. Shadow features are inferred from the dynamics of body movements, thereby modelling the underlying momentum of the performed activities. They provide extra dimensions of information for characterising activities in the classification process, and thereby significantly improve the classification accuracy. Two cases of HAR are tested using a classification model trained with shadow features: one using a wearable sensor and the other a Kinect-based remote sensor. Our experiments demonstrate the advantages of the new method, which will have an impact on human activity detection research.

  6. Multiple-instance learning as a classifier combining problem

    DEFF Research Database (Denmark)

    Li, Yan; Tax, David M. J.; Duin, Robert P. W.

    2013-01-01

    In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model...

  7. Subtractive fuzzy classifier based driver distraction levels classification using EEG.

    Science.gov (United States)

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah

    2013-09-01

    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20-35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Position Systems (GPS), music player, short message service (SMS), and mental tasks). We derived the amplitude spectrum of three different frequency bands of the EEG: theta, alpha, and beta. Then, based on a fusion of discrete wavelet packet transform and fast Fourier transform outputs, we extracted two features (power spectral density, spectral centroid frequency) for different wavelets (db4, db8, sym8, and coif5). Mean ± SD was calculated and analysis of variance (ANOVA) was performed. A fuzzy inference system classifier was applied to the different wavelets using the two extracted features. [Results] The results indicate that the two features of sym8 possess highly significant discrimination across the four levels of distraction, and the best average accuracy achieved by the subtractive fuzzy classifier was 79.21%, using the power spectral density feature extracted with the sym8 wavelet. [Conclusion] These findings suggest that EEG signals can be used to monitor distraction level intensity in order to alert drivers to high levels of distraction.

  8. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  9. 18 CFR 367.18 - Criteria for classifying leases.

    Science.gov (United States)

    2010-04-01

    ... classification of the lease under the criteria in paragraph (a) of this section had the changed terms been in... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... classifying leases. 367.18 Section 367.18 Conservation of Power and Water Resources FEDERAL ENERGY...

  10. Automatic Classification of Cetacean Vocalizations Using an Aural Classifier

    Science.gov (United States)

    2013-09-30

    were inspired by research directed at discriminating the timbre of different musical instruments – a passive classification problem – which suggests...the method should be able to classify marine mammal vocalizations since these calls possess many of the acoustic attributes of music. APPROACH

  11. Using predictive distributions to estimate uncertainty in classifying landmine targets

    Science.gov (United States)

    Close, Ryan; Watford, Ken; Glenn, Taylor; Gader, Paul; Wilson, Joseph

    2011-06-01

    Typical classification models used for detection of buried landmines estimate a singular discriminative output. This classification is based on a model or technique trained with a given set of training data available during system development. Regardless of how well the technique performs when classifying objects that are 'similar' to the training set, most models produce undesirable (and many times unpredictable) responses when presented with object classes different from the training data. This can cause mines or other explosive objects to be misclassified as clutter, or false alarms. Bayesian regression and classification models produce distributions as output, called the predictive distribution. This paper will discuss predictive distributions and their application to characterizing uncertainty in the classification decision, from the context of landmine detection. Specifically, experiments comparing the predictive variance produced by relevance vector machines and Gaussian processes will be described. We demonstrate that predictive variance can be used to determine the uncertainty of the model in classifying an object (i.e., the classifier will know when it's unable to reliably classify an object). The experimental results suggest that degenerate covariance models (such as the relevance vector machine) are not reliable in estimating the predictive variance. This necessitates the use of the Gaussian Process in creating the predictive distribution.

  12. Recognition of Characters by Adaptive Combination of Classifiers

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; LI Zai-ming

    2004-01-01

    In this paper, a visual feature space based on long horizontals, long verticals, and radicals is given. An adaptive combination of classifiers, whose coefficients vary with the input pattern, is also proposed. Experiments show that the approach is promising for character recognition in video sequences.

  13. Classifying aquatic macrophytes as indicators of eutrophication in European lakes

    NARCIS (Netherlands)

    Penning, W.E.; Mjelde, M.; Dudley, B.; Hellsten, S.; Hanganu, J.; Kolada, A.; van den Berg, Marcel S.; Poikane, S.; Phillips, G.; Willby, N.; Ecke, F.

    2008-01-01

    Aquatic macrophytes are one of the biological quality elements in the Water Framework Directive (WFD) for which status assessments must be defined. We tested two methods to classify macrophyte species and their response to eutrophication pressure: one based on percentiles of occurrence along a phosp

  14. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
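
    A generic, hedged sketch of the polynomial-classifier idea: expand the input features into polynomial terms and fit a linear model in the expanded space, which requires no iterative training. The features and labels below are synthetic, not the ArSL measurements.

```python
# Polynomial feature expansion followed by a closed-form linear (ridge) classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = make_classification(n_samples=600, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

poly_clf = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                         RidgeClassifier(alpha=1.0))
poly_clf.fit(X_tr, y_tr)
print("accuracy:", round(poly_clf.score(X_te, y_te), 3))
```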

  15. A Two-Stage Combining Classifier Model for the Development of Adaptive Dialog Systems.

    Science.gov (United States)

    Griol, David; Iglesias, José Antonio; Ledezma, Agapito; Sanchis, Araceli

    2016-02-01

    This paper proposes a statistical framework to develop user-adapted spoken dialog systems. The proposed framework integrates two main models. The first model is used to predict the user's intention during the dialog. The second model uses this prediction and the history of the dialog up to the current moment to predict the next system response. This prediction is performed with an ensemble-based classifier trained for each of the tasks considered, so that a better selection of the next system response can be attained by weighting the outputs of these specialized classifiers. The codification of the information and the definition of data structures to store the data supplied by the user throughout the dialog make the estimation of the models from the training data manageable in practical domains. We describe our proposal and its application and detailed evaluation in a practical spoken dialog system.

  16. The Nutraceutical Bioavailability Classification Scheme: Classifying Nutraceuticals According to Factors Limiting their Oral Bioavailability.

    Science.gov (United States)

    McClements, David Julian; Li, Fang; Xiao, Hang

    2015-01-01

    The oral bioavailability of a health-promoting dietary component (nutraceutical) may be limited by various physicochemical and physiological phenomena: liberation from food matrices, solubility in gastrointestinal fluids, interaction with gastrointestinal components, chemical degradation or metabolism, and epithelium cell permeability. Nutraceutical bioavailability can therefore be improved by designing food matrices that control their bioaccessibility (B*), absorption (A*), and transformation (T*) within the gastrointestinal tract (GIT). This article reviews the major factors influencing the gastrointestinal fate of nutraceuticals, and then uses this information to develop a new scheme to classify the major factors limiting nutraceutical bioavailability: the nutraceutical bioavailability classification scheme (NuBACS). This new scheme is analogous to the biopharmaceutical classification scheme (BCS) used by the pharmaceutical industry to classify drug bioavailability, but it contains additional factors important for understanding nutraceutical bioavailability in foods. The article also highlights potential strategies for increasing the oral bioavailability of nutraceuticals based on their NuBACS designation (B*A*T*).

  17. Combined Approach of PNN and Time-Frequency as the Classifier for Power System Transient Problems

    Directory of Open Access Journals (Sweden)

    Aslam Pervez Memon

    2013-04-01

    Full Text Available The transients in a power system cause serious disturbances to the reliability, safety, and economy of the system. Transient signals possess nonstationary characteristics, for which both frequency and time-varying information is required in the analysis. Hence, it is vital first to detect and classify the type of transient fault and then to mitigate it. This article proposes a time-frequency and FFNN (Feedforward Neural Network) approach for the classification of power system transient problems. In this work, all the major categories of transients are simulated, de-noised, and decomposed with the DWT (Discrete Wavelet Transform) and MRA (Multiresolution Analysis) algorithm, and then distinctive features are extracted to obtain an optimal vector as input for training the PNN (Probabilistic Neural Network) classifier. The simulation results of the proposed approach prove its simplicity, accuracy, and effectiveness for the automatic detection and classification of PST (Power System Transient) types.

  18. Optimizing A syndromic surveillance text classifier for influenza-like illness: Does document source matter?

    Science.gov (United States)

    South, Brett R; South, Brett Ray; Chapman, Wendy W; Chapman, Wendy; Delisle, Sylvain; Shen, Shuying; Kalp, Ericka; Perl, Trish; Samore, Matthew H; Gundlapalli, Adi V

    2008-11-06

    Syndromic surveillance systems that incorporate electronic free-text data have primarily focused on extracting concepts of interest from chief complaint text, emergency department visit notes, and nurse triage notes. Due to availability and access, there has been limited work in the area of surveilling the full text of all electronic note documents compared with more specific document sources. This study provides an evaluation of the performance of a text classifier for detection of influenza-like illness (ILI) by document sources that are commonly used for biosurveillance by comparing them to routine visit notes, and a full electronic note corpus approach. Evaluating the performance of an automated text classifier for syndromic surveillance by source document will inform decisions regarding electronic textual data sources for potential use by automated biosurveillance systems. Even when a full electronic medical record is available, commonly available surveillance source documents provide acceptable statistical performance for automated ILI surveillance.

  19. Bayesian network classifiers for categorizing cortical GABAergic interneurons.

    Science.gov (United States)

    Mihaljević, Bojan; Benavides-Piccione, Ruth; Bielza, Concha; DeFelipe, Javier; Larrañaga, Pedro

    2015-04-01

    An accepted classification of GABAergic interneurons of the cerebral cortex is a major goal in neuroscience. A recently proposed taxonomy based on patterns of axonal arborization promises to be a pragmatic method for achieving this goal. It involves characterizing interneurons according to five axonal arborization features, called F1-F5, and classifying them into a set of predefined types, most of which are established in the literature. Unfortunately, there is little consensus among expert neuroscientists regarding the morphological definitions of some of the proposed types. While supervised classifiers were able to categorize the interneurons in accordance with experts' assignments, their accuracy was limited because they were trained with disputed labels. Thus, here we automatically classify interneuron subsets with different label reliability thresholds (i.e., such that every cell's label is backed by at least a certain (threshold) number of experts). We quantify the cells with parameters of axonal and dendritic morphologies and, in order to predict the type, also with axonal features F1-F4 provided by the experts. Using Bayesian network classifiers, we accurately characterize and classify the interneurons and identify useful predictor variables. In particular, we discriminate among reliable examples of common basket, horse-tail, large basket, and Martinotti cells with up to 89.52% accuracy, and single out the number of branches at 180 μm from the soma, the convex hull 2D area, and the axonal features F1-F4 as especially useful predictors for distinguishing among these types. These results open up new possibilities for an objective and pragmatic classification of interneurons.

  20. Computer-aided diagnosis system for classifying benign and malignant thyroid nodules in multi-stained FNAB cytological images.

    Science.gov (United States)

    Gopinath, Balasubramanian; Shanthi, Natesan

    2013-06-01

    An automated computer-aided diagnosis system is developed to classify benign and malignant thyroid nodules using multi-stained fine needle aspiration biopsy (FNAB) cytological images. In the first phase, image segmentation is performed to remove the background staining information and retain the appropriate foreground cell objects in the cytological images, using mathematical morphology and watershed transform segmentation methods. Subsequently, statistical features are extracted using two-level discrete wavelet transform (DWT) decomposition, gray level co-occurrence matrix (GLCM) and Gabor filter based methods. The classifiers k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) are tested for classifying benign and malignant thyroid nodules. The combination of watershed segmentation, GLCM features and the k-NN classifier results in the lowest diagnostic accuracy of 60%. The highest diagnostic accuracy of 93.33% is achieved by the ENN classifier trained with the statistical features extracted by a Gabor filter bank from the images segmented by the morphology and watershed transform segmentation methods. It is also observed that the SVM classifier achieves its highest diagnostic accuracy of 90% for DWT and Gabor filter based features along with the morphology and watershed transform segmentation methods. The experimental results suggest that the developed system with multi-stained thyroid FNAB images would be useful for identifying thyroid cancer irrespective of the staining protocol used.
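
    As a rough illustration of one feature/classifier combination above (GLCM texture features with k-NN), the sketch below uses scikit-image and scikit-learn on hypothetical grayscale patches; it does not reproduce the paper's segmentation or full pipeline.

```python
# Sketch: gray-level co-occurrence matrix (GLCM) texture features + k-NN.
# Assumes scikit-image >= 0.19 (graycomatrix/graycoprops) and invented data.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch):
    """Contrast, homogeneity, energy and correlation of an 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
labels = rng.integers(0, 2, 30)  # 0 = benign, 1 = malignant (placeholder)

X = np.vstack([glcm_features(p) for p in patches])
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(knn.predict(X[:5]))
```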

  1. Immigrants' health in Europe: A cross-classified multilevel approach to examine origin country, destination country, and community effects

    NARCIS (Netherlands)

    Huijts, T.H.M.; Kraaykamp, G.L.M.

    2012-01-01

    In this study, we examined origin, destination, and community effects on first- and second-generation immigrants' health in Europe. We used information from the European Social Surveys (2002-2008) on 19,210 immigrants from 123 countries of origin, living in 31 European countries. Cross-classified multilevel

  2. Face Detection Using Adaboosted SVM-Based Component Classifier

    CERN Document Server

    Valiollahzadeh, Seyyed Majid; Nazari, Mohammad

    2008-01-01

    Recently, AdaBoost has been widely used to improve the accuracy of any given learning algorithm. In this paper we focus on designing an algorithm that combines AdaBoost with Support Vector Machines as weak component classifiers for the face detection task. To obtain a set of effective SVM weak-learner classifiers, the algorithm adaptively adjusts the SVM kernel parameter instead of using a fixed one. The proposed combination generalizes better than a single SVM on imbalanced classification problems. The proposed method is compared, in terms of classification accuracy, to other commonly used AdaBoost variants, such as those based on Decision Trees and Neural Networks, on the CMU+MIT face database. Results indicate that the performance of the proposed method is overall superior to previous AdaBoost approaches.
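
    A generic AdaBoost-over-SVM combination (without the paper's adaptive kernel adjustment) can be sketched with scikit-learn as below; the dataset, kernel, and hyperparameters are placeholders.

```python
# Sketch: AdaBoost with RBF-kernel SVMs as weak component classifiers.
# Assumes scikit-learn >= 1.2 (the `estimator=` argument); the adaptive
# per-round kernel tuning described in the paper is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced toy problem
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weak_svm = SVC(kernel="rbf", gamma=0.05)          # fixed kernel parameter here
boosted = AdaBoostClassifier(estimator=weak_svm,  # SVC accepts sample_weight
                             n_estimators=10, algorithm="SAMME")
boosted.fit(X_tr, y_tr)
print("test accuracy:", boosted.score(X_te, y_te))
```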

  3. Feasibility study for banking loan using association rule mining classifier

    Directory of Open Access Journals (Sweden)

    Agus Sasmito Aribowo

    2015-03-01

    Full Text Available The problem of bad loans in a cooperative (koperasi) can be reduced if the cooperative can detect whether a member will repay the loan or default. The method used to identify characteristic patterns of prospective borrowers in this study is called the Association Rule Mining Classifier. The credit patterns of members are converted into knowledge and used to classify other borrowers. The classification process separates borrowers into two groups: a good-credit group and a bad-credit group. The research used prototyping to implement the design as an application with a programming language and development tools. The association rule mining process uses the Weighted Itemset Tidset (WIT-tree) method. The results show that the method can predict prospective customers' credit. The training data set consisted of 120 customers whose credit history was already known. The test data consisted of 61 customers who applied for credit. The results concluded that 42 customers will pay off their loans and 19 will default.

  4. Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)

    2009-11-30

    This paper proposes a method for cardiac arrhythmias recognition using the nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of P-wave, QRS-complexes, and T-wave. The iterated function system (IFS) uses nonlinear interpolation in the map and uses similarity maps to construct various data sequences, including the fractal patterns of supraventricular ectopic beat, bundle branch ectopic beat, and ventricular ectopic beat. Grey relational analysis (GRA) is proposed to recognize normal heartbeat and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid methods demonstrate greater efficiency and higher accuracy in recognizing ECG signals.

  5. Efficient iris recognition via ICA feature and SVM classifier

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Xu Luping

    2007-01-01

    To improve the flexibility and reliability of an iris recognition algorithm while maintaining its recognition success rate, an iris recognition approach combining SVM with an ICA feature extraction model is presented. SVM is a classifier that has demonstrated high generalization capability in object recognition problems, and ICA is a feature extraction technique that can be considered a generalization of principal component analysis. In this paper, ICA is used to generate a set of subsequences of feature vectors for iris feature extraction. Each subsequence is then classified using support vector machine sequence kernels. Experiments are conducted on the CASIA iris database; the results indicate that the combination of SVM and ICA can improve iris recognition flexibility and reliability while maintaining the recognition success rate.
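
    The ICA-plus-SVM pipeline can be sketched generically with scikit-learn; the sketch below assumes feature vectors already extracted from normalized iris images (here random placeholders) and is not the paper's kernel or parameter choice.

```python
# Sketch: ICA feature extraction followed by an SVM classifier.
# The input matrix stands in for preprocessed iris feature vectors.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 256))   # 100 iris samples, 256 raw features
y = rng.integers(0, 5, 100)           # 5 enrolled identities (placeholder)

model = make_pipeline(FastICA(n_components=20, random_state=0),
                      SVC(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict(X[:3]))
```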

  6. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the improvement in survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help radiologists in making reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machines classifier is used to analyze 206 mammogram images from the MIAS database according to the severity of abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal-malignant samples, 87.25% accuracy for normal-benign samples, and 89.22% accuracy for benign-malignant samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.

  7. Scoring and Classifying Examinees Using Measurement Decision Theory

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2009-04-01

    Full Text Available This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the classification accuracy of tests scored using decision theory; (2) the effectiveness of different sequential testing procedures; and (3) the number of items needed to make a classification. A large percentage of examinees can be classified accurately with very few items using decision theory. A Java Applet for self instruction and software for generating, calibrating and scoring MDT data are provided.
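
    The core of MDT scoring is a Bayes update over mastery states given item responses and per-item conditional probabilities; the sketch below is a minimal illustration with invented probabilities, not the paper's calibration.

```python
# Sketch: measurement decision theory (MDT) classification of an examinee.
# p_correct[s][i] = P(correct on item i | mastery state s); values invented.
import numpy as np

p_correct = np.array([
    [0.9, 0.85, 0.8, 0.9],   # state 0: "master"
    [0.4, 0.35, 0.5, 0.3],   # state 1: "non-master"
])
prior = np.array([0.5, 0.5])
responses = np.array([1, 1, 0, 1])  # observed item responses (1 = correct)

# Likelihood of the response pattern under each state, then posterior.
likelihood = np.prod(np.where(responses == 1, p_correct, 1 - p_correct), axis=1)
posterior = prior * likelihood
posterior /= posterior.sum()
print("P(master), P(non-master):", posterior)
print("classified as state", int(np.argmax(posterior)))
```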

  8. The fuzzy gene filter: A classifier performance assessment

    CERN Document Server

    Perez, Meir

    2011-01-01

    The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: the t-test, the Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF vis-a-vis the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.

  9. Security Enrichment in Intrusion Detection System Using Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Uma R. Salunkhe

    2017-01-01

    Full Text Available In the era of the Internet and with an increasing number of people as its end users, a large number of attack categories are introduced daily. Hence, effective detection of various attacks with the help of Intrusion Detection Systems is an emerging research trend these days. Existing studies show the effectiveness of machine learning approaches in handling Intrusion Detection Systems. In this work, we aim to enhance the detection rate of an Intrusion Detection System by using machine learning techniques. We propose a novel classifier-ensemble-based IDS constructed using a hybrid approach that combines data-level and feature-level methods. Classifier ensembles combine the opinions of different experts and improve the intrusion detection rate. Experimental results show the improved detection rates of our system compared to the reference technique.

  10. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized forecast method of unascertained classification, an unascertained measurement classifying model (UMC) to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition of layers in the goaf region. The unascertained measurement (UM) function of each factor was calculated. The classification center and the grade of each sample awaiting forecast were determined by the UM distance between the synthesis index of that sample and the index of every classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, the seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  11. Logarithmic Spiral-based Construction of RBF Classifiers

    Directory of Open Access Journals (Sweden)

    Mohamed Wajih Guerfala

    2017-02-01

    Full Text Available The clustering process is defined as grouping similar objects together into homogeneous groups or clusters. Objects that belong to one cluster should be very similar to each other, while objects in different clusters should be dissimilar. Clustering aims to simplify the representation of the initial data, and automatic classification covers the methods that allow such groups to be constructed automatically. This paper describes the design of radial basis function (RBF) neural classifiers using a new algorithm for characterizing the hidden layer structure. This algorithm, called k-means with Mahalanobis distance, groups the training data class by class in order to calculate the optimal number of clusters for the hidden layer, using two validity indexes. To initialize the clusters of the k-means algorithm, the logarithmic spiral golden angle method is used. Two real data sets (Iris and Wine) are considered to evaluate the efficiency of the proposed approach, and the obtained results are compared with basic classifiers from the literature.
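
    A plain RBF-network classifier of the kind described above can be sketched as k-means-derived centers feeding a Gaussian hidden layer and a linear readout; the sketch below uses scikit-learn with ordinary k-means and Euclidean distances rather than the paper's Mahalanobis variant or spiral initialization.

```python
# Sketch: RBF-network classifier with k-means centers and a linear readout.
# Simplifications vs. the paper: plain k-means, Euclidean RBF, fixed width.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def rbf_layer(X, centers, gamma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

centers = KMeans(n_clusters=9, n_init=10, random_state=0).fit(X_tr).cluster_centers_
gamma = 1.0  # hidden-unit width (assumed, not tuned)

readout = LogisticRegression(max_iter=1000).fit(rbf_layer(X_tr, centers, gamma), y_tr)
print("test accuracy:", readout.score(rbf_layer(X_te, centers, gamma), y_te))
```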

  12. Dendritic spine detection using curvilinear structure detector and LDA classifier.

    Science.gov (United States)

    Zhang, Yong; Zhou, Xiaobo; Witt, Rochelle M; Sabatini, Bernardo L; Adjeroh, Donald; Wong, Stephen T C

    2007-06-01

    Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying the biochemical pathways by examining the morphological and statistical changes of dendritic spines at the intracellular level. In this paper a novel approach is presented for automated detection of dendritic spines in neuron images. The dendritic spines are recognized as small objects of variable shape attached to, or detached from, multiple dendritic backbones in the 2D projection of the image stack along the optical direction. We extend the curvilinear structure detector to extract the boundaries as well as the centerlines of the dendritic backbones and spines. We further build a classifier using Linear Discriminant Analysis (LDA) to classify the attached spines into valid and invalid types to improve the accuracy of the spine detection. We evaluate the proposed approach by comparing with manual results in terms of backbone length, spine number, spine length, and spine density.

  13. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... to a state-of-the-art method for cartilage segmentation using one stage nearest neighbour classifier. Our method achieved better results than the state-of-the-art method for tibial as well as femoral cartilage segmentation. The next main contribution of the thesis deals with learning features autonomously...... image, respectively and this system is referred as triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for cascaded classifier...

  14. Neural Networks Classifier for Data Selection in Statistical Machine Translation

    OpenAIRE

    Peris, Álvaro; Chinea-Rios, Mara; Casacuberta, Francisco

    2016-01-01

    We address the data selection problem in statistical machine translation (SMT) as a classification task. The new data selection method is based on a neural network classifier. We present a new method description and empirical results proving that our data selection method provides better translation quality, compared to a state-of-the-art method (i.e., Cross entropy). Moreover, the empirical results reported are coherent across different language pairs.

  15. Manually Classified Errors in Czech-Slovak Translation

    OpenAIRE

    Galuščáková, Petra; Bojar, Ondřej

    2012-01-01

    Outputs of five Czech-Slovak machine translation systems (Česílko, Česílko 2, Google Translate and Moses with different settings) for first 50 sentences of WMT 2010 testing set. The translations were manually processed and the errors were marked and classified according to the scheme by Vilar et al. (David Vilar, Jia Xu, Luis Fernando D’Haro, Hermann Ney: Error Analysis of Statistical Machine Translation Output, Proceedings of LREC-2006, 2006)

  16. Review of the US Department of Energy Classified Visits Program

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S W; Killinger, M H; Segura, M A

    1992-07-01

    This review examines the US Department of Energy (DOE) Classified Visits Program, which is administered by the Office of Safeguards and Security. The overall purpose of this analysis is to (1) ensure that DOE policy and implementing procedures are appropriate to maintain US national security intentions; (2) evaluate the effectiveness of the process used across the DOE complex; and (3) recommend changes which will enhance the overall efficiency of the process while maintaining the program's integrity.

  17. Dynamical Logic Driven by Classified Inferences Including Abduction

    Science.gov (United States)

    Sawa, Koji; Gunji, Yukio-Pegio

    2010-11-01

    We propose a dynamical model of formal logic which realizes a representation of logical inferences, deduction and induction. In addition, it also represents abduction which is classified by Peirce as the third inference following deduction and induction. The three types of inference are represented as transformations of a directed graph. The state of a relation between objects of the model fluctuates between the collective and the distinctive. In addition, the location of the relation in the sequence of the relation influences its state.

  18. Classifying Floating Potential Measurement Unit Data Products as Science Data

    Science.gov (United States)

    Coffey, Victoria; Minow, Joseph

    2015-01-01

    We are Co-Investigators for the Floating Potential Measurement Unit (FPMU) on the International Space Station (ISS) and members of the FPMU operations and data analysis team. We are providing this memo for the purpose of classifying raw and processed FPMU data products and ancillary data as NASA science data with unrestricted, public availability in order to best support science uses of the data.

  19. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong

    2005-01-01

    A face recognition scheme is proposed, wherein a face image is preprocessed by pixel averaging and energy normalizing to reduce data dimension and brightness variation effects, followed by a Fourier transform to estimate the spectrum of the preprocessed image. Principal component analysis is conducted on the spectra of a face image to obtain eigen features. Combining the eigen features with a Parzen classifier, experiments are conducted on the ORL face database.

  20. Classifying prosthetic use via accelerometry in persons with transtibial amputations

    Directory of Open Access Journals (Sweden)

    Morgan T. Redfield, MSEE

    2013-12-01

    Full Text Available Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.
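
    The binary decision tree over accelerometer features can be illustrated generically; the sketch below computes simple per-window statistics from a three-axis signal and trains a scikit-learn decision tree, with window length, features, and labels as placeholders rather than the study's actual rules.

```python
# Sketch: classifying prosthesis use (movement / standing / sitting) from
# windowed three-axis accelerometer features with a small decision tree.
# Window size, features and labels are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(acc, fs=30, win_s=2):
    """Mean and standard deviation per axis for consecutive windows."""
    n = fs * win_s
    feats = []
    for start in range(0, len(acc) - n + 1, n):
        w = acc[start:start + n]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

rng = np.random.default_rng(3)
acc = rng.standard_normal((30 * 60, 3))          # one minute of fake data
X = window_features(acc)
y = rng.integers(0, 3, len(X))                   # 0=sit, 1=stand, 2=move

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict(X[:5]))
```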

  1. Multimodal biometric fusion using multiple-input correlation filter classifiers

    Science.gov (United States)

    Hennings, Pablo; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In this work we apply a computationally efficient, closed form design of a jointly optimized filter bank of correlation filter classifiers for biometric verification with the use of multiple biometrics from individuals. Advanced correlation filters have been used successfully for biometric classification, and have shown robustness in verifying faces, palmprints and fingerprints. In this study we address the issues of performing robust biometric verification when multiple biometrics from the same person are available at the moment of authentication; we implement biometric fusion by using a filter bank of correlation filter classifiers which are jointly optimized with each biometric, instead of designing separate independent correlation filter classifiers for each biometric and then fuse the resulting match scores. We present results using fingerprint and palmprint images from a data set of 40 people, showing a considerable advantage in verification performance producing a large margin of separation between the impostor and authentic match scores. The method proposed in this paper is a robust and secure method for authenticating an individual.

  2. GS-TEC: the Gaia Spectrophotometry Transient Events Classifier

    CERN Document Server

    Blagorodnova, Nadejda; Wyrzykowski, Łukasz; Irwin, Mike; Walton, Nicholas A

    2014-01-01

    We present an algorithm for classifying the nearby transient objects detected by the Gaia satellite. The algorithm will use the low-resolution spectra from the blue and red spectro-photometers on board the satellite. Taking a Bayesian approach, we model the spectra using the newly constructed reference spectral library and literature-driven priors. We find that for magnitudes brighter than 19 in Gaia G magnitude, around 75% of the transients will be robustly classified. The efficiency of the algorithm for SNe type I is higher than 80% for magnitudes G ≤ 18, dropping to approximately 60% at magnitude G = 19. For SNe type II, the efficiency varies from 75 to 60% for G ≤ 18, falling to 50% at G = 19. The purity of our classifier is around 95% for SNe type I at all magnitudes. For SNe type II it is over 90% for objects with G ≤ 19. GS-TEC also estimates the redshifts with errors of σ_z ≤ 0.01 and epochs with uncertainties σ_t ≈ 13 and 32 days for SNe type I and SNe II re...

  3. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct classifiers at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, averaged k-dependence Bayesian (AKDB) classifiers, averages the output of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.

  4. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using Self-Organizing Maps. Most people who are exposed to stressful situations cannot respond adequately to stimuli. The army, police, and fire services account for the largest share of occupations with an increased number of stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, which in turn affects speech. Therefore, it is clear that a system for recognizing stress in speech is required in the security forces. One possible classifier, popular for its flexibility, is the self-organizing map, a type of artificial neural network. Flexibility here means that the classifier is independent of the character of the input data, a feature that is suitable for speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosodic features were selected as input data because of their sensitivity to emotional changes. The parameters were calculated on speech recordings that can be divided into two classes, namely stress-state recordings and normal-state recordings. The benefit of the experiment is a method using a SOM classifier for stressed speech detection. The results showed the advantage of this method, namely its flexibility with respect to the input data.

  5. Binary Classifier Calibration Using an Ensemble of Linear Trend Estimation

    Science.gov (United States)

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.

    2017-01-01

    Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called ensemble of linear trend estimation (ELiTE). ELiTE utilizes the recently proposed ℓ1 trend filtering signal approximation method [22] to find the mapping from uncalibrated classification scores to the calibrated probability estimates. ELiTE is designed to address the key limitations of the histogram binning-based calibration methods, which are (1) the use of a piecewise constant form of the calibration mapping using bins, and (2) the assumption of independence of predicted probabilities for the instances that are located in different bins. The method post-processes the output of a binary classifier to obtain calibrated probabilities. Thus, it can be applied with many existing classification models. We demonstrate the performance of ELiTE on real datasets for commonly used binary classification models. Experimental results show that the method outperforms several common binary-classifier calibration methods. In particular, ELiTE commonly performs statistically significantly better than the other methods, and never worse. Moreover, it is able to improve the calibration power of classifiers, while retaining their discrimination power. The method is also computationally tractable for large scale datasets, as it is practically O(N log N) time, where N is the number of samples.
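
    ELiTE itself is not available in standard libraries, but the general post-hoc calibration workflow it fits into can be sketched with scikit-learn's isotonic calibration as a stand-in; this is explicitly not the paper's method.

```python
# Sketch: post-processing a binary classifier's scores into calibrated
# probabilities. Isotonic regression is used here as a generic stand-in
# for the ELiTE method described in the paper.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = GaussianNB()  # often poorly calibrated out of the box
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(X_tr, y_tr)
print(calibrated.predict_proba(X_te[:3]))
```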

  6. Classifying prosthetic use via accelerometry in persons with transtibial amputations.

    Science.gov (United States)

    Redfield, Morgan T; Cagle, John C; Hafner, Brian J; Sanders, Joan E

    2013-01-01

    Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.

  7. ASYMBOOST-BASED FISHER LINEAR CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan

    2008-01-01

    When using AdaBoost to select discriminant features from some feature space (e.g. Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal of high recognition rate but only moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distribution and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, the two parts of selecting features and forming ensemble classifiers are decoupled, both of which occur simultaneously in AsymBoost and AdaBoost. Fisher Linear Discriminant Analysis (FLDA) is used on the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we think can improve the recognition performance. The proposed algorithm is demonstrated with face recognition using a Gabor based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  8. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Directory of Open Access Journals (Sweden)

    Seung Yeon Shin

    Full Text Available In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: (i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; (ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and (iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs.

  9. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
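
    The idea of filtering candidates with a classifier before expensive evaluation can be sketched generically; the example below uses a naive Bayes classifier (not necessarily the report's Bayesian network) and an invented objective function.

```python
# Sketch: classifier-guided sampling. A cheap classifier is trained on
# already-evaluated designs and used to screen new random candidates,
# so only promising ones reach the expensive objective function.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)

def expensive_objective(design):
    """Placeholder for a costly simulation (lower is better)."""
    return np.sum((design - 3) ** 2)

def random_design():
    return rng.integers(0, 8, size=6)  # discrete design variables

# Seed the archive with a few evaluated designs.
archive = [(d, expensive_objective(d)) for d in (random_design() for _ in range(20))]

for generation in range(5):
    scores = np.array([s for _, s in archive])
    labels = (scores <= np.median(scores)).astype(int)   # 1 = "promising"
    clf = GaussianNB().fit(np.array([d for d, _ in archive]), labels)

    candidates = np.array([random_design() for _ in range(200)])
    p_promising = clf.predict_proba(candidates)[:, 1]
    for d in candidates[np.argsort(-p_promising)[:10]]:   # evaluate top 10 only
        archive.append((d, expensive_objective(d)))

best = min(archive, key=lambda item: item[1])
print("best design:", best[0], "objective:", best[1])
```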

  10. Classifying Human Body Acceleration Patterns Using a Hierarchical Temporal Memory

    Science.gov (United States)

    Sassi, Federico; Ascari, Luca; Cagnoni, Stefano

    This paper introduces a novel approach to the detection of human body movements during daily life. With the sole use of one wearable wireless triaxial accelerometer attached to one's chest, this approach aims at classifying raw acceleration data robustly, to detect many common human behaviors without requiring any specific a-priori knowledge about movements. The proposed approach consists of feeding sensory data into a specifically trained Hierarchical Temporal Memory (HTM) to extract invariant spatial-temporal patterns that characterize different body movements. The HTM output is then classified using a Support Vector Machine (SVM) into different categories. The performance of this new HTM+SVM combination is compared with a single SVM using real-word data corresponding to movements like "standing", "walking", "jumping" and "falling", acquired from a group of different people. Experimental results show that the HTM+SVM approach can detect behaviors with very high accuracy and is more robust, with respect to noise, than a classifier based solely on SVMs.

  11. A radial basis classifier for the automatic detection of aspiration in children with dysphagia

    Directory of Open Access Journals (Sweden)

    Blain Stefanie

    2006-07-01

    Full Text Available Abstract Background: Silent aspiration or the inhalation of foodstuffs without overt physiological signs presents a serious health issue for children with dysphagia. To date, there are no reliable means of detecting aspiration in the home or community. An assistive technology that performs in these environments could inform caregivers of adverse events and potentially reduce the morbidity and anxiety of the feeding experience for the child and caregiver, respectively. This paper proposes a classifier for automatic classification of aspiration and swallow vibration signals non-invasively recorded on the neck of children with dysphagia. Methods: Vibration signals associated with safe swallows and aspirations, both identified via videofluoroscopy, were collected from over 100 children with neurologically-based dysphagia using a single-axis accelerometer. Five potentially discriminatory mathematical features were extracted from the accelerometry signals. All possible combinations of the five features were investigated in the design of radial basis function classifiers. Performance of different classifiers was compared and the best feature sets were identified. Results: Optimal feature combinations for two, three and four features resulted in statistically comparable adjusted accuracies with a radial basis classifier. In particular, the feature pairing of dispersion ratio and normality achieved an adjusted accuracy of 79.8 ± 7.3%, a sensitivity of 79.4 ± 11.7% and specificity of 80.3 ± 12.8% for aspiration detection. Addition of a third feature, namely energy, increased adjusted accuracy to 81.3 ± 8.5% but the change was not statistically significant. A closer look at normality and dispersion ratio features suggest leptokurticity and the frequency and magnitude of atypical values as distinguishing characteristics between swallows and aspirations. The achieved accuracies are 30% higher than those reported for bedside cervical auscultation. Conclusion

  12. Drosophila olfactory receptors as classifiers for volatiles from disparate real world applications.

    Science.gov (United States)

    Nowotny, Thomas; de Bruyne, Marien; Berna, Amalia Z; Warr, Coral G; Trowell, Stephen C

    2014-10-14

    Olfactory receptors evolved to provide animals with ecologically and behaviourally relevant information. The resulting extreme sensitivity and discrimination has proven useful to humans, who have therefore co-opted some animals' sense of smell. One aim of machine olfaction research is to replace the use of animal noses and one avenue of such research aims to incorporate olfactory receptors into artificial noses. Here, we investigate how well the olfactory receptors of the fruit fly, Drosophila melanogaster, perform in classifying volatile odourants that they would not normally encounter. We collected a large number of in vivo recordings from individual Drosophila olfactory receptor neurons in response to an ecologically relevant set of 36 chemicals related to wine ('wine set') and an ecologically irrelevant set of 35 chemicals related to chemical hazards ('industrial set'), each chemical at a single concentration. Resampled response sets were used to classify the chemicals against all others within each set, using a standard linear support vector machine classifier and a wrapper approach. Drosophila receptors appear highly capable of distinguishing chemicals that they have not evolved to process. In contrast to previous work with metal oxide sensors, Drosophila receptors achieved the best recognition accuracy if the outputs of all 20 receptor types were used.

  13. Scene-Layout Compatible Conditional Random Field for Classifying Terrestrial Laser Point Clouds

    Science.gov (United States)

    Luo, C.; Sohn, G.

    2014-08-01

    Terrestrial Laser Scanning (TLS) is rapidly becoming a primary surveying tool due to its fast acquisition of highly dense three-dimensional point clouds. To fully utilize its benefits, a robust method to classify the many objects of interest in huge amounts of laser point clouds is urgently required. The Conditional Random Field (CRF) is a well-known discriminative classifier that integrates the local appearance of an observation (laser point) with spatial interactions among its neighbouring points in the classification process. Typical CRFs employ generic label consistency using short-range dependency only, which often causes locality problems. In this paper, we present a multi-range and asymmetric Conditional Random Field (maCRF), which adopts a priori information of scene-layout compatibility to address long-range dependency. The proposed CRF constructs two graphical models, one for enhancing local labelling smoothness within short range (srCRF) and the other for favouring a global and asymmetric regularity of spatial arrangement between different object classes within long range (lrCRF). The maCRF classifier assumes that the two graphical models (srCRF and lrCRF) are independent of each other. The final labelling decision is accomplished by probabilistically combining the prediction results obtained from the two CRF models. We validated maCRF's performance with TLS point clouds acquired from a RIEGL LMS-Z390i scanner using cross validation. Experimental results demonstrate that a synergetic classification improvement can be achieved by incorporating the two CRF models.

  14. Use of multivariate analysis to minimize collecting of infrared images and classify detected objects

    Science.gov (United States)

    Svensson, Thomas; Letalick, Dietmar

    2016-10-01

    An infrared image contains spatial and radiative information about objects in a scene. Two challenges are to classify pixels in a cluttered environment and to detect partly obscured or buried objects like mines and IEDs. Infrared image sequences provide additional temporal information, which can be utilized for more robust object detection and improved classification of object pixels. A manual evaluation of multi-dimensional data is generally time consuming and inefficient, and therefore various algorithms are used. With principal component analysis (PCA), most of the information is retained in a new, reduced system with fewer dimensions. The principal component coefficients (loadings) are here used both for classifying detected object pixels and for reducing the number of images in the analysis by computing score vectors. For the datasets studied, the number of required images can be reduced significantly without loss of detection and classification ability. This allows sparser sampling and scanning of larger areas when using a UAV, for example.
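
    The reduction of an image sequence to a few principal components, with loadings reused for pixel classification, can be illustrated as below using scikit-learn PCA on a stack of hypothetical infrared frames.

```python
# Sketch: PCA over a stack of infrared frames. Each frame is flattened to a
# row; score vectors summarize frames, loadings are reused as per-pixel
# features. Frame data here is synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_frames, h, w = 60, 32, 32
frames = rng.standard_normal((n_frames, h, w))
frames[:, 10:14, 10:14] += np.linspace(0, 3, n_frames)[:, None, None]  # warming object

X = frames.reshape(n_frames, -1)             # (frames, pixels)
pca = PCA(n_components=3).fit(X)

scores = pca.transform(X)                    # one 3-vector per frame
loadings = pca.components_.reshape(3, h, w)  # per-pixel weights per component
print("variance explained:", pca.explained_variance_ratio_)
print("frame scores shape:", scores.shape, "loading maps shape:", loadings.shape)
```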

  15. Classified and Clustered Data Constellation: An Efficient Approach of 3D Urban Data Management

    DEFF Research Database (Denmark)

    Azri, Suhaibah; Ujang, Uznir; Antón Castro, Francesc;

    2016-01-01

    it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban...... on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons conducted between CCDC and existing clustering and data constellation techniques, from the aspect of preservation of minimal overlap and coverage...

  16. Understanding and classifying metabolite space and metabolite-likeness.

    Directory of Open Access Journals (Sweden)

    Julio E Peironcely

    Full Text Available While the entirety of 'Chemical Space' is huge (and assumed to contain between 10^63 and 10^200 'small molecules'), distinct subsets of this space can nonetheless be defined according to certain structural parameters. An example of such a subspace is the chemical space spanned by endogenous metabolites, defined as 'naturally occurring' products of an organism's metabolism. In order to understand this part of chemical space in more detail, we analyzed the chemical space populated by human metabolites in two ways. Firstly, in order to understand metabolite space better, we performed Principal Component Analysis (PCA), hierarchical clustering and scaffold analysis of metabolites and non-metabolites in order to analyze which chemical features are characteristic for both classes of compounds. Here we found that heteroatom (both oxygen and nitrogen) content, as well as the presence of particular ring systems, was able to distinguish both groups of compounds. Secondly, we established which molecular descriptors and classifiers are capable of distinguishing metabolites from non-metabolites, by assigning a 'metabolite-likeness' score. It was found that the combination of MDL Public Keys and Random Forest exhibited the best overall classification performance with an AUC value of 99.13%, a specificity of 99.84% and a selectivity of 88.79%. This performance is slightly better than previous classifiers; and interestingly we found that drugs occupy two distinct areas of metabolite-likeness, the one being more 'synthetic' and the other being more 'metabolite-like'. Also, on a truly prospective dataset of 457 compounds, 95.84% correct classification was achieved. Overall, we are confident that we contributed to the tasks of classifying metabolites, as well as to understanding metabolite chemical space better. This knowledge can now be used in the development of new drugs that need to resemble metabolites, and in our work particularly for assessing the metabolite

  17. Classifying Cubic Edge-Transitive Graphs of Order 8p

    Indian Academy of Sciences (India)

    Mehdi Alaeiyan; M K Hosseinipoor

    2009-11-01

    A simple undirected graph is said to be semisymmetric if it is regular and edge-transitive but not vertex-transitive. Let p be a prime. It was shown by Folkman (J. Combin. Theory 3 (1967) 215-232) that a regular edge-transitive graph of order 2p or 2p^2 is necessarily vertex-transitive. In this paper, an extension of his result in the case of cubic graphs is given. It is proved that every cubic edge-transitive graph of order 8p is symmetric, and then all such graphs are classified.

  18. Support vector machine classifiers for large data sets.

    Energy Technology Data Exchange (ETDEWEB)

    Gertz, E. M.; Griffin, J. D.

    2006-01-31

    This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved by using two techniques--one a parallel direct solver, the other a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.

  19. On the statistical assessment of classifiers using DNA microarray data

    Directory of Open Access Journals (Sweden)

    Carella M

    2006-08-01

    Full Text Available Abstract Background: In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia – Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer, such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results: We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate
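
    A standard way to attach statistical significance to a cross-validated error rate, as described above, is a permutation test over the class labels; the sketch below uses scikit-learn's permutation_test_score on synthetic expression-like data, not the paper's dataset or exact protocol.

```python
# Sketch: assessing a classifier's error rate with cross-validation plus a
# label-permutation test. Data dimensions loosely mimic a small microarray study.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=47, n_features=2000, n_informative=20,
                           random_state=0)  # 47 specimens, 2000 "genes"

score, perm_scores, p_value = permutation_test_score(
    SVC(kernel="linear"), X, y,
    cv=StratifiedKFold(n_splits=5), n_permutations=200, random_state=0)

print(f"accuracy = {score:.2f} (error rate = {1 - score:.2f}), p = {p_value:.3f}")
```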

  20. A Fast Scalable Classifier Tightly Integrated with RDBMS

    Institute of Scientific and Technical Information of China (English)

    刘红岩; 陆宏钧; 陈剑

    2002-01-01

    In this paper, we report our success in building efficient scalable classifiers by exploring the capabilities of modern relational database management systems (RDBMS). In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity in implementation. More importantly, the major computation required in the approach can be implemented using standard functions provided by the modern relational DBMS. Besides, with the effective rule pruning strategy, the algorithm proposed in this paper can produce a compact set of classification rules. The results of experiments conducted for performance evaluation and analysis are presented.

  1. DFRFT: A Classified Review of Recent Methods with Its Application

    Directory of Open Access Journals (Sweden)

    Ashutosh Kumar Singh

    2013-01-01

    Full Text Available In the literature, there are various algorithms available for computing the discrete fractional Fourier transform (DFRFT). In this paper, all the existing methods are reviewed, classified into four categories, and subsequently compared to find the best alternative from the viewpoint of minimal computational error, computational complexity, transform features, and additional features such as security. Subsequently, the correlation theorem of the FRFT is utilized to significantly reduce the Doppler shift caused by receiver motion in a DSB-SC AM signal. Finally, the role of the DFRFT in the area of steganography is investigated.

  2. BIOPHARMACEUTICS CLASSIFICATION SYSTEM: A STRATEGIC TOOL FOR CLASSIFYING DRUG SUBSTANCES

    Directory of Open Access Journals (Sweden)

    Rohilla Seema

    2011-07-01

    Full Text Available The Biopharmaceutics Classification System (BCS) is a scientific approach for classifying drug substances based on their dose/solubility ratio and intestinal permeability. The BCS has been developed to allow prediction of the in vivo pharmacokinetic performance of drug products from measurements of permeability and solubility. Drugs are categorized into four BCS classes on the basis of permeability and solubility, namely: high permeability-high solubility, high permeability-low solubility, low permeability-high solubility, and low permeability-low solubility. The present review summarizes the principles, objectives, benefits, classification and applications of the BCS.
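
    The four-class scheme above maps directly to a small lookup; the sketch below is an illustrative helper only, with the solubility and permeability judgements simply passed in as flags rather than derived from regulatory criteria.

```python
# Sketch: assigning a BCS class from two boolean assessments.
# In practice, "high solubility" and "high permeability" are established by
# specific regulatory criteria; here they are assumed to be given.
def bcs_class(high_solubility: bool, high_permeability: bool) -> str:
    if high_permeability and high_solubility:
        return "Class I (high permeability, high solubility)"
    if high_permeability and not high_solubility:
        return "Class II (high permeability, low solubility)"
    if not high_permeability and high_solubility:
        return "Class III (low permeability, high solubility)"
    return "Class IV (low permeability, low solubility)"

print(bcs_class(high_solubility=True, high_permeability=False))  # Class III
```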

  3. Decision Bayes Criteria for Optimal Classifier Based on Probabilistic Measures

    Institute of Scientific and Technical Information of China (English)

    Wissal Drira; Faouzi Ghorbel

    2014-01-01

    This paper addresses the high-dimension sample problem in discriminant analysis under nonparametric and supervised assumptions. Since there is a kind of equivalence between the probabilistic dependence measure and the Bayes classification error probability, we propose to use an iterative algorithm to optimize the dimension reduction for classification with a probabilistic approach, in order to achieve the Bayes classifier. The probabilities of the different errors encountered along the different phases of the system are estimated by a kernel estimate, adjusted by means of the smoothing parameter. Experimental results suggest that the proposed approach performs well.

  4. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  5. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  6. Classifying human audiometric phenotypes of age-related hearing loss from animal models.

    Science.gov (United States)

    Dubno, Judy R; Eckert, Mark A; Lee, Fu-Shing; Matthews, Lois J; Schmiedt, Richard A

    2013-10-01

    Age-related hearing loss (presbyacusis) has a complex etiology. Results from animal models detailing the effects of specific cochlear injuries on audiometric profiles may be used to understand the mechanisms underlying hearing loss in older humans and predict cochlear pathologies associated with certain audiometric configurations ("audiometric phenotypes"). Patterns of hearing loss associated with cochlear pathology in animal models were used to define schematic boundaries of human audiograms. Pathologies included evidence for metabolic, sensory, and a mixed metabolic + sensory phenotype; an older normal phenotype without threshold elevation was also defined. Audiograms from a large sample of older adults were then searched by a human expert for "exemplars" (best examples) of these phenotypes, without knowledge of the human subject demographic information. Mean thresholds and slopes of higher frequency thresholds of the audiograms assigned to the four phenotypes were consistent with the predefined schematic boundaries and differed significantly from each other. Significant differences in age, gender, and noise exposure history provided external validity for the four phenotypes. Three supervised machine learning classifiers were then used to assess reliability of the exemplar training set to estimate the probability that newly obtained audiograms exhibited one of the four phenotypes. These procedures classified the exemplars with a high degree of accuracy; classifications of the remaining cases were consistent with the exemplars with respect to average thresholds and demographic information. These results suggest that animal models of age-related hearing loss can be used to predict human cochlear pathology by classifying audiograms into phenotypic classifications that reflect probable etiologies for hearing loss in older humans.

  7. 32 CFR 154.6 - Standards for access to classified information or assignment to sensitive duties.

    Science.gov (United States)

    2010-07-01

    ...) General. Only U.S. citizens shall be granted a personnel security clearance, assigned to sensitive duties... OF THE SECRETARY OF DEFENSE SECURITY DEPARTMENT OF DEFENSE PERSONNEL SECURITY PROGRAM REGULATION... Department of Defense mission, including, special expertise, to assign an individual who is not a citizen...

  8. 32 CFR 154.18 - Certain positions not necessarily requiring access to classified information.

    Science.gov (United States)

    2010-07-01

    ... favorably reviewed by the appropriate component agency or activity prior to permitting such access. DoD... service or employment greater than 12 months in accordance with DoD Manual 1401.1-M. An individual who... accordance with § 154.16(b). (d) Customs inspectors. DoD employees appointed as customs inspectors,...

  9. Using Bayesian neural networks to classify forest scenes

    Science.gov (United States)

    Vehtari, Aki; Heikkonen, Jukka; Lampinen, Jouko; Juujarvi, Jouni

    1998-10-01

    We present results that compare the performance of Bayesian learning methods for neural networks on the task of classifying forest scenes into trees and background. The classification task is demanding due to the texture richness of the trees, occlusions of the forest scene objects and diverse lighting conditions under operation. This makes it difficult to determine which image features are optimal for the classification. A natural way to proceed is to extract many different types of potentially suitable features, and to evaluate their usefulness in later processing stages. One approach to cope with a large number of features is to use Bayesian methods to control the model complexity. Bayesian learning uses a prior on model parameters, combines it with evidence from the training data, and then integrates over the resulting posterior to make predictions. With this method, we can use large networks and many features without fear of overfitting. For this classification task we compare two Bayesian learning methods for multi-layer perceptron (MLP) neural networks: (1) The evidence framework of MacKay uses a Gaussian approximation to the posterior weight distribution and maximizes the evidence with respect to the hyperparameters. (2) In a Markov Chain Monte Carlo (MCMC) method due to Neal, the posterior distribution of the network parameters is integrated numerically. As baseline classifiers for comparison we use (3) MLP early stop committee, (4) K-nearest-neighbor and (5) Classification And Regression Tree.
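
    A compact illustration of the MCMC approach (2) above, assuming a single hidden layer, a fixed unit Gaussian prior on all weights, and a random-walk Metropolis sampler; Neal's original method uses hybrid Monte Carlo with hyperparameter sampling, so this is only a sketch of the posterior-averaging idea.

```python
# Minimal sketch of MCMC-based Bayesian learning for a tiny MLP classifier.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # a nonlinear toy problem

H = 5                                           # hidden units
n_w = 2 * H + H + H + 1                         # input->hidden, hidden biases, hidden->out, out bias

def unpack(w):
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[-1]
    return W1, b1, W2, b2

def predict_proba(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def log_posterior(w):
    p = np.clip(predict_proba(w, X), 1e-12, 1 - 1e-12)
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    log_prior = -0.5 * np.sum(w ** 2)           # unit Gaussian prior on all weights
    return log_lik + log_prior

# Random-walk Metropolis: keep a thinned set of posterior weight samples.
w = rng.normal(size=n_w) * 0.1
samples, lp = [], log_posterior(w)
for step in range(20000):
    w_new = w + 0.05 * rng.normal(size=n_w)
    lp_new = log_posterior(w_new)
    if np.log(rng.uniform()) < lp_new - lp:
        w, lp = w_new, lp_new
    if step > 5000 and step % 100 == 0:
        samples.append(w.copy())

# Posterior predictive: average the network output over the sampled weights.
p_mean = np.mean([predict_proba(s, X) for s in samples], axis=0)
print("training accuracy:", np.mean((p_mean > 0.5) == (y == 1)))
```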

  10. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    Directory of Open Access Journals (Sweden)

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-12-01

    Full Text Available This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student argumentative essays. The paragraphs selected were initial, middle, and final paragraphs and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion and lexical sophistication and then modeled using statistical techniques. The paragraphs were also classified by human raters based on paragraph positioning. The reported model performed well above chance, with a classification accuracy similar to human judgments of paragraph type (66% accuracy for humans versus 65% for the model). The model's accuracy increased when longer paragraphs that provided more linguistic coverage and paragraphs judged by human raters to be of higher quality were examined. The findings support the notion that paragraph types contain specific linguistic features that allow them to be distinguished from one another. The findings reported in this study should prove beneficial in classroom writing instruction and in automated writing assessment.

  11. Using Narrow Band Photometry to Classify Stars and Brown Dwarfs

    CERN Document Server

    Mainzer, A K; Sievers, J L; Young, E T; McLean, Ian S.

    2004-01-01

    We present a new system of narrow band filters in the near infrared that can be used to classify stars and brown dwarfs. This set of four filters, spanning the H band, can be used to identify molecular features unique to brown dwarfs, such as H2O and CH4. The four filters are centered at 1.495 um (H2O), 1.595 um (continuum), 1.66 um (CH4), and 1.75 um (H2O). Using two H2O filters allows us to solve for individual objects' reddenings. This can be accomplished by constructing a color-color-color cube and rotating it until the reddening vector disappears. We created a model of predicted color-color-color values for different spectral types by integrating filter bandpass data with spectra of known stars and brown dwarfs. We validated this model by making photometric measurements of seven known L and T dwarfs, ranging from L1 - T7.5. The photometric measurements agree with the model to within +/-0.1 mag, allowing us to create spectral indices for different spectral types. We can classify A through early M stars to...

  12. Even more Chironomid species for classifying lake nutrient status

    Directory of Open Access Journals (Sweden)

    Les Ruse

    2015-07-01

    Full Text Available The European Union Water Framework Directive (WFD) classifies ecological status of a waterbody by the determination of its natural reference state to provide a measure of perturbation by human impacts based on taxonomic composition and abundance of aquatic species. Ruse (2010; 2011) has provided methods of assessing anthropogenic perturbations to lake ecological status, in terms of nutrient enrichment and acidification, by analysing collections of floating pupal exuviae discarded by emerging adult Chironomidae. The previous nutrient assessment method was derived from chironomid and environmental data collected during 178 lake surveys of all WFD types found in Britain. Canonical Correspondence Analysis provided species optima in relation to phosphate and nitrogen concentrations. Species found in less than three surveys were excluded from analysis in case of spurious association with environmental values. Since Ruse (2010) an additional 72 lakes have been surveyed adding 31 more species for use in nutrient status assessment. These additional scoring species are reported here. The practical application of the Chironomid Pupal Exuvial Technique (CPET) to classify WFD lake nutrient status is demonstrated using CPET survey data from lakes in Poland.

  13. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases

    Science.gov (United States)

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; Ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.

    2017-01-01

    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In an era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based merely on three proteins: interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE), which could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions.

  14. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    Full Text Available A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  15. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K; Birch, Gary E; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  16. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases

    Science.gov (United States)

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.

    2017-01-01

    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In an era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based merely on three proteins: interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE), which could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions. PMID:28128370

  17. Discriminating complex networks through supervised NDR and Bayesian classifier

    Science.gov (United States)

    Yan, Ke-Sheng; Rong, Li-Li; Yu, Kai

    2016-12-01

    Discriminating complex networks is a particularly important task for the purpose of the systematic study of networks. In order to discriminate unknown networks exactly, a large set of network measurements needs to be taken into account to comprehensively characterize network properties. However, as we demonstrate in this paper, these measurements are in general nonlinearly correlated with each other, resulting in a wide variety of redundant measurements which unintentionally explain the same aspects of network properties. To solve this problem, we adopt supervised nonlinear dimensionality reduction (NDR) to eliminate the nonlinear redundancy and visualize networks in a low-dimensional projection space. Though unsupervised NDR can achieve the same aim, we illustrate that supervised NDR is more appropriate than unsupervised NDR for the discrimination task. After that, we apply a Bayesian classifier (BC) in the projection space to discriminate the unknown network, taking the projection score vectors as the input of the classifier. We also demonstrate the feasibility and effectiveness of the proposed method on six extensively studied real networks, ranging from technological to social or biological. Moreover, the effectiveness and advantage of the proposed method are demonstrated by comparison experiments with an existing method.
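
    A hedged sketch of the projection-then-classify pipeline described above. The paper uses a supervised nonlinear dimensionality reduction; as a simple stand-in, this sketch uses a supervised linear projection (LDA) followed by a Gaussian naive Bayes classifier, and the "network measurement" vectors are synthetic.

```python
# Projection-then-classify sketch: supervised dimensionality reduction
# followed by a Bayesian (Gaussian naive Bayes) classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Pretend each row holds correlated network measurements (degree statistics,
# clustering, path lengths, ...) for one network; 3 classes of network model.
n_per_class, n_features = 60, 12
means = rng.normal(scale=2.0, size=(3, n_features))
X = np.vstack([m + rng.normal(size=(n_per_class, n_features)) for m in means])
y = np.repeat([0, 1, 2], n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(LinearDiscriminantAnalysis(n_components=2), GaussianNB())
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```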

  18. Automated morphological analysis approach for classifying colorectal microscopic images

    Science.gov (United States)

    Marghani, Khaled A.; Dlay, Satnam S.; Sharif, Bayan S.; Sims, Andrew J.

    2003-10-01

    Automated medical image diagnosis using quantitative measurements is extremely helpful for cancer prognosis to reach a high degree of accuracy and thus make reliable decisions. In this paper, six morphological features based on texture analysis were studied in order to categorize normal and cancer colon mucosa. They were derived after a series of pre-processing steps to generate a set of different shape measurements. Based on the shape and the size, six features known as Euler Number, Equivalent Diameter, Solidity, Extent, Elongation, and Shape Factor AR were extracted. Mathematical morphology is used firstly to remove background noise from segmented images and then to obtain different morphological measures to describe shape, size, and texture of colon glands. The proposed automated system is tested by classifying 102 microscopic samples of colorectal tissues, which consist of 44 normal colon mucosa and 58 cancerous. The results were first statistically evaluated, using the one-way ANOVA method in order to examine the significance of each feature extracted. Then significant features are selected in order to classify the dataset into two categories. Finally, using two discrimination methods, a linear method and k-means clustering, important classification factors were estimated. In brief, this study demonstrates that abnormalities in low-power tissue morphology can be distinguished using quantitative image analysis. This investigation shows the potential of an automated vision system in histopathology. Furthermore, it has the advantage of being objective and, more importantly, of providing a valuable diagnostic decision support tool.

  19. Occlusion Handling via Random Subspace Classifiers for Human Detection.

    Science.gov (United States)

    Marín, Javier; Vázquez, David; López, Antonio M; Amores, Jaume; Kuncheva, Ludmila I

    2014-03-01

    This paper describes a general method to address partial occlusions for human detection in still images. The random subspace method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach's capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non occluded data. In contrast to many recent approaches, we propose a method which does not require manual labeling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.

  20. Decision Tree Classifiers for Star/Galaxy Separation

    CERN Document Server

    Vasconcellos, E C; Gal, R R; LaBarbera, F L; Capelato, H V; Velho, H F Campos; Trevisan, M; Ruiz, R S R

    2010-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of $884,126$ SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: $14\\le r\\le21$ ($85.2%$) and $r\\ge19$ ($82.1%$). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We find that our FT classifier is comparable or better in completeness over the full magnitude range $15\\le r\\le21$, with m...

  1. 14 CFR 1203.400 - Specific classifying guidance.

    Science.gov (United States)

    2010-01-01

    ..., technical, operational, intelligence, strategic, tactical or economic advantage related to national security... is capable of obtaining the information or material; i.e., through intelligence activities, sources.... counterpart or competitive programs. (j) Operational information pertaining to the command and control...

  2. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Science.gov (United States)

    2010-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive...

  3. Gain ratio based fuzzy weighted association rule mining classifier for medical diagnostic interface

    Indian Academy of Sciences (India)

    N S Nithya; K Duraiswamy

    2014-02-01

    The health care environment still needs knowledge-based discovery for handling its wealth of data. Extraction of the potential causes of diseases is the most important factor for medical data mining. Fuzzy association rule mining performs better than traditional classifiers but it suffers from the exponential growth of the rules produced. In the past, we proposed an information gain-based fuzzy association rule mining algorithm for extracting both association rules and membership functions of medical data to reduce the rules. It used a ranking-based weight value to identify the potential attribute. When we take a large number of distinct values, the computation of the information gain value is not feasible. In this paper, an enhanced approach, called gain ratio based fuzzy weighted association rule mining, is thus proposed for distinct diseases, and it also improves on the learning time of the previous approach. Experimental results show that there is a marginal improvement in the attribute selection process and also an improvement in the classifier accuracy. The system has been implemented on the Java platform and verified by using benchmark data from the UCI machine learning repository.
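
    A worked sketch of the gain-ratio idea referred to above: information gain normalized by the split information of an attribute, which penalizes attributes with many distinct values. The tiny symptom/diagnosis table is invented purely for illustration.

```python
# Gain ratio = information gain / split information, computed on a toy table.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attribute_values, labels):
    n = len(labels)
    base = entropy(labels)
    cond = 0.0        # expected entropy after splitting on the attribute
    split_info = 0.0  # entropy of the split itself (penalizes many values)
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        w = len(subset) / n
        cond += w * entropy(subset)
        split_info -= w * math.log2(w)
    info_gain = base - cond
    return info_gain / split_info if split_info > 0 else 0.0

# Tiny illustrative data set: symptom level vs. diagnosis label.
symptom = ["high", "high", "low", "low", "medium", "medium", "high", "low"]
diagnosis = ["sick", "sick", "healthy", "healthy", "sick", "healthy", "sick", "healthy"]
print("gain ratio(symptom -> diagnosis) =", round(gain_ratio(symptom, diagnosis), 3))
```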

  4. Evaluation of the Diagnostic Power of Thermography in Breast Cancer Using Bayesian Network Classifiers

    Science.gov (United States)

    Nicandro, Cruz-Ramírez; Efrén, Mezura-Montes; María Yaneli, Ameca-Alducin; Enrique, Martín-Del-Campo-Mena; Héctor Gabriel, Acosta-Mesa; Nancy, Pérez-Castro; Alejandro, Guerra-Hernández; Guillermo de Jesús, Hoyos-Rivera; Rocío Erandi, Barrientos-Martínez

    2013-01-01

    Breast cancer is one of the leading causes of death among women worldwide. There are a number of techniques used for diagnosing this disease: mammography, ultrasound, and biopsy, among others. Each of these has well-known advantages and disadvantages. A relatively new method, based on the temperature a tumor may produce, has recently been explored: thermography. In this paper, we will evaluate the diagnostic power of thermography in breast cancer using Bayesian network classifiers. We will show how the information provided by the thermal image can be used in order to characterize patients suspected of having cancer. Our main contribution is the proposal of a score, based on the aforementioned information, that could help distinguish sick patients from healthy ones. Our main results suggest the potential of this technique in such a goal but also show its main limitations that have to be overcome to consider it as an effective diagnosis complementary tool. PMID:23762182

  5. Linear classifier and textural analysis of optical scattering images for tumor classification during breast cancer extraction

    Science.gov (United States)

    Eguizabal, Alma; Laughney, Ashley M.; Garcia Allende, Pilar Beatriz; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.

    2013-02-01

    Texture analysis of light scattering in tissue is proposed to obtain diagnostic information from breast cancer specimens. Light scattering measurements are minimally invasive, and allow the estimation of tissue morphology to guide the surgeon in resection surgeries. The usability of scatter signatures acquired with a micro-sampling reflectance spectral imaging system was improved utilizing an empirical approximation to the Mie theory to estimate the scattering power on a per-pixel basis. Co-occurrence analysis is then applied to the scattering power images to extract the textural features. A statistical analysis of the features demonstrated the suitability of the autocorrelation for the classification of non-malignant (normal epithelia and stroma, benign epithelia and stroma, inflammation), malignant (DCIS, IDC, ILC) and adipose tissue, since it reveals morphological information of tissue. Non-malignant tissue shows higher autocorrelation values while adipose tissue presents a very low autocorrelation on its scatter texture, with malignant tissue falling in between. Consequently, a fast linear classifier based on the consideration of just one straightforward feature is enough for providing relevant diagnostic information. A leave-one-out validation of the linear classifier on 29 samples with 48 regions of interest showed classification accuracies of 98.74% on adipose tissue, 82.67% on non-malignant tissue and 72.37% on malignant tissue, in comparison with the biopsy H and E gold standard. This demonstrates that autocorrelation analysis of scatter signatures is a very computationally efficient and automated approach to provide pathological information in real time to guide the surgeon during tissue resection.

  6. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available In this paper, the performance of a multi-class least squares support vector machine (LS-SVM) is compared with that of a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). For the comparison, a data set consisting of 5000 images of handwritten numeric digits was used (500 images for each digit from 0-9, each image of 20 x 20 pixels). The inputs to each system were 400-dimensional vectors corresponding to each image (no feature extraction was performed). Both classifiers used a one-vs-all strategy to enable multi-class classification and a random cross-validation function for the process of minimizing the cost function. The comparison metrics were precision and training time under the same computational conditions. Both techniques showed a precision above 95%, with LS-SVM slightly more accurate. However, in computational cost we found a marked difference: LS-SVM training required 16.42% less time than the logistic regression model under the same computational conditions.
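
    A hedged re-creation of this comparison on a public data set, assuming scikit-learn's 8x8 digits data in place of the paper's 20x20 images and a linear SVM in place of LS-SVM; both models are wrapped in an explicit one-vs-all scheme as described above.

```python
# One-vs-all digit recognition: linear SVM vs. logistic regression,
# comparing held-out accuracy and training time.
import time
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)            # raw pixels, no feature extraction
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, base in [("linear SVM", LinearSVC(dual=False)),
                   ("logistic regression", LogisticRegression(max_iter=2000))]:
    clf = OneVsRestClassifier(base)            # one binary classifier per digit
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    print(f"{name}: accuracy={clf.score(X_te, y_te):.3f}, train time={elapsed:.2f}s")
```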

  7. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting the road network from high-resolution SAR images. Firstly, fuzzy C-means is used to classify the filtered SAR image in an unsupervised manner, and the road pixels are isolated from the image to simplify the extraction of the road network. Secondly, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces the extraction of the road network to searching for globally optimal continuous curves that pass through a set of seed points. Finally, regarding the curves as individuals and coding each chromosome as an integer code of variation relative to the coordinates, genetic operations are used to search for globally optimal roads. The experimental results show that the algorithm can effectively extract the road network from high-resolution SAR images.

  8. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    Full Text Available The article outlines the application of the processing approach to the functional description of the designed IT system supporting the operations of the secret office, which processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented in a manual manner (“as is”) and target processes (“to be”), using the RFID technology for the purpose of their automation. Additionally, the examples of applying the method of structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency were presented. The extension of the process analysis method is a possibility of applying the warehouse of processes and process mining methods.

  9. Refining and classifying finite-time Lyapunov exponent ridges

    CERN Document Server

    Allshouse, Michael R

    2015-01-01

    While more rigorous and sophisticated methods for identifying Lagrangian based coherent structures exist, the finite-time Lyapunov exponent (FTLE) field remains a straightforward and popular method for gaining some insight into transport by complex, time-dependent two-dimensional flows. In light of its enduring appeal, and in support of good practice, we begin by investigating the effects of discretization and noise on two numerical approaches for calculating the FTLE field. A practical method to extract and refine FTLE ridges in two-dimensional flows, which builds on previous methods, is then presented. Seeking to better ascertain the role of an FTLE ridge in flow transport, we adapt an existing classification scheme and provide a thorough treatment of the challenges of classifying the types of deformation represented by an FTLE ridge. As a practical demonstration, the methods are applied to an ocean surface velocity field data set generated by a numerical model.

  10. Sex Bias in Classifying Borderline and Narcissistic Personality Disorder.

    Science.gov (United States)

    Braamhorst, Wouter; Lobbestael, Jill; Emons, Wilco H M; Arntz, Arnoud; Witteman, Cilia L M; Bekker, Marrie H J

    2015-10-01

    This study investigated sex bias in the classification of borderline and narcissistic personality disorders. A sample of psychologists in training for a post-master's degree (N = 180) read brief case histories (male or female version) and made a DSM classification. To differentiate between sex bias due to sex stereotyping and bias due to base-rate variation, we used different case histories, respectively: (1) non-ambiguous case histories with enough criteria of either borderline or narcissistic personality disorder to meet the threshold for classification, and (2) an ambiguous case with subthreshold features of both borderline and narcissistic personality disorder. Results showed significant differences due to sex of the patient in the ambiguous condition. Thus, when the diagnosis is not straightforward, as in the case of mixed subthreshold features, sex bias is present and is influenced by base-rate variation. These findings emphasize the need for caution in classifying personality disorders, especially borderline or narcissistic traits.

  11. Object localization based on smoothing preprocessing and cascade classifier

    Science.gov (United States)

    Zhang, Xingfu; Liu, Lei; Zhao, Feng

    2017-01-01

    An improved algorithm for image localization is proposed in this paper. Firstly, the image is smoothed and part of the noise is removed. Then a cascade classifier is used to train a template. Finally, the template is used to detect related images. The advantages of the algorithm are that it is robust to noise, insensitive to changes in image proportion, and fast to compute. In this paper, a real picture of the bottom of a truck is chosen as the experimental object. Images of normal components and faulty components are all included in the image sample. Experimental results show that the accuracy rate is more than 90 percent when the grade is more than 40. We can therefore conclude that the algorithm proposed in this paper can be applied to practical image localization projects.
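
    A minimal sketch of the smooth-then-detect pipeline using OpenCV. The paper trains its own cascade template for truck components; here a stock Haar cascade shipped with OpenCV is used as a stand-in, and the input image path is hypothetical.

```python
# Smoothing preprocessing followed by cascade-classifier detection (OpenCV).
import cv2

image = cv2.imread("inspection_image.jpg")           # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)          # smoothing removes part of the noise

# Stand-in cascade; a real project would load its own trained template XML here.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
detections = cascade.detectMultiScale(smoothed, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in detections:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("localized.jpg", image)
print(f"{len(detections)} region(s) localized")
```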

  12. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut;

    2005-01-01

    We introduce an interactive segmentation method for a sea floor survey. The method is based on a deformable template classifier and is developed to segment data from an echo sounder post-processor called RoxAnn. RoxAnn collects two different measures for each observation point, and in this 2D feature space the ship-master will be able to interactively define a segmentation map, which is refined and optimized by the deformable template algorithms. The deformable templates are defined as two-dimensional vector-cycles. Local random transformations are applied to the vector-cycles, and stochastic relaxation in a Bayesian scheme is used. In the Bayesian likelihood a class density function and its estimate hereof is introduced, which is designed to separate the feature space. The method is verified on data collected in Øresund, Scandinavia. The data come from four geographically different areas. Two...

  13. On the way of classifying new states of active matter

    Science.gov (United States)

    Menzel, Andreas M.

    2016-07-01

    With ongoing research into the collective behavior of self-propelled particles, new states of active matter are revealed. Some of them are entirely based on the non-equilibrium character and do not have an immediate equilibrium counterpart. In their recent work, Romanczuk et al (2016 New J. Phys. 18 063015) concentrate on the characterization of smectic-like states of active matter. A new type, referred to by the authors as smectic P, is described. In this state, the active particles form stacked layers and self-propel along them. Identifying and classifying states and phases of non-equilibrium matter, including the transitions between them, is an up-to-date effort that will certainly extend for a longer period into the future.

  14. Handwritten Bangla Alphabet Recognition using an MLP Based Classifier

    CERN Document Server

    Basu, Subhadip; Sarkar, Ram; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    The work presented here involves the design of a Multi Layer Perceptron (MLP) based classifier for recognition of handwritten Bangla alphabet using a 76-element feature set. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten characters of Bangla alphabet includes 24 shadow features, 16 centroid features and 36 longest-run features. Recognition performances of the MLP designed to work with this feature set are experimentally observed as 86.46% and 75.05% on the samples of the training and the test sets respectively. The work has useful application in the development of a complete OCR system for handwritten Bangla text.

  15. Classifying algorithms for SIFT-MS technology and medical diagnosis.

    Science.gov (United States)

    Moorhead, K T; Lee, D; Chase, J G; Moot, A R; Ledingham, K M; Scotter, J; Allardyce, R A; Senthilmohan, S T; Endre, Z

    2008-03-01

    Selected Ion Flow Tube-Mass Spectrometry (SIFT-MS) is an analytical technique for real-time quantification of trace gases in air or breath samples. The SIFT-MS system thus offers unique potential for early, rapid detection of disease states. Identification of volatile organic compound (VOC) masses that contribute strongly towards a successful classification clearly highlights potential new biomarkers. A method utilising kernel density estimates is thus presented for classifying unknown samples. It is validated on a simple known case and in a clinical before-and-after dialysis setting. The simple case with nitrogen in Tedlar bags returned a 100% success rate, as expected. The clinical proof-of-concept with seven tests on one patient had an ROC curve area of 0.89. These results validate the method presented and illustrate the emerging clinical potential of this technology.
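
    A sketch of classification with kernel density estimates in the spirit described above: fit one KDE per class and assign an unknown sample to the class with the higher prior-weighted density. The one-dimensional "VOC concentration" values are synthetic and purely illustrative.

```python
# Kernel-density-estimate classifier for two classes of trace-gas readings.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
pre_dialysis = rng.normal(loc=12.0, scale=2.0, size=40)    # synthetic class A readings
post_dialysis = rng.normal(loc=7.0, scale=1.5, size=40)    # synthetic class B readings

kde_pre = gaussian_kde(pre_dialysis)
kde_post = gaussian_kde(post_dialysis)
prior_pre = prior_post = 0.5

def classify(sample):
    score_pre = prior_pre * kde_pre(sample)[0]
    score_post = prior_post * kde_post(sample)[0]
    return "pre-dialysis" if score_pre > score_post else "post-dialysis"

for x in (6.0, 9.5, 13.0):
    print(f"sample={x:>5.1f}  ->  {classify(x)}")
```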

  16. Using point-set compression to classify folk songs

    DEFF Research Database (Denmark)

    Meredith, David

    2014-01-01

    -neighbour algorithm and leave-one-out cross-validation to classify the 360 melodies into tune families. The classifications produced by the algorithms were compared with a ground-truth classification prepared by expert musicologists. Twelve of the thirteen compressors used in the experiment were based on the discovery of translational equivalence classes (TECs) of maximal translatable patterns (MTPs) in point-set representations of the melodies. The twelve algorithms consisted of four variants of each of three basic algorithms, COSIATEC, SIATECCompress and Forth’s algorithm. The main difference between … similarity between folk-songs for classification purposes is highly dependent upon the actual compressor chosen. Furthermore, it seems that compressors based on finding maximal repeated patterns in point-set representations of music show more promise for NCD-based music classification than general...

  17. Classifying orbits in the restricted three-body problem

    CERN Document Server

    Zotos, Euaggelos E

    2015-01-01

    The case of the planar circular restricted three-body problem is used as a test field in order to determine the character of the orbits of a small body which moves under the gravitational influence of the two heavy primary bodies. We conduct a thorough numerical analysis on the phase space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space we related our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determined the escape and collisional basins and computed the corresponding escape/collisional times. We hope our contribution to be useful for a further understanding of the escape and collisional mechanism of orbi...

  18. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The support vector classifier (SVC) has superior advantages for small-sample learning problems with high dimensions, especially better generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be picked out first to improve the performance of the SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and pre-select the main features efficiently, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of the SVC. Furthermore, a heuristic genetic algorithm-based automatic model selection is proposed to determine the hyperparameters of the SVC and to evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rates effectively.
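
    A minimal sketch of the PCA-then-SVC idea, assuming scikit-learn's breast cancer data as a stand-in for the Heart and Adult benchmarks and a grid search in place of the paper's heuristic genetic algorithm for hyperparameter selection.

```python
# PCA-based feature reduction followed by an SVC, with automatic model selection.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA()),                 # reduce the feature dimensions
                 ("svc", SVC(kernel="rbf"))])
grid = {"pca__n_components": [5, 10, 15],
        "svc__C": [1, 10, 100],
        "svc__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, grid, cv=5)          # stands in for the GA-based selection
search.fit(X_tr, y_tr)
print("best parameters:", search.best_params_)
print("test accuracy:", search.score(X_te, y_te))
```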

  19. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Wah Ching Lee

    2015-01-01

    Full Text Available Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help address the problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC to achieve speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors using a Support Vector Machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC adopts only the most meaningful features for discriminating between healthy persons and cardiovascular disease patients, a speedy detection of cardiovascular diseases has been successfully implemented.
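
    A worked sketch of the AHP step mentioned above: derive criteria weights from a pairwise comparison matrix via its principal eigenvector and check consistency. The criteria names and judgments below are invented for illustration and are not from the paper.

```python
# AHP weighting: principal eigenvector of a pairwise comparison matrix.
import numpy as np

criteria = ["blood pressure", "cholesterol", "age"]
# A[i, j] = how much more important criterion i is than criterion j (Saaty scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                        # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)            # consistency index
cr = ci / 0.58                                  # Saaty's random index for n = 3
for name, w in zip(criteria, weights):
    print(f"{name:>15s}: weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f} (acceptable if < 0.1)")
```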

  20. Early Detection of Breast Cancer using SVM Classifier Technique

    CERN Document Server

    Rejani, Y Ireaneus Anna

    2009-01-01

    This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems. One is how to detect tumors as suspicious regions with a very weak contrast to their background, and another is how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) use of an SVM classifier. The enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, a top-hat operation, and DWT. Then contrast stretching is used to increase the contrast of the image. The segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer. The most common segmentation method used is thresholding. The features are extracted from the segmented breast area. Next stage include,...

  1. Should Hypersexual Disorder be Classified as an Addiction?

    Science.gov (United States)

    Kor, Ariel; Fogel, Yehuda; Reid, Rory C; Potenza, Marc N

    2013-01-01

    Hypersexual behavior has been documented within clinical and research settings over the past decade. Despite recent research on hypersexuality and its associated features, many questions remain about how best to define and classify hypersexual behavior. Diagnostic criteria for Hypersexual Disorder (HD) have been proposed for the DSM-5, and a preliminary field trial has lent some support to the reliability and validity of the HD diagnosis. However, debate exists with respect to the extent to which the disorder might be categorized as a non-substance or behavioral addiction. In this article, we will discuss this debate in the context of data citing similarities and differences between hypersexual disorder, drug addictions, and pathological gambling. The authors of this paper conclude that despite many similarities between the features of hypersexual behavior and substance-related disorders, the research on HD is in its infancy and much remains to be learned before HD can be definitively characterized as an addiction.

  2. Statistical Mechanical Development of a Sparse Bayesian Classifier

    Science.gov (United States)

    Uda, Shinsuke; Kabashima, Yoshiyuki

    2005-08-01

    The demand for extracting rules from high dimensional real world data is increasing in various fields. However, the possible redundancy of such data sometimes makes it difficult to obtain a good generalization ability for novel samples. To resolve this problem, we provide a scheme that reduces the effective dimensions of data by pruning redundant components for bicategorical classification based on the Bayesian framework. First, the potential of the proposed method is confirmed in ideal situations using the replica method. Unfortunately, performing the scheme exactly is computationally difficult. So, we next develop a tractable approximation algorithm, which turns out to offer nearly optimal performance in ideal cases when the system size is large. Finally, the efficacy of the developed classifier is experimentally examined for a real world problem of colon cancer classification, which shows that the developed method can be practically useful.

  3. Building multiclass classifiers for remote homology detection and fold recognition

    Directory of Open Access Journals (Sweden)

    Karypis George

    2006-10-01

    Full Text Available Abstract Background Protein remote homology detection and fold recognition are central problems in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for solving these problems. These methods are primarily used to solve binary classification problems and they have not been extensively used to solve the more general multiclass remote homology prediction and fold recognition problems. Results We present a comprehensive evaluation of a number of methods for building SVM-based multiclass classification schemes in the context of the SCOP protein classification. These methods include schemes that directly build an SVM-based multiclass model, schemes that employ a second-level learning approach to combine the predictions generated by a set of binary SVM-based classifiers, and schemes that build and combine binary classifiers for various levels of the SCOP hierarchy beyond those defining the target classes. Conclusion Analyzing the performance achieved by the different approaches on four different datasets we show that most of the proposed multiclass SVM-based classification approaches are quite effective in solving the remote homology prediction and fold recognition problems and that the schemes that use predictions from binary models constructed for ancestral categories within the SCOP hierarchy tend to not only lead to lower error rates but also reduce the number of errors in which a superfamily is assigned to an entirely different fold and a fold is predicted as being from a different SCOP class. Our results also show that the limited size of the training data makes it hard to learn complex second-level models, and that models of moderate complexity lead to consistently better results.

  4. Multivariate analysis of quantitative traits can effectively classify rapeseed germplasm

    Directory of Open Access Journals (Sweden)

    Jankulovska Mirjana

    2014-01-01

    Full Text Available In this study, the use of different multivariate approaches to classify rapeseed genotypes based on quantitative traits has been presented. Tree regression analysis, PCA analysis and two-way cluster analysis were applied in order to describe and understand the extent of genetic variability in spring rapeseed genotype by trait data. The traits which highly influenced seed and oil yield in rapeseed were successfully identified by the tree regression analysis. Principal predictor for both response variables was number of pods per plant (NP. NP and 1000 seed weight could help in the selection of high yielding genotypes. High values for both traits and oil content could lead to high oil yielding genotypes. These traits may serve as indirect selection criteria and can lead to improvement of seed and oil yield in rapeseed. Quantitative traits that explained most of the variability in the studied germplasm were classified using principal component analysis. In this data set, five PCs were identified, out of which the first three PCs explained 63% of the total variance. It helped in facilitating the choice of variables based on which the genotypes’ clustering could be performed. The two-way cluster analysis simultaneously clustered genotypes and quantitative traits. The final number of clusters was determined using bootstrapping technique. This approach provided clear overview on the variability of the analyzed genotypes. The genotypes that have similar performance regarding the traits included in this study can be easily detected on the heatmap. Genotypes grouped in the clusters 1 and 8 had high values for seed and oil yield, and relatively short vegetative growth duration period and those in cluster 9, combined moderate to low values for vegetative growth duration and moderate to high seed and oil yield. These genotypes should be further exploited and implemented in the rapeseed breeding program. The combined application of these multivariate methods

  5. Capability of geometric features to classify ships in SAR imagery

    Science.gov (United States)

    Lang, Haitao; Wu, Siwen; Lai, Quan; Ma, Li

    2016-10-01

    Ship classification in synthetic aperture radar (SAR) imagery has become a new hotspot in the remote sensing community for its valuable potential in many maritime applications. Several kinds of ship features, such as geometric features, polarimetric features, and scattering features have been widely applied to ship classification tasks. Compared with polarimetric features and scattering features, which are subject to SAR parameters (e.g., sensor type, incidence angle, polarization, etc.) and environment factors (e.g., sea state, wind, wave, current, etc.), geometric features are relatively independent of SAR and environment factors, and easy to extract stably from SAR imagery. In this paper, the capability of geometric features to classify ships in SAR imagery with various resolutions has been investigated. Firstly, the relationship between the geometric feature extraction accuracy and the SAR imagery resolution is analyzed. It shows that the minimum bounding rectangle (MBR) of a ship can be extracted exactly in terms of absolute precision by the proposed automatic ship-sea segmentation method. Next, six simple but effective geometric features are extracted to build a ship representation for the subsequent classification task. These six geometric features are composed of length (f1), width (f2), area (f3), perimeter (f4), elongatedness (f5) and compactness (f6). Among them, two basic features, length (f1) and width (f2), are directly extracted based on the MBR of the ship, while the other four are derived from those two basic features. The capability of the utilized geometric features to classify ships is validated on two data sets with different image resolutions. The results show that the performance of ship classification solely by geometric features is close to that obtained by the state-of-the-art methods, which are obtained by a combination of multiple kinds of features, including scattering features and geometric features after a complex feature selection process.
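
    A sketch of extracting the six geometric features (f1-f6) from a binary ship mask using OpenCV contours and the minimum bounding rectangle. The elongatedness and compactness formulas below are common definitions and may differ from the paper's, and the mask is a synthetic rotated rectangle standing in for a ship.

```python
# Geometric ship features from a binary mask: length, width, area, perimeter,
# elongatedness, and compactness.
import cv2
import numpy as np

mask = np.zeros((200, 200), dtype=np.uint8)
box = cv2.boxPoints(((100, 100), (120, 30), 25.0))    # centre, (length, width), angle
cv2.fillPoly(mask, [box.astype(np.int32)], 255)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

(_, _), (d1, d2), _ = cv2.minAreaRect(cnt)            # minimum bounding rectangle sides
length, width = max(d1, d2), min(d1, d2)              # f1, f2
area = cv2.contourArea(cnt)                            # f3
perimeter = cv2.arcLength(cnt, closed=True)            # f4
elongatedness = length / width                         # f5 (one common definition)
compactness = (perimeter ** 2) / area                  # f6 (one common definition)

print(f"length={length:.1f}, width={width:.1f}, area={area:.0f}, "
      f"perimeter={perimeter:.1f}, elongatedness={elongatedness:.2f}, "
      f"compactness={compactness:.2f}")
```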

  6. Boosting-Based On-Road Obstacle Sensing Using Discriminative Weak Classifiers

    Science.gov (United States)

    Adhikari, Shyam Prasad; Yoo, Hyeon-Joong; Kim, Hyongsuk

    2011-01-01

    This paper proposes an extension of the weak classifiers derived from the Haar-like features for their use in the Viola-Jones object detection system. These weak classifiers differ from the traditional single-threshold ones, in that no specific threshold is needed and these classifiers give a more general solution to the non-trivial task of finding thresholds for the Haar-like features. The proposed quadratic discriminant analysis-based extension prominently improves the ability of the weak classifiers to discriminate objects and non-objects. The proposed weak classifiers were evaluated by boosting a single-stage classifier to detect the rear of a car. The experiments demonstrate that the object detector based on the proposed weak classifiers yields higher classification performance with fewer weak classifiers than the detector built with traditional single-threshold weak classifiers. PMID:22163852
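
    A sketch contrasting a single-threshold weak classifier with a quadratic-discriminant weak classifier on a one-dimensional Haar-like feature value, using synthetic feature distributions; the point is that the QDA rule compares two Gaussian log-likelihoods and can therefore carve out an interval rather than a single threshold.

```python
# Single-threshold stump vs. quadratic-discriminant rule on one feature value.
import numpy as np

rng = np.random.default_rng(0)
f_obj = rng.normal(loc=0.0, scale=0.5, size=500)       # feature on object windows
f_bg = np.concatenate([rng.normal(-2.0, 0.7, 250),      # feature on background windows
                       rng.normal(2.0, 0.7, 250)])

def gauss_loglik(x, mu, var):
    return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

mu_o, var_o = f_obj.mean(), f_obj.var()
mu_b, var_b = f_bg.mean(), f_bg.var()

def qda_weak(x):
    # Decide "object" where the object-class Gaussian is more likely.
    return gauss_loglik(x, mu_o, var_o) > gauss_loglik(x, mu_b, var_b)

def threshold_weak(x, thr=0.0):
    return x > thr                                      # a single-threshold stump

x = np.concatenate([f_obj, f_bg])
y = np.concatenate([np.ones_like(f_obj), np.zeros_like(f_bg)])
print("QDA weak classifier accuracy:   ", np.mean(qda_weak(x) == y))
print("single-threshold weak accuracy: ", np.mean(threshold_weak(x) == y))
```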

  7. Effective Network Intrusion Detection using Classifiers Decision Trees and Decision rules

    Directory of Open Access Journals (Sweden)

    G.MeeraGandhi

    2010-11-01

    Full Text Available In the era of the information society, computer networks and their related applications are the emerging technologies. Network Intrusion Detection aims at distinguishing the behavior of the network. As network attacks have increased in huge numbers over the past few years, the Intrusion Detection System (IDS) is increasingly becoming a critical component to secure the network. Owing to large volumes of security audit data in a network, in addition to the intricate and dynamic properties of intrusion behaviors, optimizing the performance of IDS becomes an important open problem which receives more and more attention from the research community. The field of machine learning attempts to characterize how such changes can occur by designing, implementing, running, and analyzing algorithms that can be run on computers. The discipline draws on these ideas with the goal of understanding the computational character of learning. Learning always occurs in the context of some performance task, and a learning method should always be coupled with a performance element that uses the knowledge acquired during learning. In this research, machine learning is investigated as a technique for making this selection, using past network data and their outcomes as training data. In this paper, we evaluate the performance of a set of classifier algorithms of rules (JRip, Decision Table, PART, and OneR) and trees (J48, RandomForest, REPTree, NBTree). Based on the evaluation results, the best algorithm for each attack category is chosen and two classifier algorithm selection models are proposed. The empirical simulation results show noticeable performance improvements. The classification models were trained using the data collected from Knowledge Discovery Databases (KDD) for Intrusion Detection. The trained models were then used for predicting the risk of the attacks in a web server environment or by any network administrator or security expert. The

  8. 32 CFR 2001.50 - Telecommunications automated information systems and network security.

    Science.gov (United States)

    2010-07-01

    ... and network security. 2001.50 Section 2001.50 National Defense Other Regulations Relating to National Defense INFORMATION SECURITY OVERSIGHT OFFICE, NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED... network security. Each agency head shall ensure that classified information electronically...

  9. Classifying regional development in Iran (Application of Composite Index Approach

    Directory of Open Access Journals (Sweden)

    A. Sharifzadeh

    2012-01-01

    Full Text Available Extended abstract. 1- Introduction. The spatial economy of Iran, like that of so many other developing countries, is characterized by an uneven spatial pattern of economic activities. The problem of spatial inequality emerged when efficiency-oriented sectoral policies came into conflict with the spatial dimension of development (Atash, 1988). Due to this conflict, extremely imbalanced development in Iran was created. Moreover, knowledge of the uneven spatial distribution of economic activities in Iran is incomplete. So, there is an urgent need for more efficient and effective design, targeting and implementation of interventions to manage spatial imbalances in development. Hence, the identification of development patterns at spatial scale and the factors generating them can help improve planning if development programs are focused on removing the constraints adversely affecting development in potentially good areas. There is a need for research that would describe and explain the problem of spatial development patterns as well as propose possible strategies which can be used to develop the country and reduce the spatial imbalances. The main objective of this research was to determine the level of spatial economic development in order to identify the spatial pattern of development and explain the determinants of such imbalance in Iran, based on the methodology of a composite index of development. Then, Iran's provinces were ranked and classified according to the calculated composite index. To collect the required data, the 2006 census and statistical yearbooks from various years were used. 2- Theoretical bases. Theories of regional inequality as well as empirical evidence regarding actual trends at the national or international level have been discussed and debated in the economic literature for over three decades. Early debates concerning the impact of market mechanisms on regional inequality in the West (Myrdal, 1957) have become popular again in the 1990s. There is a conflict on probable outcomes
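
    A minimal sketch of the composite-index approach named in the title: min-max normalize several development indicators, combine them with weights, and rank the regions. The province names, indicators, and weights below are invented for illustration only.

```python
# Composite development index: normalize indicators, weight, sum, and rank.
import numpy as np

provinces = ["Province A", "Province B", "Province C", "Province D"]
# rows: provinces; columns: literacy rate, physicians per 10k people, employment rate
indicators = np.array([[88.0, 12.0, 91.0],
                       [72.0,  6.0, 85.0],
                       [95.0, 18.0, 94.0],
                       [80.0,  9.0, 88.0]])
weights = np.array([0.4, 0.3, 0.3])             # illustrative weights, summing to 1

# Min-max normalization puts every indicator on a common 0-1 scale.
mins, maxs = indicators.min(axis=0), indicators.max(axis=0)
normalized = (indicators - mins) / (maxs - mins)

composite = normalized @ weights
for rank, i in enumerate(np.argsort(-composite), start=1):
    print(f"{rank}. {provinces[i]}: composite index = {composite[i]:.3f}")
```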

  10. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    Science.gov (United States)

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active Computer Science research field aiming to develop systems that mimic human intelligence, and it is helpful in many human activities, including Medicine. In this review we present some examples of the use of AI techniques, in particular automatic classifiers such as the Artificial Neural Network (ANN), Support Vector Machine (SVM), Classification Tree (ClT) and ensemble methods like Random Forest (RF), able to analyze findings obtained from positron emission tomography (PET) or single-photon emission tomography (SPECT) scans of patients with Neurodegenerative Diseases, in particular Alzheimer's Disease. We also focused our attention on techniques applied in order to preprocess data and reduce their dimensionality via feature selection or projection into a more representative domain (Principal Component Analysis - PCA - or Partial Least Squares - PLS - are examples of such methods); this is a crucial step while dealing with medical data, since it is necessary to compress patient information and retain only the most useful in order to discriminate subjects into normal and pathological classes. The main literature papers on the application of these techniques to classifying patients with neurodegenerative disease using data extracted from molecular imaging modalities are reported, showing that the increasing development of computer aided diagnosis systems shows great promise for contributing to the diagnostic process.

  11. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review.

  12. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty dealing with factors such as occlusion, appearance changes and pose variation during processing. The reasons are that, firstly, even though the naive Bayes classifier is fast to train, it is not robust to noise, and secondly, the parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. Firstly, we combine ideas from adaptive compressive sensing theory regarding weighted random projection to exploit both local and discriminative information of the object. Secondly, the online random forest classifier used for online tracking is shown to be more robust to noise and to have high computational efficiency. The experimental results show that the proposed algorithm performs better than the fast compressive tracking algorithm under occlusion, appearance changes, and pose variation.

  13. Use of artificial neural networks and geographic objects for classifying remote sensing imagery

    Directory of Open Access Journals (Sweden)

    Pedro Resende Silva

    2014-06-01

    Full Text Available The aim of this study was to develop a methodology for mapping land use and land cover in the northern region of Minas Gerais state, where, in addition to agricultural land, the landscape is dominated by native cerrado, deciduous forests, and extensive areas of vereda. Using forest inventory data, as well as RapidEye, Landsat TM and MODIS imagery, three specific objectives were defined: (1) to test the use of image segmentation techniques for an object-based classification encompassing spectral, spatial and temporal information, (2) to test the use of high spatial resolution RapidEye imagery combined with Landsat TM time series imagery for capturing the effects of seasonality, and (3) to classify the data using Artificial Neural Networks. Using MODIS time series and forest inventory data, time signatures were extracted from the dominant vegetation formations, enabling selection of the best periods of the year to be represented in the classification process. Objects created with the segmentation of RapidEye images, along with the Landsat TM time series images, were classified by ten different Multilayer Perceptron network architectures. Results showed that the methodology in question meets both the purposes of this study and the characteristics of the local plant life. With excellent accuracy values for native classes, the study showed the importance of a well-structured database for classification and the importance of suitable image segmentation to meet specific purposes.

  14. Texture feature selection with relevance learning to classify interstitial lung disease patterns

    Science.gov (United States)

    Huber, Markus B.; Bunte, Kerstin; Nagarajan, Mahesh B.; Biehl, Michael; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    The Generalized Matrix Learning Vector Quantization (GMLVQ) is used to estimate the relevance of texture features in their ability to classify interstitial lung disease patterns in high-resolution computed tomography (HRCT) images. After a stochastic gradient descent, the GMLVQ algorithm provides a discriminative distance measure of relevance factors, which can account for pairwise correlations between different texture features and their importance for the classification of healthy and diseased patterns. Texture features were extracted from gray-level co-occurrence matrices (GLCMs), and were ranked and selected according to their relevance obtained by GMLVQ and, for comparison, according to a mutual information (MI) criterion. A k-nearest-neighbor (kNN) classifier and a Support Vector Machine with a radial basis function kernel (SVMrbf) were optimized in a 10-fold cross-validation for different texture feature sets. In our experiment with real-world data, the feature sets selected by the GMLVQ approach had a significantly better classification performance compared with feature sets selected by MI ranking.
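    A minimal sketch of the baseline strategy mentioned above (mutual-information ranking of texture features followed by a kNN classifier); GMLVQ itself is not shown, and all data and parameters below are synthetic assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): rank texture features by a
# mutual-information criterion, keep the top-k, and evaluate a kNN classifier.
# GLCM feature extraction is omitted; features and labels are synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 48))            # 200 ROIs, 48 GLCM-style texture features (synthetic)
y = rng.integers(0, 2, size=200)          # 0 = healthy, 1 = diseased pattern (synthetic)

mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:10]           # keep the 10 most informative features
knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X[:, top], y, cv=10).mean())
```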

  15. Nearest Neighbor Classifier Method for Making Loan Decision in Commercial Bank

    Directory of Open Access Journals (Sweden)

    Md.Mahbubur Rahman

    2014-07-01

    Full Text Available Banks play a central role in economic development worldwide. The failure and success of the banking sector depend upon the ability to properly evaluate credit risk. Credit risk evaluation of any potential credit application has remained a challenge for banks all over the world to this day. Artificial neural networks play a tremendous role in the field of finance for making critical, enigmatic and sensitive decisions that are sometimes impossible for human beings. Like other critical decisions in finance, the decision to sanction a loan to a customer is also a difficult problem. The objective of this paper is to design a Neural Network that can help loan officers make the correct decision about providing a loan to the proper client. This paper checks the applicability of a new integrated model with a nearest neighbor classifier on a sample of data taken from a Bangladeshi bank named Brac Bank. The Neural Network considers several factors about the bank's client and informs the loan officer about the client's eligibility for a loan. Several effective neural network methods can be used for making this bank decision, such as back propagation learning, regression models, gradient descent algorithms, and nearest neighbor classifiers.
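    The following is a minimal, hypothetical sketch of a nearest neighbor loan-decision classifier in the spirit of this record; the features, data and neighborhood size are invented for illustration and are not the Brac Bank data.

```python
# Minimal sketch (hypothetical features and data): a nearest-neighbor classifier
# that labels a loan application as approve/reject from a few client attributes.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# columns: income, existing debt, years with bank, age  (all synthetic)
X_train = rng.normal(size=(300, 4))
y_train = rng.integers(0, 2, size=300)    # 1 = loan repaid, 0 = default (synthetic)

scaler = StandardScaler().fit(X_train)
knn = KNeighborsClassifier(n_neighbors=7).fit(scaler.transform(X_train), y_train)

new_client = rng.normal(size=(1, 4))
print("recommend approval" if knn.predict(scaler.transform(new_client))[0] else "recommend rejection")
```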

  16. A Multiple-Classifier Framework for Parkinson’s Disease Detection Based on Various Vocal Tests

    Directory of Open Access Journals (Sweden)

    Mahnaz Behroozi

    2016-01-01

    Full Text Available Recently, speech pattern analysis applications in building predictive telediagnosis and telemonitoring models for diagnosing Parkinson's disease (PD) have attracted many researchers. For this purpose, several datasets of voice samples exist; the UCI dataset named "Parkinson Speech Dataset with Multiple Types of Sound Recordings" has a variety of vocal tests, which include sustained vowels, words, numbers, and short sentences compiled from a set of speaking exercises for healthy people and people with Parkinson's disease (PWP). Some researchers claim that summarizing the multiple recordings of each subject with central tendency and dispersion metrics is an efficient strategy for building a predictive model for PD. However, they have overlooked the point that a PD patient may show more difficulty in pronouncing certain terms than others. Thus, summarizing the vocal tests may lead to a loss of valuable information. In order to address this issue, the classification setting must take what has been said into account. As a solution, we introduce a new framework that applies an independent classifier to each vocal test. The final classification result is a majority vote over all of the classifiers. When combined with filter-based feature selection, our methodology enhances classification accuracy by up to 15%.
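    A minimal sketch of the per-test voting idea described above, under assumed data shapes; none of the feature sets, recordings or parameters below come from the UCI dataset itself.

```python
# Minimal sketch (assumed data layout): train one classifier per vocal test and
# combine their predictions by majority vote, as in the framework described above.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_subjects, n_tests, n_features = 60, 5, 20
# X[t] holds the features of vocal test t for every subject (synthetic stand-in)
X = rng.normal(size=(n_tests, n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)          # 1 = PWP, 0 = healthy (synthetic)

models = [SVC().fit(X[t], y) for t in range(n_tests)]    # one classifier per vocal test

def predict_majority(samples_per_test):
    """samples_per_test: array of shape (n_tests, n_features) for one subject."""
    votes = [m.predict(s.reshape(1, -1))[0] for m, s in zip(models, samples_per_test)]
    return int(np.sum(votes) > len(votes) / 2)

print(predict_majority(X[:, 0, :]))     # prediction for subject 0
```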

  17. Machine-learning approaches for classifying haplogroup from Y chromosome STR data.

    Directory of Open Access Journals (Sweden)

    Joseph Schlecht

    2008-06-01

    Full Text Available Genetic variation on the non-recombining portion of the Y chromosome contains information about the ancestry of male lineages. Because of their low rate of mutation, single nucleotide polymorphisms (SNPs) are the markers of choice for unambiguously classifying Y chromosomes into related sets of lineages known as haplogroups, which tend to show geographic structure in many parts of the world. However, performing the large number of SNP genotyping tests needed to properly infer haplogroup status is expensive and time consuming. A novel alternative for assigning a sampled Y chromosome to a haplogroup is presented here. We show that by applying modern machine-learning algorithms we can infer with high accuracy the proper Y chromosome haplogroup of a sample by scoring a relatively small number of Y-linked short tandem repeats (STRs). Learning is based on a diverse ground-truth data set comprising pairs of SNP test results (haplogroups) and corresponding STR scores. We apply several independent machine-learning methods in tandem to learn formal classification functions. The result is an integrated high-throughput analysis system that automatically classifies large numbers of samples into haplogroups in a cost-effective and accurate manner.

  18. Prediction of small molecule binding property of protein domains with Bayesian classifiers based on Markov chains.

    Science.gov (United States)

    Bulashevska, Alla; Stein, Martin; Jackson, David; Eils, Roland

    2009-12-01

    Accurate computational methods that can help to predict the biological function of a protein from its sequence are of great interest to research biologists and pharmaceutical companies. One approach to inferring the function of proteins is to predict the interactions between proteins and other molecules. In this work, we propose a machine learning method that uses the primary sequence of a domain to predict its propensity for interaction with small molecules. By curating the Pfam database with respect to the small molecule binding ability of its component domains, we have constructed a dataset of small molecule binding and non-binding domains. This dataset was then used as a training set to learn a Bayesian classifier, which should distinguish members of each class. The domain sequences of both classes are modelled with Markov chains. In a jack-knife test, our classification procedure achieved predictive accuracies of 77.2% and 66.7% for the binding and non-binding classes, respectively. We demonstrate the applicability of our classifier by using it to identify previously unknown small molecule binding domains. Our predictions are available as supplementary material and can provide very useful information to drug discovery specialists. Given the ubiquitous and essential role small molecules play in biological processes, our method is important for identifying pharmaceutically relevant components of complete proteomes. The software is available from the author upon request.
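    A minimal sketch of the modelling idea described above: one first-order Markov chain per class over the amino-acid alphabet, with classification by comparing log-likelihoods (equal class priors assumed). The sequences and pseudocount below are toy assumptions, not the curated Pfam data.

```python
# Minimal sketch: model each class of sequences with a first-order Markov chain
# over the 20 amino acids and classify a new sequence by the higher log-likelihood
# (equal class priors assumed). Pseudocounts avoid zero probabilities.
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(ALPHABET)}

def train_chain(sequences, pseudocount=1.0):
    counts = np.full((20, 20), pseudocount)
    for s in sequences:
        for a, b in zip(s, s[1:]):
            counts[IDX[a], IDX[b]] += 1
    return np.log(counts / counts.sum(axis=1, keepdims=True))   # log transition matrix

def log_likelihood(seq, log_T):
    return sum(log_T[IDX[a], IDX[b]] for a, b in zip(seq, seq[1:]))

binding = ["ACDKLMACD", "KLMACDKLM"]          # toy "binding" training sequences
nonbinding = ["WWYYRRWWYY", "RRWYRWYRWY"]     # toy "non-binding" training sequences
T_bind, T_non = train_chain(binding), train_chain(nonbinding)

query = "ACDKLMKLM"
score = log_likelihood(query, T_bind) - log_likelihood(query, T_non)
print("binding" if score > 0 else "non-binding")
```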

  19. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light, which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test whether students were picking up the information using the HMD. We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.

  20. ARABIC PART OF SPEECH TAGGING USING K-NEAREST NEIGHBOUR AND NAIVE BAYES CLASSIFIERS COMBINATION

    Directory of Open Access Journals (Sweden)

    Rund Mahafdah

    2014-01-01

    Full Text Available Part Of Speech (POS) tagging is an important preprocessing step in many natural language processing applications such as text summarization, question answering and information retrieval systems. It is the process of classifying every word in a given context into its appropriate part of speech. Different POS tagging techniques have been developed and experimented with in the literature. Currently, it is well known that some POS tagging models do not perform well on Quranic Arabic due to the complexity of the Quranic Arabic text. This complexity presents several challenges for POS tagging, such as high ambiguity, data sparseness and a large number of unknown words. With this in mind, the main problem here is to find out how existing and efficient methods perform on Arabic and how the Quranic corpus can be utilized to produce an efficient framework for Arabic POS tagging. We propose a classifier-combination experimental framework for an Arabic POS tagger, by selecting two of the best diverse probabilistic classifiers used in numerous works on non-Arabic languages, namely K-Nearest Neighbour (KNN) and Naive Bayes (NB). Majority voting is used here as the combination strategy to exploit the classifiers' advantages. In addition, an in-depth study has been conducted on a large list of features to identify effective features and investigate their role in enhancing the performance of POS taggers for Quranic Arabic. Hence, this study aims to efficiently integrate different feature sets and tagging algorithms to synthesize a more accurate POS tagging procedure. The data used in this study is the Arabic Quranic Corpus, an annotated linguistic resource consisting of 77,430 words with Arabic grammar, syntax and morphology for each word in the Holy Quran. The highest accuracy achieved in the results is 98.32%, which is a significant enhancement of the state of the art for Arabic Quranic text. The most effective features that yield this accuracy are a

  1. Wreck finding and classifying with a sonar filter

    Science.gov (United States)

    Agehed, Kenneth I.; Padgett, Mary Lou; Becanovic, Vlatko; Bornich, C.; Eide, Age J.; Engman, Per; Globoden, O.; Lindblad, Thomas; Lodgberg, K.; Waldemark, Karina E.

    1999-03-01

    Sonar detection and classification of sunken wrecks and other objects is of keen interest to many. This paper describes the use of neural networks (NN) for locating, classifying and determining the alignment of objects on a lakebed in Sweden. A complex program for data preprocessing and visualization was developed. Part of this program, The Sonar Viewer, facilitates training and testing of the NN using (1) the MATLAB Neural Networks Toolbox for multilayer perceptrons with backpropagation (BP) and (2) the neural network O-Algorithm (OA) developed by Age Eide and Thomas Lindblad. Comparison of the performance of the two neural network approaches indicates that, for this data, BP generalizes better than OA, but use of OA eliminates the need for training on non-target (lake bed) images. The OA algorithm does not work well with the smaller ships. Increasing the resolution to counteract this problem would slow down processing and require interpolation to suggest data values between the actual sonar measurements. In general, good results were obtained for recognizing large wrecks and determining their alignment. The programs developed are a useful tool for further study of sonar signals in many environments. Recent developments in pulse coupled neural network techniques provide an opportunity to extend their use in real-world applications where experimental data is difficult, expensive or time consuming to obtain.

  2. Estimating the crowding level with a neuro-fuzzy classifier

    Science.gov (United States)

    Boninsegna, Massimo; Coianiz, Tarcisio; Trentin, Edmondo

    1997-07-01

    This paper introduces a neuro-fuzzy system for the estimation of the crowding level in a scene. Monitoring the number of people present in a given indoor environment is a requirement in a variety of surveillance applications. In the present work, crowding has to be estimated from the image processing of visual scenes collected via a TV camera. A suitable preprocessing of the images, along with an ad hoc feature extraction process, is discussed. Estimation of the crowding level in the feature space is described in terms of a fuzzy decision rule, which relies on the membership of input patterns to a set of partially overlapping crowding classes, comprehensive of doubt classifications and outliers. A society of neural networks, either multilayer perceptrons or hyper radial basis functions, is trained to model individual class-membership functions. Integration of the neural nets within the fuzzy decision rule results in an overall neuro-fuzzy classifier. Important topics concerning the generalization ability, the robustness, the adaptivity and the performance evaluation of the system are explored. Experiments with real-world data were accomplished, comparing the present approach with statistical pattern recognition techniques, namely linear discriminant analysis and nearest neighbor. Experimental results validate the neuro-fuzzy approach to a large extent. The system is currently working successfully as a part of a monitoring system in the Dinegro underground station in Genoa, Italy.

  3. A system-awareness decision classifier to automated MSN forensics

    Science.gov (United States)

    Chu, Yin-Teshou Tsao; Fan, Kuo-Pao; Cheng, Ya-Wen; Tseng, Po-Kai; Chen, Huan; Cheng, Bo-Chao

    2007-09-01

    Data collection is the most important stage in network forensics; but under resource-constrained situations, a good evidence collection mechanism is required to provide effective event collection in a high network traffic environment. In the literature, a few network forensic tools offer MSN-messenger behavior reconstruction. Moreover, they do not have classification strategies at the collection stage when the system becomes saturated. The emphasis of this paper is to address the shortcomings of the above situations and pose a solution that selects a better classification in order to ensure the integrity of the evidence at the collection stage under high-traffic network environments. A system-awareness decision classifier (SADC) mechanism is proposed in this paper. The MSN-shot sensor is able to adjust the amount of data to be collected according to the current system status and to preserve evidence integrity as much as possible according to the file format and the current system status. Analytical results show that the proposed SADC, implementing selective collection (SC), incurs less cost than full collection (FC) under heavy traffic scenarios. With the deployment of the proposed SADC mechanism, we believe that MSN-shot is able to reconstruct MSN-messenger behaviors perfectly in the context of the upcoming next generation network.

  4. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

    Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach, including color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of each segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
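    A minimal sketch of the central step described above, mapping a per-region feature vector to a saliency score with logistic regression; the data, feature names and dimensions are synthetic assumptions rather than the paper's actual cues.

```python
# Minimal sketch (synthetic data, hypothetical features): logistic regression
# mapping per-region feature vectors (contrast, boundary prior, compactness,
# objectness, ...) to a saliency probability, as in the approach described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 15))       # 1000 segmented regions, 15-D feature vectors
y = rng.integers(0, 2, size=1000)     # 1 = region belongs to a salient object (synthetic)

model = LogisticRegression(max_iter=1000).fit(X, y)
saliency_scores = model.predict_proba(X)[:, 1]   # per-region saliency in [0, 1]
print(saliency_scores[:5])
```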

  5. Not Color Blind Using Multiband Photometry to Classify Supernovae

    CERN Document Server

    Poznanski, D; Maoz, D; Filippenko, A V; Leonard, D C; Matheson, T; Poznanski, Dovi; Gal-Yam, Avishay; Maoz, Dan; Filippenko, Alexei V.; Leonard, Douglas C.; Matheson, Thomas

    2002-01-01

    Large numbers of supernovae (SNe) have been discovered in recent years, and many more will be found in the near future. Once discovered, further study of a SN and its possible use as an astronomical tool (e.g., a distance estimator) require knowledge of the SN type. Current classification methods rely almost solely on the analysis of SN spectra to determine their type. However, spectroscopy may not be possible or practical when SNe are faint, very numerous, or discovered in archival studies. We present a classification method for SNe based on the comparison of their observed colors with synthetic ones, calculated from a large database of multi-epoch optical spectra of nearby events. We discuss the capabilities and limitations of this method. For example, type Ia SNe at redshifts z 100 days) stages. Broad-band photometry through standard Johnson-Cousins UBVRI filters can be useful to classify SNe up to z ~ 0.6. The use of Sloan Digital Sky Survey (SDSS) u'g'r'i'z' filters allows extending our classification m...

  6. A Novel Performance Metric for Building an Optimized Classifier

    Directory of Open Access Journals (Sweden)

    Mohammad Hossin

    2011-01-01

    Full Text Available Problem statement: Typically, the accuracy metric is applied for optimizing heuristic or stochastic classification models. However, the use of the accuracy metric might lead the searching process to sub-optimal solutions due to its less discriminating values, and it is also not robust to changes in class distribution. Approach: To address these detrimental effects, we propose a novel performance metric which combines the beneficial properties of the accuracy metric with the extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naive stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than the one trained with the conventional accuracy metric. Additionally, the t-test analysis also shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric alone for all binary data sets. Conclusion: The experiments have shown that the OARP metric leads stochastic classifiers such as the MCS towards a better training model, which in turn will improve the predictive results of any heuristic or stochastic classification model.
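    The abstract does not reproduce the exact OARP formula, so the sketch below only illustrates the general idea of augmenting accuracy with recall and precision to obtain a more discriminating optimization target; the weights and data are assumptions.

```python
# Illustrative sketch only: the exact OARP definition is not given in the abstract,
# so this hypothetical score simply blends accuracy with recall and precision to
# yield a more discriminating optimization target.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def combined_score(y_true, y_pred, w_acc=0.6, w_rec=0.2, w_prec=0.2):
    """Weighted blend of accuracy, recall and precision (weights are assumptions)."""
    return (w_acc * accuracy_score(y_true, y_pred)
            + w_rec * recall_score(y_true, y_pred)
            + w_prec * precision_score(y_true, y_pred))

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
print(round(combined_score(y_true, y_pred), 3))
```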

  7. Pulmonary nodule detection using a cascaded SVM classifier

    Science.gov (United States)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases become now available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select from an extremely large pool of potential candidates the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved sensitivity of 0.859 at 2.5 FP/volume where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
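    A minimal sketch of a two-stage (cascaded) classification scheme of the kind described above: a permissive first stage keeps nearly all true candidates and a stricter second stage prunes false positives. The data, thresholds and the reuse of identical training data for both stages are simplifying assumptions.

```python
# Minimal sketch of a cascaded classifier: a first SVM with a very permissive
# threshold keeps almost all true candidates, a second SVM then rejects most of
# the surviving false positives. Data and thresholds are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 30))            # candidate descriptors (synthetic)
y = (rng.random(2000) < 0.05).astype(int)  # ~5% true nodules

stage1 = SVC(probability=True).fit(X, y)
stage2 = SVC(probability=True).fit(X, y)   # in practice trained on harder examples

p1 = stage1.predict_proba(X)[:, 1]
survivors = p1 > 0.05                      # loose criterion: reject only obvious negatives
p2 = np.zeros(len(X))
p2[survivors] = stage2.predict_proba(X[survivors])[:, 1]
detections = p2 > 0.5                      # strict criterion on the remaining candidates
print("kept by stage 1:", int(survivors.sum()), "final detections:", int(detections.sum()))
```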

  8. Quantum Hooke's law to classify pulse laser induced ultrafast melting.

    Science.gov (United States)

    Hu, Hao; Ding, Hepeng; Liu, Feng

    2015-02-03

    Ultrafast crystal-to-liquid phase transition induced by femtosecond pulse laser excitation is an interesting material behavior manifesting the complexity of light-matter interaction. There exist two types of such phase transitions: one occurs at a time scale shorter than a picosecond via a nonthermal process mediated by electron-hole plasma formation; the other at a longer time scale via a thermal melting process mediated by electron-phonon interaction. However, it remains unclear what material would undergo which process, and why. Here, by exploiting the property of quantum electronic stress (QES) governed by quantum Hooke's law, we classify the transitions by two distinct classes of materials: the faster nonthermal process can only occur in materials like ice having an anomalous phase diagram characterized by dTm/dP < 0, where Tm is the melting temperature and P is pressure, above a high threshold laser fluence; while the slower thermal process may occur in all materials. Especially, the nonthermal transition is shown to be induced by the QES, acting like a negative internal pressure, which drives the crystal into a "super pressing" state to spontaneously transform into a higher-density liquid phase. Our findings significantly advance fundamental understanding of ultrafast crystal-to-liquid phase transitions, enabling quantitative a priori predictions.

  9. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed

    2010-12-10

    Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As a tractable example we define the measurable proteomic differences between apparently healthy adult males and females. We chose urine as the body fluid of interest and CE-MS, a thoroughly validated platform technology, allowing for routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis can only be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.
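    A minimal sketch of the statistical workflow recommended above (a Wilcoxon rank-sum test per feature followed by adjustment for multiple testing); the intensities, group sizes and FDR level are synthetic assumptions.

```python
# Minimal sketch: Wilcoxon rank-sum (Mann-Whitney) test per feature, followed by
# Benjamini-Hochberg adjustment for multiple testing. Data are synthetic stand-ins
# for CE-MS peptide intensities.
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(6)
males = rng.lognormal(size=(20, 300))      # 20 male samples, 300 features
females = rng.lognormal(size=(20, 300))    # 20 female samples, 300 features

pvals = np.array([mannwhitneyu(males[:, j], females[:, j]).pvalue for j in range(300)])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("candidate biomarkers after FDR adjustment:", int(reject.sum()))
```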

  10. Impacts of classifying New York City students as overweight.

    Science.gov (United States)

    Almond, Douglas; Lee, Ajin; Schwartz, Amy Ellen

    2016-03-29

    US schools increasingly report body mass index (BMI) to students and their parents in annual fitness "report cards." We obtained 3,592,026 BMI reports for New York City public school students for 2007-2012. We focus on female students whose BMI puts them close to their age-specific cutoff for categorization as overweight. Overweight students are notified that their BMI "falls outside a healthy weight" and they should review their BMI with a health care provider. Using a regression discontinuity design, we compare those classified as overweight but near to the overweight cutoff to those whose BMI narrowly earned them a "healthy" BMI grouping. We find that overweight categorization generates small impacts on girls' subsequent BMI and weight. Whereas presumably an intent of BMI report cards was to slow BMI growth among heavier students, BMIs and weights did not decline relative to healthy peers when assessed the following academic year. Our results speak to the discrete categorization as overweight for girls with BMIs near the overweight cutoff, not to the overall effect of BMI reporting in New York City.
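    A minimal sketch of a regression discontinuity comparison like the one described above, with a hypothetical BMI cutoff and synthetic data; it is not the authors' specification, and the true effect is set to zero by construction.

```python
# Minimal sketch of a regression discontinuity design (synthetic data, assumed
# cutoff): compare students just above vs. just below the overweight BMI cutoff,
# with a treatment indicator and separate slopes on either side of the threshold.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
cutoff = 21.0                                   # hypothetical age-specific BMI cutoff
bmi = rng.normal(cutoff, 2.0, size=5000)        # BMI at categorization
overweight_label = (bmi >= cutoff).astype(float)
running = bmi - cutoff
# next-year BMI depends smoothly on current BMI; the true label effect is zero here
bmi_next = bmi + rng.normal(0, 1.0, size=5000)

X = sm.add_constant(np.column_stack([overweight_label, running, overweight_label * running]))
fit = sm.OLS(bmi_next, X).fit()
print("estimated effect of overweight categorization at the cutoff:", round(fit.params[1], 3))
```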

  11. Executed Movement Using EEG Signals through a Naive Bayes Classifier

    Directory of Open Access Journals (Sweden)

    Juliano Machado

    2014-11-01

    Full Text Available Recent years have witnessed a rapid development of brain-computer interface (BCI) technology. An independent BCI is a communication system for controlling a device by human intention, e.g., a computer, a wheelchair or a neuroprosthesis, not depending on the brain's normal output pathways of peripheral nerves and muscles, but on detectable signals that represent responsive or intentional brain activities. This paper presents a comparative study of the usage of the linear discriminant analysis (LDA) and naive Bayes (NB) classifiers for describing both right- and left-hand movement through electroencephalographic (EEG) signal acquisition. For the analysis, we considered the following input features: the energy of the segments of a band pass-filtered signal with the frequency band in the sensorimotor rhythms and the components of the spectral energy obtained through the Welch method. We also used the common spatial pattern (CSP) filter, so as to increase the discriminatory activity among movement classes. By using the database generated by this experiment, we obtained hit rates up to 70%. The results are compatible with previous studies.

  12. Linearly and Quadratically Separable Classifiers Using Adaptive Approach

    Institute of Scientific and Technical Information of China (English)

    Mohamed Abdel-Kawy Mohamed Ali Soliman; Rasha M. Abo-Bakr

    2011-01-01

    This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in Rn. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen and a hyperplane is constructed such that it separates the chosen n points at a margin e and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is based on mapping the physical space to another larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.

  13. Classifying EEG Signals during Stereoscopic Visualization to Estimate Visual Comfort.

    Science.gov (United States)

    Frey, Jérémy; Appriou, Aurélien; Lotte, Fabien; Hachet, Martin

    2016-01-01

    With stereoscopic displays a sensation of depth that is too strong could impede visual comfort and may result in fatigue or pain. We used Electroencephalography (EEG) to develop a novel brain-computer interface that monitors users' states in order to reduce visual strain. We present the first system that discriminates comfortable conditions from uncomfortable ones during stereoscopic vision using EEG. In particular, we show that either changes in event-related potentials' (ERPs) amplitudes or changes in EEG oscillations power following stereoscopic objects presentation can be used to estimate visual comfort. Our system reacts within 1 s to depth variations, achieving 63% accuracy on average (up to 76%) and 74% on average when 7 consecutive variations are measured (up to 93%). Performances are stable (≈62.5%) when a simplified signal processing is used to simulate online analyses or when the number of EEG channels is lessened. This study could lead to adaptive systems that automatically suit stereoscopic displays to users and viewing conditions. For example, it could be possible to match the stereoscopic effect with users' state by modifying the overlap of left and right images according to the classifier output.

  14. Using color histograms and SPA-LDA to classify bacteria.

    Science.gov (United States)

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best result in classification was obtained by using RGB and SPA-LDA, which reached 94 and 100 % of classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical proofs. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification.
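    A minimal sketch of the general recipe described above (per-channel RGB histograms fed to a linear discriminant classifier); plain LDA is used in place of SPA-LDA, and the images and labels are synthetic stand-ins for the webcam data.

```python
# Minimal sketch (synthetic images, plain LDA instead of the paper's SPA-LDA):
# build per-channel RGB histograms from each image and feed them to a linear
# discriminant classifier, the general recipe described above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rgb_histogram(img, bins=16):
    """Concatenate per-channel histograms of an HxWx3 uint8 image into one feature vector."""
    return np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
                           for c in range(3)]).astype(float)

rng = np.random.default_rng(8)
images = rng.integers(0, 256, size=(100, 64, 64, 3), dtype=np.uint8)  # stand-in webcam images
labels = rng.integers(0, 5, size=100)                                 # 5 bacterial species

X = np.array([rgb_histogram(im) for im in images])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy (synthetic data):", clf.score(X, labels))
```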

  15. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.

    2012-01-01

    For a 2-category 2C we associate a notion of a principal 2C-bundle. In the case of the 2-category of 2-vector spaces in the sense of M.M. Kapranov and V.A. Voevodsky this gives the 2-vector bundles of N.A. Baas, B.I. Dundas and J. Rognes. Our main result says that the geometric nerve of a good 2......-category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based...... on the main theorem shows that the principal 2-bundles associated to the 2-category of 2-vector spaces in the sense of J.C. Baez and A.S. Crans split, up to concordance, as two copies of ordinary vector bundles. When 2C is a cobordism type 2-category we get a new notion of cobordism-bundles which turns out

  16. Strategies for Transporting Data Between Classified and Unclassified Networks

    Science.gov (United States)

    2016-03-01

    The full abstract is not available; the indexed text consists of fragmentary excerpts from the report. The recoverable content cites work on the uses and limitations of unidirectional network bridges in a secure electronic commerce environment (INC 2004 Conference), notes that among guards the trusted information system Radiant Mercury appears promising while further research is required before selection, and lists example products such as a commercial off-the-shelf (COTS) tap (Net Optics) and government off-the-shelf (GOTS) guards (Radiant Mercury, Information Support Server Environment Guard).

  17. MISR Level 2 FIRSTLOOK TOA/Cloud Classifier parameters V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Cloud Classifiers, and Support Vector Machine...

  18. The EB Factory Project I. A Fast, Neural Net Based, General Purpose Light Curve Classifier Optimized for Eclipsing Binaries

    CERN Document Server

    Paegert, M; Burger, D M

    2014-01-01

    We describe a new neural-net based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as LSST. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information, results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes,...

  19. Using the joint transform correlator as the feature extractor for the nearest neighbor classifier

    Science.gov (United States)

    Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.

    1999-01-01

    Financial transactions using credit cards have gained popularity, but the growing number of counterfeits and frauds may defeat the purpose of the cards. The search for a superior method to curb these criminal acts has become urgent, especially in the information age. Currently, neural-network-based pattern recognition techniques are employed for security verification. However, this has been a time consuming experience, as some techniques require a long period of training time. Here, a faster and more efficient method is proposed to perform security verification, which verifies fingerprint images using the joint transform correlator as a feature extractor for the nearest neighbor classifier. A uniqueness comparison scheme is proposed to improve the accuracy of the system verification. The performance of the system under noise corruption, variable contrast, and rotation of the input image is verified with a computer simulation.

  20. Fine Needle Aspiration Cytology Evaluation for Classifying Breast Cancer Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Nor A.M. Isa

    2007-01-01

    Full Text Available Thirteen cytological features of fine needle aspiration images (i.e., cellularity, background information, cohesiveness, significant stromal component, clump thickness, nuclear membrane, bare nuclei, normal nuclei, mitosis, nucleus stain, uniformity of cell, fragility and number of cells in cluster) are evaluated for their suitability as input data for an artificial neural network used to classify breast pre-cancerous cases into four stages, namely malignant, fibroadenoma, fibrocystic disease, and other benign diseases. A total of 1300 reported breast pre-cancerous cases, collected from Penang General Hospital and Hospital Universiti Sains Malaysia, Kelantan, Malaysia, was used to train and test the artificial neural networks. The diagnosis system, which was developed using the Hybrid Multilayered Perceptron and trained using the Modified Recursive Prediction Error algorithm, produced excellent diagnosis performance with 100% accuracy, 100% sensitivity and 100% specificity.

  1. A Statistical Approach of Texton Based Texture Classification Using LPboosting Classifier

    Directory of Open Access Journals (Sweden)

    C. Vivek

    2014-05-01

    Full Text Available This study addresses accurate texture classification; image texture analysis has broad potential in real-world applications. In this study, the texton co-occurrence matrix is applied to the Brodatz database images to derive template texton grid images, which are then decomposed with the discrete shearlet transform. Entropy-based parameters of the redundant and interpolated sub-bands are used to aggregate adjacent regions based on geometric properties, and classification is then carried out by comparing the similarity between the estimated distributions of all detail sub-bands using LPBoost with various weak classifier configurations. We show that the resulting texture features retain most of the discriminative information. Our hybrid classification method significantly outperforms existing texture descriptors and delivers state-of-the-art classification accuracy in real-world imaging applications.

  2. 48 CFR 52.227-10 - Filing of Patent Applications-Classified Subject Matter.

    Science.gov (United States)

    2010-10-01

    ... Applications-Classified Subject Matter. 52.227-10 Section 52.227-10 Federal Acquisition Regulations System... Text of Provisions and Clauses 52.227-10 Filing of Patent Applications—Classified Subject Matter. As prescribed at 27.203-2, insert the following clause: Filing of Patent Applications—Classified Subject...

  3. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available The classification of cancer genes is a challenging job in biomedical data engineering. To improve gene-selection classification for cancer diseases, various classifiers have been used, but the resulting classifications are not well validated. Here, an ensemble classifier combining a neural network classifier with a random forest is used for cancer gene classification. The random forest is an ensemble technique in which a number of classifiers vote through their leaf-node class assignments. In this paper we combine a neural network with a random forest ensemble classifier to classify cancer gene selections for diagnostic analysis of cancer diseases. The proposed method differs from most ensemble methods, which follow an input-output neural network paradigm in which the members of the ensemble are selected from a set of neural network classifiers; here, the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of our proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest classification alone.

  4. Learning Bayesian network classifiers for credit scoring using Markov Chain Monte Carlo search

    NARCIS (Netherlands)

    Baesens, B.; Egmont-Petersen, M.; Castelo, R.; Vanthienen, J.

    2002-01-01

    In this paper, we will evaluate the power and usefulness of Bayesian network classifiers for credit scoring. Various types of Bayesian network classifiers will be evaluated and contrasted including unrestricted Bayesian network classifiers learnt using Markov Chain Monte Carlo (MCMC) search. The exp

  5. Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier

    Directory of Open Access Journals (Sweden)

    Nikjoo Mohammad S

    2011-11-01

    Full Text Available Abstract Background Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration, although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations. Methods In this paper, a large corpus of dual-axis accelerometric signals was collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification. Results With selected time, frequency and information theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that, in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static

  6. CLASSIFYING BENIGN AND MALIGNANT MASSES USING STATISTICAL MEASURES

    Directory of Open Access Journals (Sweden)

    B. Surendiran

    2011-11-01

    Full Text Available Breast cancer is the primary and most common disease found in women, causing the second highest rate of death after lung cancer. The digital mammogram is the X-ray of the breast captured for analysis, interpretation and diagnosis. According to the Breast Imaging Reporting and Data System (BIRADS), benign and malignant masses can be differentiated by their shape, size and density, which is how radiologists visualize mammograms. According to BIRADS mass shape characteristics, benign masses tend to be round, oval or lobular in shape, and malignant masses are lobular or irregular in shape. Measuring regular and irregular shapes mathematically is found to be a difficult task, since there is no single measure to differentiate various shapes. In this paper, the malignant and benign masses present in mammograms are classified using Hue, Saturation and Value (HSV) weight-function-based statistical measures. The weight function is robust against noise and captures the degree of gray content of the pixel. The statistical measures use gray weight values instead of gray pixel values to effectively discriminate masses. The 233 mammograms from the Digital Database for Screening Mammography (DDSM) benchmark dataset have been used. The PASW data mining modeler has been used for constructing a Neural Network to identify the importance of the statistical measures. Based on the statistical measures found to be important, a C5.0 tree has been constructed with a 60-40 data split. The experimental results are found to be encouraging. Also, the results agree with the standard specified by the American College of Radiology BIRADS system.

  7. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

    Science.gov (United States)

    Matoug, S.; Abdel-Dayem, A.; Passi, K.; Gross, W.; Alqarni, M.

    2012-02-01

    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

  8. Determining the prediction limits of models and classifiers with applications for disruption prediction in JET

    Science.gov (United States)

    Murari, A.; Peluso, E.; Vega, J.; Gelfusa, M.; Lungaroni, M.; Gaudio, P.; Martínez, F. J.; Contributors, JET

    2017-01-01

    Understanding the many aspects of tokamak physics requires the development of quite sophisticated models. Moreover, in the operation of the devices, prediction of the future evolution of discharges can be of crucial importance, particularly in the case of the prediction of disruptions, which can cause serious damage to various parts of the machine. The determination of the limits of predictability is therefore an important issue for modelling, classifying and forecasting. In all these cases, once a certain level of performance has been reached, the question typically arises as to whether all the information available in the data has been exploited, or whether there are still margins for improvement of the tools being developed. In this paper, an information-theoretic approach is proposed to address this issue. The excellent properties of the developed indicator, called the prediction factor (PF), have been proved with the help of a series of numerical tests. Its application to some typical behaviour relating to macroscopic instabilities in tokamaks has shown very positive results. The prediction factor has also been used to assess the performance of disruption predictors running in real time in the JET system, including the one systematically deployed in the feedback loop for mitigation purposes. The main conclusion is that the most advanced predictors basically exploit all the information contained in the locked mode signal on which they are based. Therefore, qualitative improvements in disruption prediction performance in JET would need the processing of additional signals, probably profiles.

  9. Classified and clustered data constellation: An efficient approach of 3D urban data management

    Science.gov (United States)

    Azri, Suhaibah; Ujang, Uznir; Castro, Francesc Antón; Rahman, Alias Abdul; Mioc, Darka

    2016-03-01

    The growth of urban areas has resulted in massive urban datasets and difficulties handling and managing issues related to urban areas. Huge and massive datasets can degrade data retrieval and information analysis performance. In addition, the urban environment is very difficult to manage because it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban data management. CCDC operates on the basis of two filters: classification and clustering. To boost up the performance of information retrieval, CCDC offers a minimal percentage of overlap among nodes and coverage area to avoid repetitive data entry and multipath query. The results of tests conducted on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons conducted between CCDC and existing clustering and data constellation techniques, from the aspect of preservation of minimal overlap and coverage, confirm that the proposed structure is capable of preserving the minimum overlap and coverage area among nodes. Our overall results indicate that CCDC is efficient in handling and managing urban data, especially urban mixed-use development applications.

  10. Genetic Algorithm with SRM SVM Classifier for Face Verification

    OpenAIRE

    Safiya K.M; Bhuvana, S.; P.TamijeSelvy; R. Radhakrishnan

    2012-01-01

    Face verification is an important problem. The problem of designing and evaluating discriminative approaches without explicit age modelling is addressed. The gradient orientation is used, discarding magnitude information. Using hierarchical information, this representation can be further improved, which results in the use of the gradient orientation pyramid. When combined with a structural risk minimization support vector machine with a genetic algorithm, the gradient orientation pyramid demonstrates excellent per...

  11. Mindfulness for Students Classified with Emotional/Behavioral Disorder

    Science.gov (United States)

    Malow, Micheline S.; Austin, Vance L.

    2016-01-01

    A six-week investigation utilizing a standard mindfulness for adolescents curriculum and norm-based standardized resiliency scale was implemented in a self-contained school for students with Emotional/Behavioral Disorders (E/BD). Informal integration of mindfulness activities into a classroom setting was examined for ecological appropriateness and…

  12. The Relationship Between Diversity and Accuracy in Multiple Classifier Systems

    Science.gov (United States)

    2012-03-22


  13. Classifying Software to Better Support Social Work Practice.

    Science.gov (United States)

    Nurius, Paula; Cnaan, Ram A.

    1991-01-01

    Notes that, as social work gradually enters electronic information era, interface between social work practice and computer world is often accompanied by disharmony. Presents current classification and terminology of software, identifies drawbacks, and proposes new classification approach based on needs of social workers. Discusses how combination…

  14. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    Science.gov (United States)

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  15. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Full Text Available Abstract Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
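    The optimistic bias described above can be reproduced in a few lines. The following minimal sketch (synthetic data, scikit-learn; the classifier variants are hypothetical and not those of the study) permutes the class labels so that the true error rate is 50%, then reports the minimum cross-validated error over a small grid of variants:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=60, n_features=500, n_informative=2,
                           n_redundant=0, random_state=0)
y = rng.permutation(y)  # break any label-feature association: true error is 50%

# A small grid of classifier "variants", mimicking a trial-and-error strategy
variants = ([SVC(C=c) for c in (0.1, 1.0, 10.0, 100.0)] +
            [KNeighborsClassifier(k) for k in (1, 3, 5, 7, 9)])

errors = [1.0 - cross_val_score(clf, X, y, cv=5).mean() for clf in variants]
print(f"mean CV error: {np.mean(errors):.2f}")
print(f"optimistically selected minimum: {min(errors):.2f}")
```

    Reporting only the minimum systematically understates the true error of an uninformative predictor, which is exactly the bias quantified in the study.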

  16. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.

  17. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.

    Directory of Open Access Journals (Sweden)

    Sebastian Bach

    Full Text Available Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods solve a plethora of tasks very successfully, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows one to visualize the contributions of single pixels to predictions for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.
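    A minimal sketch of the pixel-wise redistribution idea, assuming a toy two-layer ReLU network with random weights and the z+ relevance rule (a common LRP variant); this is an illustration only, not the implementation evaluated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def lrp_zplus(W, a, R_upper, eps=1e-9):
    """Redistribute relevance R_upper from an upper layer back to its input
    activations a through weights W (upper = W @ a), using the z+ rule."""
    Wp = np.maximum(W, 0.0)          # keep only positive weights
    z = Wp @ a + eps                 # positive pre-activations
    s = R_upper / z                  # relevance per unit of contribution
    return a * (Wp.T @ s)            # relevance assigned to each input unit

# Toy two-layer network on a flattened 8x8 "image"
x = rng.random(64)
W1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = rng.normal(size=(10, 32)), np.zeros(10)

h = relu(W1 @ x + b1)
logits = W2 @ h + b2
c = int(np.argmax(logits))

# Start relevance at the predicted class, propagate to the hidden layer, then to pixels
R_out = np.zeros(10)
R_out[c] = logits[c]
R_h = lrp_zplus(W2, h, R_out)
R_pixels = lrp_zplus(W1, x, R_h).reshape(8, 8)   # heatmap over the input
print(R_pixels.round(3))
```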

  18. Combination of designed immune based classifiers for ERP assessment in a P300-based GKT

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Moradi

    2012-08-01

    Full Text Available Constructing a precise classifier is an important issue in pattern recognition tasks. Combining the decisions of several competing classifiers to achieve improved classification accuracy has attracted interest in many research areas. In this study, the Artificial Immune System (AIS), an effective artificial intelligence technique, was used for designing several efficient classifiers. The combination of multiple immune-based classifiers was tested on ERP assessment in a P300-based GKT (Guilty Knowledge Test). Experimental results showed that the proposed classifier, named Compact Artificial Immune System (CAIS), was a successful classification method and could be competitive with other classifiers such as K-nearest neighbour (KNN), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). It was also observed in the experiments that using decision fusion techniques for multiple classifier combination leads to better recognition results. The best recognition rate achieved by CAIS was 80.90%, an improvement compared to the other classification methods applied in our study.

  19. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems

    Science.gov (United States)

    2012-01-01

    Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and application but also on the
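    The win-percentage idea can be estimated directly by drawing random feature subsets and counting, per subset, which classifier attains the best cross-validated accuracy. A minimal sketch with synthetic data and three arbitrary scikit-learn classifiers (not those of the study; ties are resolved in favour of the first best scorer):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=1)

classifiers = {
    "NB": GaussianNB(),
    "kNN": KNeighborsClassifier(5),
    "LR": LogisticRegression(max_iter=1000),
}
wins = dict.fromkeys(classifiers, 0)

n_trials = 50
for _ in range(n_trials):
    feats = rng.choice(X.shape[1], size=2, replace=False)   # a random feature pair
    scores = {name: cross_val_score(clf, X[:, feats], y, cv=5).mean()
              for name, clf in classifiers.items()}
    wins[max(scores, key=scores.get)] += 1                   # credit the best performer

win_percentage = {name: 100.0 * w / n_trials for name, w in wins.items()}
print(win_percentage)
```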

  20. Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision making by committee can be a good thing.

    Science.gov (United States)

    Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A

    2013-11-25

    The concept of data fusion - the combination of information from different sources describing the same object with the expectation to generate a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three different ML methods, RF, naïve Bayes (NB), and logistic regression (LR), with four 2D fingerprints: atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint. The methods are compared using a previously developed benchmarking platform for 2D fingerprints, which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
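    A compact sketch of heterogeneous fusion in this spirit, assuming scikit-learn and SciPy, synthetic binary "fingerprints", and a simple rank-averaging fusion (the paper's actual fingerprints, data sets and fusion rule differ):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB

# Synthetic binary fingerprints stand in for actives/decoys
X, y = make_classification(n_samples=1000, n_features=256, random_state=0)
X = (X > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

models = [RandomForestClassifier(n_estimators=200, random_state=0),
          BernoulliNB(),
          LogisticRegression(max_iter=1000)]

# Fuse by averaging the rank of each test compound across the three model types
ranks = []
for m in models:
    m.fit(X_tr, y_tr)
    ranks.append(rankdata(m.predict_proba(X_te)[:, 1]))
fused = np.mean(ranks, axis=0)

for name, s in zip(["RF", "NB", "LR", "fusion"],
                   [m.predict_proba(X_te)[:, 1] for m in models] + [fused]):
    print(name, round(roc_auc_score(y_te, s), 3))
```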

  1. PENGENDALI POINTER DENGAN GAZE TRACKING MENGGUNAKAN METODE HAAR CLASSIFIER SEBAGAI ALAT BANTU PRESENTASI (EYE POINTER

    Directory of Open Access Journals (Sweden)

    Edi Satriyanto

    2013-03-01

    Full Text Available The application built in this research is a pointer controller based on eye movement (an eye pointer). It is an image processing application in which users only have to move their eyes to control the computer pointer, and it is expected to assist or replace the manual pointer during presentations. Because the approach uses gaze tracking that follows the eye movement, detecting the center of the pupil in the eye image captured by the camera is essential. Gaze tracking is performed with a three-step hierarchical system: motion detection, object (eye) detection, and pupil detection. Motion is identified by dynamically comparing previous pixel values with the current values at time t. The eye region is detected using a Haar-like feature classifier, for which the system must first be trained to obtain the cascade classifier that allows it to detect the object in each frame captured by the camera. The center of the pupil is detected using integral projection. The final step maps the position of the pupil center to the monitor screen using the ratio between the eye-image resolution and the screen resolution. When detecting the eye gaze on the screen, information about the distance and angle between the eyes and the screen is needed to compute the pointing coordinates. In this research, the accuracy of the application is 80% for eye movements lasting 1-2 seconds, the optimum mean value is between 5 and 10, and the optimum distance between the user and the webcam is 40 cm.
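    The two final steps, locating the pupil center by integral projection and mapping it to screen coordinates by a resolution ratio, can be sketched as follows (NumPy only; the image, sizes and values are illustrative, not those of the original system):

```python
import numpy as np

def pupil_center(eye_gray):
    """Estimate the pupil centre of a cropped greyscale eye image by integral
    projection: the pupil is the darkest region, so the row/column sums of the
    inverted image peak around its position."""
    inv = 255 - eye_gray.astype(float)
    col_proj = inv.sum(axis=0)            # vertical integral projection
    row_proj = inv.sum(axis=1)            # horizontal integral projection
    return int(np.argmax(col_proj)), int(np.argmax(row_proj))   # (x, y)

def to_screen(pupil_xy, eye_size, screen_size):
    """Map pupil coordinates to screen coordinates with a simple resolution
    ratio, as described in the abstract."""
    (px, py), (ew, eh), (sw, sh) = pupil_xy, eye_size, screen_size
    return int(px * sw / ew), int(py * sh / eh)

# Toy example: a dark blob (the "pupil") inside a 64x48 eye crop
eye = np.full((48, 64), 200, dtype=np.uint8)
eye[20:30, 35:45] = 20
print(to_screen(pupil_center(eye), (64, 48), (1920, 1080)))
```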

  2. Multivariate models to classify Tuscan virgin olive oils by zone.

    Directory of Open Access Journals (Sweden)

    Alessandri, Stefano

    1999-10-01

    Full Text Available In order to study and classify Tuscan virgin olive oils, 179 samples were collected. They were obtained from drupes harvested during the first half of November, from three different zones of the Region. The sampling was repeated for 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered. They were selected in a preceding work as the most effective and reliable, from the univariate point of view. The analytical data were transformed (except for the cycloartenol) to compensate for annual variations, and the mean related to the East zone was subtracted from each value, within each year. Univariate three-class models were calculated and further variables discarded. Then multivariate three-zone models were evaluated, including phytol (which was always selected) and all the combinations of palmitic, palmitoleic and oleic acid, tetracosanol, cycloartenol and squalene. Models including from two to seven variables were studied. The best model shows by-zone classification errors of less than 40%, by-zone within-year classification errors of less than 45% and a global classification error equal to 30%. This model includes phytol, palmitic acid, tetracosanol and cycloartenol.

    In order to study and classify Tuscan virgin olive oils, 179 samples were used, obtained from fruit harvested during the first half of November from three different zones of the Region. The sampling was repeated over 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered, selected in a previous work as the most effective and reliable from the univariate point of view. The analytical data were transformed (except for the cycloartenol) to compensate for annual variations, ...

  3. Efficient Multi-Concept Visual Classifier Adaptation in Changing Environments

    Science.gov (United States)

    2016-09-01


  4. New Hybrid (SVMs-CSOA) Architecture for classifying Electrocardiogram Signals

    Directory of Open Access Journals (Sweden)

    Assist. Prof. Majida Ali Abed

    2015-05-01

    Full Text Available A medical test that provides diagnostically relevant information about heart activity is the electrocardiogram (ECG). Many heart diseases can be found by analyzing the ECG, because this method is very helpful for assessing the status of the human heart. Support Vector Machines (SVM) have been widely applied in classification. In this paper we present an SVM parameter optimization approach using a novel metaheuristic evolutionary optimization algorithm, the Cat Swarm Optimization Algorithm (CSOA). The results obtained confirm the feasibility of the new hybrid SVM-CSOA architecture and demonstrate an improvement in terms of accuracy.

  5. Identifying Different Transportation Modes from Trajectory Data Using Tree-Based Ensemble Classifiers

    Directory of Open Access Journals (Sweden)

    Zhibin Xiao

    2017-02-01

    Full Text Available Recognition of transportation modes can be used in different applications including human behavior research, transport management and traffic control. Previous work on transportation mode recognition has often relied on using multiple sensors or on matching Geographic Information System (GIS) information, which is not possible in many cases. In this paper, an approach based on ensemble learning is proposed to infer hybrid transportation modes using only Global Positioning System (GPS) data. First, in order to distinguish between different transportation modes, we used a statistical method to generate global features and extracted several local features from sub-trajectories after trajectory segmentation, before these features were combined in the classification stage. Second, to obtain better performance, we used tree-based ensemble models (Random Forest, Gradient Boosting Decision Tree, and XGBoost) instead of traditional methods (K-Nearest Neighbor, Decision Tree, and Support Vector Machines) to classify the different transportation modes. The experimental results have shown the efficacy of our proposed approach. Among the models, XGBoost produced the best performance, with a classification accuracy of 90.77% obtained on the GEOLIFE dataset, and we used a tree-based ensemble method to ensure accurate feature selection and to reduce the model complexity.
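    A small sketch of the kind of global, speed-based features such an approach computes per trajectory segment, using a toy trajectory and NumPy (the feature list and constants are illustrative; the GEOLIFE pipeline and the XGBoost model are not reproduced here):

```python
import numpy as np

def segment_features(lat, lon, t):
    """Global features for one trajectory segment: point-to-point distances
    (equirectangular approximation) plus the speed/acceleration statistics
    that commonly separate walking, cycling, bus and car."""
    R = 6371000.0
    dlat, dlon = np.diff(np.radians(lat)), np.diff(np.radians(lon))
    d = R * np.hypot(dlat, dlon * np.cos(np.radians(lat[:-1])))
    dt = np.diff(t).astype(float)
    v = d / np.maximum(dt, 1e-6)                 # speed per step (m/s)
    a = np.diff(v) / np.maximum(dt[1:], 1e-6)    # acceleration per step
    return [v.mean(), v.max(), np.percentile(v, 85), v.std(),
            np.abs(a).mean(), float((v < 0.5).mean())]

rng = np.random.default_rng(0)
t = np.arange(0, 60, 2.0)                                    # one fix every 2 s
lat = 37.05 + np.cumsum(rng.normal(2e-5, 5e-6, t.size))      # toy walking track
lon = 102.06 + np.cumsum(rng.normal(1e-5, 5e-6, t.size))
print(segment_features(lat, lon, t))

# With a labelled feature matrix X and mode labels y, a tree ensemble such as
# sklearn.ensemble.GradientBoostingClassifier (or XGBoost, as in the paper)
# would then be fitted on these per-segment features.
```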

  6. CERTAIN INVESTIGATIONS ON VARIOUS ALGORITHMS THAT IS USED TO CLASSIFY MALWARE AND GOODWARE IN ANDROID APPLICATIONS

    Directory of Open Access Journals (Sweden)

    B.P. Sreejith Vignesh

    2016-10-01

    Full Text Available In recent trends, mobile devices play a very vital role in the day-to-day activities of human beings. The Google Android OS appeared relatively recently in the mobile market, in September 2008, and has gained popularity. It offers flexibility to users by providing a large number of freely downloadable applications, which in turn makes it a prime target for attackers. As a result, many Android applications available in the market as .apk files may contain malware capable of stealing users' private information. Attackers have started to target inexperienced users and to steal information through such applications. These applications request the user to grant a set of permissions during installation, and for a new user it is difficult to identify which permissions are harmful. This can be exploited by malware intruders to access data or to infect the mobile device by introducing malware applications. Therefore, various algorithms and machine learning approaches for Android malware detection are proposed to classify malware and goodware applications by analyzing their permission features.

  7. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    Science.gov (United States)

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/ body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  8. Inform@ed space

    DEFF Research Database (Denmark)

    Bjerrum, Peter; Olsen, Kasper Nefer

    2001-01-01

    Inform@ed space Sensorial Perception And Computer Enchancement - bidrag til Nordisk Arkitekturforskningsforenings IT-konference, AAA april 2001.......Inform@ed space Sensorial Perception And Computer Enchancement - bidrag til Nordisk Arkitekturforskningsforenings IT-konference, AAA april 2001....

  9. RS Information Extraction and Accuracy Evaluation Based on Multi-elements Data and Different Classify Method——A Case Study of the Eastern Section of Qilian Mountains%基于多元数据和不同分类算法的遥感影像信息提取及精度评价——以祁连山东段为例

    Institute of Scientific and Technical Information of China (English)

    别强; 赵传燕; 彭守璋; 冯兆东

    2009-01-01

    Taking the eastern section of the Qilian Mountains (Shiyanghe Basin) as the study area, land use and land cover in this typical mountainous system were classified using TM data. The study area is located at 37°5'37"N-37°6'30"N, 102°3'37.2"E-102°6'E, with elevations of 2 016-4 318 m. Firstly, data retrieved from the TM image, such as principal components, vegetation indexes and image textural features derived from the gray-level co-occurrence matrix, together with topographical data from the DEM (elevation, aspect), were obtained. The optimal band combination was acquired using the optimal band index method and was then used to classify land use and cover with unsupervised classification, the maximum likelihood method, the support vector machine and the decision tree classification method. The conclusions drawn from the approach are: ① multi-scale data mining is useful for improving classification accuracy; ② comparing the four methods, the decision tree classification method was selected because of its high accuracy, which depends on appropriately selected criteria. Classification accuracy, from highest to lowest, was: decision tree > support vector machine > maximum likelihood > unsupervised classification. The overall accuracy of the decision tree classification was 94.50%, with a kappa coefficient of 0.9122.

  10. Ensemble regularized linear discriminant analysis classifier for P300-based brain-computer interface.

    Science.gov (United States)

    Onishi, Akinari; Natsume, Kiyohisa

    2013-01-01

    This paper demonstrates the better classification performance of an ensemble classifier using regularized linear discriminant analysis (LDA) for a P300-based brain-computer interface (BCI). An ensemble classifier with LDA is sensitive to a lack of training data because the covariance matrices are estimated imprecisely. One solution to the lack of training data is to employ a regularized LDA, so we employed regularized LDA for the ensemble classifier of the P300-based BCI. Principal component analysis (PCA) was used for dimension reduction. As a result, the ensemble regularized LDA classifier showed significantly better classification performance than an ensemble un-regularized LDA classifier. The proposed ensemble regularized LDA classifier is therefore robust against the lack of training data.
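    One way to sketch this pipeline, assuming a recent scikit-learn and purely synthetic stand-in data for the epoched EEG features (shrinkage LDA plays the role of the regularized LDA here; the paper's exact regularization and ensembling scheme may differ):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for target vs. non-target P300 feature vectors
X, y = make_classification(n_samples=300, n_features=400, n_informative=20,
                           weights=[0.83, 0.17], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# PCA for dimension reduction, then a shrinkage (regularized) LDA
base = make_pipeline(PCA(n_components=30),
                     LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))

# Ensemble of regularized LDA classifiers trained on bootstrap resamples
# (older scikit-learn releases name this parameter base_estimator)
ensemble = BaggingClassifier(estimator=base, n_estimators=10, random_state=0)
ensemble.fit(X_tr, y_tr)
print("ensemble regularized LDA accuracy:", ensemble.score(X_te, y_te))
```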

  11. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  12. The Entire Quantile Path of a Risk-Agnostic SVM Classifier

    CERN Document Server

    Yu, Jin; Zhang, Jian

    2012-01-01

    A quantile binary classifier uses the rule: Classify x as +1 if P(Y = 1|X = x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has been shown that Support Vector Machines (SVMs) in the limit are quantile classifiers with t = 1/2. In this paper, we show that by using asymmetric costs of misclassification, SVMs can be appropriately extended to recover, in the limit, the quantile binary classifier for any t. We then present a principled algorithm to solve the extended SVM classifier for all values of t simultaneously. This has two implications: First, one can recover the entire conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]. Second, we can build a risk-agnostic SVM classifier where the cost of misclassification need not be known a priori. Preliminary numerical experiments show the effectiveness of the proposed algorithm.
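    The asymmetric-cost idea can be imitated with standard tools: weighting the two classes in proportion (1 - t) : t pushes an SVM towards the quantile rule at level t. A rough scikit-learn sketch on synthetic data (an approximation in the spirit of the abstract, not the paper's path-following algorithm):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

def quantile_svm(t, C=1.0):
    """Asymmetric-cost SVM approximating the quantile classifier at level t:
    errors on class 1 and class 0 are penalised in proportion (1 - t) : t,
    so larger t makes predicting +1 harder."""
    return SVC(C=C, kernel="rbf", class_weight={1: 1.0 - t, 0: t}).fit(X, y)

# Sweeping t traces out decision rules for different quantiles of P(Y=1|X=x)
for t in (0.25, 0.5, 0.75):
    clf = quantile_svm(t)
    print(t, clf.predict(X[:5]))
```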

  13. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing. PSD advances rely on improvements to the implemented algorithm, and can be achieved by using conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.

  14. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm.

    Science.gov (United States)

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza

    2015-01-01

    In this study the problem of protein fold recognition, which is a classification task, is solved via a hybrid of evolutionary algorithms, namely multi-gene Genetic Programming (GP) and a Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first step, multi-gene GP is used to produce binary classifiers based on various feature groups for each class. Then, the different classifiers obtained for each class are combined via weighted voting, with the weights determined through the GA. At the end of the first step, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final classifier is superior to previous works found in the literature in terms of classification accuracy.

  15. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) screening systems raise a financial problem. To further reduce DR screening costs, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. The kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and an ensemble technique elevate the performance of KELM when using a small training dataset. The committee proposes only the necessary manual work to the doctor, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of the labeled retinal images, while comparative classifiers are trained with 80% of the labeled retinal images. The results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.

  16. Opening the Black Box: Toward Classifying Care and Treatment for Children and Adolescents with Behavioral and Emotional Problems within and across Care Organizations

    Science.gov (United States)

    Evenboer, K. E.; Huyghen, A. M. N.; Tuinstra, J.; Reijneveld, S. A.; Knorth, E. J.

    2016-01-01

    Objective: The Taxonomy of Care for Youth was developed to gather information about the care offered to children and adolescents with behavioral and emotional problems in various care settings. The aim was to determine similarities and differences in the content of care and thereby to classify the care offered to these children and youth within…

  17. Opening the black box : Toward classifying care and treatment for children and adolescents with behavioral and emotional problems within and across care organizations

    NARCIS (Netherlands)

    Evenboer, K.E.; Huyghen, A.M.N.; Tuinstra, J.; Reijneveld, S.A.; Knorth, E.J.

    2016-01-01

    Objective: The Taxonomy of Care for Youth was developed to gather information about the care offered to children and adolescents with behavioral and emotional problems in various care settings. The aim was to determine similarities and differences in the content of care and thereby to classify the care offered to these children and youth within and across care organizations.

  18. Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets

    CERN Document Server

    Donalek, Ciro; Djorgovski, S G; Mahabal, Ashish A; Graham, Matthew J; Fuchs, Thomas J; Turmon, Michael J; Philip, N Sajeeth; Yang, Michael Ting-Chang; Longo, Giuseppe

    2013-01-01

    The amount of collected data in many scientific fields is increasing, and all of them require a common task: extracting knowledge from massive, multi-parametric data sets as rapidly and efficiently as possible. This is especially true in astronomy, where synoptic sky surveys are enabling new research frontiers in time domain astronomy and posing several new object classification challenges in multi-dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Surveys (CRTS) and the Kepler Mission, we illustrate a variety of feature selection strategies used to identify the subsets that give the most information and the results achieved by applying these techniques to three major astronomical problems.

  19. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-01-01

    Using a Gaussian Mixture Model (GMM) and the Expectation Maximization algorithm, we perform an analysis of time duration ($T_{90}$) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The $T_{90}$ distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both the observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better for describing the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
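    Fitting GMMs with different numbers of components to log-duration data and comparing them by BIC is straightforward with scikit-learn; the sketch below uses synthetic log T90 values purely for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic log T90 durations: a short and a long burst population
log_t90 = np.concatenate([rng.normal(-0.3, 0.5, 200),
                          rng.normal(1.4, 0.4, 600)]).reshape(-1, 1)

models = {k: GaussianMixture(n_components=k, random_state=0).fit(log_t90)
          for k in (1, 2, 3)}
bic = {k: m.bic(log_t90) for k, m in models.items()}
print(bic)                                   # the lowest BIC is preferred
print("preferred number of components:", min(bic, key=bic.get))
```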

  20. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data, given the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has gradually been replaced by easily accessible image data. The importance of image data for the business domain has grown steadily with the advent of different image capturing devices and social media. The paper describes a methodology of feature extraction by an image binarization technique for enhancing the identification and retrieval of information using content-based image recognition. The proposed algorithm was tested on two public datasets, namely the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It outperforms state-of-the-art techniques on the performance measures, and the improvement is statistically significant.

  1. Empirical Analysis of Bagged SVM Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2013-11-01

    Full Text Available Data mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and the benefits of the proposed approaches are demonstrated by means of data mining applications such as intrusion detection, direct marketing, and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets for intrusion detection, direct marketing, and signature verification, in conjunction with the base learner. The proposed ensemble approach is superior to the individual approach for data mining applications in terms of classification accuracy.
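    Bagging an SVM base learner is readily expressed with scikit-learn; the following sketch compares a single SVM against a bagged ensemble on synthetic, imbalanced data standing in for an intrusion-detection style problem (dataset and parameters are illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for an intrusion-detection style dataset
X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1],
                           random_state=0)

single = SVC(kernel="rbf", gamma="scale")
# Each ensemble member sees a bootstrap sample of 80% of the training data
# (older scikit-learn releases name this parameter base_estimator)
bagged = BaggingClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                           n_estimators=25, max_samples=0.8, random_state=0)

for name, clf in [("single SVM", single), ("bagged SVM", bagged)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```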

  2. Classifying the Basic Parameters of Ultraviolet Copper Bromide Laser

    Science.gov (United States)

    Gocheva-Ilieva, S. G.; Iliev, I. P.; Temelkov, K. A.; Vuchkov, N. K.; Sabotinov, N. V.

    2009-10-01

    The performance of deep ultraviolet copper bromide lasers is of great importance because of their applications in medicine, microbiology, high-precision processing of new materials, high-resolution laser lithography in microelectronics, high-density optical recording of information, laser-induced fluorescence in plasma and wide-gap semiconductors, and more. In this paper we present a statistical study on the classification of 12 basic lasing parameters, using different agglomerative methods of cluster analysis. The results are based on a large amount of experimental data for the UV Cu+ Ne-CuBr laser with wavelengths of 248.6 nm, 252.9 nm, 260.0 nm and 270.3 nm, obtained at the Georgi Nadjakov Institute of Solid State Physics, Bulgarian Academy of Sciences. The influence of the parameters on laser generation is also evaluated. The results are applicable to computer modeling, to planning experiments, and to further laser development with improved output characteristics.

  3. Design and evaluation of neural classifiers application to skin lesion classification

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1995-01-01

    Addresses design and evaluation of neural classifiers for the problem of skin lesion classification. By using Gauss-Newton optimization for the entropic cost function in conjunction with pruning by Optimal Brain Damage and a new test error estimate, the authors show that this scheme is capable of optimizing the architecture of neural classifiers. Furthermore, error-reject tradeoff theory indicates that the resulting neural classifiers for the skin lesion classification problem are near-optimal...

  4. A self-growing Bayesian network classifier for online learning of human motion patterns

    OpenAIRE

    Yung, NHC; Chen, Z

    2010-01-01

    This paper proposes a new self-growing Bayesian network classifier for online learning of human motion patterns (HMPs) in dynamically changing environments. The proposed classifier is designed to represent HMP classes based on a set of historical trajectories labeled by unsupervised clustering. It then assigns HMP class labels to current trajectories. Parameters of the proposed classifier are recalculated based on the augmented dataset of labeled trajectories and all HMP classes are according...

  5. Il quadrants for information access technology

    NARCIS (Netherlands)

    Olthof, W.; Willems, J.

    2008-01-01

    Information Access Technology is used for indexing and classifying information so it can be stored and retrieved in a supported way. Information Logistics needs this technology in order to support the knowledge worker with the right information without him having to spend a lot of time searching for

  6. Bitcoin and Beyond: Exclusively Informational Money

    NARCIS (Netherlands)

    Bergstra, J.A.; de Leeuw, K.

    2013-01-01

    The famous new money Bitcoin is classified as a technical informational money (TIM). Besides introducing the idea of a TIM, a more extreme notion of informational money will be developed: exclusively informational money (EXIM). The informational coins (INCOs) of an EXIM can be in control of an agent

  7. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method is proposed to establish a classifier which can classify pancreatic images as normal or abnormal. Firstly, the brightness feature is used to construct high-order tensors; then multilinear principal component analysis (MPCA) is used to extract the eigentensors; and finally the classifier is constructed based on a support vector machine (SVM), with the classifier parameters optimized with a quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of the proposed algorithm, the normal SVM method was chosen as the comparison algorithm. The experimental results show that the proposed method can effectively extract the eigenfeatures and improve the classification accuracy of pancreatic images.

  8. Face Recognition Based on Support Vector Machine and Nearest Neighbor Classifier

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆

    2003-01-01

    Support vector machine (SVM), as a novel approach in pattern recognition, has demonstrated success in face detection and face recognition. In this paper, a face recognition approach based on the SVM classifier with the nearest neighbor classifier (NNC) is proposed. Principal component analysis (PCA) is used to reduce the dimension and extract features. Then a one-against-all strategy is used to train the SVM classifiers. At the testing stage, we propose an algorithm that combines the SVM classifier with the NNC to improve the correct recognition rate. We conduct the experiment on the Cambridge ORL face database. The results show that our approach outperforms the standard eigenface approach and some other approaches.

  9. Bagged ensemble of Fuzzy C-Means classifiers for nuclear transient identification

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero; Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico, E-mail: enrico.zio@polimi.it [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-05-15

    Research highlights: > A bagged ensemble of classifiers is applied for nuclear transient identification. > Fuzzy C-Means classifiers are used as base classifiers of the ensemble. > Transients are simulated in the feedwater system of a boiling water reactor. > Ensemble is compared with a supervised, evolutionary-optimized FCM classifier. > Ensemble improves classification accuracy in cases of large or very small data sizes. - Abstract: This paper presents an ensemble-based scheme for nuclear transient identification. The approach adopted to construct the ensemble of classifiers is bagging; the novelty consists in using supervised fuzzy C-means (FCM) classifiers as base classifiers of the ensemble. The performance of the proposed classification scheme has been verified by comparison with a single supervised, evolutionary-optimized FCM classifier with respect to the task of classifying artificial datasets. The results obtained indicate that in the cases of datasets of large or very small sizes and/or complex decision boundaries, the bagging ensembles can improve classification accuracy. Then, the approach has been applied to the identification of simulated transients in the feedwater system of a boiling water reactor (BWR).

  10. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available The free radical gene classification of cancer diseases is a challenging job in biomedical data engineering. To improve the classification of gene selection for cancer diseases, various classifiers are used, but their classifications are not validated, so an ensemble classifier is used for cancer gene classification using a neural network classifier with a random forest tree. The random forest tree is an ensembling technique in which a number of classifiers are ensembled from the leaf nodes of the classifier. In this paper we combine a neural network with a random forest ensemble classifier for the classification of cancer gene selection, for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble classifier methods, which follow an input-output paradigm of neural networks, where the members of the ensemble are selected from a set of neural network classifiers; here the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For the empirical evaluation of our proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest tree classification.

  11. Application of SVM classifier in thermographic image classification for early detection of breast cancer

    Science.gov (United States)

    Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał

    2016-09-01

    This article presents the application of machine learning algorithms for the early detection of breast cancer on the basis of thermographic images. A supervised learning model, the support vector machine (SVM), and the Sequential Minimal Optimization (SMO) algorithm for training the SVM classifier were implemented. The SVM classifier was included in a client-server application which makes it possible to create a training set of examinations and to apply classifiers (including SVM) for the diagnosis and early detection of breast cancer. The sensitivity and specificity of the SVM classifier were calculated based on the thermographic images from the studies. Furthermore, a heuristic method for tuning the SVM's parameters was proposed.

  12. Evaluation of the efficiency of biofield diagnostic system in breast cancer detection using clinical study results and classifiers.

    Science.gov (United States)

    Subbhuraam, Vinitha Sree; Ng, E Y K; Kaw, G; Acharya U, Rajendra; Chong, B K

    2012-02-01

    The division of breast cancer cells results in regions of electrical depolarisation within the breast. These regions extend to the skin surface from where diagnostic information can be obtained through measurements of the skin surface electropotentials using sensors. This technique is used by the Biofield Diagnostic System (BDS) to detect the presence of malignancy. This paper evaluates the efficiency of BDS in breast cancer detection and also evaluates the use of classifiers for improving the accuracy of BDS. 182 women scheduled for either mammography or ultrasound or both tests participated in the BDS clinical study conducted at Tan Tock Seng hospital, Singapore. Using the BDS index obtained from the BDS examination and the level of suspicion score obtained from mammography/ultrasound results, the final BDS result was deciphered. BDS demonstrated high values for sensitivity (96.23%), specificity (93.80%), and accuracy (94.51%). Also, we have studied the performance of five supervised learning based classifiers (back propagation network, probabilistic neural network, linear discriminant analysis, support vector machines, and a fuzzy classifier), by feeding selected features from the collected dataset. The clinical study results show that BDS can help physicians to differentiate benign and malignant breast lesions, and thereby, aid in making better biopsy recommendations.

  13. Classifying and recovering from sensing failures in autonomous mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, R.R.; Hershberger, D. [Colorado School of Mines, Golden, CO (United States)

    1996-12-31

    This paper presents a characterization of sensing failures in autonomous mobile robots, a methodology for classification and recovery, and a demonstration of this approach on a mobile robot performing landmark navigation. A sensing failure is any event leading to defective perception, including sensor malfunctions, software errors, environmental changes, and errant expectations. The approach demonstrated in this paper exploits the ability of the robot to interact with its environment to acquire additional information for classification (i.e., active perception). A Generate and Test strategy is used to generate hypotheses to explain the symptom resulting from the sensing failure. The recovery scheme replaces the affected sensing processes with an alternative logical sensor. The approach is implemented as the Sensor Fusion Effects Exception Handling (SFX-EH) architecture. The advantages of SFX-EH are that it requires only a partial causal model of sensing failure, the control scheme strives for a fast response, tests are constructed so as to prevent confounding from collaborating sensors which have also failed, and the logical sensor organization allows SFX-EH to be interfaced with the behavioral level of existing robot architectures.

  14. Convolutional Neural Networks with Batch Normalization for Classifying Hi-hat, Snare, and Bass Percussion Sound Samples

    DEFF Research Database (Denmark)

    Gajhede, Nicolai; Beck, Oliver; Purwins, Hendrik

    2016-01-01

    After having revolutionized image and speech processing, convolutional neural networks (CNN) are now starting to become more and more successful in music information retrieval as well. We compare four CNN types for classifying a dataset of more than 3000 acoustic and synthesized samples of the most prominent drum set instruments (bass, snare, hi-hat). We use the Mel scale log magnitudes (MLS) as a representation for the input of the CNN. We compare the classification results of 1) a CNN (3 conv/max-pool layers and 2 fully connected layers) without drop-out and batch normalization vs. three...
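    A minimal PyTorch sketch of the architecture the abstract describes (3 convolution/max-pool blocks with batch normalization followed by 2 fully connected layers); the input size, channel counts and MLS patch shape are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class DrumCNN(nn.Module):
    """Small CNN with batch normalization for Mel-scale log-magnitude (MLS)
    spectrogram patches, e.g. 1 x 64 x 64 inputs and 3 classes
    (bass / snare / hi-hat)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

mls = torch.randn(4, 1, 64, 64)        # a batch of 4 dummy MLS patches
print(DrumCNN()(mls).shape)            # -> torch.Size([4, 3])
```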

  15. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA. Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user provided reference segments, suggest casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form a part of the search problem. Results are presented detailing the performances of four method variants compared to the generic sample supervised segment generation approach, under various conditions in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional required computing time.

  16. Weakly supervised learning of a classifier for unusual event detection.

    Science.gov (United States)

    Jäger, Mark; Knoll, Christian; Hamprecht, Fred A

    2008-09-01

    In this paper, we present an automatic classification framework combining appearance based features and hidden Markov models (HMM) to detect unusual events in image sequences. One characteristic of the classification task is that anomalies are rare. This reflects the situation in the quality control of industrial processes, where error events are scarce by nature. As an additional restriction, class labels are only available for the complete image sequence, since frame-wise manual scanning of the recorded sequences for anomalies is too expensive and should, therefore, be avoided. The proposed framework reduces the feature space dimension of the image sequences by employing subspace methods and encodes characteristic temporal dynamics using continuous hidden Markov models (CHMMs). The applied learning procedure is as follows. 1) A generative model for the regular sequences is trained (one-class learning). 2) The regular sequence model (RSM) is used to locate potentially unusual segments within error sequences by means of a change detection algorithm (outlier detection). 3) Unusual segments are used to expand the RSM to an error sequence model (ESM). The complexity of the ESM is controlled by means of the Bayesian Information Criterion (BIC). The likelihood ratio of the data given the ESM and the RSM is used for the classification decision. This ratio is close to one for sequences without error events and increases for sequences containing error events. Experimental results are presented for image sequences recorded from industrial laser welding processes. We demonstrate that the learning procedure can significantly reduce the user interaction and that sequences with error events can be found with a small false positive rate. It has also been shown that a modeling of the temporal dynamics is necessary to reach these low error rates.
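    The regular-sequence-model / error-sequence-model decision by likelihood ratio can be sketched with hmmlearn (assuming that library is available; the toy one-dimensional features, model sizes and the injected "unusual event" below are illustrative and unrelated to the laser-welding data):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

def make_seq(error=False, n=200):
    """Toy 1-D feature sequence; error sequences contain a shifted segment."""
    x = rng.normal(0.0, 1.0, n)
    if error:
        x[120:150] += 4.0
    return x.reshape(-1, 1)

regular = [make_seq() for _ in range(20)]
errors = [make_seq(error=True) for _ in range(20)]

def fit_hmm(seqs, n_states):
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    return GaussianHMM(n_components=n_states, n_iter=50, random_state=0).fit(X, lengths)

rsm = fit_hmm(regular, 2)        # regular sequence model
esm = fit_hmm(errors, 3)         # error sequence model (one extra state)

test = make_seq(error=True)
llr = esm.score(test) - rsm.score(test)   # log-likelihood ratio of the two models
print("sequence flagged as unusual:", llr > 0.0)
```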

  17. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach

    DEFF Research Database (Denmark)

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan

    2010-01-01

    A cotraining-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers: Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, progressively...

  18. The Impact of Inappropriate Modeling of Cross-Classified Data Structures

    Science.gov (United States)

    Meyers, Jason L.; Beretvas, S. Natasha

    2006-01-01

    Cross-classified random effects modeling (CCREM) is used to model multilevel data from nonhierarchical contexts. These models are widely discussed but infrequently used in social science research. Because little research exists assessing when it is necessary to use CCREM, 2 studies were conducted. A real data set with a cross-classified structure…

  19. RIT-CIA Case Study: Classified Research in a University Context.

    Science.gov (United States)

    Carl, W. John, III

    A controversy at the Rochester Institute of Technology (RIT) in New York State over that institution's involvement with classified research for the Central Intelligence Agency (CIA) raised issues regarding classified research and institutional leadership. In 1991 M. Richard Rose, then president of RIT, took a 4-month sabbatical to work for the…

  20. The iPhyClassifier, an interactive online tool for phytoplasma classification and taxonomic assignment

    Science.gov (United States)

    The iPhyClassifier is an Internet-based research tool for quick identification and classification of diverse phytoplasmas. The iPhyClassifier simulates laboratory restriction enzyme digestions and subsequent gel electrophoresis and generates virtual restriction fragment length polymorphism (RFLP) p...

  1. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  2. 41 CFR 102-34.45 - How are passenger automobiles classified?

    Science.gov (United States)

    2010-07-01

    ... MANAGEMENT Obtaining Fuel Efficient Motor Vehicles § 102-34.45 How are passenger automobiles classified... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false How are passenger automobiles classified? 102-34.45 Section 102-34.45 Public Contracts and Property Management Federal...

  3. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    Science.gov (United States)

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  4. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Akinari Onishi

    Full Text Available A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.
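
    A minimal sketch of an ensemble of LDA classifiers trained on overlapping partitions of the training data, in the spirit of the two records above (naive partitioning corresponds to zero overlap). The partition length, the 50% overlap, and probability averaging at prediction time are assumptions for illustration; the stepwise feature selection of SWLDA is omitted.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def overlapped_partitions(n_samples, part_len, overlap=0.5):
    """Yield index ranges of length `part_len` that share `overlap` of their
    length with the next partition (overlap=0 reproduces naive partitioning)."""
    step = max(1, int(part_len * (1.0 - overlap)))
    start = 0
    while start + part_len <= n_samples:
        yield np.arange(start, start + part_len)
        start += step

def fit_ensemble(X, y, part_len=200, overlap=0.5):
    """Train one LDA per (possibly overlapping) partition of the training data."""
    return [LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            for idx in overlapped_partitions(len(X), part_len, overlap)]

def predict_ensemble(members, X):
    """Average the members' class probabilities and take the arg-max."""
    probs = np.mean([m.predict_proba(X) for m in members], axis=0)
    return members[0].classes_[np.argmax(probs, axis=1)]
```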

  5. 40 CFR 260.32 - Variances to be classified as a boiler.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Variances to be classified as a boiler... be classified as a boiler. In accordance with the standards and criteria in § 260.10 (definition of “boiler”), and the procedures in § 260.33, the Administrator may determine on a case-by-case basis...

  6. Should OCD be classified as an anxiety disorder in DSM-V?

    NARCIS (Netherlands)

    D.J. Stein; N.A. Fineberg; O.J. Bienvenu; D. Denys; C. Lochner; G. Nestadt; J.F. Leckman; S.L. Rauch; K.A. Phillips

    2010-01-01

    In DSM-III, DSM-III-R, and DSM-IV, obsessive-compulsive disorder (OCD) was classified as an anxiety disorder. In ICD-10, OCD is classified separately from the anxiety disorders, although within the same larger category as anxiety disorders (as one of the "neurotic, stress-related, and somatoform dis

  7. Results of the Fall 1984 Survey of Napa Valley College Administrators, Classified Staff, and Faculty.

    Science.gov (United States)

    Friedlander, Jack; Gocke, Sharon

    In November 1984, all administrators, classified staff, and faculty at Napa Valley College (NVC) were surveyed concerning a wide range of topics related to working at the institution. The survey, which was completed by 17 administrators (71%), 60 classified staff members (42%), 71 full-time faculty members (63%), and 79 part-time faculty members…

  8. 19 CFR 10.605 - Goods classifiable as goods put up in sets.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Goods classifiable as goods put up in sets. 10.605... put up in sets. Notwithstanding the specific rules set forth in General Note 29(n), HTSUS, goods classifiable as goods put up in sets for retail sale as provided for in General Rule of Interpretation 3,...

  9. A Hybrid Generative/Discriminative Classifier Design for Semi-supervised Learning

    Science.gov (United States)

    Fujino, Akinori; Ueda, Naonori; Saito, Kazumi

    Semi-supervised classifier design that simultaneously utilizes both a small number of labeled samples and a large number of unlabeled samples is a major research issue in machine learning. Existing semi-supervised learning methods for probabilistic classifiers belong to either generative or discriminative approaches. This paper focuses on a semi-supervised probabilistic classifier design for multiclass and single-labeled classification problems and first presents a hybrid approach to take advantage of the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model, which belongs to the same model family as the generative model but whose parameters differ from it. A hybrid classifier is constructed by combining both the generative and bias correction models based on the maximum entropy principle, where the combination weights of these models are determined so that the class labels of labeled samples are predicted as correctly as possible. We apply the hybrid approach to text classification problems by employing naive Bayes as the generative and bias correction models. In our experimental results on three English and one Japanese text data sets, we confirmed that the hybrid classifier significantly outperformed conventional probabilistic generative and discriminative classifiers when the classification performance of the generative classifier was comparable to that of the discriminative classifier.
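
    The combination idea can be caricatured as a weighted log-linear mixture of two naive Bayes models from the same family but with different parameters, with the weight chosen to predict the labeled samples as well as possible. This is a strong simplification of the maximum-entropy formulation in the paper (the "bias correction" model here differs only in its smoothing parameter), so treat it purely as an illustration.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

def fit_hybrid(X_lab, y_lab):
    """Generative naive Bayes plus a 'bias correction' naive Bayes from the same
    family (different smoothing here), combined log-linearly; the combination
    weight is picked to maximize accuracy on the labeled samples."""
    gen = MultinomialNB(alpha=1.0).fit(X_lab, y_lab)
    bias = MultinomialNB(alpha=0.1).fit(X_lab, y_lab)
    classes = gen.classes_

    def combined_scores(X, lam):
        # Weighted mixture of the two models' log class-posteriors.
        return lam * gen.predict_log_proba(X) + (1.0 - lam) * bias.predict_log_proba(X)

    best_lam, best_acc = 0.5, -1.0
    for lam in np.linspace(0.0, 1.0, 11):
        pred = classes[np.argmax(combined_scores(X_lab, lam), axis=1)]
        acc = float(np.mean(pred == y_lab))
        if acc > best_acc:
            best_lam, best_acc = lam, acc

    def predict(X):
        return classes[np.argmax(combined_scores(X, best_lam), axis=1)]
    return predict, best_lam
```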

  10. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni

    2014-04-26

    Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs into neutral and disease-associated. However, existing computational approaches fail to select relevant features, choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problems of missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions and the model becomes tunable by modifying the classification thresholds. An extensive study was performed for collecting the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.

  11. ARC: Automated Resource Classifier for agglomerative functional classification of prokaryotic proteins using annotation texts

    Indian Academy of Sciences (India)

    Muthiah Gnanamani; Naveen Kumar; Srinivasan Ramachandran

    2007-08-01

    Functional classification of proteins is central to comparative genomics. The need for algorithms tuned to enable integrative interpretation of analytical data is felt globally. The availability of a general, automated software with built-in flexibility will significantly aid this activity. We have prepared ARC (Automated Resource Classifier), which is an open source software meeting the user requirements of flexibility. The default classification scheme based on keyword match is agglomerative and directs entries into any of the 7 basic non-overlapping functional classes: Cell wall, Cell membrane and Transporters ($\mathcal{C}$), Cell division ($\mathcal{D}$), Information ($\mathcal{I}$), Translocation ($\mathcal{L}$), Metabolism ($\mathcal{M}$), Stress ($\mathcal{R}$), Signal and communication ($\mathcal{S}$) and 2 ancillary classes: Others ($\mathcal{O}$) and Hypothetical ($\mathcal{H}$). The keyword library of ARC was built serially by first drawing keywords from Bacillus subtilis and Escherichia coli K12. In subsequent steps, this library was further enriched by collecting terms from archaeal representative Archaeoglobus fulgidus, Gene Ontology, and Gene Symbols. ARC is 94.04% successful on 675,663 annotated proteins from 348 prokaryotes. Three examples are provided to illuminate the current perspectives on mycobacterial physiology and costs of proteins in 333 prokaryotes. ARC is available at http://arc.igib.res.in.
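
    A toy version of keyword-match functional classification in the spirit of ARC is shown below. The class symbols follow the abstract, but the keyword lists and the example annotation strings are placeholders invented here for illustration, not ARC's actual keyword library, and ARC itself resolves each entry to a single class.

```python
# Placeholder keyword library keyed by the class symbols from the abstract.
KEYWORDS = {
    "C": ["cell wall", "membrane", "transporter", "permease"],
    "D": ["cell division", "septum", "ftsz"],
    "I": ["dna", "rna", "transcription", "ribosom", "replication"],
    "L": ["secretion", "signal peptide", "translocation"],
    "M": ["metabolism", "dehydrogenase", "synthase", "kinase"],
    "R": ["stress", "heat shock", "chaperone", "oxidative"],
    "S": ["signal", "two-component", "sensor", "communication"],
}

def classify_annotation(annotation_text):
    """Return the set of matching functional classes for one annotation line;
    'H' for hypothetical proteins and 'O' when nothing matches."""
    text = annotation_text.lower()
    if "hypothetical" in text or "unknown function" in text:
        return {"H"}
    hits = {cls for cls, words in KEYWORDS.items()
            if any(w in text for w in words)}
    return hits or {"O"}

print(classify_annotation("ATP synthase subunit beta"))   # {'M'}
print(classify_annotation("hypothetical protein"))         # {'H'}
```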

  12. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    Science.gov (United States)

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm using artificial neural networks, called the multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  13. Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns.

    Directory of Open Access Journals (Sweden)

    Chris eDijksterhuis

    2013-08-01

    Full Text Available A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply with individually set lane-keeping performance targets. Differences in the individual driver’s workload levels were classified by applying the Common Spatial Pattern (CSP) method and Fisher’s linear discriminant analysis to frequency-filtered electroencephalogram (EEG) data during an offline classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. It was found that classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies up to 75%-80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classifications.
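
    The core signal-processing chain of this study — Common Spatial Pattern filters fit on two workload conditions, log-variance features, and Fisher's linear discriminant analysis — can be sketched as follows. The trial format, the number of filter pairs, and the variable names in the commented usage are assumptions; band-pass filtering of the EEG is assumed to have happened beforehand.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common Spatial Pattern filters from band-pass filtered EEG trials.
    trials_a, trials_b : lists of (channels, samples) arrays for two conditions."""
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))
        return np.mean(covs, axis=0)

    Ca, Cb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenvalue problem: Ca w = lambda (Ca + Cb) w
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T            # (2*n_pairs, channels)

def log_variance_features(trials, W):
    """Log of normalized variances of the CSP-projected signals."""
    feats = []
    for x in trials:
        z = W @ x
        var = np.var(z, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)

# Hypothetical usage: workload classification with Fisher's LDA.
# W = csp_filters(low_workload_trials, high_workload_trials)
# X = log_variance_features(low_workload_trials + high_workload_trials, W)
# y = [0] * len(low_workload_trials) + [1] * len(high_workload_trials)
# clf = LinearDiscriminantAnalysis().fit(X, y)
```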

  14. Classifiers for centrality determination in proton-nucleus and nucleus-nucleus collisions

    CERN Document Server

    Altsybeev, Igor

    2016-01-01

    Centrality, as a geometrical property of the collision, is crucial for the physical interpretation of nucleus-nucleus and proton-nucleus experimental data. However, it cannot be directly accessed in event-by-event data analysis. Common methods for centrality estimation in A-A and p-A collisions usually rely on a single detector (either on the signal in zero-degree calorimeters or on the multiplicity in some semi-central rapidity range). In the present work, we made an attempt to develop an approach for centrality determination that is based on machine-learning techniques and utilizes information from several detector subsystems simultaneously. Different event classifiers are suggested and evaluated for their selectivity power in terms of the number of participant nucleons and the impact parameter of the collision. Finer centrality resolution may make it possible to reduce the impact of so-called volume fluctuations on physical observables studied in heavy-ion experiments such as ALICE at the LHC and fixed target exper...

  15. Comparative study of artificial neural network and multivariate methods to classify Spanish DO rose wines.

    Science.gov (United States)

    Pérez-Magariño, S; Ortega-Heras, M; González-San José, M L; Boger, Z

    2004-04-19

    Classical multivariate analysis techniques such as factor analysis and stepwise linear discriminant analysis, as well as the artificial neural network (ANN) method, have been applied to the classification of Spanish denomination of origin (DO) rose wines according to their geographical origin. Seventy commercial rose wines from four different Spanish DO (Ribera del Duero, Rioja, Valdepeñas and La Mancha) and two successive vintages were studied. Nineteen different variables were measured in these wines. The stepwise linear discriminant analysis (SLDA) model selected 10 variables, obtaining a global percentage of correct classification of 98.8% and of global prediction of 97.3%. The ANN model selected seven variables, five of which were also selected by the SLDA model, and it gave 100% correct classification for training and prediction. Thus, both models can be considered satisfactory and acceptable, with the selected variables being useful for classifying and differentiating these wines by their origin. Furthermore, the causal index analysis gave information that can be easily explained from an enological point of view.
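
    A stepwise-style variable selection followed by linear discriminant analysis, in the spirit of the SLDA model of this record, could be set up with scikit-learn as below. The pipeline, the choice of forward selection, and the placeholder arrays X (the 19 measured variables) and y (the DO labels) are assumptions for illustration, not the authors' exact procedure.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def slda_pipeline(n_variables=10):
    """Forward variable selection wrapped around an LDA classifier."""
    lda = LinearDiscriminantAnalysis()
    selector = SequentialFeatureSelector(lda, n_features_to_select=n_variables,
                                         direction="forward", cv=5)
    return make_pipeline(StandardScaler(), selector, lda)

# Hypothetical usage with X (wines x 19 variables) and y (DO labels):
# scores = cross_val_score(slda_pipeline(), X, y, cv=5)
# print("cross-validated classification rate:", scores.mean())
```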

  16. Classifying content-based Images using Self Organizing Map Neural Networks Based on Nonlinear Features

    Directory of Open Access Journals (Sweden)

    Ebrahim Parcham

    2014-07-01

    Full Text Available Classifying similar images is one of the most interesting and essential image processing operations. Existing methods have disadvantages such as low accuracy in the analysis step and low speed in the feature extraction process. In this paper, a new method for image classification is proposed in which the similarity weight is revised by means of information from related and unrelated images. Most real-world similarity measurement systems are nonlinear, so traditional linear methods are not capable of recognizing the nonlinear relationships and correlations in such systems. Self-Organizing Map (SOM) neural networks are among the strongest networks for data mining and nonlinear analysis of complex spaces. In our proposed method, we obtain the images with the highest similarity measure by extracting features of the target image and comparing them with the features of other images. We take advantage of the NLPCA algorithm for feature extraction, a nonlinear algorithm that is able to recognize even the smallest variations in noisy images. Finally, we compare the run time and efficiency of our proposed method with previously proposed methods.
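
    For illustration, a minimal Self-Organizing Map trained on feature vectors (for example, NLPCA components of images) might look like the sketch below. The grid size, learning-rate and neighborhood schedules, and the plain NumPy implementation are assumptions made here; the paper's NLPCA feature extraction step is not reproduced.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Minimal Self-Organizing Map trained on (n_samples, n_features) data.
    Returns the (rows, cols, n_features) weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    sigma0 = sigma0 or max(rows, cols) / 2.0
    weights = rng.random((rows, cols, dim))
    # Grid coordinates used by the Gaussian neighborhood function.
    yy, xx = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")

    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1.0 - t)              # decaying learning rate
            sigma = sigma0 * (1.0 - t) + 1e-3 # shrinking neighborhood
            # Best matching unit for this sample.
            dist = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)
            # Pull the BMU and its neighbors toward the sample.
            d2 = (yy - bmu[0]) ** 2 + (xx - bmu[1]) ** 2
            h = np.exp(-d2 / (2.0 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    """Map a feature vector to its winning grid cell."""
    dist = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(dist), dist.shape)
```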

  17. Improved Bevirimat resistance prediction by combination of structural and sequence-based classifiers

    Directory of Open Access Journals (Sweden)

    Dybowski J Nikolaj

    2011-11-01

    Full Text Available Abstract. Background: Maturation inhibitors such as Bevirimat are a new class of antiretroviral drugs that hamper the cleavage of HIV-1 proteins into their functional active forms. They bind to these preproteins and inhibit their cleavage by the HIV-1 protease, resulting in non-functional virus particles. Nevertheless, there exist mutations in this region leading to resistance against Bevirimat. Highly specific and accurate tools to predict resistance to maturation inhibitors can help to identify patients who might benefit from the usage of these new drugs. Results: We tested several methods to improve Bevirimat resistance prediction in HIV-1. It turned out that combining structural and sequence-based information in classifier ensembles led to accurate and reliable predictions. Moreover, we were able to identify the most crucial regions for Bevirimat resistance computationally, which are in line with experimental results from other studies. Conclusions: Our analysis demonstrated the use of machine learning techniques to predict HIV-1 resistance against maturation inhibitors such as Bevirimat. New maturation inhibitors are already under development and might enlarge the arsenal of antiretroviral drugs in the future. Thus, accurate prediction tools are very useful to enable a personalized therapy.
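
    The ensemble idea — one classifier per information source, with the predictions combined — can be sketched as below. The base learners (a random forest on structural descriptors, logistic regression on sequence-derived descriptors), the probability averaging, and the 0.5 decision threshold are stand-ins chosen for illustration, not the models of the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def fit_combined(X_struct, X_seq, y):
    """Train one classifier per view: structural descriptors and
    sequence-derived descriptors of the same samples."""
    clf_struct = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_struct, y)
    clf_seq = LogisticRegression(max_iter=1000).fit(X_seq, y)
    return clf_struct, clf_seq

def predict_resistance(clf_struct, clf_seq, X_struct, X_seq, threshold=0.5):
    """Probability of resistance as the mean of the two views' probabilities,
    plus a binary call at the given threshold."""
    p = 0.5 * (clf_struct.predict_proba(X_struct)[:, 1]
               + clf_seq.predict_proba(X_seq)[:, 1])
    return p, (p >= threshold).astype(int)
```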

  18. Building Keypoint Mappings on Multispectral Images by a Cascade of Classifiers with a Resurrection Mechanism

    Directory of Open Access Journals (Sweden)

    Yong Li

    2015-05-01

    Full Text Available Inspired by the boosting technique for detecting objects, this paper proposes a cascade structure with a resurrection mechanism to establish keypoint mappings on multispectral images. The cascade structure is composed of four steps by utilizing best bin first (BBF), color and intensity distribution of segment (CIDS), global information and the RANSAC process to remove outlier keypoint matchings. Initial keypoint mappings are built with the descriptors associated with keypoints; then, at each step, only a small number of keypoint mappings of a high confidence are classified to be incorrect. The unclassified keypoint mappings will be passed on to subsequent steps for determining whether they are correct. Due to the drawback of a classification rule, some correct keypoint mappings may be misclassified as incorrect at a step. Observing this, we design a resurrection mechanism, so that they will be reconsidered and evaluated by the rules utilized in subsequent steps. Experimental results show that the proposed cascade structure combined with the resurrection mechanism can effectively build more reliable keypoint mappings on multispectral images than existing methods.
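
    Structurally, the cascade with a resurrection mechanism can be sketched as a sequence of per-match tests in which only confident "incorrect" verdicts remove a match, and previously rejected matches are re-examined by later stages. The sketch below is a generic skeleton (the BBF, CIDS, global-information and RANSAC steps of the paper would be the stage callables); the interfaces and verdict labels are assumptions.

```python
def cascade_filter(matches, stages, resurrection=True):
    """Pass keypoint matches through a cascade of tests.

    matches : list of arbitrary match objects
    stages  : list of callables, match -> 'correct' | 'incorrect' | 'unsure'

    A match is removed at a stage only when that stage confidently labels it
    'incorrect'; 'unsure' matches simply move on to the next stage.  With the
    resurrection mechanism, matches rejected earlier get re-examined by later
    stages and are restored if a later stage confidently accepts them.
    """
    surviving = list(matches)
    rejected = []

    for stage in stages:
        still_alive, newly_rejected = [], []
        for m in surviving:
            verdict = stage(m)
            (newly_rejected if verdict == "incorrect" else still_alive).append(m)

        if resurrection:
            # Give previously rejected matches another chance at this stage.
            revived = [m for m in rejected if stage(m) == "correct"]
            rejected = [m for m in rejected if m not in revived]
            still_alive.extend(revived)

        surviving = still_alive
        rejected.extend(newly_rejected)

    return surviving
```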

  19. Classifying Uncertain and Evolving Data Streams with Distributed Extreme Learning Machine

    Institute of Scientific and Technical Information of China (English)

    韩东红; 张昕; 王国仁

    2015-01-01

    Conventional classification algorithms are not well suited for the inherent uncertainty, potential concept drift, volume, and velocity of streaming data. Specialized algorithms are needed to obtain efficient and accurate classifiers for uncertain data streams. In this paper, we first introduce Distributed Extreme Learning Machine (DELM), an optimization of ELM for large matrix operations over large datasets. We then present Weighted Ensemble Classifier Based on Distributed ELM (WE-DELM), an online and one-pass algorithm for efficiently classifying uncertain streaming data with concept drift. A probability world model is built to transform uncertain streaming data into certain streaming data. Base classifiers are learned using DELM. The weights of the base classifiers are updated dynamically according to classification results. WE-DELM improves both the efficiency in learning the model and the accuracy in performing classification. Experimental results show that WE-DELM achieves better performance on different evaluation criteria, including efficiency, accuracy, and speedup.
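
    A toy weighted-ensemble-of-ELMs loop over a stream of chunks, loosely in the spirit of WE-DELM, is sketched below. The tiny single-machine ELM, the accuracy-based weight update, and the fixed ensemble size are illustrative assumptions; the distributed matrix operations, the probability-world model for uncertain data, and the paper's concept-drift handling are not reproduced. Chunks are assumed to be pairs of NumPy arrays (X, y).

```python
import numpy as np

class ELM:
    """Tiny extreme learning machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        T = (y[:, None] == self.classes_[None, :]).astype(float)  # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)          # output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return self.classes_[np.argmax(H @ self.beta, axis=1)]

def weighted_predict(members, weights, X):
    """Weighted vote over the ensemble members' predicted labels."""
    votes = np.array([m.predict(X) for m in members])
    classes = members[0].classes_
    scores = np.zeros((len(classes), X.shape[0]))
    for v, w in zip(votes, weights):
        for ci, c in enumerate(classes):
            scores[ci] += w * (v == c)
    return classes[np.argmax(scores, axis=0)]

def stream_classify(chunks, n_members=5):
    """For each incoming chunk: predict it with the current ensemble, update the
    member weights from their accuracy on it, then train a new ELM on the chunk
    and drop the oldest member when the ensemble is full."""
    members, weights = [], []
    for i, (X, y) in enumerate(chunks):
        if members:
            pred = weighted_predict(members, weights, X)
            print(f"chunk {i}: ensemble accuracy {np.mean(pred == y):.3f}")
            weights = [np.mean(m.predict(X) == y) + 1e-6 for m in members]
        members.append(ELM(seed=i).fit(X, y))
        weights.append(1.0)
        if len(members) > n_members:
            members.pop(0)
            weights.pop(0)
    return members, weights
```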

  20. Employing Neocognitron Neural Network Base Ensemble Classifiers To Enhance Efficiency Of Classification In Handwritten Digit Datasets

    Directory of Open Access Journals (Sweden)

    Neera Saxena

    2011-07-01

    Full Text Available This paper presents an ensemble of neocognitron neural network base classifiers to enhance the accuracy of the system, along with experimental results. The method requires less computational preprocessing than other ensemble techniques because it dispenses with a separate feature extraction process before feeding the data into the base classifiers. This is possible because of the basic nature of the neocognitron, which is a multilayer feed-forward neural network. The ensemble of such base classifiers produces a class label for each pattern from every member, and these labels are in turn combined to give the final class label for that pattern. The purpose of this paper is not only to exemplify the learning behaviour of the neocognitron as a base classifier, but also to propose a better way to combine neural-network-based ensemble classifiers.