WorldWideScience

Sample records for classified information

  1. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security......

  2. 75 FR 705 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... Executive Order 13526--Classified National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009... ] Executive Order 13526 of December 29, 2009 Classified National Security Information This order prescribes...

  3. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  4. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

    ... and Records Administration Information Security Oversight Office 32 CFR Parts 2001 and 2003 Classified National Security Information; Final Rule Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010 / Rules and Regulations ] NATIONAL ARCHIVES AND RECORDS ADMINISTRATION Information...

  5. Searching and Classifying non-textual information

    OpenAIRE

    Arentz, Will Archer

    2004-01-01

    This dissertation contains a set of contributions that deal with search or classification of non-textual information. Each contribution can be considered a solution to a specific problem, in an attempt to map out a common ground. The problems cover a wide range of research fields, including search in music, classifying digitally sampled music, visualization and navigation in search results, and classifying images and Internet sites. On classification of digitally sampled music, a method for ex...

  6. Comparing cosmic web classifiers using information theory

    Science.gov (United States)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.
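
    The information-theoretic flavor of this comparison can be illustrated with a minimal sketch that measures, in bits, how much one discrete web-type labeling tells us about another. This is a generic mutual-information estimate over invented toy labelings, not the paper's utility-based decision scheme:

```python
import math
import random
from collections import Counter

def mutual_information(labels_a, labels_b):
    """Shannon mutual information (in bits) between two discrete labelings."""
    n = len(labels_a)
    joint = Counter(zip(labels_a, labels_b))
    pa = Counter(labels_a)
    pb = Counter(labels_b)
    # I(A;B) = sum over (a,b) of p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in joint.items())

# Toy voxel labelings: 0 = void, 1 = sheet, 2 = filament, 3 = cluster
random.seed(0)
truth = [random.randrange(4) for _ in range(10_000)]
perfect = list(truth)  # a classifier in full agreement with the reference
noisy = [random.randrange(4) if random.random() < 0.3 else t for t in truth]

mi_perfect = mutual_information(truth, perfect)  # close to 2 bits (4 balanced classes)
mi_noisy = mutual_information(truth, noisy)      # strictly less informative
```

    A perfectly agreeing classifier recovers the full entropy of the reference map (about 2 bits for four balanced classes), while label noise lowers the score.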

  7. Comparing cosmic web classifiers using information theory

    CERN Document Server

    Leclercq, Florent; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-web, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  8. Use Restricted - Classified information sharing, case NESA

    OpenAIRE

    El-Bash, Amira

    2015-01-01

    This Thesis is written for the Laurea University of Applied Sciences under the Bachelor’s Degree in Security Management. The empirical research of the thesis was supported by the National Emergency Supply Agency as a case study in classified information sharing in the organization. The National Emergency Supply Agency was chosen for the research because of its social significance and distinctively wide operation field. Being one of the country’s administrative actors, its range of tasks in ...

  9. Comparing cosmic web classifiers using information theory

    OpenAIRE

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative perf...

  10. Use of information barriers to protect classified information

    International Nuclear Information System (INIS)

    This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)

  11. What are the Differences between Bayesian Classifiers and Mutual-Information Classifiers?

    CERN Document Server

    Hu, Bao-Gang

    2011-01-01

    In this study, both Bayesian classifiers and mutual information classifiers are examined for binary classifications with or without a reject option. The general decision rules in terms of distinctions on error types and reject types are derived for Bayesian classifiers. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of "non-consistency" for interpreting cost terms. If no data is given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the theoretical differences, including the extremely-class-imbalanced cases. Finally, we briefly summarize the Bayesian classifiers and mutual-info...

  12. What are the differences between Bayesian classifiers and mutual-information classifiers?

    Science.gov (United States)

    Hu, Bao-Gang

    2014-02-01

    In this paper, both Bayesian and mutual-information classifiers are examined for binary classifications with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions on error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of nonconsistency for interpreting cost terms. If no data are given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the differences, including the extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian and mutual-information classifiers in terms of their application advantages and disadvantages, respectively. PMID:24807026
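
    The dependence of the Bayesian decision rule on cost terms, which the abstract criticizes, can be seen in a minimal binary sketch with a reject option. This is the classical Chow-style rule with illustrative cost values, not the authors' exact formulation:

```python
def bayes_decide(p1, cost_fp=1.0, cost_fn=1.0, cost_reject=None):
    """Cost-sensitive Bayes rule for a binary problem, given p1 = P(class 1 | x).

    Predicting class 1 risks a false positive (expected cost (1 - p1) * cost_fp);
    predicting class 0 risks a false negative (p1 * cost_fn).  With a reject
    option, abstain whenever even the better choice costs more than rejecting.
    """
    risk_1 = (1 - p1) * cost_fp
    risk_0 = p1 * cost_fn
    if cost_reject is not None and min(risk_0, risk_1) > cost_reject:
        return "reject"
    return 1 if risk_1 < risk_0 else 0

bayes_decide(0.95)                   # confident: predict class 1
bayes_decide(0.55, cost_reject=0.2)  # borderline: abstain
```

    Changing the cost terms moves the decision boundary, which is exactly the subjectivity the mutual-information formulation tries to avoid.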

  13. 21 CFR 1402.4 - Information classified by another agency.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Information classified by another agency. 1402.4 Section 1402.4 Food and Drugs OFFICE OF NATIONAL DRUG CONTROL POLICY MANDATORY DECLASSIFICATION REVIEW § 1402.4 Information classified by another agency. When a request is received for information that...

  14. Classifying and identifying servers for biomedical information retrieval.

    OpenAIRE

    Patrick, T. B.; Springer, G K

    1994-01-01

    Useful retrieval of biomedical information from network information sources requires methods for organized access to those information sources. This access must be organized in terms of the information content of information sources and in terms of the discovery of the network location of those information sources. We have developed an approach to providing organized access to information sources based on a scheme of hierarchical classifiers and identifiers of the servers providing access to ...

  15. 3 CFR - Classified Information and Controlled Unclassified Information

    Science.gov (United States)

    2010-01-01

    ... citizens, national security, or other legitimate interests, a democratic government accountable to the... and perceived technological obstacles to moving toward an information sharing culture, continue...

  16. 75 FR 51609 - Classified National Security Information Program for State, Local, Tribal, and Private Sector...

    Science.gov (United States)

    2010-08-23

    ... National Security Information Program for State, Local, Tribal, and Private Sector Entities By the... established a Classified National Security Information Program (Program) designed to safeguard and govern access to classified national security information shared by the Federal Government with State,...

  17. 32 CFR 2004.21 - Protection of Classified Information [201(e)].

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Protection of Classified Information . 2004.21 Section 2004.21 National Defense Other Regulations Relating to National Defense INFORMATION SECURITY... DIRECTIVE NO. 1 Operations § 2004.21 Protection of Classified Information . Procedures for the...

  18. Classifying and Designing the Educational Methods with Information Communications Technologies

    Directory of Open Access Journals (Sweden)

    I. N. Semenova

    2013-01-01

    Full Text Available The article describes the conceptual apparatus for implementing Information Communications Technologies (ICT) in education. The authors suggest the classification variants of the related teaching methods according to the following component combinations: types of students' work with information, goals of ICT incorporation into the training process, individualization degrees, contingent involvement, activity levels and pedagogical field targets, ideology of informational didactics, etc. Each classification can solve the educational tasks in the context of the partial paradigm of modern didactics; any kind of methods implies the particular combination of activities in the educational environment. The whole spectrum of classifications provides the informational functional basis for the adequate selection of necessary teaching methods in accordance with the specified goals and planned results. The potential variants of ICT implementation methods are given for different teaching models.

  19. 14 CFR 1213.106 - Preventing release of classified information to the media.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Preventing release of classified... ADMINISTRATION RELEASE OF INFORMATION TO NEWS AND INFORMATION MEDIA § 1213.106 Preventing release of classified... employee from responsibility for preventing any unauthorized release. See NPR 1600.1, Chapter 5, Section...

  20. 48 CFR 8.608 - Protection of classified and sensitive information.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Protection of classified and sensitive information. 8.608 Section 8.608 Federal Acquisition Regulations System FEDERAL... Prison Industries, Inc. 8.608 Protection of classified and sensitive information. Agencies shall...

  1. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
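
    The information-gain score that drives the mutation probabilities can be sketched directly. The toy corpus below is invented, and the genetic-algorithm step that would consume these scores is omitted:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    """IG(term) = H(class) - H(class | term present / absent)."""
    present = [y for d, y in zip(docs, labels) if term in d]
    absent = [y for d, y in zip(docs, labels) if term not in d]
    n = len(labels)
    conditional = (len(present) / n) * entropy(present) \
                + (len(absent) / n) * entropy(absent)
    return entropy(labels) - conditional

# Tiny invented corpus: each document is represented as its set of terms
docs = [{"reactor", "fuel"}, {"reactor", "core"}, {"goal", "team"}, {"team", "coach"}]
labels = ["nuclear", "nuclear", "sport", "sport"]

ig_reactor = information_gain(docs, labels, "reactor")  # 1.0 bit: splits the classes perfectly
ig_fuel = information_gain(docs, labels, "fuel")        # about 0.31 bit: weaker signal
```

    In the paper's scheme, dimensions with higher gain would be given mutation probabilities that favor their inclusion; here only the per-term score itself is shown.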

  2. A Probabilistic Approach to Classifying Supernovae Using Photometric Information

    OpenAIRE

    Natalia V. Kuznetsova; Connolly, Brian M.

    2006-01-01

    This paper presents a novel method for determining the probability that a supernova candidate belongs to a known supernova type (such as Ia, Ibc, IIL, etc.), using its photometric information alone. It is validated with Monte Carlo, and both space- and ground-based data. We examine the application of the method to well-sampled as well as poorly sampled supernova light curves and investigate to what extent the best currently available supernova models can be used for typing supernova c...

  3. Heuristics legislation in the field of classified information as a function of training subjects of defense

    Directory of Open Access Journals (Sweden)

    Paun J. Bereš

    2014-04-01

    Full Text Available Education on the protection of classified information should be the top priority when it comes to ensuring the protection of the vital interests of the state. Some information should not be made available to the public because it is mainly related to national security, and no one should question the need to protect this kind of data. This paper is intended for educators dealing with the protection of classified information, and especially for those who work with or come into contact with confidential information, in order to inform them of our national system of protection of classified information and enable the implementation of the existing legislation applying the heuristic model of education. This article describes the legal regulations governing the protection of data and shows mandatory standards and measures for the protection of classified information.

  4. A Probabilistic Approach to Classifying Supernovae Using Photometric Information

    CERN Document Server

    Kuznetsova, Natalia V.; Connolly, Brian M.

    2006-01-01

    This paper presents a novel method for determining the probability that a supernova candidate belongs to a known supernova type (such as Ia, Ibc, IIL, etc.), using its photometric information alone. It is validated with Monte Carlo, and both space- and ground-based data. We examine the application of the method to well-sampled as well as poorly sampled supernova light curves. Central to the method is the assumption that a supernova candidate belongs to a group of objects that can be modeled; we therefore discuss possible ways of removing anomalous or less well understood events from the sample. This method is particularly advantageous for analyses where the purity of the supernova sample is of the essence, or for those where it is important to know the number of the supernova candidates of a certain type (e.g., in supernova rate studies).
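
    The typing approach can be sketched as Bayesian model comparison against per-type light-curve templates. The templates, epochs, error model, and flux values below are all invented and far simpler than the models the paper fits:

```python
import math

# Invented two-type "light-curve templates": model flux at four fixed epochs
TEMPLATES = {"Ia": [1.0, 3.0, 2.0, 0.8], "II": [1.0, 1.5, 1.3, 1.1]}

def type_probabilities(flux, sigma=0.3, priors=None):
    """Posterior P(type | flux) assuming Gaussian photometric errors."""
    priors = priors or {t: 1.0 / len(TEMPLATES) for t in TEMPLATES}
    weighted = {}
    for t, model in TEMPLATES.items():
        # chi-squared misfit between observed and template fluxes
        chi2 = sum(((f - m) / sigma) ** 2 for f, m in zip(flux, model))
        weighted[t] = math.exp(-0.5 * chi2) * priors[t]
    norm = sum(weighted.values())
    return {t: w / norm for t, w in weighted.items()}

probs = type_probabilities([1.1, 2.8, 2.1, 0.7])  # candidate close to the Ia template
```

    A candidate whose fluxes track one template closely receives essentially all of the posterior mass; poorly sampled curves simply contribute fewer terms to the misfit.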

  5. 75 FR 733 - Implementation of the Executive Order, ``Classified National Security Information''

    Science.gov (United States)

    2010-01-05

    ... National Security Information'' Memorandum for the Heads of Executive Departments and Agencies Today I have signed an executive order entitled, ``Classified National Security Information'' (the ``order''), which... Director of the Information Security Oversight Office (ISOO) a copy of the department or agency...

  6. Local Sequence Information-based Support Vector Machine to Classify Voltage-gated Potassium Channels

    Institute of Scientific and Technical Information of China (English)

    Li-Xia LIU; Meng-Long LI; Fu-Yuan TAN; Min-Chun LU; Ke-Long WANG; Yan-Zhi GUO; Zhi-Ning WEN; Lin JIANG

    2006-01-01

    In our previous work, we developed a computational tool, PreK-ClassK-ClassKv, to predict and classify potassium (K+) channels. For K+ channel prediction (PreK) and classification at family level (ClassK), this method performs well. However, it does not perform so well in classifying voltage-gated potassium (Kv) channels (ClassKv). In this paper, a new method based on the local sequence information of Kv channels is introduced to classify Kv channels. Six transmembrane domains of a Kv channel protein are used to define a protein, and the dipeptide composition technique is used to transform an amino acid sequence to a numerical sequence. A Kv channel protein is represented by a vector with 2000 elements, and a support vector machine algorithm is applied to classify Kv channels. This method shows good performance with averages of total accuracy (Acc), sensitivity (SE), specificity (SP), reliability (R), and Matthews correlation coefficient (MCC) of 98.0%, 89.9%, 100%, 0.95, and 0.94, respectively. The results indicate that the local sequence information-based method is better than the global sequence information-based method to classify Kv channels.
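
    The dipeptide composition encoding mentioned above can be sketched as follows; this is the standard 400-dimensional version for a single sequence, and the paper's full 2000-element, six-domain representation and the SVM training step are not reproduced:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]  # 400 pairs

def dipeptide_composition(seq):
    """Relative frequency of each of the 400 dipeptides in a protein sequence."""
    counts = dict.fromkeys(DIPEPTIDES, 0)
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:          # skips pairs containing non-standard residues
            counts[pair] += 1
    total = max(len(seq) - 1, 1)
    return [counts[d] / total for d in DIPEPTIDES]

vec = dipeptide_composition("MKVLAAGLLV")  # toy fragment, not a real Kv domain
```

    Concatenating one such vector per segment yields a fixed-length numerical representation that a support vector machine can consume regardless of sequence length.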

  7. 28 CFR 17.47 - Denial or revocation of eligibility for access to classified information.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Denial or revocation of eligibility for access to classified information. 17.47 Section 17.47 Judicial Administration DEPARTMENT OF JUSTICE..., the identity of the deciding authority, and written notice of the right to appeal. (d) Within 30...

  8. 32 CFR 154.6 - Standards for access to classified information or assignment to sensitive duties.

    Science.gov (United States)

    2010-07-01

    ... or assignment to sensitive duties. 154.6 Section 154.6 National Defense Department of Defense OFFICE... Policies § 154.6 Standards for access to classified information or assignment to sensitive duties. (a... Department of Defense mission, including, special expertise, to assign an individual who is not a citizen...

  9. Attribute verification systems with information barriers for classified forms of plutonium in the trilateral initiative

    International Nuclear Information System (INIS)

    A team of technical experts from the Russian Federation, the International Atomic Energy Agency (IAEA), and the United States has been working since December 1997 to develop a toolkit of instruments that could be used to verify plutonium-bearing items that have classified characteristics in nuclear weapons states. This suite of instruments is similar in many ways to standard safeguards equipment and includes high-resolution gamma-ray spectrometers, neutron multiplicity counters, gross neutron counters, and gross gamma-ray detectors. In safeguards applications, this equipment is known to be robust and authentication methods are well understood. However, this equipment is very intrusive, and a traditional safeguards application of such equipment for verification of materials with classified characteristics would reveal classified information to the inspector. Several enabling technologies have been or are being developed to facilitate the use of these trusted, but intrusive safeguards technologies. In this paper, these new technologies will be described. (author)

  10. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Directory of Open Access Journals (Sweden)

    Elaine Astrand

    Full Text Available Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF: the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders.

  11. Comparison of classifiers for decoding sensory and cognitive information from prefrontal neuronal populations.

    Science.gov (United States)

    Astrand, Elaine; Enel, Pierre; Ibos, Guilhem; Dominey, Peter Ford; Baraduc, Pierre; Ben Hamed, Suliann

    2014-01-01

    Decoding neuronal information is important in neuroscience, both as a basic means to understand how neuronal activity is related to cerebral function and as a processing stage in driving neuroprosthetic effectors. Here, we compare the readout performance of six commonly used classifiers at decoding two different variables encoded by the spiking activity of the non-human primate frontal eye fields (FEF): the spatial position of a visual cue, and the instructed orientation of the animal's attention. While the first variable is exogenously driven by the environment, the second variable corresponds to the interpretation of the instruction conveyed by the cue; it is endogenously driven and corresponds to the output of internal cognitive operations performed on the visual attributes of the cue. These two variables were decoded using either a regularized optimal linear estimator in its explicit formulation, an optimal linear artificial neural network estimator, a non-linear artificial neural network estimator, a non-linear naïve Bayesian estimator, a non-linear Reservoir recurrent network classifier or a non-linear Support Vector Machine classifier. Our results suggest that endogenous information such as the orientation of attention can be decoded from the FEF with the same accuracy as exogenous visual information. All classifiers did not behave equally in the face of population size and heterogeneity, the available training and testing trials, the subject's behavior and the temporal structure of the variable of interest. In most situations, the regularized optimal linear estimator and the non-linear Support Vector Machine classifiers outperformed the other tested decoders.
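
    The style of decoder comparison described above can be sketched with one minimal decoder, a per-class mean ("nearest centroid") readout, evaluated on invented population activity; the six classifiers and the real FEF recordings are not reproduced here:

```python
import random

def nearest_centroid_fit(X, y):
    """Per-class mean firing-rate vector: a minimal linear decoder."""
    classes = sorted(set(y))
    return {c: [sum(col) / len(col)
                for col in zip(*(x for x, yi in zip(X, y) if yi == c))]
            for c in classes}

def nearest_centroid_predict(centroids, x):
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Synthetic "population activity": 20 neurons, two cue positions
random.seed(1)
def trial(pos):
    tuning = [1.0 if i % 2 == pos else 0.0 for i in range(20)]
    return [t + random.gauss(0, 0.4) for t in tuning]

X = [trial(p) for _ in range(100) for p in (0, 1)]
y = [p for _ in range(100) for p in (0, 1)]
centroids = nearest_centroid_fit(X[:100], y[:100])
accuracy = sum(nearest_centroid_predict(centroids, xi) == yi
               for xi, yi in zip(X[100:], y[100:])) / 100
```

    Each competing classifier would be trained and scored on the same train/test split, so that differences in accuracy reflect the decoders rather than the data.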

  12. Classifying and filtering blind feedback terms to improve information retrieval effectiveness

    OpenAIRE

    Leveling, Johannes; Jones, Gareth J. F.

    2010-01-01

    The classification of blind relevance feedback (BRF) terms described in this paper aims at increasing precision or recall by determining which terms decrease, increase or do not change the corresponding information retrieval (IR) performance metric. Classification and IR experiments are performed on the German and English GIRT data, using the BM25 retrieval model. Several basic memory-based classifiers are trained on different feature sets, grouping together features from different query ...

  13. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor

    Science.gov (United States)

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider’s hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse’s gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider’s motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country’s top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098
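
    The fuzzy c-means clustering at the heart of the NFC design can be sketched in isolation. This is the textbook FCM update (fuzzifier m = 2) run on invented 1-D data rather than wavelet-packet motion features:

```python
import random

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Textbook fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = random.Random(seed)
    # random initial memberships, normalized so each sample's row sums to 1
    U = []
    for _ in X:
        row = [rng.random() + 1e-9 for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    dims = len(X[0])
    for _ in range(iters):
        # centers: membership^m - weighted means of the samples
        centers = []
        for j in range(c):
            w = [U[i][j] ** m for i in range(len(X))]
            centers.append([sum(wi * x[k] for wi, x in zip(w, X)) / sum(w)
                            for k in range(dims)])
        # memberships: inverse-distance update
        for i, x in enumerate(X):
            dist = [max(sum((a - b) ** 2 for a, b in zip(x, cj)) ** 0.5, 1e-12)
                    for cj in centers]
            for j in range(c):
                U[i][j] = 1.0 / sum((dist[j] / dist[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return centers, U

# Two well-separated 1-D "feature" groups
centers, U = fuzzy_c_means([[0.0], [0.1], [0.9], [1.0]])
```

    Because memberships are soft, the scatter partitioning grows with the number of clusters rather than with the feature dimension, which is the property the NFC design exploits.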

  15. 48 CFR 53.204-1 - Safeguarding classified information within industry (DD Form 254, DD Form 441).

    Science.gov (United States)

    2010-10-01

    ... information within industry (DD Form 254, DD Form 441). 53.204-1 Section 53.204-1 Federal Acquisition....204-1 Safeguarding classified information within industry (DD Form 254, DD Form 441). The following... specified in subpart 4.4 and the clause at 52.204-2: (a) DD Form 254 (Department of Defense (DOD)),...

  16. Towards Evidence-based Precision Medicine: Extracting Population Information from Biomedical Text using Binary Classifiers and Syntactic Patterns.

    Science.gov (United States)

    Raja, Kalpana; Dasot, Naman; Goyal, Pawan; Jonnalagadda, Siddhartha R

    2016-01-01

    Precision Medicine is an emerging approach for prevention and treatment of disease that considers individual variability in genes, environment, and lifestyle for each person. The dissemination of individualized evidence by automatically identifying population information in literature is a key for evidence-based precision medicine at the point-of-care. We propose a hybrid approach using natural language processing techniques to automatically extract the population information from biomedical literature. Our approach first implements a binary classifier to classify sentences with or without population information. A rule-based system based on syntactic-tree regular expressions is then applied to sentences containing population information to extract the population named entities. The proposed two-stage approach achieved an F-score of 0.81 using a MaxEnt classifier and the rule-based system, and an F-score of 0.87 using a Naïve-Bayes classifier and the rule-based system, and performed relatively well compared to many existing systems. The system and evaluation dataset are being released as open source. PMID:27570671
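
    A much-reduced sketch of the two-stage pipeline (sentence filter, then phrase extraction) follows; the keyword filter and flat regular expression are illustrative stand-ins for the trained classifiers and syntactic-tree patterns, and the cue words are assumptions:

```python
import re

# Stage 1: sentence-level filter for population mentions.  A keyword heuristic
# stands in for the paper's MaxEnt / Naïve-Bayes classifier.
POP_CUES = re.compile(r"\b(patients?|participants?|subjects?|adults?|children)\b", re.I)

def has_population(sentence):
    return bool(POP_CUES.search(sentence))

# Stage 2: extract "<count> <group>" phrases.  A flat regular expression stands
# in for the paper's syntactic-tree patterns.
POP_PHRASE = re.compile(
    r"\b(\d[\d,]*)\s+(patients?|participants?|subjects?|adults?|children)\b", re.I)

def extract_population(sentence):
    return [f"{m.group(1)} {m.group(2)}" for m in POP_PHRASE.finditer(sentence)]

s = "We enrolled 120 patients with type 2 diabetes and 40 adults as controls."
extract_population(s)  # ['120 patients', '40 adults']
```

    Splitting the task this way lets the extraction rules run only on sentences the classifier has already flagged, which is what keeps the precision of the hybrid system high.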

  17. Evaluation of classified protection of information security in universities

    Institute of Scientific and Technical Information of China (English)

    刘玉燕

    2011-01-01

    Classified protection of information security, risk assessment, and system security evaluation are important components of the current national information security assurance system. Classified protection sets the standard; assessment and evaluation are the means. This paper discusses issues related to the classified protection of information security. Implementing classified protection of information security can promote the establishment and improvement of network security service mechanisms; it supports the adoption of systematic, standardized, and scientific management and safeguard measures, raises the level of information security protection, and ensures that the business systems of the various departments operate efficiently and securely.

  18. 19 CFR 351.105 - Public, business proprietary, privileged, and classified information.

    Science.gov (United States)

    2010-04-01

    ... proprietary information was obtained; (10) The position of a domestic producer or workers regarding a petition... not properly designated as business proprietary; (4) Publicly available laws, regulations, decrees... consider information privileged if, based on principles of law concerning privileged information,...

  19. 3 CFR 13526 - Executive Order 13526 of December 29, 2009. Classified National Security Information

    Science.gov (United States)

    2010-01-01

    ... of the national security. (b) Basic scientific research information not clearly related to the... declassification work processes, training, and quality assurance measures; (5) the development of solutions to... receive training in proper classification (including the avoidance of over-classification)...

  20. 77 FR 65709 - Agency Information Collection Activities: Petition To Classify Orphan as an Immediate Relative...

    Science.gov (United States)

    2012-10-30

    ... call the USCIS National Customer Service Center at 1-800-375-5283. Written comments and suggestions... adult member (age 18 and older), who lives in the home of the prospective adoptive parent(s), except for... SECURITY U.S. Citizenship and Immigration Services Agency Information Collection Activities: Petition...

  1. MOWGLI: prediction of protein-MannOse interacting residues With ensemble classifiers usinG evoLutionary Information.

    Science.gov (United States)

    Pai, Priyadarshini P; Mondal, Sukanta

    2016-10-01

    Proteins interact with carbohydrates to perform various cellular interactions. Of the many carbohydrate ligands that proteins bind, mannose constitutes an important class, playing key roles in host defense mechanisms. Accurate identification of mannose-interacting residues (MIR) may provide important clues to decipher the underlying mechanisms of protein-mannose interactions during infections. This study proposes an approach using an ensemble of base classifiers for prediction of MIR using their evolutionary information in the form of a position-specific scoring matrix. The base classifiers are random forests trained on different subsets of the training data set Dset128 using 10-fold cross-validation. The optimized ensemble of base classifiers, MOWGLI, is then used to predict MIR on protein chains of the test data set Dtestset29, where it showed promising performance with 92.0% prediction accuracy. An overall improvement of 26.6% in precision was observed upon comparison with the state of the art. It is hoped that this approach, yielding enhanced predictions, could eventually be used for applications in drug design and vaccine development. PMID:26457920
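    Per-residue features from a position-specific scoring matrix are typically built by flattening a window of PSSM rows around each position. A minimal sketch follows; the window size, zero-padding choice, and the toy matrix are assumptions for illustration, not MOWGLI's actual feature encoding:

    ```python
    def window_features(pssm, i, w=2):
        """Flatten the (2w+1)-row window of a PSSM centred on residue i,
        zero-padding positions that fall past the sequence ends."""
        ncols = len(pssm[0])
        feats = []
        for j in range(i - w, i + w + 1):
            feats.extend(pssm[j] if 0 <= j < len(pssm) else [0.0] * ncols)
        return feats

    # a toy 4-residue "PSSM" with 3 score columns (real PSSMs have 20)
    pssm = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9], [1.0, 1.1, 1.2]]
    features = window_features(pssm, 0, w=1)
    ```

    Each residue's feature vector can then be fed to the base classifiers, with one prediction made per position in the chain.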

  2. Supervised Feature Subset Selection based on Modified Fuzzy Relative Information Measure for classifier Cart

    OpenAIRE

    K.SAROJINI,; Dr. K.THANGAVEL; D.DEVAKUMARI

    2010-01-01

    Feature subset selection is an essential task in data mining. This paper presents a new method for supervised feature subset selection based on a Modified Fuzzy Relative Information Measure (MFRIM). First, a discretization algorithm is applied to discretize numeric features and construct the membership functions of each fuzzy set of a feature. Then the proposed MFRIM is applied to select the feature subset focusing on boundary samples. The proposed method can select feature subset wi...

  3. Scientometric Indicators as a Way to Classify Brands for Customer’s Information

    Directory of Open Access Journals (Sweden)

    Mihaela Paun

    2015-10-01

    Full Text Available The paper proposes a novel approach for classification of different brands that commercialize similar products, for customer information. The approach is tested on electronic shopping records found on Amazon.com, by quantifying customer behavior and comparing the results with classifications of the same brands found online through search engines. The indicators proposed for the classification are currently used scientometric measures that can be easily applied to marketing classification.

  4. Supervised Feature Subset Selection based on Modified Fuzzy Relative Information Measure for classifier Cart

    Directory of Open Access Journals (Sweden)

    K.SAROJINI,

    2010-06-01

    Full Text Available Feature subset selection is an essential task in data mining. This paper presents a new method for supervised feature subset selection based on a Modified Fuzzy Relative Information Measure (MFRIM). First, a discretization algorithm is applied to discretize numeric features and construct the membership functions of each fuzzy set of a feature. Then the proposed MFRIM is applied to select the feature subset focusing on boundary samples. The proposed method selects the feature subset with the minimum number of features that remain relevant for higher average classification accuracy. Experimental results on UCI datasets show that the proposed algorithm is effective and efficient, selecting subsets with fewer features while achieving higher average classification accuracy than the consistency-based feature subset selection method.

  5. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...

  6. Characterizing, Classifying, and Understanding Information Security Laws and Regulations: Considerations for Policymakers and Organizations Protecting Sensitive Information Assets

    Science.gov (United States)

    Thaw, David Bernard

    2011-01-01

    Current scholarly understanding of information security regulation in the United States is limited. Several competing mechanisms exist, many of which are untested in the courts and before state regulators, and new mechanisms are being proposed on a regular basis. Perhaps of even greater concern, the pace at which technology and threats change far…

  7. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  8. An Advancement To The Security Level Through Galois Field In The Existing Password Based Technique Of Hiding Classified Information In Images

    Directory of Open Access Journals (Sweden)

    Mita Kosode

    2015-06-01

    Full Text Available Abstract In this paper we extend the existing passcode-based approach to hiding classified information in images with Galois field theory, advancing the security level so that the combined method is extremely difficult to intercept. The method remains useful for open-channel communication while keeping losses low and maintaining high-speed transmission.
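    The abstract does not detail how the Galois field is applied to the hidden payload, but the arithmetic such schemes build on can be illustrated. Below is byte multiplication in GF(2^8); the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) is an assumed example choice, not necessarily the one used by the authors:

    ```python
    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8), reducing by x^8 + x^4 + x^3 + x + 1 (0x11B)."""
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a           # add (XOR) the current multiple of a
            b >>= 1
            carry = a & 0x80
            a = (a << 1) & 0xFF  # multiply a by x
            if carry:
                a ^= 0x1B        # reduce modulo the field polynomial
        return p

    print(hex(gf_mul(0x53, 0xCA)))  # 0x53 and 0xCA are multiplicative inverses in this field
    ```

    Because every nonzero byte has an inverse, transforms built on this arithmetic are exactly reversible by the receiver who knows the key, which is what makes field operations attractive for scrambling embedded data.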

  9. A Learning Outcome-Oriented Approach towards Classifying Pervasive Games for Learning Using Game Design Patterns and Contextual Information

    Science.gov (United States)

    Schmitz, Birgit; Klemke, Roland; Specht, Marcus

    2013-01-01

    Mobile and in particular pervasive games are a strong component of future scenarios for teaching and learning. Based on results from a previous review of practical papers, this work explores the educational potential of pervasive games for learning by analysing underlying game mechanisms. In order to determine and classify cognitive and affective…

  10. Multi-source Fuzzy Information Fusion Method Based on Bayesian Optimal Classifier%基于贝叶斯最优分类器的多源模糊信息融合方法

    Institute of Scientific and Technical Information of China (English)

    苏宏升

    2008-01-01

    To give the conventional Bayesian optimal classifier the ability to handle fuzzy information and to automate its reasoning process, a new Bayesian optimal classifier with embedded fuzzy information is proposed. It not only handles fuzzy information effectively but also retains the learning properties of the Bayesian optimal classifier. In addition, following the evolution of fuzzy set theory, vague sets are also embedded to generate a vague Bayesian optimal classifier, which can simultaneously simulate the twofold character of fuzzy information from the positive and reverse directions. Further, a set pair Bayesian optimal classifier is proposed to consider the threefold character of fuzzy information from the positive, reverse, and indeterminate sides. Finally, a knowledge-based artificial neural network (KBANN) is presented to realize automatic reasoning in the Bayesian optimal classifier; it not only reduces the computational cost of the Bayesian optimal classifier but also improves its classification learning quality.

  11. Classified information system security risk assessment based on hierarchical analysis

    Institute of Scientific and Technical Information of China (English)

    李增鹏; 马春光; 李迎涛

    2014-01-01

    The development of information technology has led government and military departments to place higher demands on information system security. Classified information systems have unique security characteristics, so their risk assessment differs from that of ordinary information security systems. Taking classified information systems as the research object, this paper first describes their characteristics and, in view of their uniqueness, analyzes and evaluates existing information security risk assessment methods. An assessment model based on the analytic hierarchy process (AHP) is then introduced into classified information system security risk assessment, providing a new technical approach for such assessments. Finally, an example analysis shows that the proposed model places no requirements on the distribution of the discrete data obtained during the evaluation process; compared with the Delphi method and BP neural networks, the model offers practicality and extensibility, making it suitable for real-world classified information system risk assessment.

  12. A Classifier Ensemble of Binary Classifier Ensembles

    Directory of Open Access Journals (Sweden)

    Sajad Parvin

    2011-09-01

    Full Text Available This paper proposes an innovative combinational algorithm to improve performance in multiclass classification domains. Because a more accurate classifier yields better classification performance, researchers in the computing community have long sought to improve classifier accuracy. However, turning to the single best classifier is not always the best way to obtain the best classification quality: an alternative is to use many inaccurate or weak classifiers, each specialized for a sub-space of the problem space, and to take their consensus vote as the final decision. This paper therefore proposes a heuristic classifier ensemble to improve the performance of classification learning, aimed especially at multiclass problems, where the goal is to learn the boundaries of each class against many other classes. Based on the structure of multiclass problems, classifiers fall into two categories: pairwise classifiers and multiclass classifiers. The aim of a pairwise classifier is to separate one class from another; because pairwise classifiers are trained only to discriminate between two classes, their decision boundaries are simpler and more effective than those of multiclass classifiers. The main idea behind the proposed method is to focus classifiers on the erroneous regions of the problem space and to use pairwise rather than multiclass classification. Although substituting pairwise for multiclass classification is not new, we propose a new pairwise classifier ensemble of much lower order. First, the most frequently confused classes are determined, and then ensembles of classifiers are created for them; the classifiers of each ensemble work jointly using majority weighted voting.
The results of these ensembles
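    The pairwise-vote idea described above can be sketched with a deliberately simple base learner. The nearest-centroid base classifier and the toy data below are assumptions for illustration, not the ensemble the paper actually builds:

    ```python
    from itertools import combinations

    def centroid(points):
        """Mean vector of a list of points."""
        return [sum(col) / len(points) for col in zip(*points)]

    def dist2(a, b):
        """Squared Euclidean distance."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def train_pairwise(X, y):
        """One nearest-centroid classifier per class pair (one-vs-one)."""
        classes = sorted(set(y))
        models = {}
        for a, b in combinations(classes, 2):
            models[(a, b)] = {c: centroid([x for x, t in zip(X, y) if t == c])
                              for c in (a, b)}
        return models

    def predict_pairwise(models, x):
        """Each pairwise classifier casts one vote; the majority wins."""
        votes = {}
        for (a, b), cents in models.items():
            winner = a if dist2(x, cents[a]) <= dist2(x, cents[b]) else b
            votes[winner] = votes.get(winner, 0) + 1
        return max(votes, key=votes.get)

    X = [[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [11, 0]]
    y = ["a", "a", "b", "b", "c", "c"]
    models = train_pairwise(X, y)
    ```

    With k classes this trains k(k-1)/2 binary models, each seeing only two classes; the paper's contribution is to build such pairwise ensembles only for the most confused class pairs, lowering the order of the ensemble.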

  13. Classifying Pediatric Central Nervous System Tumors through near Optimal Feature Selection and Mutual Information: A Single Center Cohort

    Directory of Open Access Journals (Sweden)

    Mohammad Faranoush

    2013-10-01

    Full Text Available Background: Labeling, gathering mutual information, clustering and classification of central nervous system tumors may assist in predicting not only distinct diagnoses based on tumor-specific features but also prognosis. This study evaluates the epidemiological features of central nervous system tumors in children who referred to Mahak's Pediatric Cancer Treatment and Research Center in Tehran, Iran. Methods: This cohort (convenience sample) study comprised 198 children (≤15 years old) with central nervous system tumors who referred to Mahak's Pediatric Cancer Treatment and Research Center from 2007 to 2010. In addition to the descriptive analyses on epidemiological features and mutual information, we used the Least Squares Support Vector Machines method in MATLAB software to propose a preliminary predictive model of pediatric central nervous system tumor feature-label analysis. Results: Of patients, there were 63.1% males and 36.9% females. Patients' mean±SD age was 6.11±3.65 years. Tumor location was as follows: supra-tentorial (30.3%), infra-tentorial (67.7%) and spinal (2%). The most frequent tumors registered were: high-grade glioma (supra-tentorial) in 36 (59.99%) patients and medulloblastoma (infra-tentorial) in 65 (48.51%) patients. The most prevalent clinical findings included vomiting, headache and impaired vision. Gender, age, ethnicity, tumor stage and the presence of metastasis were the features predictive of supra-tentorial tumor histology. Conclusion: Our data agreed with previous reports on the epidemiology of central nervous system tumors. Our feature-label analysis has shown how presenting features may partially predict diagnosis. Timely diagnosis and management of central nervous system tumors can lead to decreased disease burden and improved survival. This may be further facilitated through development of partitioning, risk prediction and prognostic models.

  14. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen's case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF
  15. Uncertainty and Climate Change and its effect on Generalization and Prediction abilities by creating Diverse Classifiers and Feature Section Methods using Information Fusion

    Directory of Open Access Journals (Sweden)

    Y. P. Kosta

    2010-11-01

    Full Text Available The model forecast suggests a deterministic approach. Forecasting was traditionally done by a single model, a deterministic prediction; recent years have witnessed drastic changes. Today, with Information Fusion (ensemble) techniques it is possible to improve the generalization ability of classifiers with high levels of reliability. Through Information Fusion it is easily possible to combine diverse and independent outcomes for decision-making. This approach adopts the idea of combining the results of multiple methods (two-way interactions between them) using an appropriate model on the test set. Although uncertainties are often very significant, for the purpose of a single prediction, especially at the initial stage, one does not consider uncertainties in the model, the initial conditions, or the very nature of the climate (environment or atmosphere) itself when using a single model. If we make small changes in the initial parameter setting, it will result in a change in the predictive accuracy of the model. Similarly, uncertainty in model physics can result in large forecast differences and errors. So, instead of running one prediction, run a collection (ensemble) of predictions, each one kick-starting from a different initial state or with different conditions and sequentially executing the next. The variations resulting from the execution of the different prediction models could then be used (independently, by combining or aggregating) to estimate the uncertainty of the prediction, giving us better accuracy and reliability. In this paper the authors propose to use Information Fusion techniques that will provide insight into the probable key parameters necessary to purposefully evaluate the success of a new generation of products and services, improving forecasting. Ensembles can be creatively applied to provide insight against the new generation of products, yielding higher probabilities of success.
Ensembles will yield critical features of the products and also provide insight to

  16. An examination of electronic file transfer between host and microcomputers for the AMPMODNET/AIMNET (Army Material Plan Modernization Network/Acquisition Information Management Network) classified network environment

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1990-11-01

    This report presents the results of investigation and testing conducted by Oak Ridge National Laboratory (ORNL) for the Project Manager -- Acquisition Information Management (PM-AIM), and the United States Army Materiel Command Headquarters (HQ-AMC). It concerns the establishment of file transfer capabilities on the Army Materiel Plan Modernization (AMPMOD) classified computer system. The discussion provides a general context for micro-to-mainframe connectivity and focuses specifically upon two possible solutions for file transfer capabilities. The second section of this report contains a statement of the problem to be examined, a brief description of the institutional setting of the investigation, and a concise declaration of purpose. The third section lays a conceptual foundation for micro-to-mainframe connectivity and provides a more detailed description of the AMPMOD computing environment. It gives emphasis to the generalized International Business Machines, Inc. (IBM) standard of connectivity because of the predominance of this vendor in the AMPMOD computing environment. The fourth section discusses two test cases as possible solutions for file transfer. The first solution used is the IBM 3270 Control Program telecommunications and terminal emulation software. A version of this software was available on all the IBM Tempest Personal Computer 3s. The second solution used is Distributed Office Support System host electronic mail software with Personal Services/Personal Computer microcomputer e-mail software running with IBM 3270 Workstation Program for terminal emulation. Test conditions and results are presented for both test cases. The fifth section provides a summary of findings for the two possible solutions tested for AMPMOD file transfer. The report concludes with observations on current AMPMOD understanding of file transfer and includes recommendations for future consideration by the sponsor.

  17. Classifying unstructured text using structured training instances and ensemble classifiers

    OpenAIRE

    Lianos, Andreas; Yang, Yanyan

    2015-01-01

    Typical supervised classification techniques require training instances similar to the values that need to be classified. This research proposes a methodology that can utilize training instances found in a different format. The benefit of this approach is that it allows the use of traditional classification techniques, without the need to hand-tag training instances if the information exists in other data sources. The proposed approach is presented through a practical classification applicati...

  18. Recognition Using Hybrid Classifiers.

    Science.gov (United States)

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply. PMID:26959677

  19. Dynamic system classifier

    CERN Document Server

    Pumpe, Daniel; Müller, Ewald; Enßlin, Torsten A

    2016-01-01

    Stochastic differential equations describe well many physical, biological and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of DSC to oscillation processes with a time dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiment...

  20. Classifying Cereal Data

    Science.gov (United States)

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  1. Intelligent Garbage Classifier

    OpenAIRE

    Ignacio Rodríguez Novelle; Javier Pérez Cid; Alvaro Salmador

    2008-01-01

    IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  2. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  3. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier) is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  4. Text Classification and Classifiers:A Survey

    Directory of Open Access Journals (Sweden)

    Vandana Korde

    2012-03-01

    Full Text Available As most information (over 80%) is stored as text, text mining is believed to have high commercial potential value. Knowledge may be discovered from many sources of information; yet unstructured texts remain the largest readily available source of knowledge. Text classification assigns documents to predefined categories. In this paper we give an introduction to text classification and its process, along with an overview of classifiers, and compare some existing classifiers on the basis of criteria such as time complexity, principle, and performance.

  5. Adaboost Ensemble Classifiers for Corporate Default Prediction

    Directory of Open Access Journals (Sweden)

    Suresh Ramakrishnan

    2015-01-01

    Full Text Available This study aims to show a substitute technique for corporate default prediction. Data mining techniques have been extensively applied to this task, owing to their ability to detect non-linear relationships and to perform well in the presence of noisy information, as usually happens in corporate default prediction problems. In spite of the several advanced methods that have been widely proposed, this area of research is not outdated and still needs further examination. In this study, the performance of multiple classifier systems is assessed in terms of their capability to appropriately classify default and non-default Malaysian firms listed on Bursa Malaysia. Multi-stage combination classifiers provided significant improvements over single classifiers. In addition, Adaboost shows an improvement in performance over single classifiers.

  6. Arabic Word Recognition by Classifiers and Context

    Institute of Scientific and Technical Information of China (English)

    Nadir Farah; Labiba Souici; Mokhtar Sellami

    2005-01-01

    Given the number and variety of methods used for handwriting recognition, it has been shown that there is no single method that can be called the "best". In recent years, the combination of different classifiers and the use of contextual information have become major areas of interest in improving recognition results. This paper addresses a case study on the combination of multiple classifiers and the integration of syntactic level information for the recognition of handwritten Arabic literal amounts. To the best of our knowledge, this is the first time either of these methods has been applied to Arabic word recognition. Using three individual classifiers with high level global features, we performed word recognition experiments. A parallel combination method was tested for all possible configuration cases of the three chosen classifiers. A syntactic analyzer makes a final decision on the candidate words generated by the best configuration scheme.The effectiveness of contextual knowledge integration in our application is confirmed by the obtained results.

  7. A Sequential Algorithm for Training Text Classifiers

    CERN Document Server

    Lewis, D D; Lewis, David D.; Gale, William A.

    1994-01-01

    The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
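
    The uncertainty sampling loop described above can be sketched in a few lines: repeatedly train on the current labeled set, then query the pool example the classifier is least certain about. The toy 1-D classifier, the oracle threshold, and all data below are illustrative assumptions, not the paper's newswire setup.

```python
import math

def train_centroid_model(labeled):
    """Toy 1-D classifier: the mean of each class (an illustrative stand-in
    for the statistical text classifier used in the paper)."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return sum(pos) / len(pos), sum(neg) / len(neg)

def predict_proba(model, x):
    """P(y=1 | x) via a logistic link on the difference of squared distances."""
    mu_pos, mu_neg = model
    score = (x - mu_neg) ** 2 - (x - mu_pos) ** 2  # > 0 when x is nearer mu_pos
    return 1.0 / (1.0 + math.exp(-score))

def uncertainty_sampling(labeled, pool, rounds):
    """Each round, label the pool point whose posterior is closest to 0.5
    (here a hypothetical oracle with true boundary 0.5 supplies the label)."""
    labeled, pool = list(labeled), list(pool)
    for _ in range(rounds):
        model = train_centroid_model(labeled)
        x = min(pool, key=lambda v: abs(predict_proba(model, v) - 0.5))
        pool.remove(x)
        labeled.append((x, 1 if x > 0.5 else 0))
    return labeled

seed = [(0.1, 0), (0.9, 1)]
pool = [0.3, 0.45, 0.55, 0.7]
grown = uncertainty_sampling(seed, pool, rounds=2)
print(sorted(x for x, _ in grown[2:]))  # queries concentrate near the boundary
```

    Note how the two queried points are the ones straddling the decision boundary; the easy points far from it are never labeled, which is the source of the savings the abstract reports.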

  8. Quality Classifiers for Open Source Software Repositories

    OpenAIRE

    Tsatsaronis, George; Halkidi, Maria; Giakoumakis, Emmanouel A.

    2009-01-01

    Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. The OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such data repositories. The classifiers learn to predict the successful continuation of an OSS project. The `successfulness' of projects is defined in terms of th...

  9. Botnet analysis using ensemble classifier

    Directory of Open Access Journals (Sweden)

    Anchit Bijalwan

    2016-09-01

    Full Text Available This paper analyses botnet traffic using an ensemble-of-classifiers algorithm to find bot evidence. We used the ISCX dataset for training and testing. We extracted the features of both the training and testing datasets, bifurcated these features into two classes, normal traffic and botnet traffic, and provided labels. Thereafter, using a modern data mining tool, we applied the ensemble-of-classifiers algorithm. Our experimental results show that the performance in finding bot evidence using an ensemble of classifiers is better than that of a single classifier. Ensemble-based classifiers perform better than a single classifier by either combining the powers of multiple algorithms or introducing diversification into the same classifier by varying the input. Our results show that the voting method of ensemble-based classification increases accuracy from 93.37% to 96.41%.
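
    The voting scheme the abstract mentions can be sketched as a hard-voting ensemble: each base classifier casts one vote and the majority class wins. The base classifiers and flow features below are invented for illustration, not the ones used on the ISCX dataset.

```python
from collections import Counter

def majority_vote(classifiers, sample):
    """Hard-voting ensemble: each base classifier casts one vote."""
    votes = [clf(sample) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Hypothetical base classifiers operating on a dict of flow features.
by_packet_rate = lambda f: "bot" if f["pkts_per_s"] > 100 else "normal"
by_duration    = lambda f: "bot" if f["duration_s"] < 1.0 else "normal"
by_dst_ports   = lambda f: "bot" if f["distinct_ports"] > 20 else "normal"

ensemble = [by_packet_rate, by_duration, by_dst_ports]
flow = {"pkts_per_s": 250, "duration_s": 0.4, "distinct_ports": 5}
print(majority_vote(ensemble, flow))  # two of three vote "bot"
```

    Even though the port-based rule disagrees, the ensemble labels the flow as bot traffic; this tolerance of individual errors is why the voted ensemble outperforms any single rule.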

  10. Local Component Analysis for Nonparametric Bayes Classifier

    CERN Document Server

    Khademi, Mahmoud; Safayani, Mehran

    2010-01-01

    The decision boundaries of Bayes classifier are optimal because they lead to maximum probability of correct decision. It means if we knew the prior probabilities and the class-conditional densities, we could design a classifier which gives the lowest probability of error. However, in classification based on nonparametric density estimation methods such as Parzen windows, the decision regions depend on the choice of parameters such as window width. Moreover, these methods suffer from curse of dimensionality of the feature space and small sample size problem which severely restricts their practical applications. In this paper, we address these problems by introducing a novel dimension reduction and classification method based on local component analysis. In this method, by adopting an iterative cross-validation algorithm, we simultaneously estimate the optimal transformation matrices (for dimension reduction) and classifier parameters based on local information. The proposed method can classify the data with co...

  11. Classifying Entropy Measures

    Directory of Open Access Journals (Sweden)

    Angel Garrido

    2011-07-01

    Full Text Available Our paper analyzes some aspects of Uncertainty Measures. We need to obtain new ways to model adequate conditions or restrictions, constructed from vague pieces of information. The classical entropy measure originates from scientific fields; more specifically, from Statistical Physics and Thermodynamics. In time it was adapted by Claude Shannon, creating the still-expanding field of Information Theory. However, the Hungarian mathematician Alfréd Rényi proved that different, equally valid entropy measures exist, in accordance with the purpose and/or need of the application. Accordingly, it is essential to clarify the different types of measures and their mutual relationships. For these reasons, we attempt here an adequate revision of such fuzzy entropy measures from a mathematical point of view.
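
    As a concrete anchor for the measures discussed, here is a minimal sketch of the Shannon and Rényi entropies of a discrete distribution; the example distributions are arbitrary, and the Rényi family recovers Shannon entropy in the limit alpha → 1.

```python
import math

def shannon_entropy(p):
    """H(P) = -sum p_i log2 p_i (terms with p_i = 0 contribute nothing)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def renyi_entropy(p, alpha):
    """H_alpha(P) = log2(sum p_i^alpha) / (1 - alpha), for alpha != 1.
    As alpha -> 1 this tends to the Shannon entropy."""
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

uniform = [0.25] * 4
print(shannon_entropy(uniform))      # 2.0 bits
print(renyi_entropy(uniform, 2))     # collision entropy; also 2.0 when uniform

skewed = [0.7, 0.1, 0.1, 0.1]
# Near alpha = 1 the Renyi entropy approximates the Shannon entropy:
print(abs(renyi_entropy(skewed, 1.001) - shannon_entropy(skewed)) < 1e-2)
```

    For the uniform distribution every member of the family agrees; the measures only diverge on skewed distributions, which is why the choice of alpha matters for a given application.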

  12. Emergent behaviors of classifier systems

    Energy Technology Data Exchange (ETDEWEB)

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems emerge, to study how such behaviors might emerge over time, and to make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  13. Evolving Classifiers: Methods for Incremental Learning

    CERN Document Server

    Hulley, Greg

    2007-01-01

    The ability of a classifier to take on new information and classes by evolving the classifier, without it having to be fully retrained, is known as incremental learning. Incremental learning has been successfully applied to many classification problems, where the data is changing and is not all available at once. This paper compares Learn++, one of the most recent incremental learning algorithms, with the newly proposed method of Incremental Learning Using Genetic Algorithm (ILUGA). Learn++ has shown good incremental learning capabilities on benchmark datasets on which the new ILUGA method has been tested. ILUGA has also shown good incremental learning ability using only a few classifiers and does not suffer from catastrophic forgetting. The results obtained for ILUGA on the Optical Character Recognition (OCR) and Wine datasets are good, with overall accuracies of 93% and 94% respectively, showing a 4% improvement over Learn++.MT for the difficult multi-class OCR dataset.

  14. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages. PMID:17431300
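
    The discrete text classifier described above applies the naive Bayes rule: class priors multiplied by per-word likelihoods, usually smoothed so unseen words do not zero out the product. A minimal sketch, with a toy vocabulary and documents invented for illustration (not the paper's data):

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label). Returns priors and Laplace-smoothed
    word likelihoods per class."""
    labels = [y for _, y in docs]
    priors = {y: n / len(docs) for y, n in Counter(labels).items()}
    counts = {y: Counter() for y in priors}
    for tokens, y in docs:
        counts[y].update(tokens)
    vocab = {w for tokens, _ in docs for w in tokens}
    likelihood = {
        y: {w: (c[w] + 1) / (sum(c.values()) + len(vocab)) for w in vocab}
        for y, c in counts.items()
    }
    return priors, likelihood, vocab

def classify_nb(model, tokens):
    """Pick the class maximizing log P(class) + sum of log P(word | class)."""
    priors, likelihood, vocab = model
    def log_post(y):
        return math.log(priors[y]) + sum(
            math.log(likelihood[y][w]) for w in tokens if w in vocab)
    return max(priors, key=log_post)

docs = [(["adult", "free", "pics"], "porn"),
        (["adult", "hot", "video"], "porn"),
        (["course", "lecture", "notes"], "benign"),
        (["video", "lecture", "free"], "benign")]
model = train_nb(docs)
print(classify_nb(model, ["adult", "video"]))    # porn
print(classify_nb(model, ["lecture", "notes"]))  # benign
```

    Words like "video" and "free" appear in both classes and contribute little either way; the decision is driven by the words whose class-conditional likelihoods differ most.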

  15. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

    In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, and had a negative impact on CERN’s reputation --- i...

  16. Optimally Training a Cascade Classifier

    CERN Document Server

    Shen, Chunhua; Hengel, Anton van den

    2010-01-01

    Cascade classifiers are widely used in real-time object detection. Different from conventional classifiers that are designed for a low overall classification error rate, a classifier in each node of the cascade is required to achieve an extremely high detection rate and a moderate false positive rate. Although there are a few reported methods addressing this requirement in the context of object detection, there is no principled feature selection method that explicitly takes into account this asymmetric node learning objective. We provide such an algorithm here. We show that a special case of the biased minimax probability machine has the same formulation as the linear asymmetric classifier (LAC) of Wu et al. (2005). We then design a new boosting algorithm that directly optimizes the cost function of LAC. The resulting totally-corrective boosting algorithm is implemented by the column generation technique in convex optimization. Experimental results on object detection verify the effectiveness of the proposed bo...
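
    The node structure that makes cascades fast can be sketched independently of the boosting details: each stage must keep nearly all true positives, while most negatives exit at the first cheap stages so the expensive stages rarely run. The stage scorers, features, and thresholds below are invented for illustration.

```python
def cascade_detect(stages, window):
    """Each stage is (scorer, threshold); a window must pass every stage.
    Most negatives are rejected by the cheap early stages."""
    for scorer, threshold in stages:
        if scorer(window) < threshold:
            return False  # early rejection: later stages never run
    return True

# Hypothetical stage scorers over a dict of precomputed window features.
# Thresholds are deliberately permissive so each node keeps ~all positives
# (high detection rate) while tolerating a moderate false positive rate.
stages = [
    (lambda w: w["edge_density"], 0.2),   # cheap: rejects flat regions
    (lambda w: w["symmetry"],     0.4),   # moderate cost
    (lambda w: w["template_fit"], 0.7),   # expensive: runs on few windows
]

face = {"edge_density": 0.6, "symmetry": 0.8, "template_fit": 0.9}
sky  = {"edge_density": 0.05, "symmetry": 0.9, "template_fit": 0.9}
print(cascade_detect(stages, face))  # True
print(cascade_detect(stages, sky))   # False: rejected by the first stage
```

    The asymmetric node objective the abstract discusses corresponds to how these per-stage thresholds are chosen: lowering a threshold raises that node's detection rate at the price of more false positives passed to the next stage.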

  17. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

    This book delivers definitive and compact knowledge on how hybridization can help improve the quality of computer classification systems. To help readers clearly understand hybridization, the book primarily focuses on introducing its different levels and illuminating the problems one faces when dealing with such projects. The data and knowledge incorporated in hybridization are treated first, followed by a still-growing area of classifier systems known as combined classifiers. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  18. COMBINED CLASSIFIER FOR WEBSITE MESSAGES FILTRATION

    OpenAIRE

    TARASOV VENIAMIN; MEZENCEVA EKATERINA; KARBAEV DANILA

    2015-01-01

    The paper describes a new approach to website message filtration using a combined classifier. Information security standards for internet resources require user data protection; however, the increasing volume of spam messages in interactive sections of websites poses a special problem. Unlike many email filtering solutions, the proposed approach is based on an effective combination of the Bayes and Fisher methods, which allows us to build an accurate and stable spam filter. In this paper we conside...

  19. Adaptively robust filtering with classified adaptive factors

    Institute of Scientific and Technical Information of China (English)

    CUI Xianqiang; YANG Yuanxi

    2006-01-01

    The key problems in applying adaptively robust filtering to navigation are to establish an equivalent weight matrix for the measurements and a suitable adaptive factor for balancing the contributions of the measurements and the predicted state information to the state parameter estimates. In this paper, an adaptively robust filtering with classified adaptive factors is proposed, based on the principles of adaptively robust filtering and bi-factor robust estimation for correlated observations. According to the constant velocity model of Kalman filtering, the state parameter vector is divided into two groups, namely position and velocity. The estimator of the adaptively robust filtering with classified adaptive factors is derived, and the calculation expressions of the classified adaptive factors are presented. Test results show that the adaptively robust filtering with classified adaptive factors is not only robust in controlling measurement outliers and kinematic state disturbances but also reasonable in balancing the contributions of the predicted position and velocity, respectively, and its filtering accuracy is superior to the adaptively robust filter with a single adaptive factor based on the discrepancy of the predicted position or the predicted velocity.

  20. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  1. Semantic Features for Classifying Referring Search Terms

    Energy Technology Data Exchange (ETDEWEB)

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to obfuscation by dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.

  2. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the basis of the established Modified Fuzzy Pattern Classifier (MFPC), which allows designing novel classifier models that are hardware-efficiently implementable. The...

  3. Classifying self-gravitating radiations

    CERN Document Server

    Kim, Hyeong-Chan

    2016-01-01

    We study static systems of self-gravitating radiations confined in a sphere by using numerical and analytic calculations. We classify and analyze the solutions systematically. Due to the scaling symmetry, any solution can be represented as a segment of a solution curve on a plane of two-dimensional scale-invariant variables. We find that a system can be conveniently parametrized by three parameters representing the solution curve, the scaling, and the system size, instead of the parameters defined at the outer boundary. The solution curves are classified into three types representing regular solutions, and conically singular solutions with, and without, an object which resembles an event horizon up to causal disconnectedness. For the last type, the behavior of a self-gravitating system is simple enough to allow analytic calculations.

  4. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.


  6. Disassembly and Sanitization of Classified Matter

    International Nuclear Information System (INIS)

    The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DoD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components required long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options exist for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, and auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high quality materials, and once the material has been sanitized, the demand for the component metals in recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team, and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects.

  7. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    Directory of Open Access Journals (Sweden)

    Nowakowski Artur

    2015-10-01

    Full Text Available Boosting is a classification method which has been proven useful in non-satellite image processing, while it is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm in a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively to the data statistics.
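
    The AdaBoost.M1 scheme over simple threshold classifiers can be sketched in plain code: each round picks the stump with the lowest weighted error, weights it by its accuracy, and re-weights the samples so mistakes gain influence. The toy two-feature "pixels", labels, and candidate thresholds below are invented for illustration, not remote sensing data.

```python
import math

def stump_predict(stump, x):
    """A threshold classifier: (feature index, threshold, sign)."""
    feat, thr, sign = stump
    return sign if x[feat] > thr else -sign

def adaboost_m1(samples, labels, stumps, rounds):
    """AdaBoost.M1 over threshold classifiers; labels are in {-1, +1}."""
    n = len(samples)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        def weighted_error(s):
            return sum(wi for wi, x, y in zip(w, samples, labels)
                       if stump_predict(s, x) != y)
        best = min(stumps, key=weighted_error)
        err = weighted_error(best)
        if err <= 0 or err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, best))
        # re-weight so misclassified samples gain influence next round
        w = [wi * math.exp(-alpha * labels[i] * stump_predict(best, samples[i]))
             for i, wi in enumerate(w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * stump_predict(s, x) for a, s in ensemble) >= 0 else -1

# Toy "pixels": class +1 iff both features exceed 0.5.
samples = [(0.9, 0.8), (0.7, 0.9), (0.2, 0.9), (0.8, 0.1), (0.1, 0.2)]
labels  = [1, 1, -1, -1, -1]
# Threshold 2.0 lies above the data range, so those stumps vote a constant
# class and supply the bias term the weighted vote needs for an AND concept.
stumps = [(f, t, s) for f in (0, 1) for t in (0.5, 2.0) for s in (1, -1)]
model = adaboost_m1(samples, labels, stumps, rounds=5)
print([predict(model, x) for x in samples])  # [1, 1, -1, -1, -1]
```

    No single stump in the pool classifies this data perfectly, yet the weighted vote does after three rounds, which is the essence of building a strong classifier from weak ones.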

  8. Glycosylation site prediction using ensembles of Support Vector Machine classifiers

    Directory of Open Access Journals (Sweden)

    Silvescu Adrian

    2007-11-01

    Full Text Available Abstract Background Glycosylation is one of the most complex post-translational modifications (PTMs of proteins in eukaryotic cells. Glycosylation plays an important role in biological processes ranging from protein folding and subcellular localization, to ligand recognition and cell-cell interactions. Experimental identification of glycosylation sites is expensive and laborious. Hence, there is significant interest in the development of computational methods for reliable prediction of glycosylation sites from amino acid sequences. Results We explore machine learning methods for training classifiers to predict the amino acid residues that are likely to be glycosylated using information derived from the target amino acid residue and its sequence neighbors. We compare the performance of Support Vector Machine classifiers and ensembles of Support Vector Machine classifiers trained on a dataset of experimentally determined N-linked, O-linked, and C-linked glycosylation sites extracted from O-GlycBase version 6.00, a database of 242 proteins from several different species. The results of our experiments show that the ensembles of Support Vector Machine classifiers outperform single Support Vector Machine classifiers on the problem of predicting glycosylation sites in terms of a range of standard measures for comparing the performance of classifiers. The resulting methods have been implemented in EnsembleGly, a web server for glycosylation site prediction. Conclusion Ensembles of Support Vector Machine classifiers offer an accurate and reliable approach to automated identification of putative glycosylation sites in glycoprotein sequences.
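
    The per-residue features described above are typically derived from a fixed-width sequence window around each candidate residue. A sketch of that extraction step, with an illustrative residue set and window width (the actual encoding fed to the SVM ensembles is more elaborate):

```python
def residue_windows(seq, targets=("N", "S", "T"), k=3):
    """Return (position, window) pairs: a (2k+1)-wide window centered on each
    candidate glycosylation residue, padded with '-' at the sequence ends."""
    pad = "-" * k
    padded = pad + seq + pad
    return [(i, padded[i:i + 2 * k + 1])
            for i, aa in enumerate(seq) if aa in targets]

# Each window would then be encoded (e.g. one-hot) and fed to the classifiers.
for pos, win in residue_windows("MKNGTAS", k=2):
    print(pos, win)
```

    For the toy sequence `MKNGTAS` with k = 2 this yields one window per N, S, or T residue, with `-` padding where the window overhangs a sequence end.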

  9. Classifying and ranking DMUs in interval DEA

    Institute of Scientific and Technical Information of China (English)

    GUO Jun-peng; WU Yu-hua; LI Wen-hua

    2005-01-01

    During efficiency evaluation by DEA, the inputs and outputs of DMUs may be intervals because of insufficient information or measurement error. For this reason, interval DEA is proposed. To make the efficiency scores more discriminative, this paper builds an Interval Modified DEA (IMDEA) model based on MDEA. Furthermore, models for obtaining the upper and lower bounds of the efficiency scores for each DMU are set up. Based on this, the DMUs are classified into three types. Next, a new order relation between intervals, which can express the DM's preference among the three types, is proposed. As a result, a full and more convincing ranking is made of all the DMUs. Finally, an example is given.

  10. Combining Heterogeneous Classifiers for Relational Databases

    CERN Document Server

    Manjunatha, Geetha; Sitaram, Dinkar

    2012-01-01

    Most enterprise data is distributed in multiple relational databases with expert-designed schemas. Using traditional single-table machine learning techniques over such data not only incurs a computational penalty for converting to a 'flat' form (mega-join), but also loses the human-specified semantic information present in the relations. In this paper, we present a practical, two-phase hierarchical meta-classification algorithm for relational databases with a semantic divide-and-conquer approach. We propose a recursive, prediction aggregation technique over heterogeneous classifiers applied on individual database tables. The proposed algorithm was evaluated on three diverse datasets, namely the TPCH, PKDD and UCI benchmarks, and showed considerable reduction in classification time without any loss of prediction accuracy.

  11. A semi-automated approach to building text summarisation classifiers

    Directory of Open Access Journals (Sweden)

    Matias Garcia-Constantino

    2012-12-01

    Full Text Available An investigation into the extraction of useful information from the free text element of questionnaires, using a semi-automated summarisation extraction technique, is described. The summarisation technique utilises the concept of classification but with the support of domain/human experts during classifier construction. A realisation of the proposed technique, SARSET (Semi-Automated Rule Summarisation Extraction Tool, is presented and evaluated using real questionnaire data. The results of this evaluation are compared against the results obtained using two alternative techniques to build text summarisation classifiers. The first of these uses standard rule-based classifier generators, and the second is founded on the concept of building classifiers using secondary data. The results demonstrate that the proposed semi-automated approach outperforms the other two approaches considered.

  12. A 3-D Contextual Classifier

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    1997-01-01

    In this paper we will consider an extension of the Bayesian 2-D contextual classification routine developed by Owen, Hjort & Mohn to 3 spatial dimensions. It is evident that compared to classical pixelwise classification further information can be obtained by taking into account...

  13. Cellular computation using classifier systems

    OpenAIRE

    Kelly, Ciaran; Decraene, James; Lobo, Victor; Mitchell, George G.; McMullin, Barry; O'Brien, Darragh

    2006-01-01

    The EU FP6 Integrated Project PACE ('Programmable Artificial Cell Evolution') is investigating the creation, de novo, of chemical 'protocells'. These will be minimal 'wetware' chemical systems integrating molecular information carriers, primitive energy conversion (metabolism) and containment (membrane). Ultimately they should be capable of autonomous reproduction, and be 'programmable' to realise specific desired function. A key objective of PACE is to explore the application of such pro...

  14. On classifying digital accounting documents

    OpenAIRE

    Chih-Fong, Tsai

    2007-01-01

    Advances in computing and multimedia technologies allow many accounting documents to be digitized at little cost for effective storage and access. Moreover, the amount of accounting documents is increasing rapidly, which leads to the need to develop mechanisms to effectively manage those (semi-structured) digital accounting documents for future accounting information systems (AIS). In general, accounting documents include invoices, purchase orders, checks, photographs, cha...

  15. Building an automated SOAP classifier for emergency department reports.

    Science.gov (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W

    2012-02-01

    Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step in evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance, with F1 scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks.

  16. Classifying supernovae using only galaxy data

    Energy Technology Data Exchange (ETDEWEB)

    Foley, Ryan J. [Astronomy Department, University of Illinois at Urbana-Champaign, 1002 West Green Street, Urbana, IL 61801 (United States); Mandel, Kaisey [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2013-12-01

    We present a new method for probabilistically classifying supernovae (SNe) without using SN spectral or photometric data. Unlike all previous studies to classify SNe without spectra, this technique does not use any SN photometry. Instead, the method relies on host-galaxy data. We build upon the well-known correlations between SN classes and host-galaxy properties, specifically that core-collapse SNe rarely occur in red, luminous, or early-type galaxies. Using the nearly spectroscopically complete Lick Observatory Supernova Search sample of SNe, we determine SN fractions as a function of host-galaxy properties. Using these data as inputs, we construct a Bayesian method for determining the probability that an SN is of a particular class. This method improves a common classification figure of merit by a factor of >2, comparable to the best light-curve classification techniques. Of the galaxy properties examined, morphology provides the most discriminating information. We further validate this method using SN samples from the Sloan Digital Sky Survey and the Palomar Transient Factory. We demonstrate that this method has wide-ranging applications, including separating different subclasses of SNe and determining the probability that an SN is of a particular class before photometry or even spectra can. Since this method uses completely independent data from light-curve techniques, there is potential to further improve the overall purity and completeness of SN samples and to test systematic biases of the light-curve techniques. Further enhancements to the host-galaxy method, including additional host-galaxy properties, combination with light-curve methods, and hybrid methods, should further improve the quality of SN samples from past, current, and future transient surveys.
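
    The Bayesian scheme the abstract outlines, a posterior class probability proportional to the SN-class prior times the likelihood of the observed host properties, can be sketched in a few lines. All numbers below are invented placeholders, not the LOSS-derived fractions used by the authors; they only encode the qualitative fact that core-collapse SNe rarely occur in early-type galaxies.

```python
# Sketch of host-galaxy-only SN classification:
# P(class | host) is proportional to P(class) * P(host properties | class).
PRIOR = {"Ia": 0.55, "core-collapse": 0.45}

# P(host morphology | SN class); illustrative placeholder numbers.
LIKELIHOOD = {
    "Ia": {"early-type": 0.40, "late-type": 0.60},
    "core-collapse": {"early-type": 0.02, "late-type": 0.98},
}

def classify_sn(morphology):
    """Return normalized posterior class probabilities given morphology."""
    unnorm = {cls: PRIOR[cls] * LIKELIHOOD[cls][morphology] for cls in PRIOR}
    total = sum(unnorm.values())
    return {cls: p / total for cls, p in unnorm.items()}

post = classify_sn("early-type")
print(post)  # Ia probability is about 0.96 with these toy numbers
```

    The paper extends this with more host properties (color, luminosity) and validates the fractions against spectroscopically classified samples.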

  17. A Film Classifier Based on Low-level Visual Features

    Directory of Open Access Journals (Sweden)

    Hui-Yu Huang

    2008-07-01

    We propose an approach to classifying films into genres using low-level and visual features. Our current domain of study is the movie preview, which often emphasizes the theme of a film and hence provides suitable information for classification. In our approach, we categorize films into three broad categories: action, drama, and thriller. Four computable video features (average shot length, color variance, motion content, and lighting key) and visual features (slow- and fast-moving effects) are combined to indicate the movie category. The experimental results show that visual features carry useful information for film classification. Our approach can also be extended to other potential applications, including the browsing and retrieval of videos on the internet, video-on-demand, and video libraries.

  18. A Neural Network Classifier of Volume Datasets

    CERN Document Server

    Zukić, Dženan; Kolb, Andreas

    2009-01-01

    Many state-of-the art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of dataset from previously defined categories. 2D histograms based on intensity and gradient magnitude of datasets are used as input to a neural network, which classifies it into one of several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets,...
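
    The 2D histogram input the method uses can be sketched directly: bin each sample by (intensity, gradient magnitude) and flatten the grid into the network's input vector. The toy 1-D "volume" and forward-difference gradient below are illustrative simplifications; the paper works on full 3-D scans.

```python
# Build a 2D (intensity, gradient magnitude) histogram; the flattened
# grid would be the neural network's input. Toy data and bin settings
# are assumptions for illustration.
def histogram_2d(values, gradients, bins, vmax, gmax):
    hist = [[0] * bins for _ in range(bins)]
    for v, g in zip(values, gradients):
        i = min(int(v / vmax * bins), bins - 1)
        j = min(int(g / gmax * bins), bins - 1)
        hist[i][j] += 1
    return hist

volume = [0, 10, 20, 200, 210, 220]
# Forward-difference "gradient magnitude" (padded with 0 at the end).
grads = [abs(b - a) for a, b in zip(volume, volume[1:])] + [0]
hist = histogram_2d(volume, grads, bins=4, vmax=256, gmax=256)
flat = [c for row in hist for c in row]  # the network's input vector
print(flat)
```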

  19. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

    Thirty-five percent of all ischaemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes, tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between January and June 2007; data were collected on 90 patients. The male-to-female ratio was 39:51, with an age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke; and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy; and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  20. Pavement Crack Classifiers: A Comparative Study

    Directory of Open Access Journals (Sweden)

    S. Siddharth

    2012-12-01

    Non-Destructive Testing (NDT) is an analysis technique used to inspect metal sheets and components without harming the product. NDT does not cause any change to the inspected item; the technique saves money and time in product evaluation, research, and troubleshooting. In this study the objective is to perform NDT using soft computing techniques. Digital images are taken; the Gray Level Co-occurrence Matrix (GLCM) extracts features from these images. The extracted features are then fed into classifiers that label the images as cracked or crack-free. Three major classifiers are considered for the classification task: neural networks, Support Vector Machines (SVM), and linear classifiers. The performance of these classifiers is assessed and the best classifier for the given data is chosen.

  1. Classifying VAT Legislation for Automation

    DEFF Research Database (Denmark)

    Sudzina, Frantisek; Nielsen, Morten Ib; Simonsen, Jakob Grue

    The paper offers a framework for partitioning articles in legal documents pertaining to value added tax (VAT) into categories suitable for subsequent integration in computerized systems for automatically deriving VAT rates. The importance of an enterprise resource planning (ERP) system supporting VAT is not that it is required by a definition but because information technology in general increasingly supports everyday activities, so users expect more even from ERP systems. As an extended example, the classification of all articles of the European Council directive 2006/112/EC of 28 November 2006 on the common system of value added tax is presented. The classification of VAT articles is important in order to allow for easier VAT modeling for ERP systems. Better VAT modeling should eventually lead to lower cost of implementing changes in VAT legislation.

  2. Rotary fluidized dryer classifier for coal

    Energy Technology Data Exchange (ETDEWEB)

    Sakaba, M.; Ueki, S.; Matsumoto, T.

    1985-01-01

    The development of equipment is reported which uses a heat transfer medium and hot air to dry metallurgical coal to a predetermined moisture level, and which simultaneously classifies out the dust-producing fine coal content. The integral construction of the drying and classifying zones results in a very compact configuration, with an installation area of 1/2 to 1/3 of that required for systems in which a separate dryer and classifier are combined. 6 references.

  3. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  4. Examining the significance of fingerprint-based classifiers

    Directory of Open Access Journals (Sweden)

    Collins Jack R

    2008-12-01

    Background: Experimental examinations of biofluids to measure concentrations of proteins or their fragments or metabolites are being explored as a means of early disease detection, distinguishing diseases with similar symptoms, and drug treatment efficacy. Many studies have produced classifiers with a high sensitivity and specificity, and it has been argued that accurate results necessarily imply some underlying biology-based features in the classifier. The simplest test of this conjecture is to examine datasets designed to contain no information with classifiers used in many published studies. Results: The classification accuracy of two fingerprint-based classifiers, a decision tree (DT) algorithm and a medoid classification algorithm (MCA), is examined. These methods are used to examine 30 artificial datasets that contain random concentration levels for 300 biomolecules. Each dataset contains between 30 and 300 Cases and Controls, and since the 300 observed concentrations are randomly generated, these datasets are constructed to contain no biological information. A modest search of decision trees containing at most seven decision nodes finds a large number of unique decision trees with an average sensitivity and specificity above 85% for datasets containing 60 Cases and 60 Controls or fewer, and for datasets with 90 Cases and 90 Controls many DTs have an average sensitivity and specificity above 80%. For even the largest dataset (300 Cases and 300 Controls) the MCA procedure finds several unique classifiers that have an average sensitivity and specificity above 88% using only six or seven features. Conclusion: While it has been argued that accurate classification results must imply some biological basis for the separation of Cases from Controls, our results show that this is not necessarily true. The DT and MCA classifiers are sufficiently flexible and can produce good results from datasets that are specifically constructed to contain no
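
    The abstract's central point, that flexible searches find apparently accurate classifiers in pure noise, is easy to reproduce. The sketch below uses single-feature threshold stumps, a far weaker family than the paper's DT or MCA classifiers, on random "concentrations", and still finds training accuracy well above the 50% chance level.

```python
# Searching many simple classifiers over pure noise yields high
# apparent accuracy. Sizes and thresholds are illustrative choices.
import random

random.seed(0)
n_cases, n_features = 20, 300
# Random "biomolecule concentrations": no real signal anywhere.
cases = [[random.random() for _ in range(n_features)] for _ in range(n_cases)]
controls = [[random.random() for _ in range(n_features)] for _ in range(n_cases)]

def accuracy_of_threshold(f, t):
    # Single-feature stump: predict "case" when feature f exceeds t.
    tp = sum(1 for x in cases if x[f] > t)
    tn = sum(1 for x in controls if x[f] <= t)
    return (tp + tn) / (2 * n_cases)

# Search all features and a few thresholds; keep the best stump.
best = max(
    accuracy_of_threshold(f, t)
    for f in range(n_features)
    for t in (0.3, 0.4, 0.5, 0.6, 0.7)
)
print(best)  # well above the 0.5 chance level, despite pure noise
```

    Held-out validation, which the training-set accuracy above deliberately omits, is what exposes such classifiers as spurious.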

  5. Classifying climate change adaptation frameworks

    Science.gov (United States)

    Armstrong, Jennifer

    2014-05-01

    Complex socio-ecological demographics must be considered when addressing adaptation to the potential effects of climate change. As such, a suite of deployable climate change adaptation frameworks is necessary: multiple frameworks are required to communicate the risks of climate change and facilitate adaptation. Three principal adaptation frameworks have emerged from the literature: Scenario-Led (SL), Vulnerability-Led (VL) and Decision-Centric (DC). This study aims to identify to what extent these adaptation frameworks, whether planned or deployed, are used in a neighbourhood vulnerable to climate change. This work presents a criterion that may be used as a tool for identifying the hallmarks of adaptation frameworks and thus enabling categorisation of projects. The study focussed on the coastal zone surrounding the Sizewell nuclear power plant in Suffolk in the UK. An online survey was conducted identifying climate change adaptation projects operating in the study area. This inventory was analysed to identify the hallmarks of each adaptation project: level of dependency on climate model information, metrics/units of analysis utilised, level of demographic knowledge, level of stakeholder engagement, adaptation implementation strategies, and scale of adaptation implementation. The study found that climate change adaptation projects could be categorised, based on the hallmarks identified, in accordance with the published literature. As such, the criterion may be used to establish the matrix of adaptation frameworks present in a given area. A comprehensive summary of the nature of adaptation frameworks in operation in a locality provides a platform for further comparative analysis. Such analysis, enabled by the criterion, may aid the selection of appropriate frameworks, enhancing the efficacy of climate change adaptation.

  6. 32 CFR 775.5 - Classified actions.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Classified actions. 775.5 Section 775.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a...

  7. Serefind: A Social Networking Website for Classifieds

    OpenAIRE

    Verma, Pramod

    2014-01-01

    This paper presents the design and implementation of a social networking website for classifieds, called Serefind. We designed search interfaces with focus on security, privacy, usability, design, ranking, and communications. We deployed this site at the Johns Hopkins University, and the results show it can be used as a self-sustaining classifieds site for public or private communities.

  8. A review of learning vector quantization classifiers

    CERN Document Server

    Nova, David

    2015-01-01

    In this work we present a review of the state of the art of Learning Vector Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets.
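
    For readers unfamiliar with the family under review, a minimal LVQ1 sketch is given below: each training sample attracts its nearest prototype when their labels agree and repels it when they disagree. The toy data and learning settings are invented for illustration.

```python
# Minimal LVQ1: prototypes move toward same-class samples and away
# from other-class samples by a learning rate alpha (toy settings).
def lvq1_train(samples, labels, prototypes, proto_labels, alpha=0.2, epochs=20):
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Nearest prototype by squared Euclidean distance.
            k = min(range(len(prototypes)),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(prototypes[i], x)))
            sign = 1.0 if proto_labels[k] == y else -1.0
            prototypes[k] = [p + sign * alpha * (a - p)
                             for p, a in zip(prototypes[k], x)]
    return prototypes

def lvq1_predict(x, prototypes, proto_labels):
    k = min(range(len(prototypes)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(prototypes[i], x)))
    return proto_labels[k]

samples = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
labels = [0, 0, 1, 1]
protos = lvq1_train(samples, labels, [[0.4, 0.4], [0.6, 0.6]], [0, 1])
print(lvq1_predict([0.1, 0.0], protos, [0, 1]))  # 0
print(lvq1_predict([1.0, 0.9], protos, [0, 1]))  # 1
```

    The variants surveyed in the review (LVQ2.1, LVQ3, GLVQ, kernelized forms) refine this update rule.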

  9. Designing Kernel Scheme for Classifiers Fusion

    CERN Document Server

    Haghighi, Mehdi Salkhordeh; Vahedian, Abedin; Modaghegh, Hamed

    2009-01-01

    In this paper, we propose a special fusion method for combining ensembles of base classifiers utilizing new neural networks in order to improve overall classification efficiency. While ensembles are designed such that each classifier is trained independently and decision fusion is performed as a final procedure, in this method we are interested in making the fusion process more adaptive and efficient. This new combiner, called the Neural Network Kernel Least Mean Square, attempts to fuse the outputs of the ensembles of classifiers. The proposed neural network has some special properties such as kernel abilities, Least Mean Square features, easy learning over variants of patterns and traditional neuron capabilities. The Neural Network Kernel Least Mean Square is a special neuron which is trained with Kernel Least Mean Square properties. This new neuron is used as a classifier combiner to fuse the outputs of base neural network classifiers. Performance of this method is analyzed and compared with other fusion m...

  10. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  11. Classifying Unidentified Gamma-ray Sources

    CERN Document Server

    Salvetti, David

    2016-01-01

    During its first 2 years of mission, the Fermi-LAT instrument discovered more than 1,800 gamma-ray sources in the 100 MeV to 100 GeV range. Despite the application of advanced techniques to identify and associate the Fermi-LAT sources with counterparts at other wavelengths, about 40% of the LAT sources have no clear identification and remain "unassociated". The purpose of my Ph.D. work has been to pursue a statistical approach to identify the nature of each Fermi-LAT unassociated source. To this aim, we implemented advanced machine learning techniques, such as logistic regression and artificial neural networks, to classify these sources on the basis of all the available gamma-ray information about location, energy spectrum and time variability. These analyses have been used for selecting targets for AGN and pulsar searches and planning multi-wavelength follow-up observations. In particular, we have focused our attention on the search for possible radio-quiet millisecond pulsar (MSP) candidates in the sample of...

  12. Classifier Risk Estimation under Limited Labeling Resources

    OpenAIRE

    Kumar, Anurag; Raj, Bhiksha

    2016-01-01

    In this paper we propose strategies for estimating performance of a classifier when labels cannot be obtained for the whole test set. The number of test instances which can be labeled is very small compared to the whole test data size. The goal then is to obtain a precise estimate of classifier performance using as little labeling resource as possible. Specifically, we try to answer, how to select a subset of the large test set for labeling such that the performance of a classifier estimated ...

  13. Parallelism and programming in classifier systems

    CERN Document Server

    Forrest, Stephanie

    1990-01-01

    Parallelism and Programming in Classifier Systems deals with the computational properties of the underlying parallel machine, including computational completeness, programming and representation techniques, and efficiency of algorithms. In particular, efficient classifier system implementations of symbolic data structures and reasoning procedures are presented and analyzed in detail. The book shows how classifier systems can be used to implement a set of useful operations for the classification of knowledge in semantic networks. A subset of the KL-ONE language was chosen to demonstrate these o

  14. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

    The ever-increasing rate of data generation confronts us with the problem of handling massive amounts of information online. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed for this context must be efficient in memory and time management and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent to data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding-window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.

  15. Dengue—How Best to Classify It

    OpenAIRE

    Srikiatkhachorn, Anon; Rothman, Alan L.; Robert V Gibbons; Sittisombut, Nopporn; Malasit, Prida; Ennis, Francis A.; Nimmannitya, Suchitra; Kalayanarooj, Siripen

    2011-01-01

    Since the 1970s, dengue has been classified as dengue fever and dengue hemorrhagic fever. In 2009, the World Health Organization issued a new, severity-based clinical classification which differs greatly from the previous classification.

  16. An Efficient and Effective Immune Based Classifier

    Directory of Open Access Journals (Sweden)

    Shahram Golzari

    2011-01-01

    Problem statement: The Artificial Immune Recognition System (AIRS) is the most popular and effective immune-inspired classifier. Resource competition is one stage of AIRS, carried out based on the number of allocated resources. AIRS uses a linear method to allocate resources, and this linear resource allocation increases the training time of the classifier. Approach: In this study, a new nonlinear resource allocation method is proposed to make AIRS more efficient. The new algorithm, AIRS with the proposed nonlinear method, is tested on benchmark datasets from the UCI machine learning repository. Results: Based on the results of the experiments, the proposed nonlinear resource allocation method decreases the training time and the number of memory cells and does not reduce the accuracy of AIRS. Conclusion: The proposed classifier is an efficient and effective classifier.

  17. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  18. Nomograms for Visualization of Naive Bayesian Classifier

    OpenAIRE

    Možina, Martin; Demšar, Janez; Michael W Kattan; Zupan, Blaz

    2004-01-01

    Besides good predictive performance, the naive Bayesian classifier can also offer a valuable insight into the structure of the training data and effects of the attributes on the class probabilities. This structure may be effectively revealed through visualization of the classifier. We propose a new way to visualize the naive Bayesian model in the form of a nomogram. The advantages of the proposed method are simplicity of presentation, clear display of the effects of individual attribute value...
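
    The nomogram visualization rests on the fact that a naive Bayesian prediction decomposes into additive log-odds contributions, one per attribute value, plus the prior. The sketch below computes those contributions for a toy model with invented probabilities; for simplicity only attributes that are present contribute, whereas a full nomogram assigns points to every attribute value.

```python
# Naive Bayes as additive log-odds "points", the quantity a nomogram
# displays per attribute value. All probabilities are toy assumptions.
import math

prior = {"pos": 0.3, "neg": 0.7}
# P(attribute present | class) for two binary attributes.
cond = {
    "smoker": {"pos": 0.6, "neg": 0.2},
    "age>60": {"pos": 0.5, "neg": 0.25},
}

def log_odds_points(present):
    """Per-attribute contributions to the log odds of 'pos'."""
    return {a: math.log(p["pos"] / p["neg"])
            for a, p in cond.items() if a in present}

def posterior_pos(present):
    lo = math.log(prior["pos"] / prior["neg"])
    lo += sum(log_odds_points(present).values())
    return 1 / (1 + math.exp(-lo))  # logistic of total log odds

pts = log_odds_points({"smoker", "age>60"})
print({k: round(v, 3) for k, v in pts.items()})
print(round(posterior_pos({"smoker", "age>60"}), 3))
```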

  19. Classifying Genomic Sequences by Sequence Feature Analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Liu; Dian Jiao; Xiao Sun

    2005-01-01

    Traditional sequence analysis depends on sequence alignment. In this study, we analyzed various functional regions of the human genome based on sequence features, including word frequency, dinucleotide relative abundance, and base-base correlation. We analyzed human chromosome 22 and classified the upstream, exon, intron, downstream, and intergenic regions by principal component analysis and discriminant analysis of these features. The results show that we could classify the functional regions of the genome based on sequence features and discriminant analysis.
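
    One of the features named above, dinucleotide relative abundance, is simple to compute: rho(XY) = f(XY) / (f(X) f(Y)), where f denotes observed frequency; values near 1 indicate neither over- nor under-representation of the pair. The sketch below computes it for a toy sequence (the study then fed such values into PCA and discriminant analysis).

```python
# Dinucleotide relative abundance for a DNA sequence.
def dinucleotide_relative_abundance(seq):
    n = len(seq)
    base_freq = {b: seq.count(b) / n for b in "ACGT"}
    pairs = [seq[i:i + 2] for i in range(n - 1)]
    rho = {}
    for x in "ACGT":
        for y in "ACGT":
            f_xy = pairs.count(x + y) / len(pairs)
            denom = base_freq[x] * base_freq[y]
            rho[x + y] = f_xy / denom if denom else 0.0
    return rho

rho = dinucleotide_relative_abundance("ACGTACGTACGT")
print(round(rho["CG"], 2))  # CG is over-represented in this toy sequence
```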

  20. Binary Classifier Calibration: Non-parametric approach

    OpenAIRE

    Naeini, Mahdi Pakdaman; Cooper, Gregory F.; Hauskrecht, Milos

    2014-01-01

    Accurate calibration of learned probabilistic predictive models is critical for many practical prediction and decision-making tasks. There are two main categories of methods for building calibrated classifiers. One approach is to develop methods for learning probabilistic models that are well-calibrated, ab initio. The other approach is to use post-processing methods to transform the output of a classifier so that it is well calibrated, as for example histogram binning, Platt scaling, and is...
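
    Histogram binning, one of the post-processing methods the abstract names, can be sketched directly: partition validation scores into fixed-width bins and map each future score to its bin's empirical positive rate. The data and the 0.5 default for empty bins below are illustrative assumptions.

```python
# Histogram-binning calibration of classifier scores (toy data).
def fit_histogram_binning(scores, labels, n_bins=5):
    sums = [0] * n_bins
    counts = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        sums[b] += y
        counts[b] += 1
    # Empirical positive rate per bin; 0.5 for empty bins (assumption).
    return [sums[b] / counts[b] if counts[b] else 0.5 for b in range(n_bins)]

def calibrate(score, bin_rates):
    b = min(int(score * len(bin_rates)), len(bin_rates) - 1)
    return bin_rates[b]

scores = [0.1, 0.15, 0.3, 0.35, 0.7, 0.75, 0.9, 0.95]
labels = [0, 0, 0, 1, 1, 1, 1, 1]
rates = fit_histogram_binning(scores, labels)
print(calibrate(0.12, rates))  # empirical positive rate of the lowest bin
```

    Platt scaling and isotonic regression replace the fixed bins with a parametric sigmoid and a monotone fit, respectively.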

  1. A multi-class large margin classifier

    Institute of Scientific and Technical Information of China (English)

    Liang TANG; Qi XUAN; Rong XIONG; Tie-jun WU; Jian CHU

    2009-01-01

    Currently there are two approaches for a multi-class support vector classifier (SVC). One is to construct and combine several binary classifiers while the other is to directly consider all classes of data in one optimization formulation. For a K-class problem (K>2), the first approach has to construct at least K classifiers, and the second approach has to solve a much larger optimization problem proportional to K by the algorithms developed so far. In this paper, following the second approach, we present a novel multi-class large margin classifier (MLMC). This new machine can solve K-class problems in one optimization formulation without increasing the size of the quadratic programming (QP) problem proportional to K. This property allows us to construct just one classifier with as few variables in the QP problem as possible to classify multi-class data, and we can gain the advantage of speed from it especially when K is large. Our experiments indicate that MLMC almost works as well as (sometimes better than) many other multi-class SVCs for some benchmark data classification problems, and obtains a reasonable performance in face recognition application on the AR face database.

  2. COMBINING CLASSIFIERS FOR CREDIT RISK PREDICTION

    Institute of Scientific and Technical Information of China (English)

    Bhekisipho TWALA

    2009-01-01

    Credit risk prediction models seek to predict quality factors such as whether an individual will default on a loan (bad applicant) or not (good applicant). This can be treated as a kind of machine learning (ML) problem. Recently, the use of ML algorithms has proven to be of great practical value in solving a variety of risk problems, including credit risk prediction. One of the most active areas of recent research in ML has been the use of ensemble (combining) classifiers. Research indicates that ensembles of individual classifiers lead to a significant improvement in classification performance by having them vote for the most popular class. This paper explores the predictive behaviour of five classifiers for different types of noise in terms of credit risk prediction accuracy, and how such accuracy could be improved by using pairs of classifier ensembles. Benchmarking results on five credit datasets and comparison with the performance of each individual classifier on predictive accuracy at various attribute noise levels are presented. The experimental evaluation shows that the ensemble of classifiers technique has the potential to improve prediction accuracy.
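
    The voting scheme the abstract refers to can be sketched in a few lines. The three "classifiers" below are trivial stand-ins (thresholds on a single applicant score), invented purely to show the mechanics of majority voting.

```python
# Majority voting over an ensemble of toy threshold classifiers.
from collections import Counter

def make_threshold_clf(t):
    return lambda applicant: "bad" if applicant["score"] < t else "good"

ensemble = [make_threshold_clf(t) for t in (0.3, 0.5, 0.7)]

def majority_vote(applicant):
    votes = Counter(clf(applicant) for clf in ensemble)
    return votes.most_common(1)[0][0]

print(majority_vote({"score": 0.6}))  # two of three vote "good"
print(majority_vote({"score": 0.4}))  # two of three vote "bad"
```

    Real ensembles vote with independently trained models (bagging, boosting, heterogeneous learners), which is what produces the accuracy gains the paper measures.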

  3. Mathematical Modeling and Analysis of Classified Marketing of Agricultural Products

    Institute of Scientific and Technical Information of China (English)

    Fengying WANG

    2014-01-01

    Classified marketing of agricultural products was analyzed using a logistic regression model. This method can take full advantage of the information in an agricultural product database to find the factors influencing how well agricultural products sell, and to carry out quantitative analysis accordingly. Using this model, it is also possible to predict sales of agricultural products and to provide a reference for mapping out individualized sales strategies for popularizing agricultural products.

  4. Management Education: Classifying Business Curricula and Conceptualizing Transfers and Bridges

    OpenAIRE

    Davar Rezania; Mike Henry

    2010-01-01

    Traditionally, higher academic education has favoured acquisition of individualized conceptual knowledge over context-independent procedural knowledge. Applied degrees, on the other hand, favour procedural knowledge. We present a conceptual model for classifying a business curriculum. This classification can inform discussion around difficulties associated with issues such as assessment of prior learning, as well as transfers and bridges from applied degrees to baccalaureate degrees in busine...

  5. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    OpenAIRE

    Sang-Hoon Hong; Hyun-Ok Kim; Shimon Wdowinski; Emanuelle Feliciano

    2015-01-01

    The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We ...

  6. Diagnostic value of perfusion MRI in classifying stroke

    International Nuclear Information System (INIS)

    Our study was designed to determine whether supplementary information obtained with perfusion MRI can enhance diagnostic accuracy. We used delayed perfusion, as represented by the time-to-peak map on perfusion MRI, to classify strokes in 39 patients. Strokes were classified as hemodynamic if delayed perfusion extended over the whole territory of the occluded arterial trunk; as embolic if delayed perfusion was absent or restricted to infarcts; as arteriosclerotic if infarcts were small, multiple, and located mainly in the basal ganglia; or as unclassified if the pathophysiology was unclear. We compared these findings with vascular lesions on cerebral angiography, neurological signs, infarction on MRI, ischemia on xenon-enhanced CT (Xe/CT) and collateral pathway development. Delayed perfusion clearly indicated the area of arterial occlusion. Strokes were classified as hemodynamic in 13 patients, embolic in 14 patients, arteriosclerotic in 6 patients and unclassified in 6 patients. Hemodynamic infarcts were seen only in deep white-matter areas such as the centrum semiovale or corona radiata, whereas embolic infarcts were in the cortex, cortex and subjacent white matter, and lenticulo-striatum. Embolic and arteriosclerotic infarcts occurred even in hemodynamically compromised hemispheres. Our findings indicate that perfusion MRI, in association with a detailed analysis of T2-weighted MRI of cerebral infarcts in the axial and coronal planes, can accurately classify stroke as hemodynamic, embolic or arteriosclerotic. (author)

  7. Packet Payload Inspection Classifier in the Network Flow Level

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-06-01

    Today's networks contain highly congested channels and topologies that are created dynamically and carry high risk, so a flow classifier is needed to characterize packet movement in the network. In this paper we develop and evaluate classification of TCP/UDP/FTP/ICMP traffic based on payload information, port numbers and the number of flags in the packet, for networks with a high flow of packets. The primary motivation is that all of these protocols can legitimately be used to identify the end user through payload packet inspection; a hypothesis-testing approach is used for evaluation. A tamper-resistant flow classifier developed in one network domain was applied in different domains (Berkeley and Cambridge), and classification accuracy was readily assessed through packet inspection using the different flags in the packets. While supervised classifier training specific to the new domain results in much better classification accuracy, we also formed a new approach to detect malicious packets, derive a packet flow classifier and send correct packets to the destination address.

  8. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    Science.gov (United States)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random Forest uses a decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper concerns attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is carried out and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis. For the empirical analysis, a random forest is generated using each of the five selected split measures, chosen one at a time (i.e., random forest using information gain, random forest using gain ratio, etc.). Based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is then proposed. In this model, the individual decision trees in the Random Forest are generated using different split measures. The model is augmented by weighted voting based on the strength of each individual tree. The new approach has shown a notable increase in the accuracy of random forest.
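
    The hybrid-forest idea can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: scikit-learn exposes only two of the split measures (Gini and entropy), and the "strength" weight here is simply each tree's accuracy on the training set, which is an assumption.

```python
# Sketch of a hybrid forest: trees grown with different split measures,
# combined by weighted voting (weighting scheme is an assumption).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(0)
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

trees, weights = [], []
for i in range(20):
    crit = ["gini", "entropy"][i % 2]             # alternate split measures
    idx = rng.randint(len(X_tr), size=len(X_tr))  # bootstrap sample
    tree = DecisionTreeClassifier(criterion=crit, random_state=i)
    tree.fit(X_tr[idx], y_tr[idx])
    trees.append(tree)
    weights.append(tree.score(X_tr, y_tr))        # crude per-tree "strength"

def predict(X_new):
    # accumulate weighted votes per class, then pick the heaviest class
    votes = np.zeros((len(X_new), 3))
    for tree, w in zip(trees, weights):
        votes[np.arange(len(X_new)), tree.predict(X_new)] += w
    return votes.argmax(axis=1)

acc = (predict(X_te) == y_te).mean()
print(round(acc, 2))
```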

  9. Averaged Extended Tree Augmented Naive Classifier

    Directory of Open Access Journals (Sweden)

    Aaron Meehan

    2015-07-01

    Full Text Available This work presents a new general purpose classifier named Averaged Extended Tree Augmented Naive Bayes (AETAN, which is based on combining the advantageous characteristics of Extended Tree Augmented Naive Bayes (ETAN and Averaged One-Dependence Estimator (AODE classifiers. We describe the main properties of the approach and algorithms for learning it, along with an analysis of its computational time complexity. Empirical results with numerous data sets indicate that the new approach is superior to ETAN and AODE in terms of both zero-one classification accuracy and log loss. It also compares favourably against weighted AODE and hidden Naive Bayes. The learning phase of the new approach is slower than that of its competitors, while the time complexity for the testing phase is similar. Such characteristics suggest that the new classifier is ideal in scenarios where online learning is not required.

  10. Adapt Bagging to Nearest Neighbor Classifiers

    Institute of Scientific and Technical Information of China (English)

    Zhi-Hua Zhou; Yang Yu

    2005-01-01

    It is well known that in order to build a strong ensemble, the component learners should have high diversity as well as high accuracy. If perturbing the training set can cause significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, so Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers by injecting randomness into the distance metrics. In constructing the component learners, both the training set and the distance metric employed for identifying the neighbors are perturbed. A large-scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
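
    The BagInRand idea of perturbing both the training set and the distance metric can be sketched as below. The specific randomization (a random Minkowski exponent p per member) and the ensemble size are assumptions for illustration, not the paper's exact settings.

```python
# Sketch: bagging k-NN with randomized distance metrics so that stable
# k-NN component learners become diverse.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(1)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

members = []
for _ in range(15):
    idx = rng.randint(len(X_tr), size=len(X_tr))  # perturb the training set
    p = rng.choice([1, 2, 3])                     # perturb the distance metric
    knn = KNeighborsClassifier(n_neighbors=5, p=p)
    members.append(knn.fit(X_tr[idx], y_tr[idx]))

# majority vote over the 15 component learners (odd count: no ties)
votes = np.stack([m.predict(X_te) for m in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)
acc = (pred == y_te).mean()
print(round(acc, 2))
```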

  11. Dynamic Bayesian Combination of Multiple Imperfect Classifiers

    CERN Document Server

    Simpson, Edwin; Psorakis, Ioannis; Smith, Arfon

    2012-01-01

    Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse. In this paper we explore Bayesian classifier combination, using the computationally efficient framework of variational Bayesian inference. We apply the approach to real data from a large citizen science project, Galaxy Zoo Supernovae, and show that our method far outperforms other established approaches to imperfect decision combination. We go on to analyse the putative community structure of the decision makers, based on their inferred decision making strategies, and show that natural groupings are formed. Finally we present ...
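
    A toy version of why reliability-aware combination beats plain voting: treat each base decision as conditionally independent given the true label and combine with Bayes' rule using each classifier's confusion matrix. The paper's approach infers these reliabilities with variational Bayes; here the confusion matrices are simply assumed known for illustration.

```python
# Independent Bayesian combination of three imperfect binary classifiers.
import numpy as np

# P(observed label | true label) for three annotators: two good, one poor
confusions = [
    np.array([[0.9, 0.1], [0.1, 0.9]]),
    np.array([[0.8, 0.2], [0.2, 0.8]]),
    np.array([[0.55, 0.45], [0.45, 0.55]]),
]
prior = np.array([0.5, 0.5])
decisions = [1, 1, 0]   # the unreliable annotator disagrees

posterior = prior.copy()
for conf, d in zip(confusions, decisions):
    posterior *= conf[:, d]      # likelihood of this decision per true label
posterior /= posterior.sum()
print(posterior.argmax())   # -> 1: the reliable annotators win
```

    Plain majority voting would give the same answer here, but weighting by inferred reliability also resolves cases where the majority consists of weak decision makers.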

  12. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

    Full Text Available Artificial immune systems are among the widely used methods for classification, a decision-making process. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Benchmark data and remote image data are used for the experimental results. Comparative results against a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource limited artificial immune classifier demonstrate the effectiveness of the proposed method.

  13. A nonparametric classifier for unsegmented text

    Science.gov (United States)

    Nagy, George; Joshi, Ashutosh; Krishnamoorthy, Mukkai; Lin, Yu; Lopresti, Daniel P.; Mehta, Shashank; Seth, Sharad

    2003-12-01

    Symbolic Indirect Correlation (SIC) is a new classification method for unsegmented patterns. SIC requires two levels of comparisons. First, the feature sequences from an unknown query signal and a known multi-pattern reference signal are matched. Then, the order of the matched features is compared with the order of matches between every lexicon symbol-string and the reference string in the lexical domain. The query is classified according to the best matching lexicon string in the second comparison. Accuracy increases as classified feature-and-symbol strings are added to the reference string.

  14. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads;

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present...... a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of network weights as well as outlier probability and regularization parameters. We...

  15. 76 FR 63811 - Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and...

    Science.gov (United States)

    2011-10-13

    ... Documents Executive Order 13587 of October 7, 2011 Structural Reforms To Improve the Security of Classified Networks and the Responsible Sharing and Safeguarding of Classified Information By the authority... to ensure the responsible sharing and safeguarding of classified national security...

  16. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has quality similar to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
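
    The label fusion step used to combine several atlases can be illustrated with per-voxel majority voting, the simplest fusion rule. This is a generic numpy sketch on tiny synthetic 2-D label maps, not the paper's specific fusion method.

```python
# Majority-vote label fusion: given candidate label maps from several
# atlases, take the most frequent label at each voxel.
import numpy as np

atlas_labels = np.array([
    [[0, 1], [2, 2]],   # segmentation proposed by atlas 1
    [[0, 1], [1, 2]],   # atlas 2
    [[0, 0], [2, 2]],   # atlas 3
])  # shape: (n_atlases, H, W)

n_classes = 3
# per-voxel vote count for each class, then argmax over classes
counts = np.stack([(atlas_labels == c).sum(axis=0) for c in range(n_classes)])
fused = counts.argmax(axis=0)
print(fused.tolist())   # -> [[0, 1], [2, 2]]
```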

  17. 28 CFR 17.41 - Access to classified information.

    Science.gov (United States)

    2010-07-01

    ... personal and professional history affirmatively indicated loyalty to the United States, strength of... does not discriminate on the basis of race, color, religion, sex, national origin, disability,...

  18. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan;

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction...

  19. Design and evaluation of neural classifiers

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Pedersen, Morten With; Hansen, Lars Kai;

    1996-01-01

    In this paper we propose a method for the design of feedforward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme we derive a modified form of the entropy error measure and an algebraic estimate of the test error. In conjunction...

  20. Using Syntactic-Based Kernels for Classifying Temporal Relations

    Institute of Scientific and Technical Information of China (English)

    Seyed Abolghasem Mirroshandel; Gholamreza Ghassem-Sani; Mahdy Khayyamian

    2011-01-01

    Temporal relation classification is one of the demanding tasks of contemporary natural language processing. It can be used in various applications such as question answering, summarization, and language-specific information retrieval. In this paper, we propose an improved algorithm for classifying temporal relations, between events or between events and times, using support vector machines (SVM). Along with gold-standard corpus features, the proposed method exploits some useful automatically generated syntactic features to improve classification accuracy. Accordingly, a number of novel kernel functions are introduced and evaluated. Our evaluations clearly demonstrate that adding syntactic features yields a considerable improvement over the state-of-the-art method for classifying temporal relations.

  1. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    Science.gov (United States)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-12-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase BCI performance in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article, the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual counterpart, depending on the individual classifier generation algorithm and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the BCI application, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
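
    A bagged LDA ensemble of the kind studied here can be sketched with scikit-learn. The data below is a synthetic stand-in for BCI feature vectors (real features would come from EEG epochs), and the ensemble size of three matches the abstract; everything else is an assumption.

```python
# Ensemble of three LDA classifiers via bagging, compared to a single LDA.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

single = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
ensemble = BaggingClassifier(LinearDiscriminantAnalysis(),
                             n_estimators=3, random_state=0).fit(X_tr, y_tr)
print(round(single.score(X_te, y_te), 2),
      round(ensemble.score(X_te, y_te), 2))
```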

  2. Combining supervised classifiers with unlabeled data

    Institute of Scientific and Technical Information of China (English)

    刘雪艳; 张雪英; 李凤莲; 黄丽霞

    2016-01-01

    Ensemble learning is a widely studied topic. Traditional ensemble techniques seek better results from labeled data and base classifiers; they fail to address the ensemble task where only unlabeled data are available. A label propagation based ensemble (LPBE) approach is proposed to combine base classification results with unlabeled data. First, a graph is constructed by taking unlabeled data as vertexes, and the weights in the graph are calculated by a correntropy function. Average prediction results are obtained from the base classifiers, then propagated under a regularization framework and adaptively enhanced over the graph. The proposed approach is further enriched when a small amount of labeled data is available. The proposed algorithms are evaluated on several UCI benchmark data sets. Simulation results show that the proposed algorithms achieve satisfactory performance compared with existing ensemble methods.

  3. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

    An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing or to monitor illness and welfare can be foreseen. This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three......, which involves 30 min for each activity. The results show that feeding and lateral/sternal lying activities are best recognized; walking and rooting activities are mostly recognized by a specific axis corresponding to the direction of the sow's movement while performing the activity (horizontal sidewise......

  4. Classifying bed inclination using pressure images.

    Science.gov (United States)

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M

    2014-01-01

    Pressure ulcers are among the most prevalent problems for bed-bound patients in hospitals and nursing homes. Pressure ulcers are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate bed angle is estimated and classified. The proposed algorithm was tested on 15 subjects of various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.

  5. Improving 2D Boosted Classifiers Using Depth LDA Classifier for Robust Face Detection

    Directory of Open Access Journals (Sweden)

    Mahmood Rahat

    2012-05-01

    Full Text Available Face detection plays an important role in human-robot interaction, and many services provided by robots depend on it. This paper presents a novel face detection algorithm which uses depth data to improve the efficiency of a boosted classifier on 2D data, reducing false positive alarms. The proposed method uses two levels of cascade classifiers. The classifiers of the first level deal with 2D data, and the classifiers of the second level use depth data captured by a stereo camera. The first level employs a conventional cascade of boosted classifiers, which eliminates many of the non-face sub-windows. The remaining sub-windows are used as input to the second level. After calculating the corresponding depth model of the sub-windows, a heuristic classifier along with a linear discriminant analysis (LDA) classifier is applied to the depth data to reject the remaining non-face sub-windows. Experimental results for the proposed method, using a Bumblebee-2 stereo vision system on a mobile platform for real-time detection of human faces in natural cluttered environments, reveal a significant reduction in the false positive alarms of the 2D face detector.
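
    The two-level cascade logic can be shown in miniature. The scores and thresholds below are purely illustrative stand-ins for the boosted 2D classifier and the depth LDA classifier; the point is that the expensive second stage only runs on windows that survive the cheap first stage.

```python
# Two-stage cascade in miniature: cheap 2D test first, depth test second.
def stage1_2d_score(window):      # stand-in for the boosted 2D classifier
    return window["edge_energy"]

def stage2_depth_score(window):   # stand-in for the depth LDA classifier
    return window["depth_flatness"]

windows = [
    {"edge_energy": 0.2, "depth_flatness": 0.9},  # rejected at stage 1
    {"edge_energy": 0.8, "depth_flatness": 0.2},  # rejected at stage 2
    {"edge_energy": 0.9, "depth_flatness": 0.8},  # accepted: face
]

# `and` short-circuits, so stage 2 never runs on stage-1 rejects
faces = [w for w in windows
         if stage1_2d_score(w) > 0.5 and stage2_depth_score(w) > 0.5]
print(len(faces))   # -> 1
```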

  6. Deterministic Pattern Classifier Based on Genetic Programming

    Institute of Scientific and Technical Information of China (English)

    LI Jian-wu; LI Min-qiang; KOU Ji-song

    2001-01-01

    This paper proposes a supervised training-test method with Genetic Programming (GP) for pattern classification. In contrast to traditional deterministic pattern classifiers, this method applies to both linearly separable and linearly non-separable problems. For specific training samples, it can formulate the expression of the discriminant function well without any prior knowledge. Finally, an experiment is conducted, and the results reveal that this system is effective and practical.

  7. Image Classifying Registration and Dynamic Region Merging

    Directory of Open Access Journals (Sweden)

    Himadri Nath Moulick

    2013-07-01

    Full Text Available In this paper, we address a complex image registration issue arising when the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images modifies locally intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weights the two mixture components. The registration problem is formulated both as an energy minimization problem and as a Maximum A Posteriori (MAP) estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated at the same time, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into

  8. Analysis of classifiers performance for classification of potential microcalcification

    Science.gov (United States)

    M. N., Arun K.; Sheshadri, H. S.

    2013-07-01

    Breast cancer is a significant public health problem in the world. According to the literature, early detection improves breast cancer prognosis. Mammography is a screening tool used for early detection of breast cancer. About 10-30% of cases are missed during routine checks, as it is difficult for the radiologists to make an accurate analysis of the large amount of data. Microcalcifications (MCs) are considered to be important signs of breast cancer. It has been reported in the literature that 30%-50% of breast cancers detected radiographically show MCs on mammograms, and histologic examinations report that 62% to 79% of breast carcinomas reveal MCs. MCs are tiny; they vary in size, shape, and distribution, and may be closely connected to surrounding tissue. Classifying individual potential MCs with traditional classifiers is a major challenge, because processing mammograms at the appropriate stage generates data sets with unequal amounts of information for the two classes (i.e., MC and not-MC). Most existing state-of-the-art classification approaches assume that the underlying training set is evenly distributed; however, they face a severe bias problem when the training set is highly imbalanced. This paper addresses the issue by using classifiers which handle imbalanced data sets. We also compare the performance of classifiers used for the classification of potential MCs.
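
    One common way to handle the MC / not-MC imbalance described above is cost-sensitive training via class weights. This is a generic sketch on synthetic data (95% majority vs. 5% minority), not the paper's specific classifiers; the clinically relevant metric is recall on the rare class.

```python
# Class-weighted training on a heavily imbalanced binary problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~95% "not-MC" vs ~5% "MC", with some label noise
X, y = make_classification(n_samples=2000, weights=[0.95],
                           flip_y=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(X_tr, y_tr)

# recall on the rare (positive) class
r_plain = recall_score(y_te, plain.predict(X_te))
r_weighted = recall_score(y_te, weighted.predict(X_te))
print(round(r_plain, 2), round(r_weighted, 2))
```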

  9. Automative Multi Classifier Framework for Medical Image Analysis

    Directory of Open Access Journals (Sweden)

    R. Edbert Rajan

    2015-04-01

    Full Text Available Medical image processing is the technique of creating images of the human body for medical purposes. Medical image processing now plays a major role in, and poses challenging problems for, critical stages of clinical work. Several studies have been conducted in this area to enhance techniques for medical image processing; however, owing to the shortcomings of some advanced technologies, many aspects still need further development. An existing study evaluates the efficacy of medical image analysis with level-set shape along with fractal texture and intensity features to discriminate PF (posterior fossa) tumor from other tissues in brain images. To advance medical image analysis and disease diagnosis, we devise an automated subjective-optimality model for segmentation of images based on different sets of features selected from an unsupervised learning model of extracted features. After segmentation, the images are classified by adapting the multiple-classifier framework of previous work, based on the mutual information coefficient of the features selected for the image segmentation procedures. In this study, to enhance the classification strategy, we implement an enhanced multi-classifier framework for the analysis of medical images and disease diagnosis. The performance parameters used for the analysis of the proposed enhanced multi-classifier framework for medical image analysis are multiple-class intensity, image quality, and time consumption.

  10. A space-based radio frequency transient event classifier

    Energy Technology Data Exchange (ETDEWEB)

    Moore, K.R.; Blain, C.P.; Caffrey, M.P.; Franz, R.C.; Henneke, K.M.; Jones, R.G.

    1998-03-01

    The Department of Energy is currently investigating economical and reliable techniques for space-based nuclear weapon treaty verification. Nuclear weapon detonations produce RF transients that are signatures of illegal nuclear weapons tests. However, there are many other sources of RF signals, both natural and man-made. Direct digitization of RF signals requires rates of 300 MSamples per second and produces 10{sup 13} samples per day of data to analyze. It is impractical to store and downlink all digitized RF data from such a satellite without a prohibitively expensive increase in the number and capacities of ground stations. Reliable and robust data processing and information extraction must be performed onboard the spacecraft in order to reduce downlinked data to a reasonable volume. The FORTE (Fast On-Orbit Recording of Transient Events) satellite records RF transients in space. These transients will be classified onboard the spacecraft with an Event Classifier: specialized hardware that performs signal preprocessing and neural network classification. The authors describe the Event Classifier requirements, scientific constraints, design and implementation.

  11. Image Classifying Registration for Gaussian & Bayesian Techniques: A Review

    Directory of Open Access Journals (Sweden)

    Rahul Godghate,

    2014-04-01

    Full Text Available We review a Bayesian technique for image classifying registration, which performs image registration and pixel classification simultaneously. Medical image registration is critical for the fusion of complementary information about patient anatomy and physiology, for the longitudinal study of a human organ over time and the monitoring of disease development or treatment effect, for the statistical analysis of population variation in comparison to a so-called digital atlas, for image-guided therapy, etc. A Bayesian technique for image classifying registration is well suited to image pairs that contain two classes of pixels with different inter-image intensity relationships. We show through different experiments that the model can be applied in many different ways. For instance, if the class map is known, it can be used for template-based segmentation. If the full model is used, it can be applied to lesion detection by image comparison. Experiments have been conducted on both real and simulated data. They show that, in the presence of an extra class, classifying registration improves both the registration and the detection, especially when the deformations are small. The proposed model is defined using only two classes, but it is straightforward to extend it to an arbitrary number of classes.

  12. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol synthetic aperture radar (PolSAR data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co- and cross-pol double, and volume scattering. We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with Synthetic Aperture Radar (SAR observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  13. Comparison of artificial intelligence classifiers for SIP attack data

    Science.gov (United States)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server, which classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier that lower the classification error. The article compares two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms, and compare their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
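
    The two optimizations named above, input normalization and a cross-entropy cost function, can be sketched with scikit-learn, whose MLPClassifier already trains on log (cross-entropy) loss; the pipeline adds feature standardization. The dataset here is a generic stand-in, not SIP attack data, and the network size is an assumption.

```python
# Neural network classifier with input normalization (standardization).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    StandardScaler(),                     # input normalization
    MLPClassifier(hidden_layer_sizes=(32,),  # trained with cross-entropy loss
                  max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```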

  14. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications.

    Science.gov (United States)

    Wiggins, M; Saad, A; Litt, B; Vachtsevanos, G

    2008-01-01

    OBJECTIVE: To classify patients by age based upon information extracted from their electro-cardiograms (ECGs). To develop and compare the performance of Bayesian classifiers. METHODS AND MATERIAL: We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of conditional probability tables for the classifier, and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first using a greedy hill-climb search and the second employed evolutionary computing using a genetic algorithm (GA). RESULTS AND CONCLUSIONS: The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general thereby identifying the dependencies among the variables of interest. Those dependencies are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used for medical applications for diagnosis and prediction purposes.
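
    The discretization step described above, converting continuous signal features to a symbolic form by thresholding, can be sketched with numpy. The feature and thresholds below are invented for illustration; the point is that discretization shrinks the conditional probability tables a Bayesian network classifier needs.

```python
# Thresholding a continuous feature into three symbols (low/normal/high).
import numpy as np

rng = np.random.RandomState(0)
feature = rng.normal(loc=50, scale=15, size=8)  # synthetic ECG-derived feature

# symbols: 0 = low (<40), 1 = normal (40-60), 2 = high (>=60);
# these thresholds are assumptions, not the paper's values
thresholds = [40.0, 60.0]
symbols = np.digitize(feature, thresholds)
print(symbols.tolist())
```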

  15. Practice of Classified-protection-based Oracle Database Security Reinforcement in Electric Power Information Systems

    Institute of Scientific and Technical Information of China (English)

    杨大哲; 孙瑞浩

    2015-01-01

    For the Oracle database management system, and in accordance with the requirements of information security classified protection, this study makes use of the database's own security mechanisms to carry out security reinforcement in the areas of authentication, access control, intrusion prevention and security auditing. As a result, the data leak prevention and anti-tampering capabilities are strengthened, and the security level of electric power information systems is improved.

  16. A new method for classifying different phenotypes of kidney transplantation.

    Science.gov (United States)

    Zhu, Dong; Liu, Zexian; Pan, Zhicheng; Qian, Mengjia; Wang, Linyan; Zhu, Tongyu; Xue, Yu; Wu, Duojiao

    2016-08-01

    For end-stage renal diseases, kidney transplantation is the most efficient treatment. However, the unexpected rejection caused by inflammation usually leads to allograft failure. Thus, a systems-level characterization of inflammation factors can provide potentially diagnostic biomarkers for predicting renal allograft rejection. Serum from kidney transplant patients with different immune statuses was collected and classified as transplant patients with stable renal function (ST), impaired renal function with negative biopsy pathology (UNST), acute rejection (AR), and chronic rejection (CR). The expression profiles of 40 inflammatory proteins were measured by quantitative protein microarrays and reduced to a lower dimensional space by the partial least squares (PLS) model. The determined principal components (PCs) were then trained by the support vector machines (SVMs) algorithm for classifying different phenotypes of kidney transplantation. There were 30, 16, and 13 inflammation proteins that showed statistically significant differences between CR and ST, CR and AR, and CR and UNST patients. Further analysis revealed a protein-protein interaction (PPI) network among 33 inflammatory proteins and proposed a potential role of intracellular adhesion molecule-1 (ICAM-1) in CR. Based on the network analysis and protein expression information, two PCs were determined as the major contributors and trained by the PLS-SVMs method, with a promising accuracy of 77.5% for classification of chronic rejection after kidney transplantation. For convenience, we also developed software packages of GPS-CKT (Classification phenotype of Kidney Transplantation Predictor) for classifying phenotypes. By confirming a strong correlation between inflammation and kidney transplantation, our results suggested that the network biomarker, but not single factors, can potentially classify different phenotypes in kidney transplantation. PMID:27278387
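    The dimensionality-reduction step can be illustrated with a one-component PLS projection for a univariate response: the first NIPALS latent direction is proportional to X^T y on centered data. The SVM stage is omitted here (a simple sign threshold on the score stands in), and the toy "protein expression" rows are invented.

```python
def center(M):
    cols = list(zip(*M))
    means = [sum(c) / len(c) for c in cols]
    return [[v - m for v, m in zip(row, means)] for row in M], means

def pls_first_component(X, y):
    """First PLS latent direction for a univariate response:
    the normalized weight vector is proportional to X^T y on centered data."""
    Xc, x_means = center(X)
    y_mean = sum(y) / len(y)
    yc = [v - y_mean for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(len(X)))
         for j in range(len(X[0]))]
    norm = sum(v * v for v in w) ** 0.5
    return [v / norm for v in w], x_means

def score(w, x_means, row):
    """Project a sample onto the latent direction."""
    return sum((v - m) * wj for v, m, wj in zip(row, x_means, w))

# Invented two-protein expression rows: feature 0 separates the groups
X = [[10, 1], [9, 2], [1, 1], [2, 2]]
y = [1, 1, 0, 0]
w, mx = pls_first_component(X, y)
```

    Because only feature 0 covaries with the response, the learned direction is essentially [1, 0], and the projected scores separate the two groups by sign.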

  17. Integrating language models into classifiers for BCI communication: a review

    Science.gov (United States)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could use the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  19. Colorfulness Enhancement Using Image Classifier Based on Chroma-histogram

    Institute of Scientific and Technical Information of China (English)

    Moon-cheol KIM; Kyoung-won LIM

    2010-01-01

    The paper proposes a colorfulness enhancement of pictorial images using an image classifier based on a chroma histogram. The approach first estimates the strength of colorfulness of images and their types. With this information, the algorithm automatically adjusts image colorfulness for a more natural image look. With the help of an additional detection of skin colors and pixel-chroma-adaptive local processing, the algorithm produces a more natural image look. The algorithm's performance was tested in an image quality judgment experiment with 20 persons. The experimental result indicates a better image preference.

  20. On-line computing in a classified environment

    International Nuclear Information System (INIS)

    Westinghouse Hanford Company (WHC) recently developed a Department of Energy (DOE) approved real-time, on-line computer system to control nuclear material. The system simultaneously processes both classified and unclassified information. Implementation of this system required application of many security techniques. The system has a secure but user-friendly interface. Many software applications protect the integrity of the database from malevolent or accidental errors. Programming practices ensure the integrity of the computer system software. The audit trail and the reports generation capability record user actions and the status of the nuclear material inventory.

  1. Learnability of min-max pattern classifiers

    Science.gov (United States)

    Yang, Ping-Fai; Maragos, Petros

    1991-11-01

    This paper introduces the class of thresholded min-max functions and studies their learning under the probably approximately correct (PAC) model introduced by Valiant. These functions can be used as pattern classifiers of both real-valued and binary-valued feature vectors. They are a lattice-theoretic generalization of Boolean functions and are also related to three-layer perceptrons and morphological signal operators. Several subclasses of the thresholded min-max functions are shown to be learnable under the PAC model.
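    A thresholded min-max function can be written down directly. The sketch below, with hypothetical terms and threshold, reduces on binary inputs to the Boolean DNF (x0 AND x1) OR (x2 AND x3), illustrating the lattice-theoretic generalization the abstract mentions.

```python
def thresholded_min_max(terms, threshold):
    """Max over terms of the min over the coordinates each term selects,
    thresholded into a binary decision. Terms/threshold are illustrative."""
    def classify(x):
        value = max(min(x[i] for i in term) for term in terms)
        return 1 if value >= threshold else 0
    return classify

# On {0,1} inputs this instance is the Boolean DNF (x0 AND x1) OR (x2 AND x3)
f = thresholded_min_max([(0, 1), (2, 3)], 0.5)
```

    The same classifier applies unchanged to real-valued feature vectors, which is what distinguishes the class from plain Boolean functions.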

  2. Classifying LEP Data with Support Vector Algorithms

    CERN Document Server

    Vannerem, P; Schölkopf, B; Smola, A J; Söldner-Rembold, S

    1999-01-01

    We have studied the application of different classification algorithms in the analysis of simulated high energy physics data. Whereas Neural Network algorithms have become a standard tool for data analysis, the performance of other classifiers such as Support Vector Machines has not yet been tested in this environment. We chose two different problems to compare the performance of a Support Vector Machine and a Neural Net trained with back-propagation: tagging events of the type e+e- -> ccbar and the identification of muons produced in multihadronic e+e- annihilation events.

  3. Support Vector classifiers for Land Cover Classification

    CERN Document Server

    Pal, Mahesh

    2008-01-01

    Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports the results of experiments with multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.

  4. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya

    2009-01-01

    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinem

  5. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

    Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  6. Accurately Classifying Data Races with Portend

    OpenAIRE

    Kasikci, Baris; Zamfir, Cristian; Candea, George

    2011-01-01

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Eliminating all data races from programs is impractical (e.g., system performance could suffer severely), yet spotting just the harmful ones is like finding a needle in a haystack: state-of-the-art data race detectors and classifiers suffer from high false positive rates of 37%–84%. We present Portend, a technique and system for automatically triaging suspect data races based on ...

  7. 36 CFR 1260.22 - Who is responsible for the declassification of classified national security White House...

    Science.gov (United States)

    2010-07-01

    ... declassification of classified national security White House originated information in NARA's holdings? 1260.22... for the declassification of classified national security White House originated information in NARA's... was originated by: (1) The President; (2) The White House staff; (3) Committees, commissions,...

  8. Objectively classifying Southern Hemisphere extratropical cyclones

    Science.gov (United States)

    Catto, Jennifer

    2016-04-01

    There has been a long tradition in attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected), has been used to objectively classify identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000) and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  9. Cross-classified occupational exposure data.

    Science.gov (United States)

    Jones, Rachael M; Burstyn, Igor

    2016-09-01

    We demonstrate the regression analysis of exposure determinants using cross-classified random effects in the context of lead exposures resulting from blasting surfaces in advance of painting. We had three specific objectives for analysis of the lead data, and observed: (1) high within-worker variability in personal lead exposures, explaining 79% of variability; (2) that the lead concentration outside of half-mask respirators was 2.4-fold higher than inside supplied-air blasting helmets, suggesting that the exposure reduction by blasting helmets may be lower than expected by the Assigned Protection Factor; and (3) that lead concentrations at fixed area locations in containment were not associated with personal lead exposures. In addition, we found that, on average, lead exposures among workers performing blasting and other activities was 40% lower than among workers performing only blasting. In the process of obtaining these analyses objectives, we determined that the data were non-hierarchical: repeated exposure measurements were collected for a worker while the worker was a member of several groups, or cross-classified among groups. Since the worker is a member of multiple groups, the exposure data do not adhere to the traditionally assumed hierarchical structure. Forcing a hierarchical structure on these data led to similar within-group and between-group variability, but decreased precision in the estimate of effect of work activity on lead exposure. We hope hygienists and exposure assessors will consider non-hierarchical models in the design and analysis of exposure assessments. PMID:27029937

  10. A systematic comparison of supervised classifiers.

    Directory of Open Access Journals (Sweden)

    Diego Raphael Amancio

    Full Text Available Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. In many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to their default parameter configuration.
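    Since the k-nearest neighbor method often fared best in that study, a minimal kNN classifier (not Weka's implementation; data and k are illustrative) shows how little configuration the method needs beyond the single parameter k:

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training points (squared Euclidean)."""
    dist = lambda a: sum((u - v) ** 2 for u, v in zip(a, x))
    nearest = sorted(zip(train_X, train_y), key=lambda p: dist(p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Toy two-class data
X = [[0, 0], [0, 1], [5, 5], [6, 5]]
y = ["a", "a", "b", "b"]
```

    In practice k is the main knob to tune, which is consistent with the finding that default configurations are often near optimal.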

  11. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
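    Two of the UFM-style features, per-position purine frequency and in-frame stop codon frequency, can be computed as in this sketch; the actual scoring weights and thresholds of the method are not reproduced here.

```python
def codon_stats(orf):
    """Purine (A/G) frequency at each codon position and the
    in-frame stop codon frequency for an ORF string."""
    n = len(orf) - len(orf) % 3          # trim to whole codons
    codons = [orf[i:i + 3] for i in range(0, n, 3)]
    purines = {"A", "G"}
    pos_freq = [sum(c[p] in purines for c in codons) / len(codons)
                for p in range(3)]
    stop_freq = sum(c in {"TAA", "TAG", "TGA"} for c in codons) / len(codons)
    return pos_freq, stop_freq
```

    A genuine CDS read in its coding frame tends to show purine bias at the first codon position and essentially zero internal stop codons, which is what these two features exploit.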

  12. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers – neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy was presented.

  13. Classifying antiarrhythmic actions: by facts or speculation.

    Science.gov (United States)

    Vaughan Williams, E M

    1992-11-01

    Classification of antiarrhythmic actions is reviewed in the context of the results of the Cardiac Arrhythmia Suppression Trials, CAST 1 and 2. Six criticisms of the classification recently published (The Sicilian Gambit) are discussed in detail. The alternative classification, when stripped of speculative elements, is shown to be similar to the original classification. Claims that the classification failed to predict the efficacy of antiarrhythmic drugs for the selection of appropriate therapy have been tested by an example. The antiarrhythmic actions of cibenzoline were classified in 1980. A detailed review of confirmatory experiments and clinical trials during the past decade shows that predictions made at the time agree with subsequent results. Classification of the effects drugs actually have on functioning cardiac tissues provides a rational basis for finding the preferred treatment for a particular arrhythmia in accordance with the diagnosis.

  14. Human Segmentation Using Haar-Classifier

    Directory of Open Access Journals (Sweden)

    Dharani S

    2014-07-01

    Full Text Available Segmentation is an important process in many multimedia applications. Fast and accurate segmentation of moving objects in video sequences is a basic task in many computer vision and video analysis applications. Human detection in particular is an active research area in computer vision. Segmentation is very useful for tracking and recognizing objects in a moving clip. The motion segmentation problem is studied and the most important techniques are reviewed. We illustrate some common methods for segmenting moving objects, including background subtraction, temporal segmentation and edge detection. Contour and threshold methods are also common for segmenting objects in a moving clip. These methods are widely exploited for moving object segmentation in many video surveillance applications, such as traffic monitoring and human motion capture. In this paper, a Haar classifier is used to detect humans in a moving video clip, with features such as face detection, eye detection, full body, upper body and lower body detection.
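    Haar classifiers of the Viola-Jones type evaluate rectangular features in constant time via an integral image. The sketch below shows only that core mechanism on a toy image; it is not the full trained cascade, and the feature layout is illustrative.

```python
def integral_image(img):
    """Summed-area table with one-pixel zero padding:
    ii[y][x] holds the sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum over rectangle (x, y, w, h) in O(1) using four lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

    A trained detector thresholds thousands of such feature responses in a cascade; the O(1) rectangle sums are what make scanning every window position affordable.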

  15. A headband for classifying human postures.

    Science.gov (United States)

    Aloqlah, Mohammed; Lahiji, Rosa R; Loparo, Kenneth A; Mehregany, Mehran

    2010-01-01

    A real-time method using only accelerometer data is developed for classifying basic static human postures, namely sitting, standing, and lying, as well as dynamic transitions between them. The algorithm uses the discrete wavelet transform (DWT) in combination with a fuzzy logic inference system (FIS). Data from a single three-axis accelerometer integrated into a wearable headband is transmitted wirelessly, then collected and analyzed in real time on a laptop computer to extract two sets of features for posture classification. The received acceleration signals are decomposed using the DWT to extract the dynamic features; changes in the smoothness of the signal that reflect a transition between postures are detected at finer DWT scales. The FIS then uses the previous posture transition and the DWT-extracted features to determine the static postures. PMID:21097190
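    The DWT smoothness cue can be illustrated with a single-level Haar wavelet transform: large detail coefficients flag a loss of smoothness such as a posture transition. The threshold below is arbitrary and the fuzzy inference stage is omitted.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    s2 = 2 ** 0.5
    pairs = [(signal[i], signal[i + 1]) for i in range(0, len(signal) - 1, 2)]
    approx = [(a + b) / s2 for a, b in pairs]
    detail = [(a - b) / s2 for a, b in pairs]
    return approx, detail

def transition_detected(signal, threshold=1.0):
    """Flag a transition when any detail coefficient exceeds the threshold,
    i.e. when the acceleration trace loses smoothness."""
    _, detail = haar_dwt(signal)
    return any(abs(d) > threshold for d in detail)
```

    A flat acceleration trace yields zero detail energy, while a step change between samples produces a large coefficient, which is the cue the paper detects at finer scales.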

  16. A cognitive approach to classifying perceived behaviors

    Science.gov (United States)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  17. Learning Vector Quantization for Classifying Astronomical Objects

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The sizes of astronomical surveys in different wavebands are increasing rapidly. Therefore, automatic classification of objects is becoming ever more important. We explore the performance of learning vector quantization (LVQ) in classifying multi-wavelength data. Our analysis concentrates on separating active sources from non-active ones. Different classes of X-ray emitters populate distinct regions of a multidimensional parameter space. In order to explore the distribution of various objects in a multidimensional parameter space, we positionally cross-correlate the data of quasars, BL Lacs, active galaxies, stars and normal galaxies in the optical, X-ray and infrared bands. We then apply LVQ to classify them with the obtained data. Our results show that LVQ is an effective method for separating AGNs from stars and normal galaxies with multi-wavelength data.
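    A minimal LVQ1 training loop, assuming invented two-dimensional toy data standing in for multi-wavelength measurements, shows the push-pull prototype update at the heart of the method:

```python
def train_lvq1(X, y, prototypes, proto_labels, lr=0.3, epochs=20):
    """LVQ1: move the winning prototype toward same-class samples
    and away from different-class samples."""
    protos = [list(p) for p in prototypes]
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            d = [sum((a - b) ** 2 for a, b in zip(p, xi)) for p in protos]
            k = d.index(min(d))
            sign = 1.0 if proto_labels[k] == yi else -1.0
            protos[k] = [p + sign * lr * (v - p) for p, v in zip(protos[k], xi)]
    return protos

def lvq_classify(protos, proto_labels, x):
    d = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in protos]
    return proto_labels[d.index(min(d))]

# Invented two-feature samples: one cluster per class, one prototype each
X = [[5, 5], [5, 6], [0, 0], [1, 0]]
y = ["AGN", "AGN", "star", "star"]
labels = ["AGN", "star"]
protos = train_lvq1(X, y, [[4, 4], [1, 1]], labels)
```

    After training, each prototype sits near its cluster, and classification is simply nearest-prototype lookup.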

  18. A Spiking Neural Learning Classifier System

    CERN Document Server

    Howard, Gerard; Lanzi, Pier-Luca

    2012-01-01

    Learning Classifier Systems (LCS) are population-based reinforcement learners used in a wide variety of applications. This paper presents a LCS where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. We employ a constructivist model of growth of both neurons and dendrites that realise flexible learning by evolving structures of sufficient complexity to solve a well-known problem involving continuous, real-valued inputs. Additionally, we extend the system to enable temporal state decomposition. By allowing our LCS to chain together sequences of heterogeneous actions into macro-actions, it is shown to perform optimally in a problem where traditional methods can fail to find a solution in a reasonable amount of time. Our final system is tested on a simulated robotics platform.

  19. Classifying prion and prion-like phenomena.

    Science.gov (United States)

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  20. Automatic Fracture Detection Using Classifiers- A Review

    Directory of Open Access Journals (Sweden)

    S.K.Mahendran

    2011-11-01

    Full Text Available The X-ray is one of the oldest and most frequently used devices that make images of any bone in the body, including the hand, wrist, arm, elbow, shoulder, foot, ankle, leg (shin), knee, thigh, hip, pelvis or spine. A typical bone ailment is the fracture, which occurs when the bone cannot withstand an outside force such as direct blows, twisting injuries and falls. Fractures are cracks in bones and are defined as a medical condition in which there is a break in the continuity of the bone. Detection and correct treatment of fractures are considered important, as a wrong diagnosis often leads to ineffective patient management, increased dissatisfaction and expensive litigation. The main focus of this paper is a review study that discusses various classification algorithms that can be used to classify x-ray images as normal or fractured.

  1. Decoding the Large-Scale Structure of Brain Function by Classifying Mental States Across Individuals

    OpenAIRE

    Poldrack, Russell A.; Halchenko, Yaroslav ,; Hanson, Stephen José

    2009-01-01

    Brain-imaging research has largely focused on localizing patterns of activity related to specific mental processes, but recent work has shown that mental states can be identified from neuroimaging data using statistical classifiers. We investigated whether this approach could be extended to predict the mental state of an individual using a statistical classifier trained on other individuals, and whether the information gained in doing so could provide new insights into how mental processes ar...

  2. A HYBRID CLASSIFICATION ALGORITHM TO CLASSIFY ENGINEERING STUDENTS’ PROBLEMS AND PERKS

    Directory of Open Access Journals (Sweden)

    Mitali Desai

    2016-03-01

    Full Text Available Social networking sites have opened a new horizon for individuals to express their views and opinions. Moreover, they provide a medium for students to share their sentiments, including struggles and joy, during the learning process. Such informal information is a valuable resource for decision making. The large and growing scale of this information calls for automatic classification techniques. Sentiment analysis is one of the automated techniques for classifying large data. The existing predictive sentiment analysis techniques are widely used to classify reviews on e-commerce sites to provide business intelligence. However, they are not of much use for drawing decisions in the education system, since they classify sentiments into merely three pre-set categories: positive, negative and neutral. Moreover, classifying students' sentiments into positive or negative categories does not provide deeper insight into their problems and perks. In this paper, we propose a novel Hybrid Classification Algorithm to classify engineering students' sentiments. Unlike traditional predictive sentiment analysis techniques, the proposed algorithm makes the sentiment analysis process descriptive. Moreover, it classifies engineering students' perks in addition to problems into several categories to help future students and the education system in decision making.

  3. Gene-expression Classifier in Papillary Thyroid Carcinoma: Validation and Application of a Classifier for Prognostication

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    frozen tissue from 38 patients was collected between the years 1986 and 2009. Validation cohort: formalin-fixed paraffin-embedded tissues were collected from 183 consecutively treated patients. RESULTS: A 17-gene classifier was identified based on the expression values in patients with and without...

  4. Classifying gauge anomalies through SPT orders and classifying anomalies through topological orders

    CERN Document Server

    Wen, Xiao-Gang

    2013-01-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete). We argue that, in d space-time dimensions, the gauge anomalies are described by the elements in Free[H^{d+1}(G,R/Z)] ⊕ H_π^{d+1}(BG,R/Z). The well-known Adler-Bell-Jackiw anomalies are classified by the free part of the group cohomology class H^{d+1}(G,R/Z) of the gauge group G (denoted Free[H^{d+1}(G,R/Z)]). We refer to other kinds of gauge anomalies beyond the Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include the Witten SU(2) global gauge anomaly. We introduce a notion of π-cohomology group, H_π^{d+1}(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[H^{d+1}(G,R/Z)] and the topological cohomology group H^{d+1}(BG,R/Z) as subgroups. We argue that H_π^{d+1}(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies and partially classifies fermionic non-ABJ anomalies. We also show a very close rel...

  5. Classified Ads Harvesting Agent and Notification System

    CERN Document Server

    Doomun, Razvi; Nadeem, Auleear; Aukin, Mozafar

    2010-01-01

    The shift from an information society to a knowledge society requires rapid information harvesting, reliable search and instantaneous on-demand delivery. Information extraction agents are used to explore and collect data available on the Web, in order to exploit such data effectively for business purposes such as automatic news filtering, advertisement or product searching, and price comparison. In this paper, we develop a real-time automatic harvesting agent for adverts posted on the Servihoo web portal and an SMS-based notification system. It uses the URL of the web portal and the object model, i.e., the fields of interest and a set of rules written using HTML parsing functions, to extract the latest advert information. The extraction engine executes the extraction rules and stores the information in a database to be processed for automatic notification. This intelligent system saves considerable time. It also enables users or potential product buyers to react more quickly to changes and newly posted sales...

  6. Classifying gauge anomalies through symmetry-protected trivial orders and classifying gravitational anomalies through topological orders

    Science.gov (United States)

    Wen, Xiao-Gang

    2013-08-01

    In this paper, we systematically study gauge anomalies in bosonic and fermionic weak-coupling gauge theories with gauge group G (which can be continuous or discrete) in d space-time dimensions. We show a very close relation between gauge anomalies for gauge group G and symmetry-protected trivial (SPT) orders (also known as symmetry-protected topological (SPT) orders) with symmetry group G in one higher dimension. The SPT phases are classified by the group cohomology class H^{d+1}(G,R/Z). Through a more careful consideration, we argue that the gauge anomalies are described by the elements in Free[H^{d+1}(G,R/Z)] ⊕ H_π^{d+1}(BG,R/Z). The well known Adler-Bell-Jackiw anomalies are classified by the free part of H^{d+1}(G,R/Z) (denoted as Free[H^{d+1}(G,R/Z)]). We refer to other kinds of gauge anomalies beyond Adler-Bell-Jackiw anomalies as non-ABJ gauge anomalies, which include Witten SU(2) global gauge anomalies. We introduce a notion of π-cohomology group, H_π^{d+1}(BG,R/Z), for the classifying space BG, which is an Abelian group and includes Tor[H^{d+1}(G,R/Z)] and the topological cohomology group H^{d+1}(BG,R/Z) as subgroups. We argue that H_π^{d+1}(BG,R/Z) classifies the bosonic non-ABJ gauge anomalies and partially classifies fermionic non-ABJ anomalies. Using the same approach that shows gauge anomalies to be connected to SPT phases, we can also show that gravitational anomalies are connected to topological orders (i.e., patterns of long-range entanglement) in one higher dimension.

  7. Stress fracture development classified by bone scintigraphy

    International Nuclear Information System (INIS)

    There is no consensus on classifying stress fractures (SF) appearing on bone scans. The authors present a system of classification based on grading the severity and development of bone lesions by visual inspection, according to three main scintigraphic criteria: focality and size, intensity of uptake compared to adjacent bone, and local medullary extension. Four grades of development (I-IV) were ranked, ranging from ill-defined, slightly increased cortical uptake to well-defined regions with markedly increased uptake extending transversely and bicortically. 310 male subjects aged 19-2, suffering for several weeks from leg pains occurring during intensive physical training, underwent bone scans of the pelvis and lower extremities using Tc-99m-MDP. 76% of the scans were positive, with 354 lesions, of which 88% were in the mild (I-II) grades and 12% in the moderate (III) and severe (IV) grades. Post-treatment scans were obtained in 65 cases having 78 lesions, at 1- to 6-month intervals. Complete resolution was found after 1-2 months in 36% of the mild lesions but in only 12% of the moderate and severe ones, and after 3-6 months in 55% of the mild lesions and 15% of the severe ones. 75% of the moderate and severe lesions showed residual uptake in various stages throughout the follow-up period. Early recognition and treatment of mild SF lesions in this study prevented protracted disability and progression of the lesions and facilitated complete healing

  8. Colorization by classifying the prior knowledge

    Institute of Scientific and Technical Information of China (English)

    DU Weiwei

    2011-01-01

    When the one-dimensional luminance scalar of every pixel of a monochrome image is replaced by a multi-dimensional color vector, the process is called colorization. However, colorization is under-constrained, so prior knowledge must be supplied for the monochrome image. Colorization using an optimization algorithm is an effective approach to this problem, but it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed that can automatically generate the prior knowledge. The idea is that, firstly, the prior knowledge is condensed into a set of points extracted automatically by a downsampling and upsampling method. Then these points of prior knowledge are classified and assigned corresponding colors. Lastly, the color image is obtained from the colored points of the prior knowledge. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the user's requirements.

  9. MISR Level 2 TOA/Cloud Classifier parameters V003

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Regional Cloud Classifiers, Cloud Shadow Mask, and...

  10. Optimized Radial Basis Function Classifier for Multi Modal Biometrics

    Directory of Open Access Journals (Sweden)

    Anand Viswanathan

    2014-07-01

    Full Text Available Biometric systems can be used for the identification or verification of humans based on their physiological or behavioral features. In these systems, biometric characteristics such as fingerprints, palm prints, iris patterns or speech are recorded and compared with stored samples for identification or verification. Multimodal biometrics is more accurate and more resistant to spoof attacks than single-modal biometric systems. In this study, a multimodal biometric system using fingerprint images and finger-vein patterns is proposed, along with an optimized Radial Basis Function (RBF) kernel classifier to identify authorized users. The features extracted from these modalities are selected by PCA and kernel PCA and combined for classification by the RBF classifier. The parameters of the RBF classifier are optimized using the BAT algorithm with local search. The performance of the proposed classifier is compared with the KNN classifier, the Naïve Bayesian classifier and a non-optimized RBF classifier.
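The structure this abstract describes (dimensionality reduction feeding an RBF-kernel classifier with tuned parameters) can be sketched as below. The data are synthetic stand-ins for fused fingerprint/finger-vein feature vectors, and the paper's BAT optimization is replaced by a plain grid search over (C, gamma), so this illustrates the pipeline's shape rather than the authors' implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic stand-in for fused fingerprint/finger-vein feature vectors.
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA feature selection followed by an RBF-kernel classifier; the paper's
# BAT optimization is approximated here by a grid search over (C, gamma).
pipe = Pipeline([("pca", PCA(n_components=10)),
                 ("svc", SVC(kernel="rbf"))])
grid = GridSearchCV(pipe, {"svc__C": [1, 10],
                           "svc__gamma": ["scale", 0.01]}, cv=3)
grid.fit(X_tr, y_tr)
accuracy = grid.score(X_te, y_te)
```

Swapping the grid search for a metaheuristic such as BAT only changes how the (C, gamma) pair is chosen; the PCA-then-RBF pipeline stays the same.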

  11. Performance evaluation of artificial intelligence classifiers for the medical domain.

    Science.gov (United States)

    Smith, A E; Nugent, C D; McClean, S I

    2002-01-01

    The application of artificial intelligence systems is still not widespread in the medical field; however, there is an increasing need for them to handle the surfeit of information available. One drawback to their implementation is the lack of criteria or guidelines for the evaluation of these systems. This is the primary issue in their acceptability to clinicians, who require them for decision support and therefore need evidence that these systems meet the special safety-critical requirements of the domain. This paper shows evidence that the most prevalent form of intelligent system, neural networks, is generally not being evaluated rigorously with regard to classification precision. A taxonomy of the types of evaluation tests that can be carried out to gauge the inherent performance of the outputs of intelligent systems has been assembled, and the results are presented in a clear and concise form, which should be applicable to all intelligent classifiers for medicine.

  12. Prediction of Pork Quality by Fuzzy Support Vector Machine Classifier

    Science.gov (United States)

    Zhang, Jianxi; Yu, Huaizhi; Wang, Jiamin

    Existing objective methods to evaluate pork quality generally do not yield satisfactory results, and their applications in the meat industry are limited. In this study, a fuzzy support vector machine (FSVM) method was developed to evaluate and predict pork quality rapidly and nondestructively. Firstly, the discrete wavelet transform (DWT) was used to eliminate the noise component in the original spectrum, and the spectrum was reconstructed. Then, because the characteristic variables remain correlated and contain some redundant information, principal component analysis (PCA) was carried out. Lastly, the FSVM was developed to differentiate and classify pork samples into different quality grades using the features from PCA. Jackknife tests on the working datasets indicated that the prediction accuracies were higher than those of other methods.
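The denoise-reduce-classify chain above can be sketched as follows. A hand-rolled one-level Haar transform stands in for the paper's DWT, the fuzzy SVM is approximated by a sample-weighted SVC (fuzzy memberships entering as per-sample weights, which is not the paper's exact formulation), and the spectra and memberships are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def haar_denoise(sig, thresh=0.5):
    """One-level Haar DWT, soft-threshold the detail band, reconstruct."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail (noise-carrying) band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(sig)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Synthetic "spectra" for two pork-quality grades (hypothetical data).
n, length = 120, 64
grade = rng.integers(0, 2, n)
base = np.where(grade[:, None] == 0, 1.0, 1.6)
spectra = (base * np.sin(np.linspace(0, 4 * np.pi, length))
           + rng.normal(0, 0.4, (n, length)))

denoised = np.array([haar_denoise(s) for s in spectra])
feats = PCA(n_components=5).fit_transform(denoised)

# FSVM approximated by a weighted SVC: fuzzy memberships act as per-sample
# weights so outliers (low membership) influence the margin less.
memberships = rng.uniform(0.5, 1.0, n)
clf = SVC(kernel="rbf").fit(feats, grade, sample_weight=memberships)
train_acc = clf.score(feats, grade)
```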

  13. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova

    2012-01-01

    Full Text Available The paper deals with the main trends in forming a higher school management strategy. The author specifies the educational changes in the modern information society that determine the strategy options. For each professional training level, the author denotes the set of strategic factors affecting the consumers of educational services and, therefore, the effectiveness of higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational services (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and general methods of scientific analysis, synthesis, induction, deduction, comparison and classification. The author is convinced that university management should develop the necessary prerequisites for raising graduates' competitiveness in the labor market and stimulate active marketing policies in the relevant subdivisions and departments. In the author's opinion, the above classification of marketing strategy factors can be used as a system of values for educational service providers.

  14. Performance Evaluation of Bagged RBF Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2013-11-01

    Full Text Available Data mining is the use of algorithms to extract information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes; it is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and benefits of the proposed approaches are demonstrated by means of data mining applications such as intrusion detection, direct marketing and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets for intrusion detection, direct marketing and signature verification, in conjunction with a radial basis function classifier as the base learner. The proposed bagged radial basis function is superior to the individual approach for data mining applications in terms of classification accuracy.
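Bagging an RBF base learner, as the abstract describes, can be sketched as below. The RBF-network base learner is approximated by an RBF-kernel SVM (which scikit-learn provides out of the box), and the data are a synthetic stand-in rather than the paper's intrusion-detection or marketing sets:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in data; not the paper's benchmark sets.
X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# Single RBF classifier vs. a bag of 10 trained on bootstrap resamples.
single = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
bagged = cross_val_score(
    BaggingClassifier(SVC(kernel="rbf"), n_estimators=10, random_state=1),
    X, y, cv=5).mean()
```

Whether bagging helps depends on the base learner's variance; on low-variance learners like a well-regularized SVM the gain can be small, which is why the paper evaluates it empirically per application.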

  15. Method of generating features optimal to a dataset and classifier

    Energy Technology Data Exchange (ETDEWEB)

    Bruillard, Paul J.; Gosink, Luke J.; Jarman, Kenneth D.

    2016-10-18

    A method of generating features optimal to a particular dataset and classifier is disclosed. A dataset of messages is inputted and a classifier is selected. An algebra of features is encoded. Computable features that are capable of describing the dataset from the algebra of features are selected. Irredundant features that are optimal for the classifier and the dataset are selected.

  16. Predict or classify: The deceptive role of time-locking in brain signal classification

    CERN Document Server

    Rusconi, Marco

    2016-01-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate...
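The paper's core point, that time-locking alone can produce above-chance classification from non-predictive signals, can be reproduced in a few lines. The random-walk model, window length, and offset below are arbitrary choices for illustration, not the authors' exact simulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def trial():
    """Random walk until it hits +1 or -1; the outcome is pure chance."""
    x, path = 0.0, [0.0]
    while abs(x) < 1.0:
        x += rng.normal(0, 0.1)
        path.append(x)
    return np.array(path), int(x > 0)

# Time-lock each trial to its final event and keep a window that ends well
# BEFORE the decision point (window and offset are arbitrary here).
window, offset = 20, 10
X, y = [], []
while len(X) < 300:
    path, label = trial()
    if len(path) >= window + offset:
        X.append(path[-(window + offset):-offset])
        y.append(label)
X, y = np.array(X), np.array(y)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
# acc exceeds chance even though each step is random: time-locking selects
# trajectory segments already drifting toward their eventual outcome.
```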

  17. Counting, Measuring And The Semantics Of Classifiers

    Directory of Open Access Journals (Sweden)

    Susan Rothstein

    2010-12-01

    Full Text Available This paper makes two central claims. The first is that there is an intimate and non-trivial relation between the mass/count distinction on the one hand and the measure/individuation distinction on the other: a (if not the) defining property of mass nouns is that they denote sets of entities which can be measured, while count nouns denote sets of entities which can be counted. Crucially, this is a difference in grammatical perspective and not in ontological status. The second claim is that the mass/count distinction between two types of nominals has its direct correlate at the level of classifier phrases: classifier phrases like two bottles of wine are ambiguous between a counting, or individuating, reading and a measure reading. On the counting reading, this phrase has count semantics; on the measure reading it has mass semantics. References: Borer, H. 1999. ‘Deconstructing the construct’. In K. Johnson & I. Roberts (eds.), ‘Beyond Principles and Parameters’, 43–89. Dordrecht: Kluwer. Borer, H. 2008. ‘Compounds: the view from Hebrew’. In R. Lieber & P. Stekauer (eds.), ‘The Oxford Handbook of Compounds’, 491–511. Oxford: Oxford University Press. Carlson, G. 1977b. Reference to Kinds in English. Ph.D. thesis, University of Massachusetts at Amherst. Carlson, G. 1997. Quantifiers and Selection. Ph.D. thesis, University of Leiden. Carlson, G. 1977a. ‘Amount relatives’. Language 53: 520–542. Chierchia, G. 2008. ‘Plurality of mass nouns and the notion of “semantic parameter”’. In S. Rothstein (ed.), ‘Events and Grammar’, 53–103. Dordrecht: Kluwer. Danon, G. 2008. ‘Definiteness spreading in the Hebrew construct state’. Lingua 118: 872–906. http://dx.doi.org/10.1016/j.lingua.2007.05.012 Gillon, B. 1992. ‘Toward a common semantics for English count and mass nouns’. Linguistics and Philosophy 15: 597–640. http://dx.doi.org/10.1007/BF00628112 Grosu, A. & Landman, F. 1998. ‘Strange relatives of the third kind

  18. Classifying environmentally significant urban land uses with satellite imagery.

    Science.gov (United States)

    Park, Mi-Hyun; Stenstrom, Michael K

    2008-01-01

    We investigated Bayesian networks to classify urban land use from satellite imagery. Landsat Enhanced Thematic Mapper Plus (ETM+) images were used for the classification in two study areas: (1) Marina del Rey and its vicinity in the Santa Monica Bay Watershed, CA and (2) drainage basins adjacent to the Sweetwater Reservoir in San Diego, CA. Bayesian networks provided 80-95% classification accuracy for urban land use using four different classification systems. The classifications were robust with small training data sets at normal and reduced radiometric resolution. The networks needed only 5% of the total data (i.e., 1500 pixels) as training samples and only 5- or 6-bit information for accurate classification. The network structure explicitly showed the relationships among variables, and the network was also capable of utilizing information from non-spectral data. The classification can be used to provide timely and inexpensive land use information over large areas for environmental purposes such as estimating stormwater pollutant loads. PMID:17291679
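The abstract's reduced-radiometric-resolution result can be illustrated with the simplest special case of a Bayesian network, a naive Bayes classifier, on invented spectral values; a full Bayesian network with learned structure is beyond this sketch:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

# Synthetic 4-band "pixels" for two land-use classes (hypothetical values);
# the Bayesian network is approximated by its simplest case, naive Bayes.
n = 2000
labels = rng.integers(0, 2, n)
means = np.where(labels[:, None] == 0, 80, 140)
pixels = rng.normal(means, 25, (n, 4))

# Mimic the paper's reduced radiometric resolution: quantize 8-bit values
# down to 5 bits (32 levels) before classification.
pixels_5bit = np.clip(pixels, 0, 255) // 8

X_tr, X_te, y_tr, y_te = train_test_split(pixels_5bit, labels, random_state=0)
acc = GaussianNB().fit(X_tr, y_tr).score(X_te, y_te)
```

If the class-conditional band distributions are well separated, coarse quantization costs little accuracy, which is the effect the paper reports.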

  19. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Vitomir Štruc

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature to combine Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  20. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Science.gov (United States)

    Štruc, Vitomir; Pavešić, Nikola

    2010-12-01

    This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  1. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature to combine Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  2. Classifying controllers by activities : An exploratory study

    NARCIS (Netherlands)

    Verstegen, B.; De Loo, I.G.M.; Mol, P.; Slagter, K.; Geerkens, H.

    2007-01-01

    The goal of this paper is to discern variables (triggers) that affect a controller’s role in an organisation. Using survey data, groups of controllers are distinguished based on coherent combinations of activities. We find that controllers either operate as so-called ‘information adapters’ or ‘watch

  3. Adaboost Ensemble Classifiers for Corporate Default Prediction

    OpenAIRE

    Suresh Ramakrishnan; Maryam Mirzaei; Mahmoud Bekri

    2015-01-01

    This study aims to present an alternative technique for corporate default prediction. Data mining techniques have been extensively applied to this task due to their ability to detect non-linear relationships and their good performance in the presence of noisy information, as usually happens in corporate default prediction problems. In spite of the several advanced methods that have been widely proposed, this area of research is not outdated and still needs further examination. In this study, the pe...

  4. Rule Based Ensembles Using Pair Wise Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    Moslem Mohammadi Jenghara

    2015-03-01

    Full Text Available In value estimation, the average of inexperienced people's estimates is a good approximation to the true value, provided that the answers of these individuals are independent. A classifier ensemble is the implementation of this principle in classification tasks, and it is investigated in two ways. In the first, the feature space is divided into several local regions and each region is assigned a highly competent classifier; in the second, the base classifiers are applied in parallel with roughly equal experience, and a group consensus is reached. In this paper a combination of the two methods is used. An important consideration in classifier combination is that much better results can be achieved if diverse classifiers, rather than similar classifiers, are combined. To achieve diversity in the classifiers' outputs, a symmetric pairwise weighted feature space is used, and the outputs of classifiers trained over the weighted feature space are combined to infer the final result. In this paper MLP classifiers are used as the base classifiers. The experimental results show that the applied method is promising.

  5. 78 FR 5116 - NASA Information Security Protection

    Science.gov (United States)

    2013-01-24

    ... SPACE ADMINISTRATION 14 CFR Part 1203 RIN 2700-AD61 NASA Information Security Protection AGENCY..., Classified National Security Information, and appropriately to correspond with NASA's internal requirements, NPR 1600.2, Classified National Security Information, that establishes the Agency's requirements...

  6. Intelligent query by humming system based on score level fusion of multiple classifiers

    Science.gov (United States)

    Pyo Nam, Gi; Thu Trang Luong, Thi; Ha Nam, Hyun; Ryoung Park, Kang; Park, Sung-Joo

    2011-12-01

    Recently, the necessity for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on the score level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of single classifier and other fusion methods.
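The PRODUCT rule for score-level fusion mentioned above is simple to state in code: normalize each classifier's candidate scores and multiply them elementwise, then rank. The candidate songs and score values below are hypothetical, and the three score lists merely stand in for the paper's QB-code LS, pitch DTW, and LS classifiers:

```python
import numpy as np

def product_fusion(score_lists):
    """Score-level fusion by the PRODUCT rule: multiply each classifier's
    normalized candidate scores elementwise and rank the result."""
    scores = np.ones_like(score_lists[0], dtype=float)
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        scores *= s / s.sum()   # normalize so classifiers are comparable
    return scores

# Hypothetical scores for 4 candidate songs from three local classifiers.
qb_ls = [0.9, 0.4, 0.3, 0.2]
dtw   = [0.7, 0.6, 0.2, 0.1]
plain = [0.8, 0.3, 0.5, 0.1]

fused = product_fusion([qb_ls, dtw, plain])
best = int(np.argmax(fused))
```

The PRODUCT rule is conservative: a candidate scored near zero by any one classifier is effectively vetoed, which rewards agreement across the local and global classifiers.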

  7. Intelligent query by humming system based on score level fusion of multiple classifiers

    Directory of Open Access Journals (Sweden)

    Park Sung-Joo

    2011-01-01

    Full Text Available Recently, the necessity for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on score level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of a single classifier and other fusion methods.

  8. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers

    Directory of Open Access Journals (Sweden)

    Laurel Ballanti

    2016-05-01

    Full Text Available The identification of tree species can provide a useful and efficient tool for forest managers for planning and monitoring purposes. Hyperspectral data provide sufficient spectral information to classify individual tree species. Two non-parametric classifiers, support vector machines (SVM) and random forest (RF), have resulted in high accuracies in previous classification studies. This research takes a comparative classification approach to examine the SVM and RF classifiers in the complex and heterogeneous forests of Muir Woods National Monument and Kent Creek Canyon in Marin County, California. The influence of object- or pixel-based training samples and of segmentation size on the object-oriented classification is also explored. To reduce the data dimensionality, a minimum noise fraction transform was applied to the mosaicked hyperspectral image, resulting in the selection of 27 bands for the final classification. Each classifier was also assessed individually to identify any advantage related to an increase in training sample size or object segmentation size. All classifications resulted in overall accuracies above 90%. No difference was found between the classifiers when using object-based training samples. SVM outperformed RF when additional training samples were used. An increase in training samples was also found to improve the individual performance of the SVM classifier.
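The head-to-head comparison the study performs can be sketched on synthetic data: 27 features mirror the paper's 27 MNF bands, and four hypothetical species classes stand in for the Muir Woods data, which this sketch does not use:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 27-band "pixels" for four hypothetical tree-species classes.
X, y = make_classification(n_samples=600, n_features=27, n_informative=12,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=3, stratify=y)

# Same train/test split for both non-parametric classifiers.
svm_acc = SVC(kernel="rbf").fit(X_tr, y_tr).score(X_te, y_te)
rf_acc = RandomForestClassifier(n_estimators=200,
                                random_state=3).fit(X_tr, y_tr).score(X_te, y_te)
```

Re-running with different training-set sizes (the `test_size` parameter) reproduces the kind of sample-size sensitivity analysis the paper reports for SVM.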

  9. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images of urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned using the hierarchical learning structure, and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracy, especially for urban areas.

  10. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    Science.gov (United States)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems, together with source modeling information, to forecast the likelihood that an eruption with a volcanic explosivity index (VEI) > 1 will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates that the algorithm has good predictive capability. The ROC curves also allowed the determination of a false positive rate and optimum detection point for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcomes of the events.
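The bootstrap-then-ROC workflow described above can be sketched as follows. The three "monitoring features" and the eruption labels are entirely synthetic, so the AUC here only illustrates the mechanics, not the paper's 0.78-0.81 result:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical monitoring features (e.g. seismicity rate, deformation) and
# a binary "eruption within window" label; synthetic, not USGS data.
n = 400
X = rng.normal(0, 1, (n, 3))
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Bootstrap the training set, as the abstract describes, to get robust
# coefficient estimates; here we simply average the resampled models.
coefs = []
for _ in range(50):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    m = LogisticRegression().fit(X_tr[idx], y_tr[idx])
    coefs.append(m.coef_[0])
coef_mean = np.mean(coefs, axis=0)

# ROC area of the averaged model on held-out data (rank-based, so the
# omitted intercept does not affect it).
auc = roc_auc_score(y_te, X_te @ coef_mean)
```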

  11. Proposing an adaptive mutation to improve XCSF performance to classify ADHD and BMD patients

    Science.gov (United States)

    Sadatnezhad, Khadijeh; Boostani, Reza; Ghanizadeh, Ahmad

    2010-12-01

    There is extensive overlap of clinical symptoms observed among children with bipolar mood disorder (BMD) and those with attention deficit hyperactivity disorder (ADHD), so diagnosis based on clinical symptoms alone cannot be very accurate. It is therefore desirable to develop quantitative criteria for automatic discrimination between these disorders. This study aims to design an efficient decision maker that accurately classifies ADHD and BMD patients by analyzing their electroencephalogram (EEG) signals. In this study, 22 channels of EEG were recorded from 21 subjects with ADHD and 22 individuals with BMD. Several informative features, such as fractal dimension, band power and autoregressive coefficients, were extracted from the recorded signals. Considering the multimodal overlapping distribution of the obtained features, linear discriminant analysis (LDA) was used to reduce the input dimensionality, projecting the features into a more separable space better suited to the proposed classifier. A piecewise linear classifier based on the extended classifier system for function approximation (XCSF) was modified by developing an adaptive mutation rate, proportional to the genotypic content of the best individuals and their fitness in each generation. The proposed operator controls the trade-off between exploration and exploitation while maintaining the diversity of the classifier's population to avoid premature convergence. To assess the effectiveness of the proposed scheme, the extracted features were applied to support vector machine, LDA, nearest neighbor and XCSF classifiers. To evaluate robustness, a noisy environment was simulated with different noise amplitudes; the results of the proposed technique proved more robust than those of conventional classifiers. Statistical tests demonstrate that the proposed classifier is a promising method for discriminating between ADHD and BMD patients.
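The adaptive-mutation idea can be sketched as follows: the mutation rate of a rule is scaled by how far its fitness falls below the current best, so good rules are perturbed gently (exploitation) and poor rules strongly (exploration). The linear scaling below is an illustrative assumption, not the exact XCSF operator from the paper.

```python
def adaptive_mutation_rate(fitness, best_fitness, base_rate=0.05, max_rate=0.5):
    """Interpolate between base_rate (best individual) and max_rate (worst)."""
    if best_fitness <= 0:
        return max_rate
    gap = max(0.0, 1.0 - fitness / best_fitness)  # 0 for the best individual
    return base_rate + (max_rate - base_rate) * gap

best = 0.9
print(adaptive_mutation_rate(0.9, best))   # best rule -> base rate 0.05
print(adaptive_mutation_rate(0.45, best))  # half the best fitness -> 0.275
```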

  12. To fuse or not to fuse: Fuser versus best classifier

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.

    1998-04-01

    A sample from a class defined on a finite-dimensional Euclidean space and distributed according to an unknown distribution is given. The authors are given a set of classifiers, each of which chooses a hypothesis with least misclassification error from a family of hypotheses. They address the question of choosing the classifier with the best performance guarantee versus combining the classifiers using a fuser. They first describe a fusion method based on an isolation property such that the performance guarantee of the fused system is at least as good as that of the best classifier. For the more restricted case of deterministic classes, they present a method based on error set estimation such that the performance guarantee of fusing all classifiers is at least as good as that of fusing any subset of classifiers.
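The claim that a suitable fuser can match or beat the best individual classifier has a simple toy illustration: three classifiers that err on different samples, fused by majority vote (the data and error patterns are invented; this vote is not the paper's isolation-property fuser, just the standard motivating example).

```python
def majority_vote(predictions):
    """Fuse per-sample predictions (a list of equal-length label lists)."""
    fused = []
    for sample in zip(*predictions):
        fused.append(max(set(sample), key=sample.count))
    return fused

truth = [0, 1, 0, 1, 1, 0]
c1 = [0, 1, 0, 1, 0, 0]   # one error
c2 = [0, 1, 1, 1, 1, 0]   # one error, on a different sample
c3 = [1, 1, 0, 1, 1, 0]   # one error, on yet another sample

def error(pred):
    return sum(p != t for p, t in zip(pred, truth)) / len(truth)

fused = majority_vote([c1, c2, c3])
print(error(c1), error(fused))  # fused error 0.0, below the best single (1/6)
```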

  13. Taxonomy grounded aggregation of classifiers with different label sets

    OpenAIRE

    SAHA, AMRITA; Indurthi, Sathish; Godbole, Shantanu; Rongali, Subendhu; Raykar, Vikas C.

    2015-01-01

    We describe the problem of aggregating the label predictions of diverse classifiers using a class taxonomy. Such a taxonomy may not have been available or referenced when the individual classifiers were designed and trained, yet mapping the output labels into the taxonomy is desirable to integrate the effort spent in training the constituent classifiers. A hierarchical taxonomy representing some domain knowledge may be different from, but partially mappable to, the label sets of the individua...

  14. Customer-Classified Algorithm Based on Fuzzy Clustering Analysis

    Institute of Scientific and Technical Information of China (English)

    郭蕴华; 祖巧红; 陈定方

    2004-01-01

    A customer-classified evaluation system is described, with a customization-supporting tree of evaluation indexes in which users can determine any evaluation index independently. Based on this system, a customer-classified algorithm based on fuzzy clustering analysis is proposed to implement customer-classified management. A numerical example is presented, which provides correct results, indicating that the algorithm can be used in the decision support system of CRM.
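The fuzzy-clustering step behind such a customer classification can be sketched with the standard fuzzy c-means membership update, which lets one customer belong partially to several segments. One-dimensional toy data; the evaluation-index tree and index weighting from the paper are omitted, and the function name is illustrative.

```python
def fcm_memberships(points, centers, m=2.0):
    """Membership of each point in each cluster (standard FCM update rule)."""
    memberships = []
    for x in points:
        dists = [abs(x - c) + 1e-9 for c in centers]  # epsilon avoids /0
        row = []
        for d in dists:
            row.append(1.0 / sum((d / d2) ** (2.0 / (m - 1)) for d2 in dists))
        memberships.append(row)
    return memberships

# A customer exactly at a segment center gets full membership; one midway
# between two centers gets a 50/50 split.
u = fcm_memberships([1.0, 5.0, 3.0], centers=[1.0, 5.0])
print([[round(v, 2) for v in row] for row in u])
```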

  15. The analysis of cross-classified categorical data

    CERN Document Server

    Fienberg, Stephen E

    2007-01-01

    A variety of biological and social science data come in the form of cross-classified tables of counts, commonly referred to as contingency tables. Until recent years the statistical and computational techniques available for the analysis of cross-classified data were quite limited. This book presents some of the recent work on the statistical analysis of cross-classified data using log-linear models, especially in the multidimensional situation.

  16. Information barriers and authentication

    International Nuclear Information System (INIS)

    Acceptance of nuclear materials into a monitoring regime is complicated if the materials are in classified shapes or have classified composition. An attribute measurement system with an information barrier can be employed to generate an unclassified display from classified measurements. This information barrier must meet two criteria: (1) classified information cannot be released to the monitoring party, and (2) the monitoring party must be convinced that the unclassified output accurately represents the classified input. Criterion 1 is critical to the host country to protect the classified information. Criterion 2 is critical to the monitoring party and is often termed the 'authentication problem.' Thus, the necessity for authentication of a measurement system with an information barrier stems directly from the description of a useful information barrier. Authentication issues must be continually addressed during the entire development lifecycle of the measurement system as opposed to being applied only after the system is built.

  17. Learning a Flexible K-Dependence Bayesian Classifier from the Chain Rule of Joint Probability Distribution

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    As one of the most common types of graphical models, the Bayesian classifier has become an extremely popular approach to dealing with uncertainty and complexity. The scoring functions proposed and widely used for Bayesian networks are not appropriate for a Bayesian classifier, in which the class variable C is treated as a distinguished one. In this paper, we aim to clarify the working mechanism of Bayesian classifiers from the perspective of the chain rule of the joint probability distribution. By establishing the mapping relationship between conditional probability distribution and mutual information, a new scoring function, Sum_MI, is derived and applied to evaluate the rationality of Bayesian classifiers. To achieve global optimization and high dependence representation, the proposed learning algorithm, the flexible K-dependence Bayesian (FKDB) classifier, applies greedy search to extract more information from the K-dependence network structure. Meanwhile, during the learning procedure, the optimal attribute order is determined dynamically rather than rigidly. In the experimental study, functional dependency analysis is used to improve model interpretability when the structure complexity is restricted.
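A scoring function like Sum_MI is built from mutual information between attributes and the class. A minimal sketch of estimating I(X; C) from co-occurrence counts follows; the aggregation over a K-dependence structure is simplified away, and the data are invented.

```python
from math import log
from collections import Counter

def mutual_information(xs, cs):
    """I(X; C) in nats, estimated from paired samples."""
    n = len(xs)
    px, pc = Counter(xs), Counter(cs)
    pxc = Counter(zip(xs, cs))
    return sum(
        (cnt / n) * log((cnt / n) / ((px[x] / n) * (pc[c] / n)))
        for (x, c), cnt in pxc.items()
    )

# X perfectly predicts C, so I(X;C) = H(C) = log 2 for a balanced binary class
xs = [0, 0, 1, 1]
cs = [0, 0, 1, 1]
print(mutual_information(xs, cs))  # log(2), about 0.693
```

In a K-dependence classifier, such estimates (and their conditional variants) rank candidate parent attributes for each node of the network.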

  18. Construction of unsupervised sentiment classifier on idioms resources

    Institute of Scientific and Technical Information of China (English)

    谢松县; 王挺

    2014-01-01

    Sentiment analysis is the computational study of how opinions, attitudes, emotions, and perspectives are expressed in language, and is an important task of natural language processing, highly valuable for both research and practical applications. This work focuses on the difficulty that constructing sentiment classifiers normally requires tremendous amounts of labeled in-domain training data, and proposes a novel unsupervised framework that makes use of Chinese idiom resources to develop a general sentiment classifier. Furthermore, the domain adaptation of the general sentiment classifier is improved by taking the general classifier as the base of a self-training procedure, yielding a domain self-trained sentiment classifier. To validate the framework, several experiments were carried out on a publicly available dataset of Chinese online reviews. The experiments show that the proposed framework is effective and achieves encouraging results. Specifically, the general classifier outperforms two baselines (a naïve 50% baseline and a cross-domain classifier), and the bootstrapping self-training classifier approximates the upper-bound domain-specific classifier, with a lowest accuracy of 81.5% but more stable performance, while the framework needs no labeled training dataset.

  19. Facial expression recognition with facial parts based sparse representation classifier

    Science.gov (United States)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication, and understanding them is a basic requirement for the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always hide in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier, which exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear-combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
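The decision rule of a sparse representation classifier (SRC) can be sketched independently of the l1 solver: a test sample is coded over a dictionary of training samples and assigned to the class whose atoms best reconstruct it (smallest residual). Below, the sparse coefficients are assumed to be given, to keep the sketch dependency-free; the 2-D "features" and class names are invented.

```python
def class_residual(y, atoms, coeffs):
    """||y - sum_i coeffs[i] * atoms[i]||, restricted to one class's atoms."""
    recon = [sum(c * a[k] for c, a in zip(coeffs, atoms)) for k in range(len(y))]
    return sum((yi - ri) ** 2 for yi, ri in zip(y, recon)) ** 0.5

y = [1.0, 0.0]                          # test sample (toy 2-D feature)
dictionary = {
    "happy": ([[1.0, 0.0]], [1.0]),     # (training atoms, sparse coefficients)
    "sad":   ([[0.0, 1.0]], [0.3]),
}
best = min(dictionary, key=lambda c: class_residual(y, *dictionary[c]))
print(best)  # "happy": its atoms reconstruct y exactly
```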

  20. Unsupervised Supervised Learning II: Training Margin Based Classifiers without Labels

    CERN Document Server

    Donmez, Pinar; Lebanon, Guy

    2010-01-01

    Many popular linear classifiers, such as logistic regression, boosting, or SVM, are trained by optimizing a margin-based risk function. Traditionally, these risk functions are computed based on a labeled dataset. We develop a novel technique for estimating such risks using only unlabeled data and p(y). We prove that the technique is consistent for high-dimensional linear classifiers and demonstrate it on synthetic and real-world data. In particular, we show how the estimate is used for evaluating classifiers in transfer learning, and for training classifiers with no labeled data whatsoever.

  1. Using Classifiers to Identify Binge Drinkers Based on Drinking Motives.

    Science.gov (United States)

    Crutzen, Rik; Giabbanelli, Philippe

    2013-08-21

    A representative sample of 2,844 Dutch adult drinkers completed a questionnaire on drinking motives and drinking behavior in January 2011. The results were classified using regressions, decision trees, and support vector machines (SVMs). Using SVMs, the mean absolute error was minimal, while performance in identifying binge drinkers was high. Moreover, comparing the structure of the classifiers revealed differences in which drinking motives contribute to their performance. Thus, classifiers are worth using in research on (addictive) behaviors, because they contribute to explaining behavior and can offer insights different from those of more traditional data-analytic approaches. PMID:23964957

  2. Mesh Learning for Classifying Cognitive Processes

    CERN Document Server

    Ozay, Mete; Öztekin, Uygar; Vural, Fatos T Yarman

    2012-01-01

    The major goal of this study is to model the encoding and retrieval operations of the brain during memory processing, using statistical learning tools. The suggested method assumes that the memory encoding and retrieval processes can be represented by a supervised learning system, which is trained on brain data collected from functional Magnetic Resonance Imaging (fMRI) measurements during the encoding stage. The system then outputs the same class labels as those of the fMRI data collected during the retrieval stage. The most challenging problem in modeling such a learning system is designing the interactions among the voxels to extract information about the underlying patterns of brain activity. In this study, we suggest a new method called Mesh Learning, which represents each voxel by a mesh of voxels in a neighborhood system. The nodes of the mesh are a set of neighboring voxels, whereas the arc weights are estimated by a linear regression model. The estimated arc weights are used to form Local Re...

  3. Facial Expression Recognition Using SVM Classifier

    Directory of Open Access Journals (Sweden)

    Vasanth P.C.

    2015-03-01

    Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision, required by many applications such as human-computer interaction, computer graphics animation and automatic facial expression recognition. In recent years, many computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (i.e., mouth, eyebrow, etc.), captures detailed face shape information. Second, facial action recognition, i.e., recognizing the facial action units (AUs) defined in FACS, tries to recognize meaningful facial activities (e.g., lid tightener, eyebrow raiser). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotional states. In the proposed algorithm, the eyes and mouth are first detected; features of the eyes and mouth are extracted using Gabor filters and Local Binary Patterns (LBP), and PCA is used to reduce the dimensionality of the features. Finally, an SVM is used to classify expressions and facial action units.
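The LBP feature mentioned in the pipeline can be sketched minimally: each pixel is encoded by comparing its 8 neighbours to the centre value. A toy 3x3 patch is used; Gabor filtering, PCA and the SVM are omitted, and the bit ordering is one common convention among several.

```python
def lbp_code(patch):
    """8-bit LBP code of the centre pixel of a 3x3 patch (clockwise from top-left)."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:           # neighbour at least as bright as the centre -> 1
            code |= 1 << bit
    return code

patch = [[9, 9, 1],
         [1, 5, 1],
         [9, 9, 9]]
print(lbp_code(patch))  # 115: bits 0, 1, 4, 5, 6 are set
```

A full LBP feature vector is the histogram of such codes over an image region (e.g., the mouth area).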

  4. Multimodal fusion of polynomial classifiers for automatic person recognition

    Science.gov (United States)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current-generation speaker verification systems. The first is the difficulty of acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late

  5. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  6. 40 CFR 152.175 - Pesticides classified for restricted use.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Pesticides classified for restricted...) PESTICIDE PROGRAMS PESTICIDE REGISTRATION AND CLASSIFICATION PROCEDURES Classification of Pesticides § 152.175 Pesticides classified for restricted use. The following uses of pesticide products containing...

  7. Quantum classifying spaces and universal quantum characteristic classes

    CERN Document Server

    Durdevic, M

    1996-01-01

    A construction of the noncommutative-geometric counterparts of classical classifying spaces is presented, for general compact matrix quantum structure groups. A quantum analogue of the classical concept of the classifying map is introduced and analyzed. Interrelations with the abstract algebraic theory of quantum characteristic classes are discussed. Various non-equivalent approaches to defining universal characteristic classes are outlined.

  8. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and nonlinear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. Diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve on classifiers based on a single dissimilarity.

  9. Algorithm for classifying multiple targets using acoustic signatures

    Science.gov (United States)

    Damarla, Thyagaraju; Pham, Tien; Lake, Douglas

    2004-08-01

    In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
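The Multi-Variate Gaussian (MVG) classification step can be sketched as follows: each vehicle class is modelled by a Gaussian fitted to training features, and a beamformed test feature receives the class with the highest log-likelihood. Diagonal covariance is assumed for simplicity, and the harmonic amplitudes and class names are invented.

```python
from math import log, pi
from statistics import mean, pvariance

def fit_gaussian(samples):
    """Per-dimension mean and (regularized) variance of a list of vectors."""
    dims = list(zip(*samples))
    return ([mean(d) for d in dims], [pvariance(d) + 1e-6 for d in dims])

def log_likelihood(x, model):
    """Diagonal-covariance Gaussian log-density of feature vector x."""
    mu, var = model
    return sum(-0.5 * (log(2 * pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

models = {
    "truck": fit_gaussian([[1.0, 0.5], [1.1, 0.4], [0.9, 0.6]]),
    "tank":  fit_gaussian([[0.2, 1.5], [0.3, 1.4], [0.1, 1.6]]),
}
test_feature = [1.0, 0.5]   # relative harmonic amplitudes of one target
winner = max(models, key=lambda c: log_likelihood(test_feature, models[c]))
print(winner)  # "truck"
```

In the paper's pipeline this step runs once per beamformed output, i.e., once per spatially isolated target in the convoy.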

  10. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and nonlinear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. Diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve on classifiers based on a single dissimilarity.

  11. Fuzzy-Genetic Classifier algorithm for bank customers

    Directory of Open Access Journals (Sweden)

    Rashed Mokhtar Elawady

    2011-09-01

    Modern financial banks operate in a complex and dynamic environment that brings them high uncertainty and risk, so the ability to intelligently collect, manage and analyze information about customers is a key source of competitive advantage for an e-business. But the database of any bank is too large, complex and incomprehensible for directly determining whether a customer is risky or likely to default. This paper presents a new algorithm for extracting accurate and comprehensible rules from a database via a fuzzy-genetic classifier that combines two methodologies, fuzzy systems and genetic algorithms, in one algorithm. The proposed evolved system exhibits two important characteristics. First, each rule is obtained through an efficient genetic rule-extraction method that adapts the parameters of the fuzzy sets in the premise space and determines the required features of the rule, which further improves the interpretability of the obtained model. Second, the obtained rule base is evolved through a genetic algorithm. The cooperating system increases classification performance and reaches the maximum classification ratio in earlier generations.

  12. Construction of High-accuracy Ensemble of Classifiers

    Directory of Open Access Journals (Sweden)

    Hedieh Sajedi

    2014-04-01

    Several methods have been developed to construct ensembles. Some of them, such as Bagging and Boosting, are meta-learners, i.e., they can be applied to any base classifier. The combination of methods should be selected so that the classifiers cover each other's weaknesses. In an ensemble, the output of several classifiers is useful only when they disagree on some inputs; the degree of disagreement is called the diversity of the ensemble. Another factor that plays a significant role in the performance of an ensemble is the accuracy of the base classifiers. It can be said that all procedures for constructing ensembles seek a balance between these two parameters, and successful methods reach a better balance. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. In this paper, we present a new approach for generating ensembles. The proposed approach uses Bagging and Boosting as generators of base classifiers. Subsequently, the classifiers are partitioned by means of a clustering algorithm. We introduce a selection phase for constructing the final ensemble and propose three selection methods for this phase: the first selects a classifier randomly from each cluster, the second selects the most accurate classifier from each cluster, and the third selects the classifier nearest to the center of each cluster. The results of experiments on well-known datasets demonstrate the strength of the proposed approach, especially when selecting the most accurate classifier from each cluster and employing the Bagging generator.
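The "partition, then select" step can be sketched as follows: classifiers are grouped by how similarly they predict (a crude stand-in for the paper's clustering step), and the most accurate member of each group joins the final ensemble, which preserves both diversity (one pick per group) and accuracy. The predictions, names and agreement threshold are invented.

```python
def disagreement(a, b):
    """Fraction of samples on which two prediction vectors differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def select_ensemble(classifiers, truth, threshold=0.3):
    """classifiers: dict name -> prediction list.  Returns selected names."""
    groups = []
    for name, preds in classifiers.items():
        for g in groups:                       # join the first similar group
            if disagreement(preds, classifiers[g[0]]) <= threshold:
                g.append(name)
                break
        else:                                  # no similar group: start one
            groups.append([name])
    def acc(n):
        return sum(p == t for p, t in zip(classifiers[n], truth)) / len(truth)
    return [max(g, key=acc) for g in groups]   # best member of each group

truth = [0, 1, 0, 1, 1]
clfs = {
    "bag1":   [0, 1, 0, 1, 0],   # similar pair of bagged classifiers
    "bag2":   [0, 1, 0, 1, 1],
    "boost1": [1, 0, 0, 1, 1],   # different behaviour -> its own group
}
print(select_ensemble(clfs, truth))  # ['bag2', 'boost1']
```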

  13. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    Science.gov (United States)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  14. Web Page Classification using an ensemble of support vector machine classifiers

    Directory of Open Access Journals (Sweden)

    Shaobo Zhong

    2011-11-01

    Web Page Classification (WPC) is both an important and a challenging topic in data mining. WPC can help users obtain usable information from the huge Internet data automatically and efficiently. Many efforts have been devoted to WPC, yet there is still room for improvement over current approaches. One particular challenge in training classifiers comes from the fact that the available dataset is usually unbalanced: standard machine learning algorithms tend to be overwhelmed by the major class and ignore the minor one, leading to a high false-negative rate. In this paper, a novel approach to Web page classification is proposed to address this problem using an ensemble of support vector machine classifiers. Principal Component Analysis (PCA) is used for feature reduction and Independent Component Analysis (ICA) for feature selection. The experimental results indicate that the proposed approach outperforms other classifiers widely used in WPC.

  15. Classifying transcription factor targets and discovering relevant biological features

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2008-05-01

    Abstract Background An important goal in post-genomic research is discovering the network of interactions between transcription factors (TFs) and the genes they regulate. We have previously reported the development of a supervised-learning approach to TF target identification, and used it to predict targets of 104 transcription factors in yeast. We now include a new sequence conservation measure, expand our predictions to include 59 new TFs, introduce a web server, and implement an improved ranking method to reveal the biological features contributing to regulation. The classifiers combine 8 genomic datasets covering a broad range of measurements, including sequence conservation, sequence overrepresentation, gene expression, and DNA structural properties. Principal Findings (1) Application of the method yields an amplification of information about yeast regulators. The ratio of total targets to previously known targets is greater than 2 for 11 TFs, with several having larger gains: Ash1 (4), Ino2 (2.6), Yaf1 (2.4), and Yap6 (2.4). (2) Many predicted targets for TFs match well with the known biology of their regulators. As a case study we discuss the regulator Swi6, presenting evidence that it may be important in the DNA damage response, and that the previously uncharacterized gene YMR279C plays a role in the DNA damage response and perhaps in cell-cycle progression. (3) A procedure based on recursive feature elimination is able to uncover, from the large initial data sets, those features that best distinguish targets for any TF, providing clues relevant to its biology. An analysis of Swi6 suggests a possible role in lipid metabolism, and more specifically in metabolism of ceramide, a bioactive lipid currently being investigated for anti-cancer properties. (4) An analysis of global network properties highlights the transcriptional network hubs; the factors which control the most genes and the genes which are bound by the largest set of regulators. Cell-cycle and

  16. Intelligent and Effective Heart Disease Prediction System using Weighted Associative Classifiers

    Directory of Open Access Journals (Sweden)

    Jyoti Soni

    2011-06-01

    The healthcare environment is still 'information rich' but 'knowledge poor'. There is a wealth of data available within health care systems, but a lack of effective analysis tools to discover hidden relationships in the data. The aim of this work is to design a GUI-based interface to enter patient records and predict whether a patient has heart disease, using a weighted association rule based classifier. The prediction is performed by mining the patient's historical data or a data repository. In a Weighted Associative Classifier (WAC), different weights are assigned to different attributes according to their predictive capability. It has already been shown that associative classifiers perform better than traditional classifier approaches such as decision trees and rule induction. Furthermore, experimental results show that WAC provides improved accuracy compared with other existing associative classifiers. Hence the system uses WAC as its data mining technique to generate the rule base. The system has been implemented on the Java platform and trained using benchmark data from the UCI machine learning repository, and it is expandable to new datasets.

  17. On the generalizability of resting-state fMRI machine learning classifiers.

    Science.gov (United States)

    Huf, Wolfgang; Kalcher, Klaudius; Boubela, Roland N; Rath, Georg; Vecsei, Andreas; Filzmoser, Peter; Moser, Ewald

    2014-01-01

    Machine learning classifiers have become increasingly popular tools to generate single-subject inferences from fMRI data. With this transition from the traditional group level difference investigations to single-subject inference, the application of machine learning methods can be seen as a considerable step forward. Existing studies, however, have given scarce or no information on the generalizability to other subject samples, limiting the use of such published classifiers in other research projects. We conducted a simulation study using publicly available resting-state fMRI data from the 1000 Functional Connectomes and COBRE projects to examine the generalizability of classifiers based on regional homogeneity of resting-state time series. While classification accuracies of up to 0.8 (using sex as the target variable) could be achieved on test datasets drawn from the same study as the training dataset, the generalizability of classifiers to different study samples proved to be limited albeit above chance. This shows that on the one hand a certain amount of generalizability can robustly be expected, but on the other hand this generalizability should not be overestimated. Indeed, this study substantiates the need to include data from several sites in a study investigating machine learning classifiers with the aim of generalizability.

  18. Classification of bee pollen grains using hyperspectral microscopy imaging and Fisher linear classifier

    Science.gov (United States)

    Su, Kang; Zhu, Siqi; Wei, Lin; Li, Zhen; Yin, Hao; Ye, Pingping; Li, Anming; Chen, Zhenqiang; Li, Migao

    2016-05-01

    The rapid and accurate classification of bee pollen grains is still a challenge. The purpose of this paper is to develop a method which could directly classify bee pollen grains based on fluorescence spectra. Bee pollen grain samples of six species were excited by a 409-nm laser diode source, and their fluorescence images were acquired by a hyperspectral microscopy imaging (HMI) system. One hundred pixels in the region of interest were randomly selected from each single bee pollen species. The fluorescence spectral information in all the selected pixels was stored in an n-dimensional hyperspectral data set, where n=37 for a total of 37 hyperspectral bands (465 to 645 nm). The hyperspectral data set was classified using a Fisher linear classifier. The performance of the Fisher linear classifier was measured by the leave-one-out cross-validation method, which yielded an overall accuracy of 89.2%. Finally, additional blinded samples were used to evaluate the established classification model, which demonstrated that bee pollen mixtures could be classified efficiently with the HMI system.
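The core of such a pipeline, a two-class Fisher linear discriminant scored by leave-one-out cross-validation, can be sketched as below. This is a minimal numpy reconstruction on synthetic data; the dimensions and samples are placeholders, not the 37-band pollen spectra:

```python
import numpy as np

def fisher_fit(X0, X1):
    """Two-class Fisher discriminant: w maximizes between-class over
    within-class scatter; the threshold sits midway between projected means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    c = w @ (m0 + m1) / 2.0
    return w, c

def fisher_predict(w, c, X):
    return (X @ w > c).astype(int)

def loo_accuracy(X, y):
    """Leave-one-out cross-validation, the scoring scheme used in the paper."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xt, yt = X[mask], y[mask]
        w, c = fisher_fit(Xt[yt == 0], Xt[yt == 1])
        hits += fisher_predict(w, c, X[i:i + 1])[0] == y[i]
    return hits / len(X)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
print(loo_accuracy(X, y))
```

For the six-species pollen problem the same construction generalizes to multiple pairwise or multi-class discriminants over the 37 spectral bands.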

  19. PERFORMANCE EVALUATION OF VARIOUS STATISTICAL CLASSIFIERS IN DETECTING THE DISEASED CITRUS LEAVES

    Directory of Open Access Journals (Sweden)

    SUDHEER REDDY BANDI

    2013-02-01

    Full Text Available Citrus fruits are in high demand because they are consumed daily. This research aims to improve citrus production, which suffers from low yields and from difficulties in measurement. Citrus plants face a number of diseases, and insect damage is among the most serious. Insecticides are not always effective, and they may be toxic to some species of birds. Farmers have great difficulty detecting diseases with the naked eye, and doing so is also quite expensive. Machine vision and image processing techniques help in detecting disease marks on citrus leaves. In this work, citrus leaves of four classes, Normal, Greasy spot, Melanose and Scab, were collected and analyzed using texture analysis based on the Color Co-occurrence Method (CCM) to extract Hue, Saturation and Intensity (HSI) features. In the classification stage, the features for all leaf conditions were classified using the k-Nearest Neighbor (kNN), Naive Bayes classifier (NBC), Linear Discriminant Analysis (LDA) and Random Forest Tree (RFT) classifiers. The experimental results show that the proposed approach achieves 98.75% accuracy in the automated detection of normal and affected leaves using CCM-based texture analysis with LDA. Finally, all the classifiers were compared using Receiver Operating Characteristic curves to analyze their performance.
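As one concrete example of the compared classifiers, a minimal k-nearest-neighbour vote on synthetic stand-ins for HSI texture features might look like this (the data is illustrative, not the citrus dataset):

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k=3):
    """Minimal k-nearest-neighbour majority vote."""
    idx = np.argsort(((Xtr - x) ** 2).sum(axis=1))[:k]  # k closest training rows
    votes, counts = np.unique(ytr[idx], return_counts=True)
    return votes[np.argmax(counts)]

rng = np.random.default_rng(7)
# Stand-ins for 3-dimensional texture features of two leaf conditions.
normal = rng.normal(0.2, 0.05, (25, 3))
scab = rng.normal(0.6, 0.05, (25, 3))
Xtr = np.vstack([normal, scab])
ytr = np.array([0] * 25 + [1] * 25)
print(knn_predict(Xtr, ytr, np.array([0.58, 0.62, 0.61])))
```

The NBC, LDA and RFT classifiers compared in the paper would consume the same feature matrix `Xtr` and labels `ytr`.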

  20. Evaluating and classifying the readiness of technology specifications for national standardization.

    Science.gov (United States)

    Baker, Dixie B; Perlin, Jonathan B; Halamka, John

    2015-05-01

    The American Recovery and Reinvestment Act (ARRA) of 2009 clearly articulated the central role that health information technology (HIT) standards would play in improving healthcare quality, safety, and efficiency through the meaningful use of certified, standards-based electronic health record (EHR) technology. In 2012, the Office of the National Coordinator (ONC) asked the Nationwide Health Information Network (NwHIN) Power Team of the Health Information Technology Standards Committee (HITSC) to develop comprehensive, objective, and, to the extent practical, quantitative criteria for evaluating technical standards and implementation specifications and for classifying their readiness for national adoption. The Power Team defined criteria, attributes, and metrics for evaluating and classifying technical standards and specifications as 'emerging,' 'pilot,' or 'ready for national standardization' based on their maturity and adoptability. The ONC and the HITSC are now using these metrics to assess the readiness of technical standards for national adoption. PMID:24872342

  1. Malignancy and Abnormality Detection of Mammograms using Classifier Ensembling

    Directory of Open Access Journals (Sweden)

    Nawazish Naveed

    2011-07-01

    Full Text Available Breast cancer detection and diagnosis is a critical and complex procedure that demands a high degree of accuracy. In computer-aided diagnostic systems, breast cancer detection is a two-stage procedure: first, mammograms are classified as malignant or benign; in the second stage, the type of abnormality is detected. In this paper, we develop a novel architecture that enhances the classification of malignant and benign mammograms and further classifies malignant mammograms into six abnormality classes. DWT (Discrete Wavelet Transformation) features are extracted from preprocessed images and passed through different classifiers. To improve accuracy, the results generated by the various classifiers are ensembled; a genetic algorithm is used to find optimal ensemble weights rather than assigning them on the basis of heuristics. The mammograms declared malignant by the ensemble classifiers are then divided into six classes, with the ensemble classifiers applied in a one-against-all multiclassification scheme. The outputs of all ensemble classifiers are combined by the product, median and mean rules. The accuracy of abnormality classification exceeds 97% in the case of the mean rule. The Mammographic Image Analysis Society dataset is used for experimentation.
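The product, median and mean combination rules mentioned above can be sketched as follows; the per-classifier probability rows are invented for illustration:

```python
import numpy as np

def combine(probs, rule):
    """Fuse per-classifier class-probability rows by the mean, median or
    product rule, then return the winning class index."""
    P = np.array(probs)
    if rule == "mean":
        scores = P.mean(axis=0)
    elif rule == "median":
        scores = np.median(P, axis=0)
    else:  # product rule
        scores = P.prod(axis=0)
    return int(np.argmax(scores))

# Three classifiers' probabilities over six abnormality classes (illustrative).
probs = [[0.30, 0.20, 0.10, 0.15, 0.15, 0.10],
         [0.25, 0.30, 0.10, 0.15, 0.10, 0.10],
         [0.40, 0.15, 0.10, 0.15, 0.10, 0.10]]
print(combine(probs, "mean"), combine(probs, "median"), combine(probs, "product"))
```

In the paper the fused scores come from genetically weighted ensembles rather than the plain rows shown here.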

  2. Representation of classifier distributions in terms of hypergeometric functions

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper derives alternative analytical expressions for classifier product distributions in terms of the Gauss hypergeometric function, 2F1, by considering a feed distribution defined in terms of the Gates-Gaudin-Schumann function and an efficiency curve defined in terms of a logistic function. It is shown that classifier distributions under dispersed conditions of classification pivot at a common size and that the distributions are difference similar. The paper also addresses an inverse problem of classifier distributions wherein the feed distribution and efficiency curve are identified from the measured product distributions without needing to know the solid flow split of particles to any of the product streams.
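Numerically, the forward problem reads as follows: a Gates-Gaudin-Schumann feed density weighted by a logistic efficiency curve yields the two product densities. The parameter values below are assumed for illustration, and the analytical 2F1 expressions of the paper are replaced by plain numerical integration:

```python
import numpy as np

# Forward problem: GGS feed F(x) = (x/x_max)**m split by a logistic efficiency curve.
x = np.linspace(1e-3, 1.0, 2000)           # particle size, normalized so x_max = 1
m = 1.2                                    # GGS distribution modulus (assumed)
f = m * x ** (m - 1)                       # feed density f(x) = F'(x)
x50, k = 0.4, 25.0                         # cut size and sharpness (assumed)
E = 1.0 / (1.0 + np.exp(-k * (x - x50)))   # logistic efficiency (to coarse stream)

dx = x[1] - x[0]
coarse, fine = f * E, f * (1.0 - E)        # unnormalized product densities
split = coarse.sum() * dx                  # solids flow split to the coarse stream
coarse = coarse / split                    # normalize each product to unit mass
fine = fine / (fine.sum() * dx)
mean_coarse = (x * coarse).sum() * dx      # coarse product is, as expected, coarser
mean_fine = (x * fine).sum() * dx
print(round(split, 3), mean_coarse > mean_fine)
```

The inverse problem treated in the paper runs this map backwards: recover `f` and `E` from measured `coarse` and `fine` without knowing `split`.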

  3. Classifying Regularized Sensor Covariance Matrices: An Alternative to CSP.

    Science.gov (United States)

    Roijendijk, Linsey; Gielen, Stan; Farquhar, Jason

    2016-08-01

    Common spatial patterns (CSP) is a commonly used technique for classifying imagined-movement-type brain-computer interface (BCI) datasets. It has been very successful, with many extensions and improvements on the basic technique. However, a drawback of CSP is that the signal processing pipeline contains two supervised learning stages: a first in which class-relevant spatial filters are learned and a second in which a classifier is used to classify the filtered variances. This may lead to potential overfitting issues, which are generally avoided by limiting CSP to only a few filters. PMID:26372428

  4. Classifying Response Correctness across Different Task Sets: A Machine Learning Approach.

    Science.gov (United States)

    Plewan, Thorsten; Wascher, Edmund; Falkenstein, Michael; Hoffmann, Sven

    2016-01-01

    Erroneous behavior usually elicits a distinct pattern in neural waveforms. In particular, inspection of the concurrent recorded electroencephalograms (EEG) typically reveals a negative potential at fronto-central electrodes shortly following a response error (Ne or ERN) as well as an error-awareness-related positivity (Pe). Seemingly, the brain signal contains information about the occurrence of an error. Assuming a general error evaluation system, the question arises whether this information can be utilized in order to classify behavioral performance within or even across different cognitive tasks. In the present study, a machine learning approach was employed to investigate the outlined issue. Ne as well as Pe were extracted from the single-trial EEG signals of participants conducting a flanker and a mental rotation task and subjected to a machine learning classification scheme (via a support vector machine, SVM). Overall, individual performance in the flanker task was classified more accurately, with accuracy rates of above 85%. Most importantly, it was even feasible to classify responses across both tasks. In particular, an SVM trained on the flanker task could identify erroneous behavior with almost 70% accuracy in the EEG data recorded during the rotation task, and vice versa. Summed up, we replicate that the response-related EEG signal can be used to identify erroneous behavior within a particular task. Going beyond this, it was possible to classify response types across functionally different tasks. Therefore, the outlined methodological approach appears promising with respect to future applications. PMID:27032108
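The cross-task idea, train a decision rule on one task's single-trial features and test it on another, can be sketched with simulated Ne/Pe-like amplitudes. A least-squares linear classifier stands in for the paper's SVM, and all numbers are synthetic:

```python
import numpy as np

def simulate_task(n, err_shift, rng):
    """Synthetic single-trial features: column 0 ~ Ne amplitude, column 1 ~ Pe
    amplitude. Error trials (label 1) show a more negative Ne and a larger Pe."""
    y = rng.integers(0, 2, n)
    ne = rng.normal(-err_shift * y, 1.0)
    pe = rng.normal(err_shift * y, 1.0)
    return np.column_stack([ne, pe]), y

def fit_linear(X, y):
    """Least-squares linear classifier (a simple stand-in for the SVM)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)
    return w

def accuracy(w, X, y):
    Xb = np.column_stack([X, np.ones(len(X))])
    return ((Xb @ w > 0).astype(int) == y).mean()

rng = np.random.default_rng(1)
X_flanker, y_flanker = simulate_task(400, 2.0, rng)   # "flanker" task
X_rot, y_rot = simulate_task(400, 1.5, rng)           # "rotation" task, weaker effect
w = fit_linear(X_flanker, y_flanker)                  # train on one task...
print(round(accuracy(w, X_rot, y_rot), 2))            # ...test on the other
```

Because the error signature points in the same feature direction in both simulated tasks, the classifier transfers above chance, mirroring the cross-task result reported here.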

  5. 42 CFR 37.50 - Interpreting and classifying chest roentgenograms.

    Science.gov (United States)

    2010-10-01

    ... interpreted and classified in accordance with the ILO Classification system and recorded on a Roentgenographic... under the Act, shall have immediately available for reference a complete set of the ILO...

  6. A NON-PARAMETER BAYESIAN CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde

    2003-01-01

    A non-parameter Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition, which can be regarded as a weighted Nearest Neighbor (NN) classifier in form. The class conditional density is estimated by KDE and the bandwidth of the kernel function is estimated by the Expectation Maximum (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are respectively used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers which are widely used in face recognition systems. The experiments are performed on two benchmarks and the experimental results show that the KDE outperforms the PRM, NC and NN classifiers.
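The classification rule itself is compact: estimate each class-conditional density with a Gaussian KDE and pick the class with the highest density. A minimal numpy sketch with a fixed bandwidth (the paper estimates it by EM) and uniform priors:

```python
import numpy as np

def kde_logdensity(Xtrain, x, h):
    """Log of the Gaussian kernel density estimate at point x, bandwidth h,
    computed with the usual max-shift trick for numerical stability."""
    d = Xtrain.shape[1]
    sq = ((Xtrain - x) ** 2).sum(axis=1)
    log_k = -sq / (2 * h * h) - d * np.log(h * np.sqrt(2 * np.pi))
    m = log_k.max()
    return m + np.log(np.exp(log_k - m).mean())

def kde_bayes_predict(classes, x, h=0.5):
    """Non-parametric Bayes rule: choose the class with the largest
    class-conditional KDE density (uniform priors assumed)."""
    scores = [kde_logdensity(Xc, x, h) for Xc in classes]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
class0 = rng.normal(0.0, 1.0, (50, 2))   # stand-in for one subject's features
class1 = rng.normal(3.0, 1.0, (50, 2))   # stand-in for another subject's features
print(kde_bayes_predict([class0, class1], np.array([2.8, 3.1])))
```

Each training point contributes a kernel weight that decays with distance, which is why the rule can be read as a weighted NN classifier.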

  7. NUMERICAL SIMULATION OF PARTICLE MOTION IN TURBO CLASSIFIER

    Institute of Scientific and Technical Information of China (English)

    Ning Xu; Guohua Li; Zhichu Huang

    2005-01-01

    Research on the flow field inside a turbo classifier is important but complicated. According to the stochastic trajectory model of particles in gas-solid two-phase flow, and adopting the PHOENICS code, numerical simulation is carried out on the flow field, including particle trajectories, in the inner cavity of a turbo classifier, using both straight and backward-crooked elbow blades. Computation results show that when the backward-crooked elbow blades are used, the mixed stream that passes through the two blades produces a vortex in the positive direction which counteracts the attached vortex in the opposite direction due to the high-speed turbo rotation, making the flow steadier and thus improving both the grade efficiency and the precision of the turbo classifier. This research provides positive theoretical evidence for designing sub-micron particle classifiers with high efficiency and accuracy.

  8. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS - R code

    OpenAIRE

    Irawan, Dasapta Erwin; Gio, Prana Ugiana

    2016-01-01

    The following R code was used in the paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja, Dasapta Erwin Irawan, Yuano Rezky, Prana Ugiana Gio, Anggita Agustin.

  9. Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS

    OpenAIRE

    Sumintadireja, Prihadi; Irawan, Dasapta Erwin; Rezky, Yuanno; Gio, Prana Ugiana; Agustin, Anggita

    2016-01-01

    This file is the dataset for the following paper "Classifying hot water chemistry: Application of MULTIVARIATE STATISTICS". Authors: Prihadi Sumintadireja, Dasapta Erwin Irawan, Yuano Rezky, Prana Ugiana Gio, Anggita Agustin.

  10. AUTO CLAIM FRAUD DETECTION USING MULTI CLASSIFIER SYSTEM

    Directory of Open Access Journals (Sweden)

    Luis Alexandre Rodrigues

    2014-06-01

    Full Text Available Through a cost matrix and a combination of classifiers, this work identifies the most economical model for detecting suspected cases of fraud in a dataset of automobile claims. The experiments performed in this work show that by working more deeply with the sampled data in the training and test phases of each classifier, it is possible to obtain a more economical model than others presented in the literature.
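The cost-matrix evaluation can be sketched in a few lines; the per-outcome costs below are assumed figures for illustration, not values from the paper:

```python
# cost_matrix[(true, predicted)]: missing a fraud (false negative) is assumed
# to cost far more than investigating a legitimate claim (false positive).
cost_matrix = {("fraud", "legit"): 500.0,   # undetected fraud paid out
               ("legit", "fraud"): 50.0,    # needless investigation
               ("fraud", "fraud"): 50.0,    # investigation that catches fraud
               ("legit", "legit"): 0.0}

def total_cost(y_true, y_pred):
    """Economic score of a classifier: the sum of per-claim outcome costs."""
    return sum(cost_matrix[(t, p)] for t, p in zip(y_true, y_pred))

y_true = ["fraud", "legit", "legit", "fraud", "legit"]
model_a = ["fraud", "legit", "fraud", "legit", "legit"]  # one miss, one false alarm
model_b = ["fraud", "fraud", "fraud", "fraud", "legit"]  # flags aggressively
print(total_cost(y_true, model_a), total_cost(y_true, model_b))
```

With these costs the aggressive model is cheaper despite more false alarms, which is exactly the kind of trade-off a cost matrix makes visible.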

  11. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Aitken Stuart; De Ferrari Luna

    2006-01-01

    Abstract Background Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce. Results In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human...

  12. Mining housekeeping genes with a Naive Bayes classifier

    OpenAIRE

    Ferrari, Luna De; Aitken, Stuart

    2006-01-01

    BACKGROUND: Traditionally, housekeeping and tissue specific genes have been classified using direct assay of mRNA presence across different tissues, but these experiments are costly and the results not easy to compare and reproduce.RESULTS: In this work, a Naive Bayes classifier based only on physical and functional characteristics of genes already available in databases, like exon length and measures of chromatin compactness, has achieved a 97% success rate in classification of human houseke...

  13. Dealing with contaminated datasets: An approach to classifier training

    Science.gov (United States)

    Homenda, Wladyslaw; Jastrzebska, Agnieszka; Rybnik, Mariusz

    2016-06-01

    The paper presents a novel approach to classification reinforced with a rejection mechanism. The method is based on a two-tier set of classifiers: the first layer classifies elements, and the second layer separates native elements from foreign ones in each distinguished class. The key novelty presented here is the rejection mechanism's training scheme, which follows the philosophy "one-against-all-other-classes". The proposed method was tested in an empirical study of handwritten digit recognition.

  14. Classifying pedestrian shopping behaviour according to implied heuristic choice rules

    OpenAIRE

    Shigeyuki Kurose; Aloys W J Borgers; Timmermans, Harry J. P.

    2001-01-01

    Our aim in this paper is to build and test a model which classifies and identifies pedestrian shopping behaviour in a shopping centre by using temporal and spatial choice heuristics. In particular, the temporal local-distance-minimising, total-distance-minimising, and global-distance-minimising heuristic choice rules and spatial nearest-destination-oriented, farthest-destination-oriented, and intermediate-destination-oriented choice rules are combined to classify and identify the stop sequenc...

  15. One pass learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2016-01-01

    The generalized classifier neural network, introduced as a kind of radial basis function neural network, uses a smoothing parameter optimized by gradient descent to provide efficient classification. However, this optimization consumes quite a long time, which is a drawback. In this work, one-pass learning for the generalized classifier neural network is proposed to overcome this disadvantage. The proposed method uses the standard deviation of each class to calculate the corresponding smoothing parameter. Since different datasets may have different standard deviations and data distributions, the method handles these differences by defining two functions for the smoothing parameter calculation, with thresholding applied to determine which function is used. One of these functions is defined for datasets with a wide range of values: it provides balanced smoothing parameters for such datasets through a logarithmic function and by shifting the operation range to the lower boundary. The other function calculates the smoothing parameter for classes whose standard deviation is smaller than the threshold value. The proposed method is tested on 14 datasets, and the performance of the one-pass learning generalized classifier neural network is compared with that of the probabilistic neural network, radial basis function neural network, extreme learning machines, and the standard and logarithmic learning generalized classifier neural network in the MATLAB environment. One-pass learning provides more than a thousand times faster classification than the standard and logarithmic generalized classifier neural networks. Due to its classification accuracy and speed, the one-pass generalized classifier neural network can be considered an efficient alternative to the probabilistic neural network. Test results show that the proposed method overcomes the computational drawback of the generalized classifier neural network and may increase classification performance.
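The one-pass idea, derive each class's smoothing parameter directly from its standard deviation with a threshold choosing between two formulas, can be sketched as below. The two functions are loose stand-ins modeled on the description above, not the paper's exact formulas, and the data is synthetic:

```python
import numpy as np

def one_pass_sigmas(classes, threshold=1.0):
    """Single pass over the training data: tight classes use their standard
    deviation directly; wide classes are damped through a logarithm.
    Both formulas are illustrative assumptions."""
    sigmas = []
    for Xc in classes:
        s = float(Xc.std())
        sigmas.append(s if s < threshold else 1.0 + np.log(s))
    return sigmas

def predict(classes, sigmas, x):
    """RBF-style score: mean Gaussian kernel over each class's own centers,
    each class using its own smoothing parameter."""
    scores = [np.exp(-((Xc - x) ** 2).sum(axis=1) / (2 * s * s)).mean()
              for Xc, s in zip(classes, sigmas)]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
tight = rng.normal(0.0, 0.5, (40, 2))   # compact class: sigma used directly
wide = rng.normal(4.0, 2.0, (40, 2))    # spread-out class: log-damped sigma
sig = one_pass_sigmas([tight, wide])
print(predict([tight, wide], sig, np.array([0.2, -0.1])),
      predict([tight, wide], sig, np.array([4.5, 3.5])))
```

No gradient descent appears anywhere, which is the source of the speedup claimed above.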

  16. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Abstract Background Electrodermal reactions (EDRs can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations. Methods Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1 an EDR detector, (2 a respiratory filter and (3 a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.
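The three reported figures follow directly from confusion-matrix counts. The counts below are illustrative, chosen only to land near the quoted values rather than taken from the study:

```python
def classifier_report(tp, fp, tn, fn):
    """Positive predictive value, negative predictive value and accuracy,
    the three metrics quoted for the cardiorespiratory classifier."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, acc

# Hypothetical counts (true/false positives and negatives).
ppv, npv, acc = classifier_report(tp=39, fp=11, tn=30, fn=7)
print(round(ppv, 2), round(npv, 2), round(acc, 2))
```

PPV answers "when the classifier says voluntary, how often is it right?", while NPV asks the same of the involuntary calls.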

  17. LESS: a model-based classifier for sparse subspaces.

    Science.gov (United States)

    Veenman, Cor J; Tax, David M J

    2005-09-01

    In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.
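A toy flavour of classification in a sparse subspace, inspired by but much simpler than LESS: keep only the dimensions where the class means differ appreciably, then use a nearest-kept-mean rule. The data, threshold and shrinkage scheme are synthetic assumptions:

```python
import numpy as np

def sparse_nearest_mean(X0, X1, lam):
    """Select the dimensions whose class-mean difference exceeds lam
    (the sparsity step), returning the reduced class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    keep = np.abs(m1 - m0) > lam          # informative-feature mask
    return m0[keep], m1[keep], keep

def predict(m0, m1, keep, X):
    """Nearest reduced-mean classification inside the sparse subspace."""
    d0 = ((X[:, keep] - m0) ** 2).sum(axis=1)
    d1 = ((X[:, keep] - m1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

rng = np.random.default_rng(4)
# 20 objects in 200 dimensions: only the first 5 dimensions carry signal,
# mimicking the "more dimensions than objects" regime discussed above.
X0 = rng.normal(0.0, 1.0, (10, 200)); X0[:, :5] -= 1.5
X1 = rng.normal(0.0, 1.0, (10, 200)); X1[:, :5] += 1.5
m0, m1, keep = sparse_nearest_mean(X0, X1, lam=1.5)
print(keep.sum(), predict(m0, m1, keep, X1).mean())
```

LESS instead learns the sparse discriminant by optimization with an explicit regularization parameter, but the payoff is the same: only a handful of the 200 dimensions survive.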

  18. Locating and classifying defects using an hybrid data base

    Energy Technology Data Exchange (ETDEWEB)

    Luna-Aviles, A; Diaz Pineda, A [Tecnologico de Estudios Superiores de Coacalco. Av. 16 de Septiembre 54, Col. Cabecera Municipal. C.P. 55700 (Mexico); Hernandez-Gomez, L H; Urriolagoitia-Calderon, G; Urriolagoitia-Sosa, G [Instituto Politecnico Nacional. ESIME-SEPI. Unidad Profesional ' Adolfo Lopez Mateos' Edificio 5, 30 Piso, Colonia Lindavista. Gustavo A. Madero. 07738 Mexico D.F. (Mexico); Durodola, J F [School of Technology, Oxford Brookes University, Headington Campus, Gipsy Lane, Oxford OX3 0BP (United Kingdom); Beltran Fernandez, J A, E-mail: alelunaav@hotmail.com, E-mail: luishector56@hotmail.com, E-mail: jdurodola@brookes.ac.uk

    2011-07-19

    A computational inverse technique was used in the localization and classification of defects. Postulated voids of two different sizes (2 mm and 4 mm diameter) were introduced into PMMA bars with and without a notch. The bar dimensions are 200x20x5 mm. Half of the bars were plain and the other half had a notch (3 mm x 4 mm) close to the defect area (19 mm x 16 mm). The analysis was done with an Artificial Neural Network (ANN), and its optimization was done with an Adaptive Neuro-Fuzzy Procedure (ANFIS). A hybrid database was developed with numerical and experimental results. Synthetic data was generated with the finite element method using the SOLID95 element of the ANSYS code, and a parametric analysis was carried out. Only one defect per bar was taken into account and the first five natural frequencies were calculated. 460 cases were evaluated, half of them plain and half notched. All the input data was classified into two groups, each of 230 cases corresponding to one of the two sorts of voids mentioned above. On the other hand, an experimental analysis was carried out on PMMA specimens of the same size: the first two natural frequencies of 40 cases were obtained with one void, and the other three frequencies were obtained numerically. 20 of these bars were plain and the others had a notch. These experimental results were introduced into the synthetic database. 400 cases were taken randomly and, with this information, the ANN was trained with the backpropagation algorithm; the accuracy of the results was tested with the 100 cases that were left. In the next stage of this work, the ANN output was optimized with ANFIS. Previous papers showed that the localization and classification of defects degraded as notches were introduced in such bars; in the case of this paper, improved results were obtained when a hybrid database was used.

  19. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier

    Energy Technology Data Exchange (ETDEWEB)

    Islam, Md. Shafiqul, E-mail: shafique@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hannan, M.A., E-mail: hannan@eng.ukm.my [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Basri, Hassan [Dept. of Civil and Structural Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia); Hussain, Aini; Arebey, Maher [Dept. of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia, Bangi 43600, Selangore (Malaysia)

    2014-02-15

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • A Gabor wavelet filter is used to extract the solid waste image features. • A Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-Frequency Identification (RFID), and sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing a bin image it is challenging to position the camera so that the bin area is centered in the image, and as yet there is no ideal system that can correctly estimate the amount of SW. This paper discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and the Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this system are comparable to previous image-processing-based systems. The demonstration using DTW with GW for feature extraction and an MLP classifier led to promising accuracy in waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level.
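The DTW building block is standard dynamic programming. A minimal 1-D version looks like this (the paper applies the idea to 2-D bin images; the sequences below are illustrative intensity profiles):

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences: the
    minimum total |a_i - b_j| cost over all monotone alignments."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

template = [0, 2, 4, 2, 0]       # reference profile of the bin region
shifted = [0, 0, 2, 4, 2, 0]     # same shape, time-warped
other = [4, 4, 4, 4, 4]          # dissimilar profile
print(dtw_distance(template, shifted), dtw_distance(template, other))
# → 0.0 12.0
```

Because DTW tolerates stretching, the shifted profile matches the template perfectly while the flat profile does not, which is what makes it robust to an off-center camera.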

  20. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier

    International Nuclear Information System (INIS)

    Highlights: • Solid waste bin level detection using Dynamic Time Warping (DTW). • A Gabor wavelet filter is used to extract the solid waste image features. • A Multi-Layer Perceptron classifier network is used for bin image classification. • The classification performance is evaluated by ROC curve analysis. - Abstract: The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-Frequency Identification (RFID), and sensor-intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing a bin image it is challenging to position the camera so that the bin area is centered in the image, and as yet there is no ideal system that can correctly estimate the amount of SW. This paper discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and the Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of this system are comparable to previous image-processing-based systems. The demonstration using DTW with GW for feature extraction and an MLP classifier led to promising accuracy in waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level

  1. Analysis of Bayes, Neural Network and Tree Classifier of Classification Technique in Data Mining using WEKA

    Directory of Open Access Journals (Sweden)

    Yugal kumar

    2012-05-01

    Full Text Available In today’s world, a gigantic amount of data is available in science, industry, business and many other areas. This data can provide valuable information that management can use for making important decisions, but the problem is how to find that valuable information. The answer is data mining. Data mining is a popular topic among researchers, and there is much work still to be explored. This paper focuses on a fundamental concept of data mining, namely classification techniques. The BayesNet, NaiveBayes, NaiveBayes Updateable, Multilayer Perceptron, Voted Perceptron and J48 classifiers are used for the classification of the dataset. The performance of these classifiers is analyzed with the help of the Mean Absolute Error, the Root Mean-Squared Error and the time taken to build the model, and the results are presented statistically as well as graphically. For this purpose the WEKA data mining tool is used.
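The two error measures used for the comparison are straightforward to compute. A small sketch (WEKA reports them over the classifier's probability estimates; the values here are simulated):

```python
import math

def mae(actual, predicted):
    """Mean absolute error: the average magnitude of the deviations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean-squared error: penalizes large deviations more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [1.0, 0.0, 1.0, 1.0]         # true class indicators
predicted = [0.9, 0.2, 0.6, 1.0]      # a classifier's probability estimates
print(round(mae(actual, predicted), 4), round(rmse(actual, predicted), 4))
```

RMSE exceeding MAE, as it does here, indicates that a few predictions are disproportionately far off.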

  2. Representative Vector Machines: A Unified Framework for Classical Classifiers.

    Science.gov (United States)

    Gui, Jie; Liu, Tongliang; Tao, Dacheng; Sun, Zhenan; Tan, Tieniu

    2016-08-01

    Classifier design is a fundamental problem in pattern recognition. A variety of pattern classification methods such as the nearest neighbor (NN) classifier, support vector machine (SVM), and sparse representation-based classification (SRC) have been proposed in the literature. These typical and widely used classifiers were originally developed from different theory or application motivations and they are conventionally treated as independent and specific solutions for pattern classification. This paper proposes a novel pattern classification framework, namely, representative vector machines (or RVMs for short). The basic idea of RVMs is to assign the class label of a test example according to its nearest representative vector. The contributions of RVMs are twofold. On one hand, the proposed RVMs establish a unified framework of classical classifiers because NN, SVM, and SRC can be interpreted as the special cases of RVMs with different definitions of representative vectors. Thus, the underlying relationship among a number of classical classifiers is revealed for better understanding of pattern classification. On the other hand, novel and advanced classifiers are inspired in the framework of RVMs. For example, a robust pattern classification method called discriminant vector machine (DVM) is motivated from RVMs. Given a test example, DVM first finds its k-NNs and then performs classification based on the robust M-estimator and manifold regularization. Extensive experimental evaluations on a variety of visual recognition tasks such as face recognition (Yale and face recognition grand challenge databases), object categorization (Caltech-101 dataset), and action recognition (Action Similarity LAbeliNg) demonstrate the advantages of DVM over other classifiers.
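The nearest-representative rule at the heart of RVMs can be sketched with the simplest choice of representative, the class mean; NN, SVM and SRC arise from other definitions of the representative vector. The data here is synthetic:

```python
import numpy as np

def fit_representatives(X, y):
    """One representative vector per class -- here simply the class mean,
    the most basic choice the RVM framework admits."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def rvm_predict(reps, x):
    """Assign the label of the nearest representative vector."""
    labels = list(reps)
    dists = [np.linalg.norm(x - reps[c]) for c in labels]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(4, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
reps = fit_representatives(X, y)
print(rvm_predict(reps, np.array([3.8, 4.2, 3.9])),
      rvm_predict(reps, np.array([0.2, -0.3, 0.1])))
```

Swapping the mean for each training point recovers NN; more elaborate representative definitions recover SVM-like and SRC-like behaviour, which is the unification the paper formalizes.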

  3. Classifying and Citing Schumpeter's Works From the Perspective of English Availability

    DEFF Research Database (Denmark)

    Andersen, Esben Sloth

    in English; and the citation clearly indicates whether a work is in English or German. The system also gives priority to an article's availability in an easily accessible collection so that the original sources are only included as additional information in the reference. The paper presents a large selection...... of Schumpeter's works according to this system. In addition, Schumpeter's works are classified by subject and by type of publication....

  4. Information

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

There are unstructured abstracts (no more than 256 words) and structured abstracts (no more than 480 words). The specific requirements for structured abstracts are as follows: An informative, structured abstract of no more than 480 words should accompany each manuscript. Abstracts for original contributions should be structured into the following sections. AIM (no more than 20 words): Only the purpose should be included. Please write the aim in the form "To investigate/study/..."; MATERIALS AND METHODS (no more than 140 words); RESULTS (no more than 294 words): You should present P values where appropriate and must provide relevant data to illustrate how they were obtained, e.g. 6.92 ± 3.86 vs 3.61 ± 1.67, P < 0.001; CONCLUSION (no more than 26 words).

  5. A GIS semiautomatic tool for classifying and mapping wetland soils

    Science.gov (United States)

    Moreno-Ramón, Héctor; Marqués-Mateu, Angel; Ibáñez-Asensio, Sara

    2016-04-01

generated a set of layers with the geographical information corresponding to each diagnostic criterion. Finally, the superposition of layers generated the different homogeneous soil units where the soil scientist should locate the soil profiles. Historically, the Albufera of Valencia had been classified as a single homogeneous soil unit, but applying the methodology and the GIS tool demonstrated that there were six homogeneous units. In that regard, the outcome reveals that only six profiles would have been necessary, compared with the 19 profiles opened when the real study was carried out. In conclusion, the methodology and the GIS tool proved that they could be employed in areas where the soil-forming factors cannot be distinguished. The application of rapid measurement methods together with this methodology could economise the process of defining homogeneous units.

  6. Predict or classify: The deceptive role of time-locking in brain signal classification

    Science.gov (United States)

    Rusconi, Marco; Valleriani, Angelo

    2016-01-01

    Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal. PMID:27320688
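
    The paper's point can be reproduced with a toy simulation. A symmetric random walk carries no choice-predictive information, yet once each trial is time-locked to its decision point (threshold crossing), even a trivial sign-based "classifier" is correct well above chance before the decision. The thresholds, lag, and trial counts below are illustrative, not the paper's.

    ```python
    # Sketch of the time-locking artifact: unbiased random walks are classified
    # above chance before the decision, purely as a consequence of time-locking.
    import random

    random.seed(0)

    def run_trial(threshold=5):
        """Unbiased +/-1 random walk until it hits +threshold or -threshold.
        Returns (trajectory, choice), where choice is +1 or -1."""
        x, path = 0, [0]
        while abs(x) < threshold:
            x += random.choice((-1, 1))
            path.append(x)
        return path, 1 if x > 0 else -1

    def time_locked_accuracy(n_trials=1000, lag=10):
        """Classify each trial by the sign of the walk `lag` steps before the
        time-locked decision point; count how often that matches the choice."""
        correct = total = 0
        for _ in range(n_trials):
            path, choice = run_trial()
            if len(path) <= lag:
                continue                 # trial too short to look back `lag` steps
            x = path[-1 - lag]           # signal `lag` steps before the decision
            if x == 0:
                continue                 # undecided sign, skip
            total += 1
            correct += ((x > 0) == (choice > 0))
        return correct / total

    acc = time_locked_accuracy()
    print(round(acc, 3))  # well above the 0.5 chance level
    ```

    The accuracy is above chance even though the walk is memoryless, which is exactly the deceptive effect the abstract describes.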

  8. [Horticultural plant diseases multispectral classification using combined classified methods].

    Science.gov (United States)

    Feng, Jie; Li, Hong-Ning; Yang, Wei-Ping; Hou, De-Dong; Liao, Ning-Fang

    2010-02-01

Multispectral data processing is receiving more and more attention with the development of multispectral techniques, data-capturing ability and the application of multispectral techniques in agricultural practice. In the present paper, five common diseases of the cultivated plant cucumber (Trichothecium roseum, Sphaerotheca fuliginea, Cladosporium cucumerinum, Corynespora cassiicola, Pseudoperonospora cubensis) are the research objects. Multispectral images of cucumber leaves in 14 visible-light channels, a near-infrared channel and a panchromatic channel were captured using a narrow-band multispectral imaging system under a standard observation and illumination environment, and 210 multispectral data samples, the 16-band spectral reflectances of the different cucumber diseases, were obtained. The 210 samples were classified by distance, correlation and BP neural network methods to find an effective combination of classification methods for diagnosis. The results show that the combination of the distance and BP neural network classifiers performs better than either method alone, fully exploiting the advantage of each. A workflow for recognizing horticultural plant diseases using combined classification methods is also presented. PMID:20384138

  9. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    Energy Technology Data Exchange (ETDEWEB)

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
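
    A minimal sketch of the classifier being accelerated: score a test document against labeled training documents by cosine similarity of TFIDF vectors, and take the label of the best match. The tokenized "HTTP request" data below is illustrative, and this sketch ignores the paper's hardware-oriented simplifications (compact models, streaming state machines).

    ```python
    # TFIDF document-similarity classification: tf * log(N/df) weighting,
    # cosine similarity, nearest-training-document label.
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """docs: list of token lists. Returns list of {term: tfidf} dicts."""
        n = len(docs)
        df = Counter(t for doc in docs for t in set(doc))
        vecs = []
        for doc in docs:
            tf = Counter(doc)
            vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
        return vecs

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    train_docs = [  # (label, tokens) -- toy stand-ins for HTTP log lines
        ("attack", "select union from users password".split()),
        ("attack", "script alert document cookie".split()),
        ("benign", "get index html page welcome".split()),
        ("benign", "post form contact page message".split()),
    ]
    vecs = tfidf_vectors([d for _, d in train_docs])

    def classify(query_tokens):
        # Raw term-frequency query vector; a fuller implementation would
        # apply the training idf weights to the query as well.
        q = dict(Counter(query_tokens))
        best = max(range(len(vecs)), key=lambda i: cosine(q, vecs[i]))
        return train_docs[best][0]

    print(classify("select password from users".split()))  # → attack
    ```

    The paper's contribution is making this kind of scoring stream at network rates; the scoring rule itself is no more than the above.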

  10. Dynamic weighted voting for multiple classifier fusion: a generalized rough set method

    Institute of Scientific and Technical Information of China (English)

    Sun Liang; Han Chongzhao

    2006-01-01

To improve the performance of multiple classifier systems, a knowledge-discovery-based dynamic weighted voting (KD-DWV) method is proposed. In the method, all base classifiers may operate in different measurement/feature spaces to make the most of diverse classification information. The weights assigned to each output of a base classifier are estimated from the separability of the training sample sets in the relevant feature space. For this purpose, decision tables (DTs) are established in terms of the diverse feature sets. The uncertainty measures of the separability are then induced from each DT, in the form of mass functions in Dempster-Shafer theory (DST), based on a generalized rough set model. From the mass functions, all the weights are calculated by a modified heuristic fusion function and assigned dynamically to each classifier, varying with its output. A comparison experiment was performed on hyperspectral remote sensing images, and the experimental results show that classification performance can be improved by the proposed method compared with plurality voting (PV).
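
    The combination step can be sketched independently of how the weights are derived. The paper computes them from rough-set-based mass functions; here the weights are simply supplied per (classifier, predicted class), which is enough to show the "dynamic" part: a classifier's vote weight varies with its output. All numbers are illustrative.

    ```python
    # Dynamic weighted voting: each base classifier's vote is weighted by a
    # trust score that depends on which class it predicted.

    def dynamic_weighted_vote(predictions, weights):
        """predictions: list of class labels, one per base classifier.
        weights[i][label]: trust in classifier i when it outputs `label`."""
        scores = {}
        for i, label in enumerate(predictions):
            scores[label] = scores.get(label, 0.0) + weights[i].get(label, 0.0)
        return max(scores, key=scores.get)

    # Three classifiers; classifier 2 is highly trusted when it says "water".
    weights = [
        {"forest": 0.6, "water": 0.4},
        {"forest": 0.5, "water": 0.5},
        {"forest": 0.3, "water": 1.2},
    ]
    print(dynamic_weighted_vote(["forest", "forest", "water"], weights))
    # → water (overturns the 2-to-1 plurality for "forest")
    ```

    With uniform weights the same function reduces to plurality voting, the baseline the paper compares against.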

  11. Classifier models and architectures for EEG-based neonatal seizure detection

    International Nuclear Information System (INIS)

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-fold validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust reliable patient-independent neonatal seizure detection is possible using multi-channel EEG

  12. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    Science.gov (United States)

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are demonstrated by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also performed to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can assist in making correct medical diagnoses associated with multiple categories. PMID:27298619

  13. Predicting protein subcellular locations using hierarchical ensemble of Bayesian classifiers based on Markov chains

    Directory of Open Access Journals (Sweden)

    Eils Roland

    2006-06-01

Full Text Available Abstract Background The subcellular location of a protein is closely related to its function. It would be worthwhile to develop a method to predict the subcellular location for a given protein when only the amino acid sequence of the protein is known. Although many efforts have been made to predict subcellular location from sequence information only, there is a need for further research to improve the accuracy of prediction. Results A novel method called HensBC is introduced to predict protein subcellular location. HensBC is a recursive algorithm which constructs a hierarchical ensemble of classifiers. The classifiers used are Bayesian classifiers based on Markov chain models. We tested our method on six different datasets, among them a Gram-negative bacteria dataset, a dataset for discriminating outer membrane proteins, and an apoptosis proteins dataset. We observed that our method can predict the subcellular location with high accuracy. Another advantage of the proposed method is that it can improve the accuracy of the prediction of some classes with few sequences in training and is therefore useful for datasets with imbalanced distribution of classes. Conclusion This study introduces an algorithm which uses only the primary sequence of a protein to predict its subcellular location. The proposed recursive scheme represents an interesting methodology for learning and combining classifiers. The method is computationally efficient and competitive with the previously reported approaches in terms of prediction accuracies as empirical results indicate. The code for the software is available upon request.
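
    The building block HensBC composes hierarchically can be sketched on its own: a Bayesian classifier whose class-conditional likelihood is a first-order Markov chain over sequence symbols. The alphabet and training sequences below are toy DNA-like strings, not protein data, and the hierarchical ensemble itself is not reproduced.

    ```python
    # Class-conditional first-order Markov chain classifier: train transition
    # probabilities per class, score a sequence by log-likelihood, pick the
    # class with the highest score.
    import math
    from collections import defaultdict

    def train_markov(seqs, alphabet, pseudo=1.0):
        """Transition log-probabilities P(b|a) with add-one smoothing."""
        counts = defaultdict(lambda: defaultdict(float))
        for s in seqs:
            for a, b in zip(s, s[1:]):
                counts[a][b] += 1
        logp = {}
        for a in alphabet:
            total = sum(counts[a].values()) + pseudo * len(alphabet)
            logp[a] = {b: math.log((counts[a][b] + pseudo) / total) for b in alphabet}
        return logp

    def log_likelihood(model, s):
        return sum(model[a][b] for a, b in zip(s, s[1:]))

    alphabet = "ACGT"
    models = {
        "classX": train_markov(["ACAC", "ACACAC", "CACA"], alphabet),
        "classY": train_markov(["GTGT", "GTGTGT", "TGTG"], alphabet),
    }
    label = max(models, key=lambda c: log_likelihood(models[c], "ACACA"))
    print(label)  # → classX: the AC-repeat query matches classX's transitions
    ```

    HensBC recursively stacks such classifiers into a hierarchical ensemble; each node here would be one trained model.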

  14. What Does(n't) K-theory Classify?

    CERN Document Server

    Evslin, J

    2006-01-01

    We review various K-theory classification conjectures in string theory. Sen conjecture based proposals classify D-brane trajectories in backgrounds with no H flux, while Freed-Witten anomaly based proposals classify conserved RR charges and magnetic RR fluxes in topologically time-independent backgrounds. In exactly solvable CFTs a classification of well-defined boundary states implies that there are branes representing every twisted K-theory class. Some of these proposals fail to respect the self-duality of the RR fields in the democratic formulation of type II supergravity and none respect S-duality in type IIB string theory. We discuss two applications. The twisted K-theory classification has led to a conjecture for the topology of the T-dual of any configuration. In the Klebanov-Strassler geometry twisted K-theory classifies universality classes of baryonic vacua.

  15. A novel statistical method for classifying habitat generalists and specialists

    DEFF Research Database (Denmark)

    Chazdon, Robin L; Chao, Anne; Colwell, Robert K;

    2011-01-01

    We develop a novel statistical approach for classifying generalists and specialists in two distinct habitats. Using a multinomial model based on estimated species relative abundance in two habitats, our method minimizes bias due to differences in sampling intensities between two habitat types......: (1) generalist; (2) habitat A specialist; (3) habitat B specialist; and (4) too rare to classify with confidence. We illustrate our multinomial classification method using two contrasting data sets: (1) bird abundance in woodland and heath habitats in southeastern Australia and (2) tree abundance...... in second-growth (SG) and old-growth (OG) rain forests in the Caribbean lowlands of northeastern Costa Rica. We evaluate the multinomial model in detail for the tree data set. Our results for birds were highly concordant with a previous nonstatistical classification, but our method classified a higher...
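
    The decision structure of the classification can be sketched as a toy rule. The actual method fits a multinomial model with corrections for unequal sampling intensity; this sketch keeps only its output categories, using an assumed supermajority threshold and a minimum-abundance cutoff, both of which are illustrative parameters rather than the paper's.

    ```python
    # Toy generalist/specialist rule over counts in two habitats: a species is
    # a specialist when a supermajority of its individuals fall in one habitat,
    # a generalist otherwise, and "too rare" below a minimum total count.

    def classify_species(count_a, count_b, majority=2 / 3, min_total=10):
        total = count_a + count_b
        if total < min_total:
            return "too rare"
        share_a = count_a / total
        if share_a >= majority:
            return "habitat A specialist"
        if share_a <= 1 - majority:
            return "habitat B specialist"
        return "generalist"

    for counts in [(40, 5), (3, 30), (12, 11), (2, 1)]:
        print(counts, classify_species(*counts))
    ```

    The statistical contribution of the paper is replacing the hard thresholds with a multinomial model so that the "too rare to classify with confidence" category is decided by estimation uncertainty rather than a fixed cutoff.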

  16. WORD SENSE DISAMBIGUATION BASED ON IMPROVED BAYESIAN CLASSIFIERS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

Word Sense Disambiguation (WSD) is the task of deciding the sense of an ambiguous word in a particular context. Most current studies on WSD use only a few ambiguous words as test samples, which leads to limitations in practical application. In this paper, we perform a WSD study based on a large-scale real-world corpus using two unsupervised learning algorithms: a ±n-improved Bayesian model and a Dependency Grammar (DG)-improved Bayesian model. The ±n-improved classifier reduces the context window around ambiguous words with a close-distance feature extraction method and decreases the interference of useless features, thus markedly improving accuracy, reaching 83.18% (in open test). The DG-improved classifier more effectively overcomes the noise present in the naive Bayesian classifier. Experimental results show that this approach performs well on Chinese WSD, with the open test achieving an accuracy of 86.27%.
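
    The ±n idea, restricting features to words within n positions of the ambiguous word, fits a standard naive Bayes WSD sketch. The training sentences, senses, and window size below are toy illustrations; the paper works on a large Chinese corpus with unsupervised learning, which this supervised sketch does not reproduce.

    ```python
    # Naive Bayes word sense disambiguation with a +/-n context window and
    # add-one smoothing.
    import math
    from collections import Counter

    def window(tokens, idx, n):
        return tokens[max(0, idx - n):idx] + tokens[idx + 1:idx + 1 + n]

    def train(labeled, n=2):
        """labeled: list of (tokens, target_index, sense)."""
        sense_counts, feat_counts, vocab = Counter(), {}, set()
        for tokens, idx, sense in labeled:
            sense_counts[sense] += 1
            fc = feat_counts.setdefault(sense, Counter())
            for w in window(tokens, idx, n):
                fc[w] += 1
                vocab.add(w)
        return sense_counts, feat_counts, vocab

    def disambiguate(model, tokens, idx, n=2):
        sense_counts, feat_counts, vocab = model
        total = sum(sense_counts.values())
        def score(sense):
            fc = feat_counts[sense]
            denom = sum(fc.values()) + len(vocab)
            s = math.log(sense_counts[sense] / total)
            for w in window(tokens, idx, n):
                s += math.log((fc[w] + 1) / denom)  # add-one smoothing
            return s
        return max(sense_counts, key=score)

    data = [  # toy sense-tagged examples for the ambiguous word "bank"
        ("deposit money in the bank".split(), 4, "finance"),
        ("the bank charged a fee".split(), 1, "finance"),
        ("fishing on the river bank".split(), 4, "river"),
        ("the bank of the river flooded".split(), 1, "river"),
    ]
    model = train(data)
    print(disambiguate(model, "the bank charged interest".split(), 1))  # → finance
    ```

    Shrinking n is the "close-distance" move the abstract describes: nearby words are kept as features while distant, noisier ones are dropped.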

  17. Iris Recognition Based on LBP and Combined LVQ Classifier

    CERN Document Server

    Shams, M Y; Nomir, O; El-Awady, R M; 10.5121/ijcsit.2011.3506

    2011-01-01

Iris recognition is considered one of the best biometric methods used for human identification and verification because of its unique features, which differ from one person to another, and its importance in the security field. This paper proposes an algorithm for iris recognition and classification using a system based on Local Binary Pattern (LBP) and histogram properties as statistical approaches for feature extraction, and a Combined Learning Vector Quantization (LVQ) Classifier as a neural network approach for classification, in order to build a hybrid model that depends on both features. The localization and segmentation techniques are presented using both Canny edge detection and the Hough Circular Transform in order to isolate the iris from the whole eye image and to detect noise. The feature vectors resulting from LBP are applied to a Combined LVQ classifier with different classes to determine the minimum acceptable performance, and the result is based on majority voting among several LVQ classifiers. Different iris da...

  18. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    allows usage of such classifiers in large scale problems. We demonstrate its application for segmenting tibial articular cartilage in knee MRI scans, with number of training voxels being more than 2 million. In the next phase of the study we apply the cascaded classifier to a similar but even more......This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge...... image, respectively and this system is referred as triplanar convolutional neural network in the thesis. We applied the triplanar CNN for segmenting articular cartilage in knee MRI and compared its performance with the same state-of-the-art method which was used as a benchmark for cascaded classifier...

  19. A History of Classified Activities at Oak Ridge National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    2001-01-30

    The facilities that became Oak Ridge National Laboratory (ORNL) were created in 1943 during the United States' super-secret World War II project to construct an atomic bomb (the Manhattan Project). During World War II and for several years thereafter, essentially all ORNL activities were classified. Now, in 2000, essentially all ORNL activities are unclassified. The major purpose of this report is to provide a brief history of ORNL's major classified activities from 1943 until the present (September 2000). This report is expected to be useful to the ORNL Classification Officer and to ORNL's Authorized Derivative Classifiers and Authorized Derivative Declassifiers in their classification review of ORNL documents, especially those documents that date from the 1940s and 1950s.

  20. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine-editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as part of preprocessing. For preprocessing, digitization, segmentation, normalization and thinning are done, assuming that the images have almost no noise. Further, an affine-invariant-moments-based model is used for feature extraction, and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. The comparison of the SVM and Fuzzy classifiers shows that SVM procured better results.

  1. A non-parametric 2D deformable template classifier

    DEFF Research Database (Denmark)

    Schultz, Nette; Nielsen, Allan Aasbjerg; Conradsen, Knut;

    2005-01-01

    relaxation in a Bayesian scheme is used. In the Bayesian likelihood a class density function and its estimate hereof is introduced, which is designed to separate the feature space. The method is verified on data collected in Øresund, Scandinavia. The data come from four geographically different areas. Two...... areas, which are homogeneous with respect to bottom type, are used for training of the deformable template classifier, and the classifier is applied to two areas, which are heterogeneous with respect to bottom type. The classification results are good with a correct classification percent above 94 per...... cent for the bottom type classes, and show that the deformable template classifier can be used for interactive on-line sea floor segmentation of RoxAnn echo sounder data....

  2. A Topic Model Approach to Representing and Classifying Football Plays

    KAUST Repository

    Varadarajan, Jagannadan

    2013-09-09

We address the problem of modeling and classifying American Football offense teams’ plays in video, a challenging example of group activity analysis. Automatic play classification will allow coaches to infer patterns and tendencies of opponents more efficiently, resulting in better strategy planning in a game. We define a football play as a unique combination of player trajectories. To this end, we develop a framework that uses player trajectories as inputs to MedLDA, a supervised topic model. The joint maximization of both likelihood and inter-class margins of MedLDA in learning the topics allows us to learn semantically meaningful play type templates, as well as classify different play types with 70% average accuracy. Furthermore, this method is extended to analyze individual player roles in classifying each play type. We validate our method on a large dataset comprising 271 play clips from real-world football games, which will be made publicly available for future comparisons.

  3. A Rules-Based Approach for Configuring Chains of Classifiers in Real-Time Stream Mining Systems

    Directory of Open Access Journals (Sweden)

    Brian Foo

    2009-01-01

    Full Text Available Networks of classifiers can offer improved accuracy and scalability over single classifiers by utilizing distributed processing resources and analytics. However, they also pose a unique combination of challenges. First, classifiers may be located across different sites that are willing to cooperate to provide services, but are unwilling to reveal proprietary information about their analytics, or are unable to exchange their analytics due to the high transmission overheads involved. Furthermore, processing of voluminous stream data across sites often requires load shedding approaches, which can lead to suboptimal classification performance. Finally, real stream mining systems often exhibit dynamic behavior and thus necessitate frequent reconfiguration of classifier elements to ensure acceptable end-to-end performance and delay under resource constraints. Under such informational constraints, resource constraints, and unpredictable dynamics, utilizing a single, fixed algorithm for reconfiguring classifiers can often lead to poor performance. In this paper, we propose a new optimization framework aimed at developing rules for choosing algorithms to reconfigure the classifier system under such conditions. We provide an adaptive, Markov model-based solution for learning the optimal rule when stream dynamics are initially unknown. Furthermore, we discuss how rules can be decomposed across multiple sites and propose a method for evolving new rules from a set of existing rules. Simulation results are presented for a speech classification system to highlight the advantages of using the rules-based framework to cope with stream dynamics.

  4. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively

  5. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively.

  6. Online classifier adaptation for cost-sensitive learning

    OpenAIRE

    Zhang, Junlin; Garcia, Jose

    2015-01-01

In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume we have a base classifier for a cost-sensitive classification problem, but it is trained with respect to a cost setting different from the desired one. Moreover, we also have some training data samples streaming to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. ...

  7. Learning Continuous Time Bayesian Network Classifiers Using MapReduce

    Directory of Open Access Journals (Sweden)

    Simone Villa

    2014-12-01

    Full Text Available Parameter and structural learning on continuous time Bayesian network classifiers are challenging tasks when you are dealing with big data. This paper describes an efficient scalable parallel algorithm for parameter and structural learning in the case of complete data using the MapReduce framework. Two popular instances of classifiers are analyzed, namely the continuous time naive Bayes and the continuous time tree augmented naive Bayes. Details of the proposed algorithm are presented using Hadoop, an open-source implementation of a distributed file system and the MapReduce framework for distributed data processing. Performance evaluation of the designed algorithm shows a robust parallel scaling.
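
    The map/reduce decomposition for parameter learning can be sketched without Hadoop: mappers emit sufficient statistics (per-state dwell times and transition counts) from their shard of trajectories, and the reducer merges them, after which maximum-likelihood intensities follow directly. The trajectory format and variable names below are illustrative stand-ins, not the paper's.

    ```python
    # Pure-Python map/reduce over continuous-time trajectories: collect dwell
    # times and transition counts per shard, merge, then estimate intensities.
    from functools import reduce

    def mapper(trajectory):
        """trajectory: list of (state, dwell_time) segments for one variable."""
        stats = {"time": {}, "trans": {}}
        for (s, dt), (s2, _) in zip(trajectory, trajectory[1:]):
            stats["time"][s] = stats["time"].get(s, 0.0) + dt
            stats["trans"][(s, s2)] = stats["trans"].get((s, s2), 0) + 1
        last_state, last_dt = trajectory[-1]
        stats["time"][last_state] = stats["time"].get(last_state, 0.0) + last_dt
        return stats

    def reducer(a, b):
        """Merge two partial statistics dicts (the MapReduce combine step)."""
        for key in ("time", "trans"):
            for k, v in b[key].items():
                a[key][k] = a[key].get(k, 0) + v
        return a

    shards = [  # two illustrative data shards
        [("on", 2.0), ("off", 1.0), ("on", 3.0)],
        [("off", 4.0), ("on", 1.0)],
    ]
    stats = reduce(reducer, map(mapper, shards))
    # Maximum-likelihood exit intensity: transitions out of a state / time in it
    q_on = sum(c for (s, _), c in stats["trans"].items() if s == "on") / stats["time"]["on"]
    print(stats["trans"], round(q_on, 3))
    ```

    Because dwell times and counts are additive, the reducer is associative, which is exactly what makes the learning step parallelize cleanly.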

  8. Classifying depth of anesthesia using EEG features, a comparison.

    Science.gov (United States)

    Esmaeili, Vahid; Shamsollahi, Mohammad Bagher; Arefian, Noor Mohammad; Assareh, Amin

    2007-01-01

Various EEG features have been used in depth of anesthesia (DOA) studies. The objective of this study was to find the features, or combinations of features, that best discriminate between different anesthesia states. Conducting a clinical study on 22 patients, we defined 4 distinct anesthetic states: awake, moderate, general anesthesia, and isoelectric. We examined features that have been used in earlier studies using a single-channel EEG signal processing method. The maximum accuracy (99.02%) was achieved using approximate entropy as the feature. Some other features could discriminate a particular state of anesthesia well. We could completely classify the patterns by means of 3 features and a Bayesian classifier.
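
    The best-scoring feature above, approximate entropy, is easy to state: ApEn(m, r) measures how often patterns of length m that match within tolerance r keep matching at length m+1, so a regular signal (as EEG tends to become under deep anesthesia) scores low. The sketch below uses toy sequences rather than EEG, and the parameter choices are illustrative.

    ```python
    # Approximate entropy: phi(m) - phi(m+1), where phi averages the log
    # fraction of length-m templates matching each template within tolerance r.
    import math
    import random

    def approximate_entropy(x, m=2, r=0.2):
        """ApEn of sequence x with embedding dimension m and tolerance r."""
        def phi(m):
            templates = [x[i:i + m] for i in range(len(x) - m + 1)]
            total = 0.0
            for t in templates:
                matches = sum(
                    max(abs(a - b) for a, b in zip(t, u)) <= r for u in templates
                )
                total += math.log(matches / len(templates))
            return total / len(templates)
        return phi(m) - phi(m + 1)

    random.seed(1)
    regular = [i % 2 for i in range(60)]               # alternating 0,1,0,1,...
    irregular = [random.random() for _ in range(60)]   # uniform noise

    apen_reg = approximate_entropy(regular)
    apen_irr = approximate_entropy(irregular)
    print(round(apen_reg, 3), "<", round(apen_irr, 3))
    ```

    The perfectly periodic sequence scores near zero while the noise scores well above it, which is the ordering a DOA index relies on.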

  9. Comparison of machine learning classifiers for influenza detection from emergency department free-text reports.

    Science.gov (United States)

    López Pineda, Arturo; Ye, Ye; Visweswaran, Shyam; Cooper, Gregory F; Wagner, Michael M; Tsui, Fuchiang Rich

    2015-12-01

    Influenza is a yearly recurrent disease that has the potential to become a pandemic. An effective biosurveillance system is required for early detection of the disease. In our previous studies, we have shown that electronic Emergency Department (ED) free-text reports can be of value to improve influenza detection in real time. This paper studies seven machine learning (ML) classifiers for influenza detection, compares their diagnostic capabilities against an expert-built influenza Bayesian classifier, and evaluates different ways of handling missing clinical information from the free-text reports. We identified 31,268 ED reports from 4 hospitals between 2008 and 2011 to form two different datasets: training (468 cases, 29,004 controls), and test (176 cases and 1620 controls). We employed Topaz, a natural language processing (NLP) tool, to extract influenza-related findings and to encode them into one of three values: Acute, Non-acute, and Missing. Results show that all ML classifiers had areas under ROCs (AUC) ranging from 0.88 to 0.93, and performed significantly better than the expert-built Bayesian model. Missing clinical information marked as a value of missing (not missing at random) had a consistently improved performance among 3 (out of 4) ML classifiers when it was compared with the configuration of not assigning a value of missing (missing completely at random). The case/control ratios did not affect the classification performance given the large number of training cases. Our study demonstrates ED reports in conjunction with the use of ML and NLP with the handling of missing value information have a great potential for the detection of infectious diseases.

  10. Classifying genes to the correct Gene Ontology Slim term in Saccharomyces cerevisiae using neighbouring genes with classification learning

    Directory of Open Access Journals (Sweden)

    Tsatsoulis Costas

    2010-05-01

    Full Text Available Abstract Background There is increasing evidence that gene location and surrounding genes influence the functionality of genes in the eukaryotic genome. Knowing the Gene Ontology Slim terms associated with a gene gives us insight into a gene's functionality by informing us how its gene product behaves in a cellular context, using three different ontologies: molecular function, biological process, and cellular component. In this study, we analyzed whether we could classify a gene in Saccharomyces cerevisiae to its correct Gene Ontology Slim term using information about its location in the genome and information from its nearest-neighbouring genes, using classification learning. Results We performed experiments establishing that the MultiBoostAB algorithm with the J48 classifier can correctly classify the Gene Ontology Slim terms of a gene given information about the gene's location and information from its nearest-neighbouring genes for training. Different neighbourhood sizes were examined to determine how many nearest neighbours should be included around each gene to produce better classification rules. Our results show that by incorporating neighbour information from just each gene's two nearest neighbours, the percentage of genes correctly classified to their Gene Ontology Slim term for each ontology reaches over 80%, with high accuracy (reflected in F-measures over 0.80) of the classification rules produced. Conclusions We confirmed that in classifying genes to their correct Gene Ontology Slim term, the inclusion of neighbour information from those genes is beneficial. Knowing the location of a gene and the Gene Ontology Slim information from neighbouring genes gives us insight into that gene's functionality. This benefit is seen by including information from just a gene's two nearest neighbouring genes.

  11. Solid waste bin detection and classification using Dynamic Time Warping and MLP classifier.

    Science.gov (United States)

    Islam, Md Shafiqul; Hannan, M A; Basri, Hassan; Hussain, Aini; Arebey, Maher

    2014-02-01

    The increasing requirement for Solid Waste Management (SWM) has become a significant challenge for municipal authorities. A number of integrated systems and methods have been introduced to overcome this challenge. Many researchers have aimed to develop an ideal SWM system, including approaches involving software-based routing, Geographic Information Systems (GIS), Radio-Frequency Identification (RFID), or sensor-equipped intelligent bins. Image processing solutions for Solid Waste (SW) collection have also been developed; however, when capturing the bin image, it is challenging to position the camera so that the bin area is centered in the image. As yet, there is no ideal system that can correctly estimate the amount of SW. This paper briefly discusses an efficient image processing solution to overcome these problems. Dynamic Time Warping (DTW) was used for detecting and cropping the bin area, and a Gabor wavelet (GW) was introduced for feature extraction from the waste bin image. The image features were used to train the classifier. A Multi-Layer Perceptron (MLP) classifier was used to classify the waste bin level and estimate the amount of waste inside the bin. The area under the Receiver Operating Characteristic (ROC) curve was used to statistically evaluate classifier performance. The results of the developed system are comparable to previous image processing based systems. The system demonstration using DTW with GW for feature extraction and an MLP classifier led to promising results with respect to the accuracy of waste level estimation (98.50%). The application can be used to optimize the routing of waste collection based on the estimated bin level. PMID:24238802
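
    The DTW step used for locating the bin area has a standard dynamic-programming form. A minimal sketch for 1-D sequences follows (illustrative only, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance.

    d[i][j] holds the minimum cumulative cost of aligning a[:i] with b[:j];
    each cell extends the cheapest of the three admissible predecessor moves.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

    The key property exploited for template matching is that a time-stretched copy of a profile still has distance 0, unlike with a pointwise (Euclidean) comparison.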

  12. Airborne Dual-Wavelength LiDAR Data for Classifying Land Cover

    Directory of Open Access Journals (Sweden)

    Cheng-Kai Wang

    2014-01-01

    Full Text Available This study demonstrated the potential of using dual-wavelength airborne light detection and ranging (LiDAR) data to classify land cover. Dual-wavelength LiDAR data were acquired from two airborne LiDAR systems that emitted pulses of light in near-infrared (NIR) and middle-infrared (MIR) lasers. The major features of the LiDAR data, such as surface height, echo width, and dual-wavelength amplitude, were used to represent the characteristics of land cover. Based on the major features of land cover, a support vector machine was used to classify six types of suburban land cover: road and gravel, bare soil, low vegetation, high vegetation, roofs, and water bodies. Results show that using dual-wavelength LiDAR-derived information (e.g., amplitudes at NIR and MIR wavelengths) could compensate for the limitations of using single-wavelength LiDAR information (i.e., poor discrimination of low vegetation) when classifying land cover.

  13. Support vector machines classifiers of physical activities in preschoolers

    Science.gov (United States)

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years old were asked to participate in a s...

  14. Recognition of Characters by Adaptive Combination of Classifiers

    Institute of Scientific and Technical Information of China (English)

    WANG Fei; LI Zai-ming

    2004-01-01

    In this paper, a visual feature space based on long horizontal strokes, long vertical strokes, and radicals is presented. An adaptive combination of classifiers, whose coefficients vary with the input pattern, is also proposed. Experiments show that the approach is promising for character recognition in video sequences.

  15. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Science.gov (United States)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
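
    The non-iterative training the authors highlight is the defining property of polynomial classifiers: expand the input into monomial features, then solve a single linear least-squares problem. A toy two-feature sketch under hypothetical names (a degree-2 expansion solving XOR, not the ArSL system):

```python
def poly_features(x):
    # Degree-2 monomials of a 2-feature input: [1, x1, x2, x1*x2].
    x1, x2 = x
    return [1.0, x1, x2, x1 * x2]

def solve(a, b):
    """Gaussian elimination with partial pivoting for A w = b."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (m[r][n] - sum(m[r][c] * w[c] for c in range(r + 1, n))) / m[r][r]
    return w

def train_poly(xs, ys):
    """One linear solve of the normal equations -- no iterative training."""
    phi = [poly_features(x) for x in xs]
    k = len(phi[0])
    ata = [[sum(p[i] * p[j] for p in phi) for j in range(k)] for i in range(k)]
    atb = [sum(p[i] * y for p, y in zip(phi, ys)) for i in range(k)]
    w = solve(ata, atb)
    return lambda x: 1 if sum(wi * f for wi, f in zip(w, poly_features(x))) >= 0 else -1
```

    Because the expansion makes the classes linearly separable in the lifted space, training reduces to one matrix solve, which is the scalability advantage claimed in the abstract.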

  16. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.

  17. Gene-expression Classifier in Papillary Thyroid Carcinoma

    DEFF Research Database (Denmark)

    Londero, Stefano Christian; Jespersen, Marie Louise; Krogdahl, Annelise;

    2016-01-01

    BACKGROUND: No reliable biomarker for metastatic potential in the risk stratification of papillary thyroid carcinoma exists. We aimed to develop a gene-expression classifier for metastatic potential. MATERIALS AND METHODS: Genome-wide expression analyses were used. Development cohort: freshly...

  18. Packet Payload Inspection Classifier in the Network Flow Level

    Directory of Open Access Journals (Sweden)

    N.Kannaiya Raja

    2012-06-01

    Full Text Available Modern networks have highly congested channels and dynamically created, high-risk topologies, so a flow classifier is needed to track packet movement in the network. In this paper we develop and evaluate a classifier for TCP/UDP/FTP/ICMP traffic based on payload information, port numbers, and the number of flags in the packet, for high-volume packet flows in the network. The primary motivation of this paper is to identify the end user through payload packet inspection of legitimately used protocols, with a hypothesis-testing approach used for evaluation. A tamper-resistant flow classifier was used in one network context and developed in two different domains, Berkeley and Cambridge; classification accuracy was readily assessed through packet inspection using the different flags in the packets. While supervised classifier training specific to the new domain results in much better classification accuracy, we also formed a new approach to detect malicious packets, classify packet flows, and send correct packets to the destination address.

  19. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

    Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH) experiments, which we represent using histograms of local binary patterns (LBP), and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layers. On held-out data, the layer classifiers achieve accuracy above 94% (AUC) by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.
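
    The local binary pattern representation used here assigns each interior pixel an 8-bit code from comparisons with its 8 neighbours, then histograms the codes. A minimal sketch over a nested-list image, using the common "neighbour >= center" convention (one of several LBP variants):

```python
def lbp_histogram(img):
    """8-neighbour LBP codes over interior pixels, as a normalised
    256-bin histogram (img is a list of rows of grey values)."""
    # Neighbour offsets, clockwise from the top-left pixel.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    h, w = len(img), len(img[0])
    total = 0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di][j + dj] >= c:  # set bit when neighbour >= center
                    code |= 1 << bit
            hist[code] += 1
            total += 1
    return [v / total for v in hist]
```

    The histogram is what gets fed to the classifier; computing it at several image scales, as the paper does, just means repeating this on downsampled copies.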

  20. Subtractive fuzzy classifier based driver distraction levels classification using EEG.

    Science.gov (United States)

    Wali, Mousa Kadhim; Murugappan, Murugappan; Ahmad, Badlishah

    2013-09-01

    [Purpose] In earlier studies of driver distraction, researchers classified distraction into two levels (not distracted, and distracted). This study classified four levels of distraction (neutral, low, medium, high). [Subjects and Methods] Fifty Asian subjects (n=50, 43 males, 7 females), age range 20-35 years, who were free from any disease, participated in this study. Wireless EEG signals were recorded by 14 electrodes during four types of distraction stimuli (Global Positioning System (GPS), music player, short message service (SMS), and mental tasks). We derived the amplitude spectrum of three different EEG frequency bands: theta, alpha, and beta. Then, based on a fusion of the discrete wavelet packet transform and the fast Fourier transform, we extracted two features (power spectral density, spectral centroid frequency) for different wavelets (db4, db8, sym8, and coif5). Mean ± SD was calculated and analysis of variance (ANOVA) was performed. A fuzzy inference system classifier was applied to the different wavelets using the two extracted features. [Results] The results indicate that the two features of sym8 possess highly significant discrimination across the four levels of distraction, and the best average accuracy achieved by the subtractive fuzzy classifier was 79.21%, using the power spectral density feature extracted with the sym8 wavelet. [Conclusion] These findings suggest that EEG signals can be used to monitor distraction level intensity in order to alert drivers to high levels of distraction.
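
    The spectral centroid frequency feature is the power-weighted mean frequency of the signal's spectrum. A minimal stdlib-only sketch using a naive DFT (a real pipeline would use an FFT; this is illustrative, not the study's code):

```python
import cmath
import math

def spectral_centroid(x, fs):
    """Power spectral density via a naive DFT over the one-sided spectrum,
    then the PSD-weighted mean frequency (the spectral centroid)."""
    n = len(x)
    half = n // 2 + 1  # one-sided spectrum: bins 0 .. n/2
    psd = []
    for k in range(half):
        xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        psd.append(abs(xk) ** 2)
    freqs = [k * fs / n for k in range(half)]
    return sum(f * p for f, p in zip(freqs, psd)) / sum(psd)
```

    For a pure tone the centroid sits at the tone's frequency; broadband (high-distraction) EEG shifts it toward the middle of the band, which is what makes it a usable discriminating feature.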

  1. 18 CFR 367.18 - Criteria for classifying leases.

    Science.gov (United States)

    2010-04-01

    ... classification of the lease under the criteria in paragraph (a) of this section had the changed terms been in... the lessee) must not give rise to a new classification of a lease for accounting purposes. ... ACT General Instructions § 367.18 Criteria for classifying leases. (a) If, at its inception, a...

  2. Classifying aquatic macrophytes as indicators of eutrophication in European lakes

    NARCIS (Netherlands)

    Penning, W.E.; Mjelde, M.; Dudley, B.; Hellsten, S.; Hanganu, J.; Kolada, A.; van den Berg, Marcel S.; Poikane, S.; Phillips, G.; Willby, N.; Ecke, F.

    2008-01-01

    Aquatic macrophytes are one of the biological quality elements in the Water Framework Directive (WFD) for which status assessments must be defined. We tested two methods to classify macrophyte species and their response to eutrophication pressure: one based on percentiles of occurrence along a phosp

  3. Scoring and Classifying Examinees Using Measurement Decision Theory

    Science.gov (United States)

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…
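
    The core of measurement decision theory is Bayes' rule over a small set of mastery states, starting from the conditional probabilities of a correct response in each state. A minimal sketch with hypothetical numbers:

```python
def classify_mdt(priors, p_correct, responses):
    """Posterior over mastery states from an item-response pattern.

    priors:     {state: P(state)}
    p_correct:  {state: [P(item_k correct | state), ...]}
    responses:  list of 0/1 item scores
    """
    post = {}
    for state, prior in priors.items():
        like = prior
        for p, r in zip(p_correct[state], responses):
            # Bernoulli likelihood of each observed item score.
            like *= p if r else (1.0 - p)
        post[state] = like
    z = sum(post.values())
    return {state: v / z for state, v in post.items()}
```

    The examinee is assigned to the state with the largest posterior; with even a few items, the posterior concentrates quickly when the states' conditional probabilities differ.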

  4. Combined Approach of PNN and Time-Frequency as the Classifier for Power System Transient Problems

    Directory of Open Access Journals (Sweden)

    Aslam Pervez Memon

    2013-04-01

    Full Text Available Transients in power systems cause serious disturbances to the reliability, safety and economy of the system. Transient signals possess nonstationary characteristics, for which both frequency and time-varying information are required for analysis. Hence, it is vital first to detect and classify the type of transient fault and then to mitigate it. This article proposes a time-frequency and FFNN (Feedforward Neural Network) approach for the classification of power system transient problems. In this work, all the major categories of transients are simulated, de-noised, and decomposed with the DWT (Discrete Wavelet Transform) and MRA (Multiresolution Analysis) algorithm, and then distinctive features are extracted to obtain an optimal vector as input for training the PNN (Probabilistic Neural Network) classifier. The simulation results of the proposed approach prove its simplicity, accuracy and effectiveness for the automatic detection and classification of PST (Power System Transient) types.
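
    One level of a DWT decomposition splits a signal into approximation and detail coefficients. An orthonormal Haar sketch, the simplest wavelet, is shown below for illustration; an MRA as used in the paper would iterate this on the approximation and typically use longer filters (e.g. Daubechies):

```python
import math

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: (approximation, detail)."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfectly reconstructs the even-length input."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x
```

    Transient detection exploits the detail coefficients: they are zero on smooth segments and spike where the signal changes abruptly.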

  5. 14 CFR 1203.400 - Specific classifying guidance.

    Science.gov (United States)

    2010-01-01

    ... operational information and material, and in some exceptional cases scientific information falling within any..., subsystem or component. (c) Scientific or technological information in an area where an advanced military... intentions. (g) Information which reveals an unusually significant scientific or technological...

  6. Computer-aided diagnosis system for classifying benign and malignant thyroid nodules in multi-stained FNAB cytological images

    International Nuclear Information System (INIS)

    An automated computer-aided diagnosis system is developed to classify benign and malignant thyroid nodules using multi-stained fine needle aspiration biopsy (FNAB) cytological images. In the first phase, image segmentation is performed to remove the background staining information and retain the appropriate foreground cell objects in the cytological images, using mathematical morphology and watershed transform segmentation methods. Subsequently, statistical features are extracted using two-level discrete wavelet transform (DWT) decomposition, gray level co-occurrence matrix (GLCM) and Gabor filter based methods. The classifiers k-nearest neighbor (k-NN), Elman neural network (ENN) and support vector machine (SVM) are tested for classifying benign and malignant thyroid nodules. The combination of watershed segmentation, GLCM features and the k-NN classifier results in the lowest diagnostic accuracy, 60%. The highest diagnostic accuracy, 93.33%, is achieved by the ENN classifier trained with the statistical features extracted by the Gabor filter bank from the images segmented by the morphology and watershed transform segmentation methods. It is also observed that the SVM classifier achieves its highest diagnostic accuracy of 90% for DWT and Gabor filter based features along with the morphology and watershed transform segmentation methods. The experimental results suggest that the developed system with multi-stained thyroid FNAB images would be useful for identifying thyroid cancer irrespective of the staining protocol used.
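
    GLCM features of the kind used here are derived from a matrix of co-occurring grey-level pairs at a fixed pixel offset. A minimal sketch computing the normalised matrix plus the common contrast and energy statistics (illustrative, not the paper's exact feature set):

```python
def glcm_features(img, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one offset (dx, dy),
    plus the contrast and energy statistics derived from it."""
    p = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    h, w = len(img), len(img[0])
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                p[img[i][j]][img[i2][j2]] += 1.0
                pairs += 1
    p = [[v / pairs for v in row] for row in p]
    # Contrast weights co-occurrences by squared grey-level difference;
    # energy is the sum of squared matrix entries (uniformity).
    contrast = sum(p[a][b] * (a - b) ** 2
                   for a in range(levels) for b in range(levels))
    energy = sum(v * v for row in p for v in row)
    return p, contrast, energy
```

    A smooth region yields zero contrast and energy 1; textured cell regions spread mass off the diagonal, which is what the classifier picks up.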

  7. Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition

    Energy Technology Data Exchange (ETDEWEB)

    Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)

    2009-11-30

    This paper proposes a method for cardiac arrhythmia recognition using a nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of the P-wave, QRS-complexes, and T-wave. An iterated function system (IFS) uses nonlinear interpolation in the map and uses similarity maps to construct various data sequences, including the fractal patterns of supraventricular ectopic beats, bundle branch ectopic beats, and ventricular ectopic beats. Grey relational analysis (GRA) is proposed to recognize normal heartbeats and cardiac arrhythmias. The nonlinear interpolation terms produce a family of functions with fractal dimension (FD), the so-called nonlinear interpolation function (NIF), and make fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid method demonstrates greater efficiency and higher accuracy in recognizing ECG signals.
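
    The grey relational analysis step scores each candidate pattern by its pointwise closeness to a reference pattern. A minimal sketch with the usual distinguishing coefficient rho = 0.5 (illustrative; it assumes not all sequences are identical, so the maximum deviation is nonzero):

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence against the
    reference; higher means more similar (1.0 for an exact match)."""
    # Pointwise absolute deviations of every candidate from the reference.
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    dmin, dmax = min(flat), max(flat)
    grades = []
    for row in deltas:
        # Grey relational coefficient at each point, then their mean.
        coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades
```

    Classification then amounts to picking the beat-class template with the highest grade for an incoming pattern.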

  8. Unascertained measurement classifying model of goaf collapse prediction

    Institute of Scientific and Technical Information of China (English)

    DONG Long-jun; PENG Gang-jian; FU Yu-hua; BAI Yun-fei; LIU You-fang

    2008-01-01

    Based on an optimized forecast method of unascertained classification, an unascertained measurement classifying (UMC) model to predict mining-induced goaf collapse was established. The discriminating factors of the model are influential factors including overburden layer type, overburden layer thickness, the complexity of the geologic structure, the inclination angle of the coal bed, the volume rate of the cavity region, the vertical goaf depth from the surface, and the spatial superposition of goaf layers. The unascertained measurement (UM) function of each factor was calculated. The classification center and grade of each sample awaiting forecast were determined by the UM distance between the synthesis index of the sample and the index of each classification. The training samples were tested by the established model, and the correct rate is 100%. Furthermore, seven samples awaiting forecast were predicted by the UMC model. The results show that the forecast results are fully consistent with the actual situation.

  9. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the survival outcome. Digital mammograms are one of the most effective means of detecting possible breast anomalies at early stages. Digital mammograms supported by Computer Aided Diagnostic (CAD) systems help radiologists take reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machine classifier is used to analyze 206 mammogram images from the MIAS database according to the severity of the abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal and malignant samples, 87.25% accuracy for normal versus benign samples, and 89.22% accuracy for benign versus malignant samples. The study reveals that features extracted in a hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.

  10. The fuzzy gene filter: A classifier performance assessment

    CERN Document Server

    Perez, Meir

    2011-01-01

    The Fuzzy Gene Filter (FGF) is an optimised Fuzzy Inference System designed to rank genes in order of differential expression, based on expression data generated in a microarray experiment. This paper examines the effectiveness of the FGF for feature selection using various classification architectures. The FGF is compared to three of the most common gene ranking algorithms: the t-test, the Wilcoxon test and ROC curve analysis. Four classification schemes are used to compare the performance of the FGF vis-a-vis the standard approaches: K Nearest Neighbour (KNN), Support Vector Machine (SVM), Naive Bayesian Classifier (NBC) and Artificial Neural Network (ANN). A nested stratified Leave-One-Out Cross Validation scheme is used to identify the optimal number of top-ranking genes, as well as the optimal classifier parameters. Two microarray data sets are used for the comparison: a prostate cancer data set and a lymphoma data set.

  11. Efficient iris recognition via ICA feature and SVM classifier

    Institute of Scientific and Technical Information of China (English)

    Wang Yong; Xu Luping

    2007-01-01

    To improve the flexibility and reliability of an iris recognition algorithm while maintaining the iris recognition success rate, an iris recognition approach combining SVM with an ICA feature extraction model is presented. SVM is a classifier that has demonstrated high generalization capability in object recognition problems, and ICA is a feature extraction technique that can be considered a generalization of principal component analysis. In this paper, ICA is used to generate a set of subsequences of feature vectors for iris feature extraction. Each subsequence is then classified using support vector machine sequence kernels. Experiments on the CASIA iris database indicate that the combination of SVM and ICA can improve iris recognition flexibility and reliability while maintaining the recognition success rate.

  12. Dendritic spine detection using curvilinear structure detector and LDA classifier.

    Science.gov (United States)

    Zhang, Yong; Zhou, Xiaobo; Witt, Rochelle M; Sabatini, Bernardo L; Adjeroh, Donald; Wong, Stephen T C

    2007-06-01

    Dendritic spines are small, bulbous cellular compartments that carry synapses. Biologists have been studying biochemical pathways by examining the morphological and statistical changes of dendritic spines at the intracellular level. In this paper, a novel approach is presented for automated detection of dendritic spines in neuron images. The dendritic spines are recognized as small objects of variable shape attached to, or detached from, multiple dendritic backbones in the 2D projection of the image stack along the optical direction. We extend the curvilinear structure detector to extract the boundaries as well as the centerlines of the dendritic backbones and spines. We further build a classifier using Linear Discriminant Analysis (LDA) to classify the attached spines into valid and invalid types, to improve the accuracy of spine detection. We evaluate the proposed approach by comparing with manual results in terms of backbone length, spine number, spine length, and spine density.
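
    A two-class Fisher LDA of the kind used for the valid/invalid spine decision projects features onto w = Sw^-1 (mA - mB) and thresholds at the midpoint of the projected class means. A minimal 2-D sketch on hypothetical data (not the paper's spine features):

```python
def fisher_lda(class_a, class_b):
    """Two-class Fisher discriminant for 2-D feature vectors."""
    def mean(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in (0, 1)]

    def scatter(pts, m):
        # Within-class scatter matrix (unnormalised covariance).
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in (0, 1)] for i in (0, 1)]
    # Invert the 2x2 pooled scatter matrix explicitly.
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    # Decision threshold at the midpoint of the projected class means.
    thresh = (sum(wi * mi for wi, mi in zip(w, ma)) +
              sum(wi * mi for wi, mi in zip(w, mb))) / 2.0
    return lambda p: "A" if sum(wi * pi for wi, pi in zip(w, p)) > thresh else "B"
```

    In the paper's setting, "A"/"B" would correspond to valid/invalid spine candidates described by shape features rather than raw coordinates.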

  13. Feasibility study for banking loan using association rule mining classifier

    Directory of Open Access Journals (Sweden)

    Agus Sasmito Aribowo

    2015-03-01

    Full Text Available The problem of bad loans in a koperasi (credit cooperative) can be reduced if the koperasi can detect whether a member will repay or default on a loan. The method used in this study to identify characteristic patterns of prospective borrowers is called the Association Rule Mining Classifier. Members' credit patterns are converted into knowledge and used to classify other borrowers. The classification process separates borrowers into two groups: good-credit and bad-credit. The research used prototyping to implement the design as an application with a programming language and development tools. The association rule mining process uses the Weighted Itemset Tidset (WIT-tree) method. The results show that the method can predict prospective customers' credit. The training data set comprised 120 customers with known credit histories; the test data comprised 61 customers applying for credit. The results concluded that 42 customers will pay off their loans and 19 will default.
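
    Association-rule classification rests on the support and confidence of rules mined from transactions. A brute-force sketch over toy loan "baskets" with hypothetical item names (the WIT-tree method in the paper exists precisely to avoid this exhaustive enumeration on real data):

```python
from itertools import combinations

def mine_rules(transactions, min_support=0.5, min_confidence=0.7):
    """Exhaustive support/confidence rule mining over small itemsets."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        # Fraction of transactions containing the whole itemset.
        return sum(1 for t in transactions if itemset <= t) / n

    rules = []
    for size in range(2, len(items) + 1):
        for iset in combinations(items, size):
            iset = frozenset(iset)
            s = support(iset)
            if s < min_support:
                continue
            # Split the frequent itemset into antecedent -> consequent.
            for k in range(1, len(iset)):
                for lhs in combinations(sorted(iset), k):
                    lhs = frozenset(lhs)
                    conf = s / support(lhs)
                    if conf >= min_confidence:
                        rules.append((set(lhs), set(iset - lhs), s, conf))
    return rules
```

    A classifier then applies the highest-confidence matching rule to a new applicant's attributes to assign the good-credit or bad-credit group.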

  14. Cell signaling-based classifier predicts response to induction therapy in elderly patients with acute myeloid leukemia.

    Directory of Open Access Journals (Sweden)

    Alessandra Cesano

    Full Text Available Single-cell network profiling (SCNP) data generated from multi-parametric flow cytometry analysis of bone marrow (BM) and peripheral blood (PB) samples collected from patients >55 years old with non-M3 AML were used to train and validate a diagnostic classifier (DXSCNP) for predicting response to standard induction chemotherapy (complete response [CR] or CR with incomplete hematologic recovery [CRi] versus resistant disease [RD]). SCNP-evaluable patients from four SWOG AML trials were randomized between Training (N = 74 patients with CR, CRi or RD; BM set = 43; PB set = 57) and Validation Analysis Sets (N = 71; BM set = 42, PB set = 53). Cell survival, differentiation, and apoptosis pathway signaling were used as potential inputs for DXSCNP. Five DXSCNP classifiers were developed on the SWOG Training set and tested for prediction accuracy in an independent BM verification sample set (N = 24) from ECOG AML trials to select the final classifier, which was a significant predictor of CR/CRi (area under the receiver operating characteristic curve AUROC = 0.76, p = 0.01). The selected classifier was then validated in the SWOG BM Validation Set (AUROC = 0.72, p = 0.02). Importantly, a classifier developed using only clinical and molecular inputs from the same sample set (DXCLINICAL2) lacked prediction accuracy: AUROC = 0.61 (p = 0.18) in the BM Verification Set and 0.53 (p = 0.38) in the BM Validation Set. Notably, the DXSCNP classifier was still significant in predicting response in the BM Validation Analysis Set after controlling for DXCLINICAL2 (p = 0.03), showing that DXSCNP provides information that is independent from that provided by currently used prognostic markers. Taken together, these data show that the proteomic classifier may provide prognostic information relevant to treatment planning beyond genetic mutations and traditional prognostic factors in elderly AML.
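
    The AUROC figures reported throughout equal the probability that a randomly chosen responder scores above a randomly chosen non-responder. A minimal sketch of that rank-based computation, with the standard half-credit for ties (illustrative):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney identity:
    the fraction of (positive, negative) pairs the score orders correctly,
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

    An AUROC of 0.5 corresponds to a non-informative classifier, which is why DXCLINICAL2's 0.53 in the validation set indicates no predictive value.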

  15. The three-dimensional origin of the classifying algebra

    OpenAIRE

    Fuchs, Jurgen; Schweigert, Christoph; Stigner, Carl

    2009-01-01

    It is known that reflection coefficients for bulk fields of a rational conformal field theory in the presence of an elementary boundary condition can be obtained as representation matrices of irreducible representations of the classifying algebra, a semisimple commutative associative complex algebra. We show how this algebra arises naturally from the three-dimensional geometry of factorization of correlators of bulk fields on the disk. This allows us to derive explicit expressions for the str...

  16. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    OpenAIRE

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-01-01

    This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from students' argumentative essays. The paragraphs selected were initial, middle, and final paragraphs, and their positioning related to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion ...

  17. Application of dispersion analysis for determining classifying separation size

    OpenAIRE

    Golomeova, Mirjana; Golomeov, Blagoj; Krstev, Boris; Zendelska, Afrodita; Krstev, Aleksandar

    2009-01-01

    The paper presents a procedure for mathematically modelling the cut point of copper ore classification by a laboratory hydrocyclone. The application of dispersion analysis and planning with a Latin square makes possible a significant reduction in the number of tests. Tests were carried out with a D-100 mm hydrocyclone. The variable parameters are as follows: solids content of the pulp, underflow diameter, overflow diameter and inlet pressure. The cut point is determined from the partition curve. The obtained mathemat...

  18. Higher operations in string topology of classifying spaces

    OpenAIRE

    Lahtinen, Anssi

    2015-01-01

    Examples of non-trivial higher string topology operations have been regrettably rare in the literature. In this paper, working in the context of string topology of classifying spaces, we provide explicit calculations of a wealth of non-trivial higher string topology operations associated to a number of different Lie groups. As an application of these calculations, we obtain an abundance of interesting homology classes in the twisted homology groups of automorphism groups of free groups, the o...

  19. Controlled self-organisation using learning classifier systems

    OpenAIRE

    Richter, Urban Maximilian

    2009-01-01

    As the complexity of technical systems increases, breakdowns occur more often. The mission of organic computing is to tame these challenges by providing degrees of freedom for self-organised behaviour. To achieve these goals, new methods have to be developed. The proposed observer/controller architecture constitutes one way to achieve controlled self-organisation. To improve its design, multi-agent scenarios are investigated. In particular, learning using learning classifier systems is addressed.

  20. Learning Classifier Systems: A Complete Introduction, Review, and Roadmap

    OpenAIRE

    Urbanowicz, Ryan J; Moore, Jason H

    2009-01-01

    If complexity is your problem, learning classifier systems (LCSs) may offer a solution. These rule-based, multifaceted, machine learning algorithms originated and have evolved in the cradle of evolutionary biology and artificial intelligence. The LCS concept has inspired a multitude of implementations adapted to manage the different problem domains to which it has been applied (e.g., autonomous robotics, classification, knowledge discovery, and modeling). One field that is taking increasing n...

  1. Learning Rates for ${l}^{1}$ -Regularized Kernel Classifiers

    OpenAIRE

    Hongzhi Tong; Di-Rong Chen; Fenghong Yang

    2013-01-01

    We consider a family of classification algorithms generated from a regularization kernel scheme associated with ${l}^{1}$ -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derive...

  2. Learning Classifiers from Synthetic Data Using a Multichannel Autoencoder

    OpenAIRE

    Zhang, Xi; Fu, Yanwei; Zang, Andi; Sigal, Leonid; Agam, Gady

    2015-01-01

    We propose a method for using synthetic data to help learn classifiers. Synthetic data, even if generated from real data, normally exhibit a shift from the distribution of real data in feature space. To bridge the gap between real and synthetic data, and to learn from both jointly, this paper proposes a Multichannel Autoencoder (MCAE). We show that by using MCAE, it is possible to learn a better feature representation for classification. To evaluate the proposed a...

  3. Classifying and Visualizing Motion Capture Sequences using Deep Neural Networks

    OpenAIRE

    Cho, Kyunghyun; Chen, Xi

    2013-01-01

    Gesture recognition using motion capture data and depth sensors has recently drawn more attention in vision recognition. Currently, most systems only classify datasets with a couple of dozen different actions. Moreover, feature extraction from the data is often computationally complex. In this paper, we propose a novel system to recognize actions from skeleton data with simple but effective features using deep neural networks. Features are extracted for each frame based on the relative...

  4. Classifying Floating Potential Measurement Unit Data Products as Science Data

    Science.gov (United States)

    Coffey, Victoria; Minow, Joseph

    2015-01-01

    We are Co-Investigators for the Floating Potential Measurement Unit (FPMU) on the International Space Station (ISS) and members of the FPMU operations and data analysis team. We are providing this memo for the purpose of classifying raw and processed FPMU data products and ancillary data as NASA science data with unrestricted, public availability in order to best support science uses of the data.

  5. Image Replica Detection based on Binary Support Vector Classifier

    OpenAIRE

    Maret, Y.; Dufaux, F.; Ebrahimi, T.

    2005-01-01

    In this paper, we present a system for image replica detection. More specifically, the technique is based on the extraction of 162 features corresponding to texture, color and gray-level characteristics. These features are then weighted and statistically normalized. To improve training and performance, the feature space dimensionality is reduced. Lastly, a decision function is generated to classify the test image as a replica or non-replica of a given reference image. Experimental results sho...

  6. Classifying racist texts using a support vector machine

    OpenAIRE

    Greevy, Edel; Alan F. SMEATON

    2004-01-01

    In this poster we present an overview of the techniques we used to develop and evaluate a text categorisation system to automatically classify racist texts. Detecting racism is difficult because the presence of indicator words is insufficient to indicate racist texts, unlike some other text classification tasks. Support Vector Machines (SVM) are used to automatically categorise web pages based on whether or not they are racist. Different interpretations of what constitutes a term are taken, a...
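    The pipeline sketched in this abstract (term features fed to a linear SVM to categorise texts) can be illustrated in a few lines of scikit-learn. The snippet below is a hedged stand-in, not the poster's system: it uses an invented, neutral toy task (sport vs. cooking snippets) in place of the web-page corpus, and the unigram+bigram term interpretation and all parameters are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus; the real system categorised whole web pages.
docs = [
    "the striker scored a late goal in the match",
    "the referee booked two players before half time",
    "fans cheered as the team won the league",
    "simmer the onions then add garlic and stock",
    "whisk the eggs and fold in the flour gently",
    "season the soup and serve with fresh bread",
]
labels = ["sport", "sport", "sport", "cooking", "cooking", "cooking"]

# TF-IDF turns each document into a weighted term vector (here unigrams and
# bigrams, one possible "interpretation of a term"); LinearSVC then learns a
# separating hyperplane between the two categories.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(docs, labels)
```

    On this toy data, `clf.predict(["the team scored a goal in the match"])` lands on the sport side because of the shared vocabulary.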

  7. VIRTUAL MINING MODEL FOR CLASSIFYING TEXT USING UNSUPERVISED LEARNING

    OpenAIRE

    S. Koteeswaran; E. Kannan; P. Visu

    2014-01-01

    Data mining is emerging in many real-world areas; among its most prominent applications are research fields such as big data, multimedia mining, and text mining. Each researcher demonstrates a contribution with substantial improvements expressed through mathematical representation. Solutions to each problem are classified into mathematical and implementation models. The mathematical model relates to the straightforward rules and formulas that are re...

  8. An alternative educational indicator for classifying Secondary Schools in Portugal

    OpenAIRE

    Gonçalves, A. Manuela; Costa, Marco; De Oliveira, Mário

    2015-01-01

    This paper carries out a statistical study for classifying Portuguese secondary schools (both mainland and the islands of the Azores and Madeira), taking into account the results achieved by their students in both national examinations and internal assessment. The main aim is to identify groups of schools with different performance levels by considering the sub-national public and private education systems as well as their respective geographic lo...

  9. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong

    2005-01-01

    A face recognition scheme is proposed, wherein a face image is preprocessed by pixel averaging and energy normalizing to reduce data dimension and brightness variation effect, followed by the Fourier transform to estimate the spectrum of the preprocessed image. The principal component analysis is conducted on the spectra of a face image to obtain eigen features. Combining eigen features with a Parzen classifier, experiments are taken on the ORL face database.
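    For readers unfamiliar with the decision rule, a Parzen classifier assigns a sample to the class whose kernel density estimate at that point is highest. The following minimal sketch uses toy 2-D features and a Gaussian window; it is an illustration of the rule only, not the paper's eigen-feature pipeline or the ORL data.

```python
import numpy as np

def parzen_classify(X_train, y_train, x, h=1.0):
    """Pick the class whose Gaussian Parzen-window density estimate at x is largest."""
    best_class, best_score = None, -1.0
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        sq_dists = np.sum((Xc - x) ** 2, axis=1)
        # Average Gaussian kernel response; the normalising constant is the
        # same for every class, so it can be dropped from the comparison.
        score = np.mean(np.exp(-sq_dists / (2.0 * h ** 2)))
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

    With two well-separated clusters of training points, a query near either cluster is assigned to that cluster's class.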

  10. College students classified with ADHD and the foreign language requirement.

    Science.gov (United States)

    Sparks, Richard L; Javorsky, James; Philips, Lois

    2004-01-01

    The conventional assumption of most disability service providers is that students classified as having attention-deficit/hyperactivity disorder (ADHD) will experience difficulties in foreign language (FL) courses. However, the evidence in support of this assumption is anecdotal. In this empirical investigation, the demographic profiles, overall academic performance, college entrance scores, and FL classroom performance of 68 college students classified as having ADHD were examined. All students had graduated from the same university over a 5-year period. The findings showed that all 68 students had completed the university's FL requirement by passing FL courses. The students' college entrance scores were similar to the middle 50% of freshmen at this university, and their graduating grade point average was similar to the typical graduating senior at the university. The students had participated in both lower (100) and upper (200, 300, 400) level FL courses and had achieved mostly average and above-average grades (A, B, C) in these courses. One student had majored and eight students had minored in an FL. Two thirds of the students passed all of their FL courses without the use of instructional accommodations. In this study, the classification of ADHD did not appear to interfere with participants' performance in FL courses. The findings suggest that students classified as having ADHD should enroll in and fulfill the FL requirement by passing FL courses. PMID:15493238

  11. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Science.gov (United States)

    Shin, Seung Yeon; Lee, Soochahn; Yun, Il Dong; Jung, Ho Yub; Heo, Yong Seok; Kim, Sun Mi; Lee, Kyoung Mu

    2015-01-01

    In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs. PMID:26630496

  12. Image classifiers for the cell transformation assay: a progress report

    Science.gov (United States)

    Urani, Chiara; Crosta, Giovanni F.; Procaccianti, Claudio; Melchioretto, Pasquale; Stefanini, Federico M.

    2010-02-01

    The Cell Transformation Assay (CTA) is one of the promising in vitro methods used to predict human carcinogenicity. The neoplastic phenotype is monitored in suitable cells by the formation of foci and observed by light microscopy after staining. Foci exhibit three types of morphological alterations: Type I, characterized by partially transformed cells, and Types II and III, considered to have undergone neoplastic transformation. Foci recognition and scoring have always been carried out visually by a trained human expert. In order to classify foci images automatically, one needs to implement an image understanding algorithm. Herewith, two such algorithms are described and compared by performance. The supervised classifier (as described in previous articles) relies on principal components analysis embedded in a training feedback loop to process the morphological descriptors extracted by "spectrum enhancement" (SE). The unsupervised classifier architecture is based on "partitioning around medoids" and is applied to image descriptors taken from histogram moments (HM). Preliminary results suggest the inadequacy of the HMs as image descriptors as compared to those from SE. A justification derived from elementary arguments of real analysis is provided in the Appendix.

  13. Classifying prosthetic use via accelerometry in persons with transtibial amputations

    Directory of Open Access Journals (Sweden)

    Morgan T. Redfield, MSEE

    2013-12-01

    Full Text Available Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.

  14. Classifying prosthetic use via accelerometry in persons with transtibial amputations.

    Science.gov (United States)

    Redfield, Morgan T; Cagle, John C; Hafner, Brian J; Sanders, Joan E

    2013-01-01

    Knowledge of how persons with amputation use their prostheses and how this use changes over time may facilitate effective rehabilitation practices and enhance understanding of prosthesis functionality. Perpetual monitoring and classification of prosthesis use may also increase the health and quality of life for prosthetic users. Existing monitoring and classification systems are often limited in that they require the subject to manipulate the sensor (e.g., attach, remove, or reset a sensor), record data over relatively short time periods, and/or classify a limited number of activities and body postures of interest. In this study, a commercially available three-axis accelerometer (ActiLife ActiGraph GT3X+) was used to characterize the activities and body postures of individuals with transtibial amputation. Accelerometers were mounted on prosthetic pylons of 10 persons with transtibial amputation as they performed a preset routine of actions. Accelerometer data was postprocessed using a binary decision tree to identify when the prosthesis was being worn and to classify periods of use as movement (i.e., leg motion such as walking or stair climbing), standing (i.e., standing upright with limited leg motion), or sitting (i.e., seated with limited leg motion). Classifications were compared to visual observation by study researchers. The classifier achieved a mean +/- standard deviation accuracy of 96.6% +/- 3.0%.
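    The classification step described here can be pictured as a small hand-built decision tree over per-epoch statistics of the acceleration magnitude. The sketch below uses invented thresholds and an assumed axis convention (x-axis along the pylon), and it omits the worn/not-worn branch; it illustrates the idea, not the study's calibrated classifier.

```python
import numpy as np

def classify_epoch(samples, move_thresh=0.15, upright_thresh=0.8):
    """Classify one epoch of 3-axis pylon accelerometer data (units of g)
    as movement / standing / sitting with a two-level decision tree."""
    a = np.asarray(samples, dtype=float)          # shape (n_samples, 3)
    mag = np.linalg.norm(a, axis=1)
    # Branch 1: large variation in magnitude means the leg is moving.
    if np.std(mag) > move_thresh:
        return "movement"
    # Branch 2: static posture. Assuming the x-axis runs along the pylon, a
    # mean x-component close to the full gravity vector means the leg is upright.
    upright = abs(np.mean(a[:, 0])) / np.mean(mag)
    return "standing" if upright > upright_thresh else "sitting"
```

    A steady 1 g reading along the pylon classifies as standing, a tilted steady reading as sitting, and a fluctuating magnitude as movement.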

  15. Exploiting Language Models to Classify Events from Twitter.

    Science.gov (United States)

    Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young

    2015-01-01

    Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learned language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora within computational linguistics. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on their features, including common term words and relationships among their distinguishing term words. This makes the method explicit and convenient to apply with k-nearest neighbor techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events. PMID:26451139
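    The final k-nearest-neighbor step can be sketched compactly. The snippet below substitutes a plain word-set Jaccard overlap for the paper's richer similarity (common terms plus model-derived relationships), and the tweets and labels are invented for illustration.

```python
from collections import Counter

def jaccard(a, b):
    """Word-set overlap between two tweets (a crude similarity stand-in)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def knn_classify(tweet, labelled, k=3):
    """Label a tweet by majority vote among its k most similar labelled tweets."""
    ranked = sorted(labelled, key=lambda item: jaccard(tweet, item[0]), reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

labelled = [
    ("earthquake hits the city tonight", "disaster"),
    ("strong earthquake felt downtown", "disaster"),
    ("aftershocks follow the earthquake", "disaster"),
    ("great concert at the arena tonight", "music"),
    ("the band played an amazing concert", "music"),
    ("encore at the concert downtown", "music"),
]
```

    Here `knn_classify("earthquake shakes the city", labelled)` votes with the three disaster tweets, since they share the most words with the query.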

  16. Exploiting Language Models to Classify Events from Twitter

    Directory of Open Access Journals (Sweden)

    Duc-Thuan Vo

    2015-01-01

    Full Text Available Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and various kinds of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learned language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora within computational linguistics. The relationships of term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on their features, including common term words and relationships among their distinguishing term words. This makes the method explicit and convenient to apply with k-nearest neighbor techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events.

  17. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
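    The filtering idea behind CGS can be sketched in a few lines: evaluate an initial population, train a probabilistic classifier on which designs scored well, and only spend objective evaluations on candidates the classifier considers promising. The sketch below substitutes a naive Bayes model for the report's Bayesian network classifier, and the bit-string objective and all sizes and thresholds are invented for illustration.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)

def objective(x):
    """Toy 'expensive' objective over a discrete design: count of ones."""
    return int(x.sum())

# Evaluate an initial random population and label the top half as promising.
X = rng.integers(0, 2, size=(40, 12))
scores = np.array([objective(x) for x in X])
promising = (scores >= np.median(scores)).astype(int)

clf = BernoulliNB().fit(X, promising)

# Propose a large batch of candidates, but evaluate only those whose posterior
# probability of being promising exceeds a threshold, saving evaluations.
candidates = rng.integers(0, 2, size=(200, 12))
keep = candidates[clf.predict_proba(candidates)[:, 1] > 0.5]
best = max(objective(x) for x in keep)
```

    In a full CGS loop, the kept candidates' evaluations would be fed back to retrain the classifier each generation.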

  18. Self-organizing map classifier for stressed speech recognition

    Science.gov (United States)

    Partila, Pavol; Tovarek, Jaromir; Voznak, Miroslav

    2016-05-01

    This paper presents a method for detecting speech under stress using self-organizing maps. Most people who are exposed to stressful situations cannot adequately respond to stimuli. The army, police, and fire departments are among the occupations most frequently exposed to stressful situations. Personnel in action are directed by a control center, and control commands should be adapted to the psychological state of the person in action. It is known that psychological changes in the human body are also reflected physiologically, and consequently in stress-affected speech. Therefore, it is clear that a system for recognizing stress in speech is required by the security forces. One classifier that is popular for its flexibility is the self-organizing map, a type of artificial neural network. Flexibility here means that the classifier is independent of the character of the input data, a feature that is suitable for speech processing. Human stress can be seen as a kind of emotional state. Mel-frequency cepstral coefficients, LPC coefficients, and prosody features were selected as input data because of their sensitivity to emotional changes. The parameters were calculated on speech recordings that can be divided into two classes, namely stress-state recordings and normal-state recordings. The contribution of the experiment is a method using a SOM classifier for stressed-speech detection. Results showed the advantage of this method, namely its flexibility with respect to the input data.
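    For readers unfamiliar with the classifier, a self-organising map can be trained in a couple of dozen lines. This toy sketch uses random 2-D vectors in place of the MFCC/LPC/prosody features, and the grid size, learning rate, and neighbourhood schedule are invented; it shows the standard on-line training rule, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=2000, lr0=0.5, sigma0=2.0, seed=1):
    """Classic on-line SOM training: find the best-matching unit (BMU) for a
    random sample, then pull the BMU and its grid neighbours toward it."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    rows, cols = np.mgrid[0:h, 0:w]
    for t in range(iters):
        x = data[rng.integers(len(data))]
        dists = np.linalg.norm(W - x, axis=2)
        bi, bj = np.unravel_index(np.argmin(dists), dists.shape)
        lr = lr0 * (1.0 - t / iters)                 # decaying learning rate
        sigma = sigma0 * (1.0 - t / iters) + 0.5     # shrinking neighbourhood
        g = np.exp(-((rows - bi) ** 2 + (cols - bj) ** 2) / (2.0 * sigma ** 2))
        W += lr * g[:, :, None] * (x - W)
    return W

def quantisation_error(W, data):
    """Mean distance from each sample to its best-matching unit."""
    return float(np.mean([np.min(np.linalg.norm(W - x, axis=2)) for x in data]))
```

    For classification, each unit is then labelled with the majority class of the training samples it wins (stressed vs. normal), and a new sample takes the label of its BMU.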

  19. ASYMBOOST-BASED FISHER LINEAR CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan

    2008-01-01

    When using AdaBoost to select discriminant features from some feature space (e.g. Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal of a high recognition rate but only a moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distributions and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, feature selection and ensemble-classifier formation, which occur simultaneously in AsymBoost and AdaBoost, are decoupled. Fisher Linear Discriminant Analysis (FLDA) is used on the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we believe can improve recognition performance. The proposed algorithm is demonstrated on face recognition using a Gabor-based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  20. Early Detection of Breast Cancer using SVM Classifier Technique

    Directory of Open Access Journals (Sweden)

    Y.Ireaneus Anna Rejani

    2009-11-01

    Full Text Available This paper presents a tumor detection algorithm for mammograms. The proposed system focuses on the solution of two problems: how to detect tumors as suspicious regions with a very weak contrast to their background, and how to extract features which categorize tumors. The tumor detection method follows the scheme of (a) mammogram enhancement, (b) segmentation of the tumor area, (c) extraction of features from the segmented tumor area, and (d) use of an SVM classifier. Enhancement can be defined as conversion of the image quality to a better and more understandable level. The mammogram enhancement procedure includes filtering, a top-hat operation, and the DWT; contrast stretching is then used to increase the contrast of the image. The segmentation of mammogram images plays an important role in improving the detection and diagnosis of breast cancer; the most common segmentation method used is thresholding. The features are extracted from the segmented breast area, and the final stage classifies the regions using the SVM classifier. The method was tested on 75 mammographic images from the mini-MIAS database and achieved a sensitivity of 88.75%.

  1. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Directory of Open Access Journals (Sweden)

    Seung Yeon Shin

    Full Text Available In this paper, we present a novel cascaded classification framework for automatic detection of individual and clusters of microcalcifications (μC). Our framework comprises three classification stages: i) a random forest (RF) classifier for simple features capturing the second order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and free-response receiver operating characteristic (FROC) curve for detection of clustered μCs.

  2. Immigrants' health in Europe: A cross-classified multilevel approach to examine origin country, destination country, and community effects

    NARCIS (Netherlands)

    Huijts, T.H.M.; Kraaykamp, G.L.M.

    2012-01-01

    In this study, we examined origin, destination, and community effects on first- and second-generation immigrants' health in Europe. We used information from the European Social Surveys (2002-2008) on 19,210 immigrants from 123 countries of origin, living in 31 European countries. Cross-classified mult

  3. Analysis of unintended events in hospitals: inter-rater reliability of constructing causal trees and classifying root causes.

    NARCIS (Netherlands)

    Smits, M.; Janssen, J.; Vet, R. de; Zwaan, L.; Timmermans, D.; Groenewegen, P.; Wagner, C.

    2009-01-01

    Background: Root cause analysis is a method to examine causes of unintended events. PRISMA (Prevention and Recovery Information System for Monitoring and Analysis) is a root cause analysis tool. With PRISMA, events are described in causal trees and root causes are subsequently classified with the Ei

  4. Analysis of unintended events in hospitals: inter-rater reliability of constructing causal trees and classifying root causes

    NARCIS (Netherlands)

    Smits, M.; Janssen, J.; Vet, de H.C.W.; Zwaan, L.; Timmermans, D.R.M.; Groenewegen, P.P.; Wagner, C.

    2009-01-01

    BACKGROUND: Root cause analysis is a method to examine causes of unintended events. PRISMA (Prevention and Recovery Information System for Monitoring and Analysis) is a root cause analysis tool. With PRISMA, events are described in causal trees and root causes are subsequently classified with the Ei

  5. Analysis of unintended events in hospitals : inter-rater reliability of constructing causal trees and classifying root causes

    NARCIS (Netherlands)

    Smits, M.; Janssen, J.; Vet, R. de; Zwaan, L.; Groenewegen, P.P.; Timmermans, D.

    2009-01-01

    Background. Root cause analysis is a method to examine causes of unintended events. PRISMA (Prevention and Recovery Information System for Monitoring and Analysis) is a root cause analysis tool. With PRISMA, events are described in causal trees and root causes are subsequently classified with the Ei

  6. A radial basis classifier for the automatic detection of aspiration in children with dysphagia

    Directory of Open Access Journals (Sweden)

    Blain Stefanie

    2006-07-01

    Full Text Available Abstract Background Silent aspiration or the inhalation of foodstuffs without overt physiological signs presents a serious health issue for children with dysphagia. To date, there are no reliable means of detecting aspiration in the home or community. An assistive technology that performs in these environments could inform caregivers of adverse events and potentially reduce the morbidity and anxiety of the feeding experience for the child and caregiver, respectively. This paper proposes a classifier for automatic classification of aspiration and swallow vibration signals non-invasively recorded on the neck of children with dysphagia. Methods Vibration signals associated with safe swallows and aspirations, both identified via videofluoroscopy, were collected from over 100 children with neurologically-based dysphagia using a single-axis accelerometer. Five potentially discriminatory mathematical features were extracted from the accelerometry signals. All possible combinations of the five features were investigated in the design of radial basis function classifiers. Performance of different classifiers was compared and the best feature sets were identified. Results Optimal feature combinations for two, three and four features resulted in statistically comparable adjusted accuracies with a radial basis classifier. In particular, the feature pairing of dispersion ratio and normality achieved an adjusted accuracy of 79.8 ± 7.3%, a sensitivity of 79.4 ± 11.7% and specificity of 80.3 ± 12.8% for aspiration detection. Addition of a third feature, namely energy, increased adjusted accuracy to 81.3 ± 8.5% but the change was not statistically significant. A closer look at normality and dispersion ratio features suggest leptokurticity and the frequency and magnitude of atypical values as distinguishing characteristics between swallows and aspirations. The achieved accuracies are 30% higher than those reported for bedside cervical auscultation. Conclusion
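    As a generic illustration of this classifier family (not the study's accelerometry features, centres, or data), a radial basis function classifier can be built from Gaussian activations around prototype centres with a least-squares readout; all data and parameters below are invented for the example.

```python
import numpy as np

def rbf_features(X, centers, sigma=1.0):
    """Gaussian activation of each sample around each prototype centre, plus a bias column."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    Phi = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

def fit_rbf(X, y, centers, sigma=1.0):
    """Least-squares weights mapping RBF activations to 0/1 labels."""
    w, *_ = np.linalg.lstsq(rbf_features(X, centers, sigma), y, rcond=None)
    return w

def predict_rbf(X, centers, w, sigma=1.0):
    """Threshold the linear readout at 0.5 to produce a binary decision."""
    return (rbf_features(X, centers, sigma) @ w > 0.5).astype(int)
```

    The centres are free parameters of the design; class means or k-means prototypes of the training features are common choices.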

  7. Drosophila olfactory receptors as classifiers for volatiles from disparate real world applications

    International Nuclear Information System (INIS)

    Olfactory receptors evolved to provide animals with ecologically and behaviourally relevant information. The resulting extreme sensitivity and discrimination has proven useful to humans, who have therefore co-opted some animals’ sense of smell. One aim of machine olfaction research is to replace the use of animal noses and one avenue of such research aims to incorporate olfactory receptors into artificial noses. Here, we investigate how well the olfactory receptors of the fruit fly, Drosophila melanogaster, perform in classifying volatile odourants that they would not normally encounter. We collected a large number of in vivo recordings from individual Drosophila olfactory receptor neurons in response to an ecologically relevant set of 36 chemicals related to wine (‘wine set’) and an ecologically irrelevant set of 35 chemicals related to chemical hazards (‘industrial set’), each chemical at a single concentration. Resampled response sets were used to classify the chemicals against all others within each set, using a standard linear support vector machine classifier and a wrapper approach. Drosophila receptors appear highly capable of distinguishing chemicals that they have not evolved to process. In contrast to previous work with metal oxide sensors, Drosophila receptors achieved the best recognition accuracy if the outputs of all 20 receptor types were used. (paper)

  8. Understanding and classifying metabolite space and metabolite-likeness.

    Directory of Open Access Journals (Sweden)

    Julio E Peironcely

    Full Text Available While the entirety of 'Chemical Space' is huge (and assumed to contain between 10^63 and 10^200 'small molecules'), distinct subsets of this space can nonetheless be defined according to certain structural parameters. An example of such a subspace is the chemical space spanned by endogenous metabolites, defined as 'naturally occurring' products of an organism's metabolism. In order to understand this part of chemical space in more detail, we analyzed the chemical space populated by human metabolites in two ways. Firstly, in order to understand metabolite space better, we performed Principal Component Analysis (PCA), hierarchical clustering and scaffold analysis of metabolites and non-metabolites in order to analyze which chemical features are characteristic for both classes of compounds. Here we found that heteroatom content (both oxygen and nitrogen), as well as the presence of particular ring systems, was able to distinguish both groups of compounds. Secondly, we established which molecular descriptors and classifiers are capable of distinguishing metabolites from non-metabolites, by assigning a 'metabolite-likeness' score. It was found that the combination of MDL Public Keys and Random Forest exhibited best overall classification performance with an AUC value of 99.13%, a specificity of 99.84% and a selectivity of 88.79%. This performance is slightly better than previous classifiers; and interestingly we found that drugs occupy two distinct areas of metabolite-likeness, the one being more 'synthetic' and the other being more 'metabolite-like'. Also, on a truly prospective dataset of 457 compounds, 95.84% correct classification was achieved. Overall, we are confident that we contributed to the tasks of classifying metabolites, as well as to understanding metabolite chemical space better. This knowledge can now be used in the development of new drugs that need to resemble metabolites, and in our work particularly for assessing the metabolite

  9. DFRFT: A Classified Review of Recent Methods with Its Application

    Directory of Open Access Journals (Sweden)

    Ashutosh Kumar Singh

    2013-01-01

    Full Text Available In the literature, various algorithms are available for computing the discrete fractional Fourier transform (DFRFT). In this paper, all the existing methods are reviewed, classified into four categories, and subsequently compared to find the best alternative from the viewpoint of minimal computational error, computational complexity, transform features, and additional features such as security. Subsequently, the correlation theorem of the FRFT is utilized to significantly reduce the Doppler shift caused by the motion of the receiver in a DSB-SC AM signal. Finally, the role of the DFRFT in steganography is investigated.
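    One of the method families such reviews cover computes the DFRFT by taking a fractional matrix power of the unitary DFT matrix through its eigendecomposition. The sketch below illustrates only that idea (it is not the paper's algorithm), with the well-known caveat that the DFT matrix's degenerate eigenvalues make the choice of fractional branch ambiguous:

    ```python
    import numpy as np

    def dfrft(x, a):
        """DFRFT of order `a` via eigendecomposition of the unitary DFT
        matrix F: F^a = V diag(lam^a) V^{-1}.  Branch choice for lam^a is
        ambiguous when eigenvalues repeat, so different eigenvector bases
        give different (all valid) fractional transforms."""
        N = len(x)
        n = np.arange(N)
        F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
        lam, V = np.linalg.eig(F)
        return V @ np.diag(lam ** a) @ np.linalg.inv(V) @ x
    ```

    Whatever the branch, the index-additivity property holds: applying the half-order transform twice reproduces the ordinary (unitary) DFT.
    
    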

  10. Classifying the future of universes with dark energy

    International Nuclear Information System (INIS)

    We classify the future of the universe for general cosmological models including matter and dark energy. If the equation of state of dark energy is less than -1, the age of the universe becomes finite. We compute the remaining age of the universe for such universe models. The future growth of the matter density perturbation is also studied. We find that the collapse of a spherical overdense region is greatly changed if the equation of state of dark energy is less than -1
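    The finite remaining age for w < -1 can be made concrete with the standard Friedmann-equation integral t = ∫ da / (a H(a)) from the present (a = 1) outward. A numerical sketch under assumed toy parameters (flat universe, constant w, Ωm = 0.3, ΩΛ = 0.7, time in units of 1/H0; not the paper's own computation):

    ```python
    import numpy as np

    def remaining_age(w, Om=0.3, Ode=0.7, H0=1.0, a_max=1e8, n=200_000):
        """Time from a=1 out to a_max for a flat matter + constant-w
        dark-energy model, via trapezoidal quadrature on a log-spaced
        grid.  For w < -1 the integral converges as a_max grows (a
        finite 'big rip'); for w >= -1 it grows without bound."""
        a = np.logspace(0.0, np.log10(a_max), n)
        H = H0 * np.sqrt(Om * a**-3.0 + Ode * a**(-3.0 * (1.0 + w)))
        f = 1.0 / (a * H)
        return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a)))
    ```

    Increasing the cutoff a_max leaves the w = -1.5 result essentially unchanged, while the w = -1 (cosmological constant) result keeps growing logarithmically.
    
    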

  11. Classifying Cubic Edge-Transitive Graphs of Order 8p

    Indian Academy of Sciences (India)

    Mehdi Alaeiyan; M K Hosseinipoor

    2009-11-01

    A simple undirected graph is said to be semisymmetric if it is regular and edge-transitive but not vertex-transitive. Let p be a prime. It was shown by Folkman (J. Combin. Theory 3 (1967) 215-232) that a regular edge-transitive graph of order 2p or 2p^2 is necessarily vertex-transitive. In this paper, an extension of his result in the case of cubic graphs is given. It is proved that every cubic edge-transitive graph of order 8p is symmetric, and then all such graphs are classified.

  12. Support vector machine classifiers for large data sets.

    Energy Technology Data Exchange (ETDEWEB)

    Gertz, E. M.; Griffin, J. D.

    2006-01-31

    This report concerns the generation of support vector machine classifiers for solving the pattern recognition problem in machine learning. Several methods are proposed based on interior point methods for convex quadratic programming. Software implementations are developed by adapting the object-oriented package OOQP to the problem structure and by using the software package PETSc to perform time-intensive computations in a distributed setting. Linear systems arising from classification problems with moderately large numbers of features are solved using two techniques: a parallel direct solver, and a Krylov-subspace method incorporating novel preconditioning strategies. Numerical results are provided, and computational experience is discussed.
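    For orientation, the model being fit is the regularized hinge-loss SVM. The toy sketch below trains it by subgradient descent, which is emphatically not the report's interior-point QP approach (OOQP/PETSc) and does not scale the same way; it only shows the classifier itself:

    ```python
    import numpy as np

    def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
        """Toy linear SVM: subgradient descent on the regularized hinge
        loss  lam/2 ||w||^2 + mean(max(0, 1 - y (Xw + b)))."""
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for i in rng.permutation(len(y)):
                if y[i] * (X[i] @ w + b) < 1:   # inside the margin: push
                    w += lr * (y[i] * X[i] - lam * w)
                    b += lr * y[i]
                else:                           # outside: only regularize
                    w -= lr * lam * w
        return w, b
    ```

    Interior-point methods solve the equivalent convex QP to high accuracy in few iterations, at the cost of the large linear systems the report's two solvers address.
    
    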

  13. A Fast Scalable Classifier Tightly Integrated with RDBMS

    Institute of Scientific and Technical Information of China (English)

    刘红岩; 陆宏钧; 陈剑

    2002-01-01

    In this paper, we report our success in building efficient scalable classifiers by exploring the capabilities of modern relational database management systems (RDBMS). In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity in implementation. More importantly, the major computation required in the approach can be implemented using standard functions provided by a modern relational DBMS. Besides, with the effective rule pruning strategy, the algorithm proposed in this paper can produce a compact set of classification rules. The results of experiments conducted for performance evaluation and analysis are presented.
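    The "standard functions" point can be illustrated with the counting step that dominates rule-based classifier training: support counts for candidate attribute-value/class rules are a single SQL GROUP BY aggregate. A sketch using the stdlib sqlite3 module (table, columns, and data invented; not the paper's schema):

    ```python
    import sqlite3

    # In-memory table of training tuples (attribute value, class label).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE train (outlook TEXT, label TEXT)")
    con.executemany("INSERT INTO train VALUES (?, ?)",
                    [("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
                     ("rain", "yes"), ("sunny", "yes"), ("rain", "no")])

    # Support counts for candidate rules "outlook = v -> label", computed
    # entirely inside the RDBMS with a standard aggregate.
    rows = con.execute(
        "SELECT outlook, label, COUNT(*) AS support "
        "FROM train GROUP BY outlook, label "
        "ORDER BY outlook, label").fetchall()
    ```

    Pushing such aggregates into the database is what gives this class of algorithm its training speed and linear scalability.
    
    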

  14. Brain Computer Interface. Comparison of Neural Networks Classifiers.

    OpenAIRE

    Martínez Pérez, Jose Luis; Barrientos Cruz, Antonio

    2008-01-01

    Brain Computer Interface is an emerging technology that allows new output paths to communicate the user's intentions without use of normal output pathways, such as muscles or nerves (Wolpaw, J. R.; et al., 2002). In order to achieve this objective, BCI devices make use of a classifier which translates the inputs provided by the user's brain signals into commands for external devices. The primary uses of this technology will benefit persons with some kind of blocking disease, for example: ALS, brainstem st...

  15. Some factors influencing interobserver variation in classifying simple pneumoconiosis.

    OpenAIRE

    Musch, D C; Higgins, I T; Landis, J R

    1985-01-01

    Three experienced physician readers assessed the chest radiographs of 743 men from a coal mining community in West Virginia for the signs of simple pneumoconiosis, using the ILO U/C 1971 Classification of Radiographs of the Pneumoconioses. The number of films categorised by each reader as showing evidence of simple pneumoconiosis varied from 63 (8.5%) to 114 (15.3%) of the 743 films classified. The effect of film quality and obesity on interobserver agreement was assessed by use of kappa-type...
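    The "kappa-type" agreement statistics mentioned at the truncation point can be illustrated with the simplest member of the family, Cohen's kappa for two readers, which corrects raw agreement for the agreement expected by chance (toy ratings below; not the study's data):

    ```python
    def cohens_kappa(r1, r2):
        """Chance-corrected agreement between two raters' category
        assignments: kappa = (p_observed - p_expected) / (1 - p_expected)."""
        assert len(r1) == len(r2)
        n = len(r1)
        cats = sorted(set(r1) | set(r2))
        po = sum(a == b for a, b in zip(r1, r2)) / n
        pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
        return (po - pe) / (1 - pe)
    ```

    A kappa of 1 means perfect agreement; 0 means no better than chance. Weighted and multi-rater variants follow the same pattern.
    
    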

  16. BIOPHARMACEUTICS CLASSIFICATION SYSTEM: A STRATEGIC TOOL FOR CLASSIFYING DRUG SUBSTANCES

    Directory of Open Access Journals (Sweden)

    Rohilla Seema

    2011-07-01

    Full Text Available The biopharmaceutics classification system (BCS) is a scientific approach for classifying drug substances based on their dose/solubility ratio and intestinal permeability. The BCS has been developed to allow prediction of the in vivo pharmacokinetic performance of drug products from measurements of permeability and solubility. Moreover, drugs can be categorized into four BCS classes on the basis of permeability and solubility, namely: high permeability high solubility, high permeability low solubility, low permeability high solubility and low permeability low solubility. The present review summarizes the principles, objectives, benefits, classification and applications of BCS.
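    The four-class scheme just described is a simple two-criterion decision table. A sketch of the assignment (the regulatory thresholds that decide "high" solubility and permeability are out of scope here and taken as given booleans):

    ```python
    def bcs_class(high_solubility, high_permeability):
        """Map the two BCS criteria to the class labels described above."""
        if high_permeability and high_solubility:
            return "I"    # high permeability, high solubility
        if high_permeability:
            return "II"   # high permeability, low solubility
        if high_solubility:
            return "III"  # low permeability, high solubility
        return "IV"       # low permeability, low solubility
    ```
    
    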

  17. Use RAPD Analysis to Classify Tea Trees in Yunnan

    Institute of Scientific and Technical Information of China (English)

    SHAO Wan-fang; PANG Rui-hua; DUAN Hong-xing; WANG Ping-sheng; XU Mei; ZHANG Ya-ping; LI Jia-hua

    2003-01-01

    RAPD assessment of genetic variations in 45 tea trees in Yunnan was carried out. Eight primers selected from 40 random primers were used to amplify the 45 tea samples, and a total of 95 DNA bands were amplified, of which 90 (94.7%) were polymorphic. The average number of DNA bands amplified by each primer was 11.5. Based on the results of UPGMA cluster analysis of the 95 DNA bands amplified by the 8 primers, all the tested materials could be classified into 7 groups, including 5 complex groups and 2 simple groups, which was basically identical with the morphological classification. In addition, there were some speciations in the 2 simple groups.

  18. Classifying paragraph types using linguistic features: Is paragraph positioning important?

    Directory of Open Access Journals (Sweden)

    Scott A. Crossley, Kyle Dempsey & Danielle S. McNamara

    2011-12-01

    Full Text Available This study examines the potential for computational tools and human raters to classify paragraphs based on positioning. In this study, a corpus of 182 paragraphs was collected from student argumentative essays. The paragraphs selected were initial, middle, and final paragraphs, and their positioning corresponded to introductory, body, and concluding paragraphs. The paragraphs were analyzed by the computational tool Coh-Metrix on a variety of linguistic features with correlates to textual cohesion and lexical sophistication, and then modeled using statistical techniques. The paragraphs were also classified by human raters based on paragraph positioning. The performance of the reported model was well above chance, with a classification accuracy similar to human judgments of paragraph type (66% accuracy for humans versus 65% accuracy for our model). The model's accuracy increased when longer paragraphs that provided more linguistic coverage, and paragraphs judged by human raters to be of higher quality, were examined. The findings support the notion that paragraph types contain specific linguistic features that allow them to be distinguished from one another. The findings reported in this study should prove beneficial in classroom writing instruction and in automated writing assessment.

  19. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    Full Text Available A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA as the classifier of choice for BCI systems.
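    The LDA baseline this comparison questions is itself compact: with a shared (pooled) covariance and equal priors, the two-class decision is a single linear threshold. A minimal numpy sketch (toy data assumed; not the paper's framework code):

    ```python
    import numpy as np

    def fit_lda(X0, X1):
        """Two-class LDA with pooled covariance.  Returns (w, c); predict
        class 1 when x @ w > c, assuming equal class priors."""
        m0, m1 = X0.mean(0), X1.mean(0)
        S = ((X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)) \
            / (len(X0) + len(X1) - 2)
        w = np.linalg.solve(S, m1 - m0)   # S^{-1} (m1 - m0)
        c = w @ (m0 + m1) / 2             # midpoint between class means
        return w, c
    ```

    The paper's point is that whether this simple linear rule suffices depends on the feature extraction feeding it, not on the classifier alone.
    
    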

  20. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    Science.gov (United States)

    Bashashati, Hossein; Ward, Rabab K; Birch, Gary E; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  1. Deep convolutional neural networks for classifying GPR B-scans

    Science.gov (United States)

    Besaw, Lance E.; Stimac, Philip J.

    2015-05-01

    Symmetric and asymmetric buried explosive hazards (BEHs) present real, persistent, deadly threats on the modern battlefield. Current approaches to mitigate these threats rely on highly trained operatives to reliably detect BEHs with reasonable false alarm rates using handheld Ground Penetrating Radar (GPR) and metal detectors. As computers become smaller, faster and more efficient, there exists greater potential for automated threat detection based on state-of-the-art machine learning approaches, reducing the burden on the field operatives. Recent advancements in machine learning, specifically deep learning artificial neural networks, have led to significantly improved performance in pattern recognition tasks, such as object classification in digital images. Deep convolutional neural networks (CNNs) are used in this work to extract meaningful signatures from 2-dimensional (2-D) GPR B-scans and classify threats. The CNNs skip the traditional "feature engineering" step often associated with machine learning, and instead learn the feature representations directly from the 2-D data. A multi-antennae, handheld GPR with centimeter-accurate positioning data was used to collect shallow subsurface data over prepared lanes containing a wide range of BEHs. Several heuristics were used to prevent over-training, including cross validation, network weight regularization, and "dropout." Our results show that CNNs can extract meaningful features and accurately classify complex signatures contained in GPR B-scans, complementing existing GPR feature extraction and classification techniques.

  2. Decision Tree Classifiers for Star/Galaxy Separation

    CERN Document Server

    Vasconcellos, E C; Gal, R R; LaBarbera, F L; Capelato, H V; Velho, H F Campos; Trevisan, M; Ruiz, R S R

    2010-01-01

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of $884,126$ SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: $14\\le r\\le21$ ($85.2%$) and $r\\ge19$ ($82.1%$). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT and Ball et al. (2006). We find that our FT classifier is comparable or better in completeness over the full magnitude range $15\\le r\\le21$, with m...

  3. Automatic misclassification rejection for LDA classifier using ROC curves.

    Science.gov (United States)

    Menon, Radhika; Di Caterina, Gaetano; Lakany, Heba; Petropoulakis, Lykourgos; Conway, Bernard A; Soraghan, John J

    2015-08-01

    This paper presents a technique to improve the performance of an LDA classifier by determining if the predicted classification output is a misclassification and thereby rejecting it. This is achieved by automatically computing a class specific threshold with the help of ROC curves. If the posterior probability of a prediction is below the threshold, the classification result is discarded. This method of minimizing false positives is beneficial in the control of electromyography (EMG) based upper-limb prosthetic devices. It is hypothesized that a unique EMG pattern is associated with a specific hand gesture. In reality, however, EMG signals are difficult to distinguish, particularly in the case of multiple finger motions, and hence classifiers are trained to recognize a set of individual gestures. However, it is imperative that misclassifications be avoided because they result in unwanted prosthetic arm motions which are detrimental to device controllability. This warrants the need for the proposed technique wherein a misclassified gesture prediction is rejected resulting in no motion of the prosthetic arm. The technique was tested using surface EMG data recorded from thirteen amputees performing seven hand gestures. Results show the number of misclassifications was effectively reduced, particularly in cases with low original classification accuracy. PMID:26736304
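    The rejection rule itself is straightforward once the class-specific thresholds exist: keep a prediction only if its posterior clears the threshold for the predicted class, otherwise emit "no motion". A sketch with fixed illustrative thresholds (in the paper these are derived automatically from ROC curves, which is omitted here):

    ```python
    def reject_low_confidence(posteriors, thresholds):
        """For each sample's class-posterior vector, return the arg-max
        class, or None (reject / no prosthetic motion) when the winning
        posterior falls below that class's threshold."""
        out = []
        for p in posteriors:
            cls = max(range(len(p)), key=lambda k: p[k])
            out.append(cls if p[cls] >= thresholds[cls] else None)
        return out
    ```

    Raising a class's threshold trades recall for fewer of the costly false activations the paper targets.
    
    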

  4. Using Narrow Band Photometry to Classify Stars and Brown Dwarfs

    CERN Document Server

    Mainzer, A K; Sievers, J L; Young, E T; McLean, Ian S.

    2004-01-01

    We present a new system of narrow band filters in the near infrared that can be used to classify stars and brown dwarfs. This set of four filters, spanning the H band, can be used to identify molecular features unique to brown dwarfs, such as H2O and CH4. The four filters are centered at 1.495 um (H2O), 1.595 um (continuum), 1.66 um (CH4), and 1.75 um (H2O). Using two H2O filters allows us to solve for individual objects' reddenings. This can be accomplished by constructing a color-color-color cube and rotating it until the reddening vector disappears. We created a model of predicted color-color-color values for different spectral types by integrating filter bandpass data with spectra of known stars and brown dwarfs. We validated this model by making photometric measurements of seven known L and T dwarfs, ranging from L1 - T7.5. The photometric measurements agree with the model to within +/-0.1 mag, allowing us to create spectral indices for different spectral types. We can classify A through early M stars to...

  5. Classifying and mapping wetlands and peat resources using digital cartography

    Science.gov (United States)

    Cameron, Cornelia C.; Emery, David A.

    1992-01-01

    Digital cartography allows the portrayal of spatial associations among diverse data types and is ideally suited for land use and resource analysis. We have developed methodology that uses digital cartography for the classification of wetlands and their associated peat resources and applied it to a 1:24 000 scale map area in New Hampshire. Classifying and mapping wetlands involves integrating the spatial distribution of wetlands types with depth variations in associated peat quality and character. A hierarchically structured classification that integrates the spatial distribution of variations in (1) vegetation, (2) soil type, (3) hydrology, (4) geologic aspects, and (5) peat characteristics has been developed and can be used to build digital cartographic files for resource and land use analysis. The first three parameters are the bases used by the National Wetlands Inventory to classify wetlands and deepwater habitats of the United States. The fourth parameter, geological aspects, includes slope, relief, depth of wetland (from surface to underlying rock or substrate), wetland stratigraphy, and the type and structure of solid and unconsolidated rock surrounding and underlying the wetland. The fifth parameter, peat characteristics, includes the subsurface variation in ash, acidity, moisture, heating value (Btu), sulfur content, and other chemical properties as shown in specimens obtained from core holes. These parameters can be shown as a series of map data overlays with tables that can be integrated for resource or land use analysis.

  6. Gain ratio based fuzzy weighted association rule mining classifier for medical diagnostic interface

    Indian Academy of Sciences (India)

    N S Nithya; K Duraiswamy

    2014-02-01

    The health care environment still needs knowledge-based discovery for handling its wealth of data. Extraction of the potential causes of diseases is the most important factor in medical data mining. Fuzzy association rule mining performs better than traditional classifiers, but it suffers from the exponential growth of the rules produced. In the past, we proposed an information gain based fuzzy association rule mining algorithm for extracting both association rules and membership functions of medical data, in order to reduce the number of rules. It used a ranking-based weight value to identify the potential attributes. When an attribute takes a large number of distinct values, however, the computation of the information gain value is not feasible. In this paper, an enhanced approach, called gain ratio based fuzzy weighted association rule mining, is thus proposed for distinct diseases, and it also improves on the learning time of the previous approach. Experimental results show a marginal improvement in the attribute selection process and also an improvement in classifier accuracy. The system has been implemented on the Java platform and verified using benchmark data from the UCI machine learning repository.
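    The move from information gain to gain ratio addresses exactly the many-valued-attribute bias mentioned above: gain ratio divides the gain by the attribute's split information, penalizing attributes that fragment the data into many small partitions. A crisp (non-fuzzy) sketch of the statistic, using invented toy data:

    ```python
    import math

    def entropy(items):
        """Shannon entropy (bits) of a list of discrete values."""
        n = len(items)
        return -sum((c / n) * math.log2(c / n)
                    for c in (items.count(v) for v in set(items)))

    def gain_ratio(values, labels):
        """C4.5-style gain ratio: information gain of splitting `labels`
        on `values`, divided by the split information entropy(values)."""
        n = len(labels)
        splits = {v: [l for x, l in zip(values, labels) if x == v]
                  for v in set(values)}
        gain = entropy(labels) - sum(len(s) / n * entropy(s)
                                     for s in splits.values())
        split_info = entropy(values)
        return gain / split_info if split_info else 0.0
    ```

    Note how the four-valued attribute below has the same raw gain as the two-valued one but half the gain ratio.
    
    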

  7. Presenting a Spatial-Geometric EEG Feature to Classify BMD and Schizophrenic Patients

    Directory of Open Access Journals (Sweden)

    Fatemeh AliMardani

    2016-03-01

    Full Text Available Schizophrenia (SZ and bipolar mood disorder (BMD patients demonstrate some similar signs and symptoms; therefore, distinguishing them using qualitative criteria is not an easy task, especially when these patients experience manic or hallucinatory phases. This study is aimed at classifying these patients by spatial analysis of their electroencephalogram (EEG signals. To this end, 22-channel EEG signals were recorded from 52 patients (26 patients with SZ and 26 patients with BMD. No stimulus was used during the signal recording, in order to investigate whether the background EEGs of these patients in the idle state contain discriminative information. The EEG signals of all channels were segmented into stationary intervals called "frames", and the covariance matrix of each frame was separately represented in the manifold space. Exploiting Riemannian metrics in the manifold space, the classification of sample covariance matrices was carried out by a simple nearest-neighbor classifier. To evaluate our method, a leave-one-patient-out cross-validation approach was used. The achieved results imply that the difference in the spatial information between the patient groups, along with control subjects, is meaningful. Nevertheless, to enhance the diagnosis rate, a new algorithm is introduced in the manifold space to select those frames which deviate least from the mean as the most probable noise-free frames. The classification accuracy is thereby improved, up to 98.95%, compared to conventional methods. The achieved result is promising, and the computational complexity is also suitable for real-time processing.
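    The Riemannian metric typically used for such covariance-matrix classification is the affine-invariant distance between symmetric positive-definite (SPD) matrices; a nearest-neighbor or nearest-mean rule then operates on these distances. A minimal sketch of the distance (a standard formulation, not necessarily the exact metric of this study):

    ```python
    import numpy as np

    def riemann_dist(A, B):
        """Affine-invariant Riemannian distance between SPD matrices:
        sqrt(sum_i log^2 lambda_i), where lambda_i are the (real,
        positive) eigenvalues of A^{-1} B."""
        lam = np.linalg.eigvals(np.linalg.solve(A, B)).real
        return np.sqrt(np.sum(np.log(lam) ** 2))
    ```

    The metric is symmetric and invariant to joint invertible transforms of both matrices, which is what makes it attractive for comparing EEG frame covariances across channels.
    
    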

  8. Classified and Clustered Data Constellation: An Efficient Approach of 3D Urban Data Management

    DEFF Research Database (Denmark)

    Azri, Suhaibah; Ujang, Uznir; Antón Castro, Francesc;

    2016-01-01

    it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban...... data management. CCDC operates on the basis of two filters: classification and clustering. To boost up the performance of information retrieval, CCDC offers a minimal percentage of overlap among nodes and coverage area to avoid repetitive data entry and multipath query. The results of tests conducted...... on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons conducted between CCDC and existing clustering and data constellation techniques, from the aspect of preservation of minimal overlap and coverage...

  9. Evaluation of the Diagnostic Power of Thermography in Breast Cancer Using Bayesian Network Classifiers

    Science.gov (United States)

    Nicandro, Cruz-Ramírez; Efrén, Mezura-Montes; María Yaneli, Ameca-Alducin; Enrique, Martín-Del-Campo-Mena; Héctor Gabriel, Acosta-Mesa; Nancy, Pérez-Castro; Alejandro, Guerra-Hernández; Guillermo de Jesús, Hoyos-Rivera; Rocío Erandi, Barrientos-Martínez

    2013-01-01

    Breast cancer is one of the leading causes of death among women worldwide. There are a number of techniques used for diagnosing this disease: mammography, ultrasound, and biopsy, among others. Each of these has well-known advantages and disadvantages. A relatively new method, based on the temperature a tumor may produce, has recently been explored: thermography. In this paper, we will evaluate the diagnostic power of thermography in breast cancer using Bayesian network classifiers. We will show how the information provided by the thermal image can be used in order to characterize patients suspected of having cancer. Our main contribution is the proposal of a score, based on the aforementioned information, that could help distinguish sick patients from healthy ones. Our main results suggest the potential of this technique in such a goal but also show its main limitations that have to be overcome to consider it as an effective diagnosis complementary tool. PMID:23762182

  10. Reranking and Classifying Search Results Exhaustively Based on Edit-and-Propagate Operations

    Science.gov (United States)

    Yamamoto, Takehiro; Nakamura, Satoshi; Tanaka, Katsumi

    Search engines return a huge number of Web search results, and the user usually checks only the top 5 or 10. However, the user sometimes must collect information exhaustively, such as gathering all the publications a certain person has written, or collecting enough useful information to support a purchase decision. In such cases, the user must repeatedly check search results that are clearly irrelevant. We believe that people would use a search system which provides reranking or classifying functions through the user's interaction. We have already proposed a reranking system based on the user's edit-and-propagate operations. In this paper, we introduce the drag-and-drop operation into our system to support the user's exhaustive search.

  11. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available In this paper, the performance of a multi-class least-squares support vector machine (LS-SVM) is compared with that of a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). For the comparison, a data set consisting of 5000 images of handwritten numeric digits (500 images for each digit from 0 to 9) was used, each image of 20 x 20 pixels. The inputs to each of the systems were vectors of 400 dimensions corresponding to each image (no feature extraction was done). Both classifiers used a one-vs-all strategy to enable multi-class classification, and a random cross-validation function for the process of minimizing the cost function. The comparison metrics were precision and training time under the same computational conditions. Both techniques showed a precision above 95%, with LS-SVM slightly more accurate. In computational cost, however, we found a marked difference: LS-SVM training requires 16.42% less time than the logistic regression model under the same low computational conditions.
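    The one-vs-all strategy both systems rely on is classifier-agnostic: train one binary scorer per digit, then predict the digit whose scorer is most confident. A sketch of the prediction step for any linear scorer exposed as a (w, b) pair (toy three-class example below; not the paper's trained models):

    ```python
    import numpy as np

    def ova_predict(X, classifiers):
        """One-vs-all prediction: score every sample with each binary
        classifier's decision function x @ w + b and take the arg-max
        class index."""
        scores = np.column_stack([X @ w + b for w, b in classifiers])
        return scores.argmax(axis=1)
    ```

    For the digit task this would be ten (w, b) pairs over 400-dimensional pixel vectors.
    
    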

  12. Handwritten Bangla Alphabet Recognition using an MLP Based Classifier

    CERN Document Server

    Basu, Subhadip; Sarkar, Ram; Kundu, Mahantapas; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    The work presented here involves the design of a Multi-Layer Perceptron (MLP) based classifier for recognition of the handwritten Bangla alphabet using a 76-element feature set. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten characters of the Bangla alphabet includes 24 shadow features, 16 centroid features and 36 longest-run features. Recognition performances of the MLP designed to work with this feature set are experimentally observed as 86.46% and 75.05% on the samples of the training and test sets, respectively. The work has useful application in the development of a complete OCR system for handwritten Bangla text.

  13. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    Directory of Open Access Journals (Sweden)

    Wah Ching Lee

    2015-01-01

    Full Text Available Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help address the problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC achieving speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors using a support vector machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC adopts only the most meaningful features for discriminating between healthy persons and cardiovascular disease patients, speedy detection of cardiovascular diseases has been successfully implemented.
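    The AHP weighting step referred to above classically derives criteria weights as the principal eigenvector of a pairwise-comparison matrix. A sketch of that derivation (toy comparison matrix; how the letter maps weights onto its feature set is not reproduced here):

    ```python
    import numpy as np

    def ahp_weights(P):
        """AHP criteria weights from a pairwise-comparison matrix P,
        where P[i][j] is the judged importance of criterion i over j:
        the principal eigenvector of P, normalized to sum to 1."""
        vals, vecs = np.linalg.eig(np.asarray(P, float))
        v = np.abs(vecs[:, np.argmax(vals.real)].real)
        return v / v.sum()
    ```

    For a perfectly consistent matrix (P[i][j] = w_i / w_j) the procedure recovers the underlying weights exactly; the eigenvector smooths over the inconsistencies of real expert judgments.
    
    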

  14. The Motion Trace of Particles in Classifying Flow Field

    Institute of Scientific and Technical Information of China (English)

    LI Guohua; NIE Wenping; YU Yongfu

    2005-01-01

    According to the theory of the stochastic trajectory model of particles in gas-solid two-phase flows, a two-phase turbulence model of the flow between the blades in the inner cavity of the FW-Φ150 horizontal turbo classifier was established, and the commonly-used PHOENICS code was adopted to carry out the numerical simulation. The flow characteristics under a given condition were obtained, as well as the motion traces of particles of different diameters entering from a given initial location and passing through the flow field between the blades under the corresponding condition. This research method demonstrates the motion of the particles quite directly. An experiment was executed to verify the accuracy of the numerical simulation results.

  15. Support vector classifier based on principal component analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Support vector classifier (SVC) has superior advantages for small-sample learning problems with high dimensions, especially better generalization ability. However, there is some redundancy among the high dimensions of the original samples, and the main features of the samples may be picked out first to improve the performance of the SVC. A principal component analysis (PCA) is employed to reduce the feature dimensions of the original samples and pre-select the main features efficiently, and an SVC is constructed in the selected feature space to improve the learning speed and identification rate of the SVC. Furthermore, a heuristic genetic algorithm-based automatic model selection is proposed to determine the hyperparameters of the SVC and to evaluate the performance of the learning machines. Experiments performed on the Heart and Adult benchmark data sets demonstrate that the proposed PCA-based SVC not only reduces the test time drastically, but also improves the identification rate effectively.
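    The PCA pre-selection step is the standard project-onto-top-components reduction, computable with one SVD of the centered data. A minimal sketch of that step alone (the downstream SVC and the genetic-algorithm model selection are not reproduced):

    ```python
    import numpy as np

    def pca_reduce(X, k):
        """Project samples onto the top-k principal components.
        Returns (Z, components): Z are the reduced coordinates and
        `components` the k principal directions (rows)."""
        Xc = X - X.mean(0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:k].T, Vt[:k]
    ```

    The SVC is then trained on Z instead of X, which is where the reported speedup in test time comes from.
    
    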

  16. On the way of classifying new states of active matter

    Science.gov (United States)

    Menzel, Andreas M.

    2016-07-01

    With ongoing research into the collective behavior of self-propelled particles, new states of active matter are revealed. Some of them are entirely based on the non-equilibrium character and do not have an immediate equilibrium counterpart. In their recent work, Romanczuk et al (2016 New J. Phys. 18 063015) concentrate on the characterization of smectic-like states of active matter. A new type, referred to by the authors as smectic P, is described. In this state, the active particles form stacked layers and self-propel along them. Identifying and classifying states and phases of non-equilibrium matter, including the transitions between them, is an up-to-date effort that will certainly extend for a longer period into the future.

  17. Symbolic shape descriptors for classifying craniosynostosis deformations from skull imaging.

    Science.gov (United States)

    Lin, H; Ruiz-Correa, S; Shapiro, L G; Hing, A; Cunningham, M L; Speltz, M; Sze, R

    2005-01-01

    Craniosynostosis is a serious condition of childhood, caused by the early fusion of the sutures of the skull. The resulting abnormal skull development can lead to severe deformities, increased intra-cranial pressure, as well as vision, hearing and breathing problems. In this work we develop a novel approach to accurately classify deformations caused by metopic and isolated sagittal synostosis. Our method combines a novel set of symbolic shape descriptors and off-the-shelf classification tools to model morphological variations that characterize the synostotic skull. We demonstrate the efficacy of our methodology in a series of large-scale classification experiments that contrast the performance of our proposed symbolic descriptors to those of traditional numeric descriptors, such as clinical severity indices, Fourier-based descriptors and cranial image quantifications. PMID:17281714

  18. Classifying orbits in the restricted three-body problem

    CERN Document Server

    Zotos, Euaggelos E

    2015-01-01

    The case of the planar circular restricted three-body problem is used as a test field in order to determine the character of the orbits of a small body which moves under the gravitational influence of the two heavy primary bodies. We conduct a thorough numerical analysis on the phase space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space we related our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determined the escape and collisional basins and computed the corresponding escape/collisional times. We hope our contribution to be useful for a further understanding of the escape and collisional mechanism of orbi...

  19. Intermediaries in Bredon (Co)homology and Classifying Spaces

    CERN Document Server

    Dembegioti, Fotini; Talelli, Olympia

    2011-01-01

    For certain contractible G-CW-complexes and F a family of subgroups of G, we construct a spectral sequence converging to the F-Bredon cohomology of G with E1-terms given by the F-Bredon cohomology of the stabilizer subgroups. As applications, we obtain several corollaries concerning the cohomological and geometric dimensions of the classifying space for the family F. We also introduce a hierarchically defined class of groups which contains all countable elementary amenable groups and countable linear groups of characteristic zero, and show that if a group G is in this class, then G has finite F-Bredon (co)homological dimension if and only if G has jump F-Bredon (co)homology.

  20. PERFORMANCE ANALYSIS OF SOFT COMPUTING TECHNIQUES FOR CLASSIFYING CARDIAC ARRHYTHMIA

    Directory of Open Access Journals (Sweden)

    R GANESH KUMAR

    2014-01-01

    Full Text Available Cardiovascular diseases kill more people than other diseases. Arrhythmia is a common term for a cardiac rhythm deviating from normal sinus rhythm. Many heart diseases are detected through electrocardiogram (ECG) analysis. Manual analysis of ECG is time consuming and error prone; thus, an automated system for detecting arrhythmia in ECG signals gains importance. Features are extracted from time-series ECG data with the Discrete Cosine Transform (DCT), computing the distance between R waves; the extracted feature is each beat's RR interval. The frequency-domain features are classified using Classification and Regression Tree (CART), Radial Basis Function (RBF), Support Vector Machine (SVM) and Multilayer Perceptron Neural Network (MLP-NN) classifiers. Experiments were conducted on the MIT-BIH arrhythmia database.
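
    The DCT feature-extraction step can be illustrated directly. Below is a hedged NumPy sketch of an orthonormal DCT-II applied to a synthetic RR-interval series, keeping the leading coefficients as a compact frequency-domain feature vector; the interval values are invented for illustration.

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal (matches the usual norm='ortho' convention)."""
    N = len(x)
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    y = 2.0 * (C @ x)
    y[0] *= np.sqrt(1.0 / (4 * N))        # orthonormal scaling
    y[1:] *= np.sqrt(1.0 / (2 * N))
    return y

# Synthetic RR-interval series (seconds); values are illustrative only.
rr = np.array([0.80, 0.82, 0.79, 0.85, 0.81, 0.78, 0.83, 0.80])
features = dct2(rr)[:4]                   # keep the leading DCT coefficients
```

    The low-order coefficients summarize the slow variation of the RR series, which is the kind of compact feature vector a downstream CART/RBF/SVM/MLP classifier would consume.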

  1. Road network extraction in classified SAR images using genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    肖志强; 鲍光淑; 蒋晓确

    2004-01-01

    Due to the complicated background of objects and speckle noise, it is almost impossible to extract roads directly from original synthetic aperture radar (SAR) images. A method is proposed for extracting a road network from high-resolution SAR images. Firstly, fuzzy C-means is used to classify the filtered SAR image unsupervisedly, and the road pixels are isolated from the image to simplify the extraction of the road network. Secondly, according to the features of roads and the membership of pixels to roads, a road model is constructed, which reduces road-network extraction to a global search for optimal continuous curves passing through some seed points. Finally, regarding the curves as individuals and encoding each chromosome as an integer code of variance relative to coordinates, genetic operations are used to search for the globally optimal roads. The experimental results show that the algorithm can effectively extract road networks from high-resolution SAR images.
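
    The unsupervised fuzzy C-means step can be sketched compactly. The following is a generic NumPy implementation run on synthetic 2-D "pixel feature" data, not the paper's code; the blob locations and the fuzzifier m = 2 are assumptions for illustration.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: returns (cluster centers, n x c membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-p)
        U = inv / inv.sum(axis=1, keepdims=True)   # standard membership update
    return centers, U

# Two well-separated groups of synthetic feature vectors ('road' vs 'background').
rng = np.random.default_rng(1)
road = rng.normal(0.0, 0.2, (20, 2))
background = rng.normal(5.0, 0.2, (20, 2))
X = np.vstack([road, background])
centers, U = fuzzy_cmeans(X, c=2)
labels = U.argmax(axis=1)                 # hard assignment from memberships
```

    In the paper's pipeline the soft memberships themselves (not only the hard labels) feed the road model that guides the genetic search.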

  2. Building multiclass classifiers for remote homology detection and fold recognition

    Directory of Open Access Journals (Sweden)

    Karypis George

    2006-10-01

    Full Text Available Abstract Background Protein remote homology detection and fold recognition are central problems in computational biology. Supervised learning algorithms based on support vector machines are currently one of the most effective methods for solving these problems. These methods are primarily used to solve binary classification problems and they have not been extensively used to solve the more general multiclass remote homology prediction and fold recognition problems. Results We present a comprehensive evaluation of a number of methods for building SVM-based multiclass classification schemes in the context of the SCOP protein classification. These methods include schemes that directly build an SVM-based multiclass model, schemes that employ a second-level learning approach to combine the predictions generated by a set of binary SVM-based classifiers, and schemes that build and combine binary classifiers for various levels of the SCOP hierarchy beyond those defining the target classes. Conclusion Analyzing the performance achieved by the different approaches on four different datasets we show that most of the proposed multiclass SVM-based classification approaches are quite effective in solving the remote homology prediction and fold recognition problems and that the schemes that use predictions from binary models constructed for ancestral categories within the SCOP hierarchy tend to not only lead to lower error rates but also reduce the number of errors in which a superfamily is assigned to an entirely different fold and a fold is predicted as being from a different SCOP class. Our results also show that the limited size of the training data makes it hard to learn complex second-level models, and that models of moderate complexity lead to consistently better results.
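
    A minimal sketch of the "combine binary classifiers" idea, assuming a one-vs-rest scheme: one binary model per class, with the highest-scoring model deciding. The perceptron below is a hedged stand-in for the SVMs used in the paper, and the three-class toy data are synthetic.

```python
import numpy as np

def train_perceptron(X, y, epochs=50):
    """Binary perceptron; y must be in {-1, +1}. Returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:    # misclassified: update
                w, b = w + yi * xi, b + yi
    return w, b

def ovr_fit(X, y):
    """One binary classifier per class (one-vs-rest)."""
    return {c: train_perceptron(X, np.where(y == c, 1, -1)) for c in np.unique(y)}

def ovr_predict(models, X):
    classes = sorted(models)
    scores = np.column_stack([X @ models[c][0] + models[c][1] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]   # highest score wins

# Three well-separated synthetic 'fold' classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in [(0, 0), (5, 0), (0, 5)]])
y = np.repeat([0, 1, 2], 30)
models = ovr_fit(X, y)
acc = (ovr_predict(models, X) == y).mean()
```

    The second-level learning schemes in the paper go further, training a combiner on the binary models' outputs instead of taking a raw argmax.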

  3. Information Forests

    CERN Document Server

    Yi, Zhao; Dewan, Maneesh; Zhan, Yiqiang

    2012-01-01

    We describe Information Forests, an approach to classification that generalizes Random Forests by replacing the splitting criterion of non-leaf nodes from a discriminative one -- based on the entropy of the label distribution -- to a generative one -- based on maximizing the information divergence between the class-conditional distributions in the resulting partitions. The basic idea consists of deferring classification until a measure of "classification confidence" is sufficiently high, and instead breaking down the data so as to maximize this measure. In an alternative interpretation, Information Forests attempt to partition the data into subsets that are "as informative as possible" for the purpose of the task, which is to classify the data. Classification confidence, or informative content of the subsets, is quantified by the Information Divergence. Our approach relates to active learning, semi-supervised learning, mixed generative/discriminative learning.

  4. Boosting-Based On-Road Obstacle Sensing Using Discriminative Weak Classifiers

    Science.gov (United States)

    Adhikari, Shyam Prasad; Yoo, Hyeon-Joong; Kim, Hyongsuk

    2011-01-01

    This paper proposes an extension of the weak classifiers derived from the Haar-like features for use in the Viola-Jones object detection system. These weak classifiers differ from the traditional single-threshold ones in that no specific threshold is needed, and they give a more general solution to the non-trivial task of finding thresholds for the Haar-like features. The proposed quadratic discriminant analysis based extension prominently improves the ability of the weak classifiers to discriminate objects and non-objects. The proposed weak classifiers were evaluated by boosting a single-stage classifier to detect the rear of cars. The experiments demonstrate that the object detector based on the proposed weak classifiers yields higher classification performance with fewer weak classifiers than the detector built with traditional single-threshold weak classifiers. PMID:22163852
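
    The idea of replacing a single threshold with a quadratic discriminant can be sketched for one scalar Haar-like feature: fit a Gaussian to the feature under each class and classify by the larger log-likelihood, which yields a quadratic decision rule in the feature value. This NumPy sketch uses synthetic feature responses, not Haar features from real images.

```python
import numpy as np

def qda1d_fit(f_pos, f_neg):
    """Fit a 1-D Gaussian (mean, std) to the feature under each class."""
    return (f_pos.mean(), f_pos.std() + 1e-9), (f_neg.mean(), f_neg.std() + 1e-9)

def qda1d_predict(f, pos, neg):
    """Label +1 where the positive-class Gaussian log-likelihood is larger."""
    def loglik(f, mu, sd):
        return -np.log(sd) - 0.5 * ((f - mu) / sd) ** 2
    return np.where(loglik(f, *pos) > loglik(f, *neg), 1, -1)

# Synthetic feature responses for object (car rear) vs non-object windows.
rng = np.random.default_rng(3)
f_pos = rng.normal(2.0, 0.5, 200)
f_neg = rng.normal(-2.0, 0.5, 200)
pos, neg = qda1d_fit(f_pos, f_neg)
preds = qda1d_predict(np.concatenate([f_pos, f_neg]), pos, neg)
acc = (preds == np.r_[np.ones(200), -np.ones(200)]).mean()
```

    No explicit threshold is chosen anywhere: the decision boundary falls out of the two fitted Gaussians, which is the appeal the abstract describes.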

  5. Effective Network Intrusion Detection using Classifiers Decision Trees and Decision rules

    Directory of Open Access Journals (Sweden)

    G.MeeraGandhi

    2010-11-01

    Full Text Available In the era of the information society, computer networks and their related applications are the emerging technologies. Network intrusion detection aims at distinguishing the behavior of the network. As network attacks have increased in huge numbers over the past few years, the Intrusion Detection System (IDS) is increasingly becoming a critical component in securing the network. Owing to the large volumes of security audit data in a network, in addition to the intricate and dynamic properties of intrusion behaviors, optimizing the performance of IDS becomes an important open problem that receives more and more attention from the research community. The field of machine learning attempts to characterize how such changes can occur by designing, implementing, running, and analyzing algorithms that can be run on computers. Learning always occurs in the context of some performance task, and a learning method should always be coupled with a performance element that uses the knowledge acquired during learning. In this research, machine learning is investigated as a technique for making the selection, using training data and their outcomes. We evaluate the performance of a set of rule-based classifier algorithms (JRip, Decision Table, PART, and OneR) and tree-based ones (J48, RandomForest, REPTree, NBTree). Based on the evaluation results, the best algorithm for each attack category is chosen and two classifier algorithm selection models are proposed. The empirical simulation results show noticeable performance improvements. The classification models were trained using data collected from the Knowledge Discovery in Databases (KDD) datasets for intrusion detection. The trained models can then be used for predicting the risk of attacks in a web server environment, by a network administrator or by security experts.
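
    OneR, the simplest of the rule learners compared above, is easy to show in full: for each discrete feature, map each value to its majority class, then keep the single feature whose rule makes the fewest training errors. The toy data below are invented for illustration.

```python
import numpy as np

def one_r(X, y):
    """OneR: return (best feature index, value -> class rule) by training error."""
    best = None
    for j in range(X.shape[1]):
        rule = {}
        for v in np.unique(X[:, j]):
            classes, counts = np.unique(y[X[:, j] == v], return_counts=True)
            rule[v] = classes[counts.argmax()]     # majority class for this value
        errors = sum(rule[v] != yi for v, yi in zip(X[:, j], y))
        if best is None or errors < best[0]:
            best = (errors, j, rule)
    return best[1], best[2]

# Toy discrete data: feature 0 predicts the label, feature 1 is noise.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
X = np.column_stack([y, np.array([1, 0, 1, 1, 0, 0, 1, 0])])
j, rule = one_r(X, y)
preds = np.array([rule[v] for v in X[:, j]])
```

    Despite its simplicity, OneR is a useful baseline: any of the tree learners above must beat this one-feature rule to justify their extra complexity.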

  6. Classifying regional development in Iran (Application of Composite Index Approach

    Directory of Open Access Journals (Sweden)

    A. Sharifzadeh

    2012-01-01

    Full Text Available Extended abstract. 1- Introduction: The spatial economy of Iran, like that of so many other developing countries, is characterized by an uneven spatial pattern of economic activities. The problem of spatial inequality emerged when efficiency-oriented sectoral policies came into conflict with the spatial dimension of development (Atash, 1988). Due to this conflict, extremely imbalanced development in Iran was created. Moreover, the spatially uneven distribution of economic activities in Iran is unknown and incompletely documented. So, there is an urgent need for more efficient and effective design, targeting and implementation of interventions to manage spatial imbalances in development. Hence, the identification of development patterns at the spatial scale, and of the factors generating them, can help improve planning if development programs are focused on removing the constraints adversely affecting development in potentially good areas. There is a need for research that would describe and explain the problem of spatial development patterns, as well as propose possible strategies which can be used to develop the country and reduce the spatial imbalances. The main objective of this research was to determine spatial economic development levels in order to identify the spatial pattern of development and explain the determinants of such imbalance in Iran, based on the methodology of a composite index of development. The provinces of Iran were then ranked and classified according to the calculated composite index. To collect the required data, the census of 2006 and yearbooks from various years were used. 2- Theoretical bases: Theories of regional inequality, as well as empirical evidence regarding actual trends at the national or international level, have been discussed and debated in the economic literature for over three decades. Early debates concerning the impact of market mechanisms on regional inequality in the West (Myrdal, 1957) have become popular again in the 1990s. There is a conflict on probable outcomes

  7. 32 CFR 154.18 - Certain positions not necessarily requiring access to classified information.

    Science.gov (United States)

    2010-07-01

    ... for assignment with the Armed Forces overseas (32 CFR part 253). (f) Officials authorized to issue... assignment. (n) Transportation of arms, ammunition and explosives (AA&E). Any DoD military, civilian or... transporting Category I, II or Confidential AA&E shall have been the subject of a favorably adjudicated NAC...

  8. 78 FR 5828 - Agency Information Collection Activities: Petition To Classify Orphan as an Immediate Relative...

    Science.gov (United States)

    2013-01-28

    ..., at 77 FR 65709, allowing for a 60-day public comment period. USCIS did receive two comments in.../ ] Dashboard.do, or call the USCIS National Customer Service Center at 1-800-375-5283. Written comments and... adult member (age 18 and older), who lives in the home of the prospective adoptive parent(s), except...

  9. Computational classifiers for predicting the short-term course of Multiple sclerosis

    Directory of Open Access Journals (Sweden)

    Comi Giancarlo

    2011-06-01

    Full Text Available Abstract Background The aim of this study was to assess the diagnostic accuracy (sensitivity and specificity) of clinical, imaging and motor evoked potentials (MEP) data for predicting the short-term prognosis of multiple sclerosis (MS). Methods We obtained clinical data, MRI and MEP from a prospective cohort of 51 patients and 20 matched controls followed for two years. The clinical end-points recorded were: (1) expanded disability status scale (EDSS), (2) disability progression, and (3) new relapses. We constructed computational classifiers (Bayesian, random decision trees, simple logistic/linear regression and neural networks) and calculated their accuracy by means of a 10-fold cross-validation method. We also validated our findings with a second cohort of 96 MS patients from a second center. Results We found that disability at baseline, grey matter volume and MEP were the variables that correlated best with the clinical end-points, although their diagnostic accuracy was low. However, classifiers combining the most informative variables, namely baseline disability (EDSS), MRI lesion load and central motor conduction time (CMCT), were much more accurate in predicting future disability. Using the most informative variables (especially EDSS and CMCT) we developed a neural network (NNet) that attained a good performance for predicting the EDSS change. The predictive ability of the neural network was validated in an independent cohort, obtaining similar accuracy (80%) for predicting the change in the EDSS two years later. Conclusions The usefulness of clinical variables for predicting the course of MS on an individual basis is limited, despite being associated with the disease course. By training a NNet with the most informative variables we achieved a good accuracy for predicting short-term disability.

  10. Texture feature selection with relevance learning to classify interstitial lung disease patterns

    Science.gov (United States)

    Huber, Markus B.; Bunte, Kerstin; Nagarajan, Mahesh B.; Biehl, Michael; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    The Generalized Matrix Learning Vector Quantization (GMLVQ) is used to estimate the relevance of texture features for classifying interstitial lung disease patterns in high-resolution computed tomography (HRCT) images. After a stochastic gradient descent, the GMLVQ algorithm provides a discriminative distance measure of relevance factors, which can account for pairwise correlations between different texture features and their importance for the classification of healthy and diseased patterns. Texture features were extracted from gray-level co-occurrence matrices (GLCMs), and were ranked and selected according to their relevance obtained by GMLVQ and, for comparison, according to a mutual information (MI) criterion. A k-nearest-neighbor (kNN) classifier and a Support Vector Machine with a radial basis function kernel (SVMrbf) were optimized in a 10-fold cross-validation for different texture feature sets. In our experiment with real-world data, the feature sets selected by the GMLVQ approach had a significantly better classification performance compared with feature sets selected by MI ranking.
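
    The MI baseline used for comparison is straightforward to sketch for discrete features: estimate I(X;Y) from joint frequencies and rank features by their MI with the class label. The binned "texture features" below are synthetic stand-ins, and the plug-in estimator shown is only one of several common choices.

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate of I(X;Y) in nats for two discrete vectors."""
    mi = 0.0
    for vx in np.unique(x):
        for vy in np.unique(y):
            pxy = np.mean((x == vx) & (y == vy))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == vx) * np.mean(y == vy)))
    return mi

# Synthetic binned features: feature 0 tracks the label, feature 1 is noise.
rng = np.random.default_rng(4)
y = rng.integers(0, 2, 300)
informative = y.copy()
noise = rng.integers(0, 2, 300)
scores = [mutual_info(f, y) for f in (informative, noise)]
ranking = np.argsort(scores)[::-1]        # best feature first
```

    Unlike GMLVQ's learned relevance matrix, this per-feature ranking ignores pairwise correlations between features, which is one reason the paper finds GMLVQ selection superior.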

  11. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years, but it has difficulty dealing with factors such as occlusion, appearance changes and pose variation. There are two reasons: first, even though the naive Bayes classifier is fast to train, it is not robust to noise; second, the parameters must be adapted to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Second, the online random forest classifier used for online tracking is demonstrated to be more robust to noise and to have high computational efficiency. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes and pose variation than the original fast compressive tracking algorithm.

  12. Nearest Neighbor Classifier Method for Making Loan Decision in Commercial Bank

    Directory of Open Access Journals (Sweden)

    Md.Mahbubur Rahman

    2014-07-01

    Full Text Available Banks play a central role in economic development worldwide. The failure and success of the banking sector depend upon the ability to properly evaluate credit risk. Credit risk evaluation of any potential credit application has remained a challenge for banks all over the world to this day. Artificial neural networks play a tremendous role in the field of finance for making critical, enigmatic and sensitive decisions that are sometimes impossible for human beings. Like other critical decisions in finance, the decision to sanction a loan to a customer is also an enigmatic problem. The objective of this paper is to design a neural network that can help loan officers make the correct decision about providing loans to the proper clients. This paper checks the applicability of one of the new integrated models with a nearest neighbor classifier on sample data taken from a Bangladeshi bank named BRAC Bank. The neural network considers several factors about the bank's clients and informs the loan officer about each client's eligibility for a loan. Several effective neural network methods can be used for making this decision, such as back-propagation learning, regression models, the gradient descent algorithm and the nearest neighbor classifier.
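
    The nearest neighbor classifier at the core of the paper is easy to sketch. The NumPy implementation below votes among the k closest training points under Euclidean distance; the two "applicant features" and the approve/reject labels are invented for illustration.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """k-nearest-neighbor majority vote under Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nearest])

# Toy applicants: (normalized income, debt ratio); 1 = approve, 0 = reject.
X_train = np.array([[0.9, 0.1], [0.8, 0.2], [0.85, 0.15],
                    [0.2, 0.9], [0.1, 0.8], [0.15, 0.85]])
y_train = np.array([1, 1, 1, 0, 0, 0])
X_test = np.array([[0.88, 0.12], [0.12, 0.88]])
decisions = knn_predict(X_train, y_train, X_test)
```

    In practice the features would be the client factors the paper mentions, normalized to comparable scales so that no single factor dominates the distance.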

  13. A Multiple-Classifier Framework for Parkinson’s Disease Detection Based on Various Vocal Tests

    Directory of Open Access Journals (Sweden)

    Mahnaz Behroozi

    2016-01-01

    Full Text Available Recently, speech pattern analysis applications for building predictive telediagnosis and telemonitoring models for diagnosing Parkinson's disease (PD) have attracted many researchers. For this purpose, several datasets of voice samples exist; the UCI dataset named "Parkinson Speech Dataset with Multiple Types of Sound Recordings" has a variety of vocal tests, which include sustained vowels, words, numbers, and short sentences compiled from a set of speaking exercises for healthy people and people with Parkinson's disease (PWP). Some researchers claim that summarizing the multiple recordings of each subject with central tendency and dispersion metrics is an efficient strategy for building a predictive model for PD. However, they have overlooked the point that a PD patient may show more difficulty in pronouncing certain terms than others. Thus, summarizing the vocal tests may lead to a loss of valuable information. In order to address this issue, the classification setting must take what has been said into account. As a solution, we introduce a new framework that applies an independent classifier for each vocal test; the final classification result is a majority vote over all the classifiers. Combined with filter-based feature selection, our methodology enhances classification accuracy by up to 15%.
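
    The combination rule is simple enough to show directly: given one prediction per vocal test for each subject, the framework outputs the most frequent label per subject. A hedged NumPy sketch with invented predictions (rows = per-test classifiers, columns = subjects):

```python
import numpy as np

def majority_vote(predictions):
    """Combine a (n_classifiers x n_subjects) prediction matrix by column-wise vote."""
    P = np.asarray(predictions)
    return np.array([np.bincount(P[:, i]).argmax() for i in range(P.shape[1])])

# Invented outputs of 5 per-vocal-test classifiers for 4 subjects (1 = PWP).
preds = [[1, 0, 1, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 1],
         [1, 0, 1, 0]]
final = majority_vote(preds)              # one label per subject
```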

  14. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test to see if students were picking up the information using the HMD.We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs.This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.

  15. Analyzing tree-shape anatomical structures using topological descriptors of branching and ensemble of classifiers

    Directory of Open Access Journals (Sweden)

    Angeliki Skoura

    2013-04-01

    Full Text Available The analysis of anatomical tree-shape structures visualized in medical images provides insight into the relationship between tree topology and pathology of the corresponding organs. In this paper, we propose three methods to extract descriptive features of the branching topology; the asymmetry index, the encoding of branching patterns using a node labeling scheme and an extension of the Sholl analysis. Based on these descriptors, we present classification schemes for tree topologies with respect to the underlying pathology. Moreover, we present a classifier ensemble approach which combines the predictions of the individual classifiers to optimize the classification accuracy. We applied the proposed methodology to a dataset of x-ray galactograms, medical images which visualize the breast ductal tree, in order to recognize images with radiological findings regarding breast cancer. The experimental results demonstrate the effectiveness of the proposed framework compared to state-of-the-art techniques suggesting that the proposed descriptors provide more valuable information regarding the topological patterns of ductal trees and indicating the potential of facilitating early breast cancer diagnosis.

  16. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review.

  17. Use of artificial neural networks and geographic objects for classifying remote sensing imagery

    Directory of Open Access Journals (Sweden)

    Pedro Resende Silva

    2014-06-01

    Full Text Available The aim of this study was to develop a methodology for mapping land use and land cover in the northern region of Minas Gerais state, where, in addition to agricultural land, the landscape is dominated by native cerrado, deciduous forests, and extensive areas of vereda. Using forest inventory data, as well as RapidEye, Landsat TM and MODIS imagery, three specific objectives were defined: (1) to test the use of image segmentation techniques for an object-based classification encompassing spectral, spatial and temporal information; (2) to test the use of high spatial resolution RapidEye imagery combined with Landsat TM time series imagery for capturing the effects of seasonality; and (3) to classify the data using artificial neural networks. Using MODIS time series and forest inventory data, time signatures were extracted from the dominant vegetation formations, enabling selection of the best periods of the year to be represented in the classification process. Objects created with the segmentation of RapidEye images, along with the Landsat TM time series images, were classified by ten different Multilayer Perceptron network architectures. Results showed that the methodology in question meets both the purposes of this study and the characteristics of the local plant life. With excellent accuracy values for native classes, the study showed the importance of a well-structured database for classification and of suitable image segmentation to meet specific purposes.

  18. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    Science.gov (United States)

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database. PMID:25680220

  19. Safety assessment of plant varieties using transcriptomics profiling and a one-class classifier.

    Science.gov (United States)

    van Dijk, Jeroen P; de Mello, Carla Souza; Voorhuijzen, Marleen M; Hutten, Ronald C B; Arisi, Ana Carolina Maisonnave; Jansen, Jeroen J; Buydens, Lutgarde M C; van der Voet, Hilko; Kok, Esther J

    2014-10-01

    An important part of the current hazard identification of novel plant varieties is comparative targeted analysis of the novel and reference varieties. Comparative analysis will become much more informative with unbiased analytical approaches, e.g. omics profiling. Data analysis estimating the similarity of new varieties to a reference baseline class of known safe varieties would subsequently greatly facilitate hazard identification. Further biological, and eventually toxicological, analysis would then only be necessary for varieties that fall outside this reference class. For this purpose, a one-class classifier tool was explored to assess and classify transcriptome profiles of potato (Solanum tuberosum) varieties in a model study. Profiles of six different varieties, two locations of growth and two years of harvest, including biological and technical replication, were used to build the model. Two scenarios were applied, representing evaluation of a 'different' variety and of a 'similar' variety. Within the model, higher class distances resulted for the 'different' test set compared with the 'similar' test set. The present study may contribute to a more global hazard identification of novel plant varieties. PMID:25046166
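
    The one-class idea can be sketched with a simple distance-to-reference rule: fit the reference class, threshold at a high quantile of its own distances, and flag anything beyond the threshold for further analysis. The Mahalanobis-distance classifier below is a generic stand-in for the tool used in the study, and the profiles are synthetic.

```python
import numpy as np

def one_class_fit(X_ref, quantile=0.99):
    """Fit mean/covariance on the reference class; threshold at a reference quantile."""
    mu = X_ref.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
    diff = X_ref - mu
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))   # Mahalanobis distances
    return mu, cov_inv, np.quantile(d, quantile)

def one_class_predict(X, mu, cov_inv, thr):
    """True = inside the reference class, False = flagged as 'different'."""
    diff = X - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff)) <= thr

# Synthetic reference 'profiles' (3 summary features per variety sample).
rng = np.random.default_rng(5)
X_ref = rng.normal(0.0, 1.0, (200, 3))
mu, cov_inv, thr = one_class_fit(X_ref)
similar = one_class_predict(np.array([[0.1, -0.2, 0.0]]), mu, cov_inv, thr)
different = one_class_predict(np.array([[10.0, 10.0, 10.0]]), mu, cov_inv, thr)
```

    Real transcriptome profiles have far more features than samples, so the study's tool necessarily works in a reduced feature space; this sketch assumes that reduction has already happened.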

  20. Executed Movement Using EEG Signals through a Naive Bayes Classifier

    Directory of Open Access Journals (Sweden)

    Juliano Machado

    2014-11-01

Recent years have witnessed a rapid development of brain-computer interface (BCI) technology. An independent BCI is a communication system for controlling a device, e.g., a computer, a wheelchair or a neuroprosthesis, by human intention, not depending on the brain's normal output pathways of peripheral nerves and muscles, but on detectable signals that represent responsive or intentional brain activities. This paper presents a comparative study of the linear discriminant analysis (LDA) and naive Bayes (NB) classifiers for describing both right- and left-hand movement through electroencephalographic (EEG) signal acquisition. For the analysis, we considered the following input features: the energy of the segments of a band-pass-filtered signal with the frequency band in the sensorimotor rhythms, and the components of the spectral energy obtained through the Welch method. We also used the common spatial pattern (CSP) filter, so as to increase the discriminatory activity among movement classes. Using the database generated by this experiment, we obtained hit rates up to 70%. The results are compatible with previous studies.
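
The energy feature and the naive Bayes decision rule can be sketched as follows; the trial energies and class labels below are hypothetical, not taken from the study:

```python
import math

def segment_energy(x):
    """Energy of a (band-pass filtered) signal segment: sum of squared samples."""
    return sum(s * s for s in x)

class GaussianNB1:
    """Minimal Gaussian naive Bayes for a scalar feature and two classes."""
    def fit(self, feats, labels):
        self.stats = {}
        for c in set(labels):
            vals = [f for f, y in zip(feats, labels) if y == c]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-9
            self.stats[c] = (mu, var)
        return self
    def predict(self, f):
        def logpdf(v, mu, var):
            return -0.5 * (math.log(2 * math.pi * var) + (v - mu) ** 2 / var)
        # Pick the class with the highest Gaussian log-likelihood.
        return max(self.stats, key=lambda c: logpdf(f, *self.stats[c]))

# Hypothetical band-energy features for left- vs right-hand trials.
feats = [1.0, 1.2, 0.9, 3.0, 3.3, 2.8]
labels = ['left'] * 3 + ['right'] * 3
clf = GaussianNB1().fit(feats, labels)
```
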

  1. Addressing the Challenge of Defining Valid Proteomic Biomarkers and Classifiers

    LENUS (Irish Health Repository)

    Dakna, Mohammed

    2010-12-10

Abstract Background The purpose of this manuscript is to provide, based on an extensive analysis of a proteomic data set, suggestions for proper statistical analysis for the discovery of sets of clinically relevant biomarkers. As a tractable example, we define the measurable proteomic differences between apparently healthy adult males and females. We chose urine as the body fluid of interest and CE-MS, a thoroughly validated platform technology, allowing for routine analysis of a large number of samples. The second urine of the morning was collected from apparently healthy male and female volunteers (aged 21-40) in the course of the routine medical check-up before recruitment at the Hannover Medical School. Results We found that the Wilcoxon test is best suited for the definition of potential biomarkers. Adjustment for multiple testing is necessary. Sample size estimation can be performed based on a small number of observations via resampling from pilot data. Machine learning algorithms appear ideally suited to generate classifiers. Assessment of any results in an independent test set is essential. Conclusions Valid proteomic biomarkers for diagnosis and prognosis can only be defined by applying proper statistical data mining procedures. In particular, a justification of the sample size should be part of the study design.
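
The multiple-testing adjustment the authors call for is commonly done with the Benjamini-Hochberg procedure; a pure-Python sketch (the paper does not state which correction was applied, so this is illustrative):

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up FDR control)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices, smallest p first
    adj = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):      # walk from the largest p to the smallest
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)  # enforce monotonicity
        adj[i] = prev
    return adj

# Hypothetical per-peptide p-values from a Wilcoxon test.
adj = benjamini_hochberg([0.01, 0.04, 0.03, 0.5])
```
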

  2. Pulmonary nodule detection using a cascaded SVM classifier

    Science.gov (United States)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases become now available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select from an extremely large pool of potential candidates the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved sensitivity of 0.859 at 2.5 FP/volume where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
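
The two-stage selection can be sketched abstractly: a cheap, loose first test keeps almost every candidate so true nodules are rarely rejected early, and a stricter second test prunes false positives. The score thresholds below are hypothetical stand-ins for the two SVM stages:

```python
def cascade(candidates, loose, strict):
    """Two-stage cascade: the loose test prunes the candidate pool,
    then the strict test makes the final call."""
    survivors = [c for c in candidates if loose(c)]
    return [c for c in survivors if strict(c)]

# Hypothetical candidate scores; the loose stage applies a very permissive cut.
cands = [{'score': s} for s in (0.1, 0.4, 0.55, 0.9)]
hits = cascade(cands,
               loose=lambda c: c['score'] > 0.2,
               strict=lambda c: c['score'] > 0.5)
```
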

  3. Quantum Hooke's law to classify pulse laser induced ultrafast melting.

    Science.gov (United States)

    Hu, Hao; Ding, Hepeng; Liu, Feng

    2015-02-03

Ultrafast crystal-to-liquid phase transition induced by femtosecond pulse laser excitation is an intriguing material behavior manifesting the complexity of light-matter interaction. There exist two types of such phase transitions: one occurs at a time scale shorter than a picosecond via a nonthermal process mediated by electron-hole plasma formation; the other at a longer time scale via a thermal melting process mediated by electron-phonon interaction. However, it remains unclear which materials undergo which process, and why. Here, by exploiting the property of quantum electronic stress (QES) governed by quantum Hooke's law, we classify the transitions by two distinct classes of materials: the faster nonthermal process can only occur in materials like ice having an anomalous phase diagram characterized by dTm/dP < 0, where Tm is the melting temperature and P is pressure, above a high threshold laser fluence; while the slower thermal process may occur in all materials. Especially, the nonthermal transition is shown to be induced by the QES, acting like a negative internal pressure, which drives the crystal into a "super pressing" state to spontaneously transform into a higher-density liquid phase. Our findings significantly advance fundamental understanding of ultrafast crystal-to-liquid phase transitions, enabling quantitative a priori predictions.

  4. Linearly and Quadratically Separable Classifiers Using Adaptive Approach

    Institute of Scientific and Technical Information of China (English)

    Mohamed Abdel-Kawy Mohamed Ali Soliman; Rasha M. Abo-Bakr

    2011-01-01

This paper presents a fast adaptive iterative algorithm to solve linearly separable classification problems in R^n. In each iteration, a subset of the sampling data (n points, where n is the number of features) is adaptively chosen, and a hyperplane is constructed such that it separates the chosen n points at a margin ε and best classifies the remaining points. The classification problem is formulated and the details of the algorithm are presented. Further, the algorithm is extended to solving quadratically separable classification problems. The basic idea is based on mapping the physical space to another, larger one where the problem becomes linearly separable. Numerical illustrations show that few iteration steps are sufficient for convergence when classes are linearly separable. For nonlinearly separable data, given a specified maximum number of iteration steps, the algorithm returns the best hyperplane that minimizes the number of misclassified points occurring through these steps. Comparisons with other machine learning algorithms on practical and benchmark datasets are also presented, showing the performance of the proposed algorithm.
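
For intuition, a plain perceptron (not the paper's adaptive n-point algorithm) shows how iterative updates reach a separating hyperplane w·x + b = 0 on linearly separable data:

```python
def perceptron(points, labels, epochs=100):
    """Plain perceptron: provably converges to a separating hyperplane
    when the classes are linearly separable. Labels are in {-1, +1}."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            # Misclassified (or on the boundary): nudge the hyperplane.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                updated = True
        if not updated:       # a full clean pass: converged
            break
    return w, b

# Two toy classes in the plane, clearly separable.
pts = [(0.0, 0.0), (0.0, 1.0), (2.0, 2.0), (3.0, 2.0)]
ys = [-1, -1, 1, 1]
w, b = perceptron(pts, ys)
```
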

  5. A Novel Performance Metric for Building an Optimized Classifier

    Directory of Open Access Journals (Sweden)

    Mohammad Hossin

    2011-01-01

Problem statement: Typically, the accuracy metric is applied for optimizing heuristic or stochastic classification models. However, using the accuracy metric can lead the search toward sub-optimal solutions due to its less discriminating values, and it is also not robust to changes in class distribution. Approach: To overcome these detrimental effects, we propose a novel performance metric that combines the beneficial properties of the accuracy metric with the extended recall and precision metrics. We call this new performance metric Optimized Accuracy with Recall-Precision (OARP). Results: In this study, we demonstrate that the OARP metric is theoretically better than the accuracy metric using four generated examples. We also demonstrate empirically that a naïve stochastic classification algorithm, the Monte Carlo Sampling (MCS) algorithm, trained with the OARP metric is able to obtain better predictive results than one trained with the conventional accuracy metric. Additionally, t-test analysis shows a clear advantage of the MCS model trained with the OARP metric over the accuracy metric for all binary data sets. Conclusion: The experiments show that the OARP metric leads stochastic classifiers such as the MCS toward a better training model, which in turn improves the predictive results of any heuristic or stochastic classification model.
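
The quantities OARP builds on can be computed from the confusion matrix; the exact OARP combination rule is not reproduced here, only the accuracy, recall and precision ingredients:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Confusion-matrix counts for a binary problem."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def accuracy(tp, fp, fn, tn): return (tp + tn) / (tp + fp + fn + tn)
def precision(tp, fp): return tp / (tp + fp) if tp + fp else 0.0
def recall(tp, fn): return tp / (tp + fn) if tp + fn else 0.0

# Hypothetical labels and predictions.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
```
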

  6. A system-awareness decision classifier to automated MSN forensics

    Science.gov (United States)

    Chu, Yin-Teshou Tsao; Fan, Kuo-Pao; Cheng, Ya-Wen; Tseng, Po-Kai; Chen, Huan; Cheng, Bo-Chao

    2007-09-01

Data collection is the most important stage in network forensics, but under resource-constrained situations a good evidence collection mechanism is required to provide effective event collection in a high-traffic network environment. In the literature, a few network forensic tools offer MSN-messenger behavior reconstruction; however, they have no classification strategy at the collection stage when the system becomes saturated. The emphasis of this paper is to address these shortcomings and propose a solution that selects a better classification in order to ensure the integrity of the evidence collected under high-traffic network environments. A system-awareness decision classifier (SADC) mechanism is proposed in this paper. The MSN-shot sensor is able to adjust the amount of data to be collected according to the current system status, and to preserve evidence integrity as much as possible according to the file format and the current system status. Analytical results show that the proposed SADC implementing selective collection (SC) incurs lower cost than full collection (FC) under heavy-traffic scenarios. With the deployment of the proposed SADC mechanism, we believe that MSN-shot will be able to reconstruct MSN-messenger behaviors accurately in upcoming next-generation networks.

  7. A dimensionless parameter for classifying hemodynamics in intracranial aneurysms

    Science.gov (United States)

    Asgharzadeh, Hafez; Borazjani, Iman

    2015-11-01

Rupture of an intracranial aneurysm (IA) is a disease with high rates of mortality. Given the risk associated with aneurysm surgery, quantifying the likelihood of aneurysm rupture is essential. There are many risk factors that could be implicated in the rupture of an aneurysm. However, the most important factors correlated with IA rupture are hemodynamic factors such as wall shear stress (WSS) and oscillatory shear index (OSI), which are affected by the IA flows. Here, we carry out three-dimensional high-resolution simulations on representative IA models with simple geometries to test a dimensionless number (first proposed by Le et al., ASME J Biomech Eng, 2010), denoted the An number, to classify the flow mode. The An number is defined as the ratio of the time it takes the parent artery flow to transport across the IA neck to the time required for vortex ring formation. Based on the definition, the flow mode is vortex if An>1 and it is cavity if An<1 [...] aneurysms. In addition, we show that this classification works on three-dimensional geometries reconstructed from three-dimensional rotational angiography of human subjects. Furthermore, we verify the correlation of IA flow mode and WSS/OSI on the human subject IA. This work was supported partly by the NIH grant R03EB014860, and the computational resources were partly provided by CCR at UB. We thank Prof. Hui Meng and Dr. Jianping Xiang for providing us the database of aneurysms and helpful discussions.
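
The classification rule reduces to a simple ratio test; a trivial sketch with hypothetical time scales (in the paper these are derived from the geometry and flow):

```python
def an_number(transport_time, vortex_formation_time):
    """An = time for the parent artery flow to cross the IA neck, divided by
    the vortex-ring formation time. Flow mode is 'vortex' when An > 1 and
    'cavity' when An < 1. Inputs here are hypothetical."""
    an = transport_time / vortex_formation_time
    return an, ('vortex' if an > 1 else 'cavity')
```
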

  9. Classifying and explaining democracy in the Muslim world

    Directory of Open Access Journals (Sweden)

    Rohaizan Baharuddin

    2012-12-01

The purpose of this study is to classify and explain democracies in the 47 Muslim countries between 1998 and 2008, using liberties and elections as independent variables. Focusing specifically on the context of the Muslim world, this study examines the performance of civil liberties and elections, the variant of democracy practised most, and the transitions and patterns of elections, civil liberties and democracy that followed. Based on quantitative data primarily collected from Freedom House, this study demonstrates the following aggregate findings: first, “not free not fair” elections, “limited” civil liberties and “Illiberal Partial Democracy” were the dominant features of elections, civil liberties and democracy practised in the Muslim world; second, a total of 413 Muslim regimes out of 470 (47 regimes × 10 years) remained at their democratic origin points, without any transition to a better or worse level of democracy, throughout these 10 years; and third, a slow yet steady positive transition of both elections and civil liberties occurred in the Muslim world, with changes in the nature of elections becoming much more progressive compared with the civil liberties’ transitions.

  10. A framework to classify error in animal-borne technologies

    Directory of Open Access Journals (Sweden)

    Zackory Burns

    2015-05-01

The deployment of novel, innovative, and increasingly miniaturized devices on fauna, especially otherwise difficult-to-observe taxa, to collect data has steadily increased. Yet every animal-borne technology has its shortcomings, such as limitations in its precision or accuracy. These shortcomings, here labelled ‘error’, are not yet studied systematically, and a framework to identify and classify error does not exist. Here, we propose a classification scheme to synthesize error across technologies, discussing the basic physical properties used by a technology to collect data, the conversion of raw data into useful variables, and subjectivity in the parameters chosen. In addition, we outline a four-step framework to quantify error in animal-borne devices: to know, to identify, to evaluate, and to store. Both the classification scheme and the framework are theoretical in nature. However, since mitigating error is essential to answering many biological questions, we believe they will be operationalized and will facilitate future work to determine and quantify error in animal-borne technologies. Moreover, increasing the transparency of error will ensure the technique used to collect data moderates the biological questions and conclusions.

  11. Bilayer segmentation of webcam videos using tree-based classifiers.

    Science.gov (United States)

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems. PMID:21088317

  12. Using color histograms and SPA-LDA to classify bacteria.

    Science.gov (United States)

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best classification result was obtained by using RGB and SPA-LDA, which reached 94% and 100% classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical tests. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and lower-cost alternative for bacterial identification.
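
A colour histogram in one channel, the kind of input feature used here, can be sketched in a few lines; the 2x2 "image" is a toy stand-in for a webcam frame:

```python
def channel_histogram(channel, bins=8, max_val=256):
    """Histogram of one colour channel (pixel values 0..255) using
    equal-width bins; returns a list of per-bin counts."""
    counts = [0] * bins
    width = max_val // bins
    for row in channel:
        for v in row:
            counts[min(v // width, bins - 1)] += 1
    return counts

# Tiny hypothetical 2x2 red channel; a real frame would be e.g. 480x640.
red = [[0, 31], [32, 255]]
hist = channel_histogram(red)
```

Concatenating the R, G and B histograms gives the feature vector fed to the classifier.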

  13. Two-categorical bundles and their classifying spaces

    DEFF Research Database (Denmark)

    Baas, Nils A.; Bökstedt, M.; Kro, T.A.

    2012-01-01

For a 2-category 2C we associate a notion of a principal 2C-bundle. In the case of the 2-category of 2-vector spaces in the sense of M.M. Kapranov and V.A. Voevodsky this gives the 2-vector bundles of N.A. Baas, B.I. Dundas and J. Rognes. Our main result says that the geometric nerve of a good 2-category is a classifying space for the associated principal 2-bundles. In the process of proving this we develop a lot of powerful machinery which may be useful in further studies of 2-categorical topology. As a corollary we get a new proof of the classification of principal bundles. A calculation based on the main theorem shows that the principal 2-bundles associated to the 2-category of 2-vector spaces in the sense of J.C. Baez and A.S. Crans split, up to concordance, as two copies of ordinary vector bundles. When 2C is a cobordism-type 2-category we get a new notion of cobordism bundles, which turns out...

  14. Improving tRNAscan-SE Annotation Results via Ensemble Classifiers.

    Science.gov (United States)

    Zou, Quan; Guo, Jiasheng; Ju, Ying; Wu, Meihong; Zeng, Xiangxiang; Hong, Zhiling

    2015-11-01

tRNAscan-SE is a tRNA detection program that is widely used for tRNA annotation; however, its false positive rate is unacceptable for large sequences. Here, we used a machine learning method to try to improve the tRNAscan-SE results. A new predictor, tRNA-Predict, was designed. We obtained real and pseudo-tRNA sequences as training data sets using tRNAscan-SE and constructed three different tRNA feature sets. We then set up an ensemble classifier, LibMutil, to predict tRNAs from the training data. The positive data set of 623 tRNA sequences was obtained from tRNAdb 2009 and the negative data set was the false positive tRNAs predicted by tRNAscan-SE. Our in silico experiments revealed a prediction accuracy of 95.1% for tRNA-Predict using 10-fold cross-validation. tRNA-Predict was developed to distinguish functional tRNAs from pseudo-tRNAs rather than to predict tRNAs from a genome-wide scan. However, tRNA-Predict can work with the output of tRNAscan-SE, which is a genome-wide scanning method, to improve the tRNAscan-SE annotation results. The tRNA-Predict web server is accessible at http://datamining.xmu.edu.cn/∼gjs/tRNA-Predict. PMID:27491037
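
An ensemble classifier ultimately combines its members' predictions; a majority-vote sketch (LibMutil's actual combination rule may differ):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine member classifiers' labels for one candidate sequence by
    simple majority vote; illustrative, not LibMutil's exact rule."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical votes from three ensemble members on one candidate.
votes = ['tRNA', 'tRNA', 'pseudo']
```
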

  15. MISR Level 2 FIRSTLOOK TOA/Cloud Classifier parameters V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the Level 2 FIRSTLOOK TOA/Cloud Classifiers Product. It contains the Angular Signature Cloud Mask (ASCM), Cloud Classifiers, and Support Vector Machine...

  16. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    OpenAIRE

    Huiyan Jiang; Di Zhao; Tianjiao Feng; Shiyang Liao; Yenwei Chen

    2013-01-01

    A novel method is proposed to establish the classifier which can classify the pancreatic images into normal or abnormal. Firstly, the brightness feature is used to construct high-order tensors, then using multilinear principal component analysis (MPCA) extracts the eigentensors, and finally, the classifier is constructed based on support vector machine (SVM) and the classifier parameters are optimized with quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of th...

  17. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; Nosov, A.; Popov, A.

    2015-01-01

    Generally, the dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract the robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers including the trajectory classifiers in any time instants and the posture classifiers of sub-gestures in selected time instants. The trajectory classifiers contain skin dete...

  18. Automating the construction of scene classifiers for content-based video retrieval

    OpenAIRE

    Israël, Menno; Broek, van den, M.A.F.H.; Putten, van, J.P.M.; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a two stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classific...

  19. The EB Factory Project I. A Fast, Neural Net Based, General Purpose Light Curve Classifier Optimized for Eclipsing Binaries

    CERN Document Server

    Paegert, M; Burger, D M

    2014-01-01

We describe a new neural-net based light curve classifier and provide it with documentation as a ready-to-use tool for the community. While optimized for identification and classification of eclipsing binary stars, the classifier is general purpose, and has been developed for speed in the context of upcoming massive surveys such as LSST. A challenge for classifiers in the context of neural-net training and massive data sets is to minimize the number of parameters required to describe each light curve. We show that a simple and fast geometric representation that encodes the overall light curve shape, together with a chi-square parameter to capture higher-order morphology information results in efficient yet robust light curve classification, especially for eclipsing binaries. Testing the classifier on the ASAS light curve database, we achieve a retrieval rate of 98% and a false-positive rate of 2% for eclipsing binaries. We achieve similarly high retrieval rates for most other periodic variable-star classes,...

  20. Using the joint transform correlator as the feature extractor for the nearest neighbor classifier

    Science.gov (United States)

    Soon, Boon Y.; Karim, Mohammad A.; Alam, Mohammad S.

    1999-01-01

Financial transactions using credit cards have gained popularity, but the growing number of counterfeits and frauds may defeat the purpose of the cards. The search for a superior method to curb these criminal acts has become urgent in the information age. Currently, neural-network-based pattern recognition techniques are employed for security verification. However, they can be time consuming, as some techniques require a long training period. Here, a faster and more efficient method is proposed to perform security verification, one that verifies fingerprint images using the joint transform correlator as a feature extractor for a nearest neighbor classifier. A uniqueness comparison scheme is proposed to improve the accuracy of system verification. The performance of the system under noise corruption, variable contrast, and rotation of the input image is verified with a computer simulation.
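
The nearest-neighbor decision stage can be sketched independently of the optical front end; the two-dimensional "features" below are hypothetical stand-ins for correlator outputs:

```python
def nearest_neighbor(query, gallery):
    """1-NN: return the label of the enrolled feature vector closest to the
    query under squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gallery, key=lambda item: dist2(query, item[0]))[1]

# Hypothetical enrolled templates: (feature vector, identity label).
gallery = [([0.9, 0.1], 'enrolled'), ([0.1, 0.8], 'impostor')]
```
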

  1. Boosting 2-Thresholded Weak Classifiers over Scattered Rectangle Features for Object Detection

    Directory of Open Access Journals (Sweden)

    Weize Zhang

    2009-12-01

In this paper, we extend Viola and Jones’ detection framework in two aspects. Firstly, by removing the restriction of the geometry adjacency rule over Haar-like features, we get a richer representation called scattered rectangle features, which explores many more orientations other than horizontal, vertical and diagonal, as well as misaligned, detached and non-rectangle shape information that is unreachable to Haar-like features. Secondly, we strengthen the discriminating power of the weak classifiers by expanding them into 2-thresholded ones, which guarantees a better classification with smaller error, from the simple motivation that the bound on the accuracy of the final hypothesis improves when any of the weak hypotheses is improved. An optimal linear online algorithm is also proposed to determine the two thresholds. Comparison experiments on the MIT+CMU upright face test set under an objective detection criterion show that the extended method outperforms the original one.
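
A 2-thresholded weak classifier predicts one class inside an interval of the feature value and the other class outside it, rather than on one side of a single cut; a minimal sketch:

```python
def two_threshold_weak(feature, lo, hi, inside=+1):
    """Weak classifier with two thresholds: predicts `inside` when the
    feature value falls in [lo, hi], the opposite class otherwise.
    The thresholds here are hypothetical, not learned values."""
    return inside if lo <= feature <= hi else -inside
```

In boosting, many such weak hypotheses (one per rectangle feature) would be combined with learned weights.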

  2. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

The classification of gene selections for cancer diseases is a challenging job in biomedical data engineering. Various classifiers have been used to improve the classification of gene selections for cancer diseases, but individual classifiers are not well validated, so an ensemble classifier is used for cancer gene classification, combining a neural network classifier with a random forest. The random forest is an ensembling technique in which a number of tree classifiers each contribute the class at their leaf nodes. In this paper we combine a neural network with a random forest ensemble classifier for the classification of cancer gene selections for the diagnostic analysis of cancer diseases. The proposed method differs from most ensemble methods, which follow an input-output neural network paradigm in which the members of the ensemble are selected from a set of neural network classifiers: here the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only accurate but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For empirical evaluation of the proposed method we used UCI cancer disease data sets for classification. Our experimental results show better performance in comparison with random forest classification alone.

  3. Detection of prostate cancer by integration of line-scan diffusion, T2-mapping and T2-weighted magnetic resonance imaging; a multichannel statistical classifier

    International Nuclear Information System (INIS)

    A multichannel statistical classifier for detecting prostate cancer was developed and validated by combining information from three different magnetic resonance (MR) methodologies: T2-weighted, T2-mapping, and line scan diffusion imaging (LSDI). From these MR sequences, four different sets of image intensities were obtained: T2-weighted (T2W) from T2-weighted imaging, Apparent Diffusion Coefficient (ADC) from LSDI, and proton density (PD) and T2 (T2 Map) from T2-mapping imaging. Manually segmented tumor labels from a radiologist, which were validated by biopsy results, served as tumor ''ground truth.'' Textural features were extracted from the images using co-occurrence matrix (CM) and discrete cosine transform (DCT). Anatomical location of voxels was described by a cylindrical coordinate system. A statistical jack-knife approach was used to evaluate our classifiers. Single-channel maximum likelihood (ML) classifiers were based on 1 of the 4 basic image intensities. Our multichannel classifiers: support vector machine (SVM) and Fisher linear discriminant (FLD), utilized five different sets of derived features. Each classifier generated a summary statistical map that indicated tumor likelihood in the peripheral zone (PZ) of the prostate gland. To assess classifier accuracy, the average areas under the receiver operator characteristic (ROC) curves over all subjects were compared. Our best FLD classifier achieved an average ROC area of 0.839(±0.064), and our best SVM classifier achieved an average ROC area of 0.761(±0.043). The T2W ML classifier, our best single-channel classifier, only achieved an average ROC area of 0.599(±0.146). Compared to the best single-channel ML classifier, our best multichannel FLD and SVM classifiers have statistically superior ROC performance (P=0.0003 and 0.0017, respectively) from pairwise two-sided t-test. By integrating the information from multiple images and capturing the textural and anatomical features in tumor areas, summary

  4. Detection of prostate cancer by integration of line-scan diffusion, T2-mapping and T2-weighted magnetic resonance imaging; a multichannel statistical classifier.

    Science.gov (United States)

    Chan, Ian; Wells, William; Mulkern, Robert V; Haker, Steven; Zhang, Jianqing; Zou, Kelly H; Maier, Stephan E; Tempany, Clare M C

    2003-09-01

    A multichannel statistical classifier for detecting prostate cancer was developed and validated by combining information from three different magnetic resonance (MR) methodologies: T2-weighted, T2-mapping, and line scan diffusion imaging (LSDI). From these MR sequences, four different sets of image intensities were obtained: T2-weighted (T2W) from T2-weighted imaging, Apparent Diffusion Coefficient (ADC) from LSDI, and proton density (PD) and T2 (T2 Map) from T2-mapping imaging. Manually segmented tumor labels from a radiologist, which were validated by biopsy results, served as tumor "ground truth." Textural features were extracted from the images using co-occurrence matrix (CM) and discrete cosine transform (DCT). Anatomical location of voxels was described by a cylindrical coordinate system. A statistical jack-knife approach was used to evaluate our classifiers. Single-channel maximum likelihood (ML) classifiers were based on 1 of the 4 basic image intensities. Our multichannel classifiers: support vector machine (SVM) and Fisher linear discriminant (FLD), utilized five different sets of derived features. Each classifier generated a summary statistical map that indicated tumor likelihood in the peripheral zone (PZ) of the prostate gland. To assess classifier accuracy, the average areas under the receiver operator characteristic (ROC) curves over all subjects were compared. Our best FLD classifier achieved an average ROC area of 0.839(+/-0.064), and our best SVM classifier achieved an average ROC area of 0.761(+/-0.043). The T2W ML classifier, our best single-channel classifier, only achieved an average ROC area of 0.599(+/-0.146). Compared to the best single-channel ML classifier, our best multichannel FLD and SVM classifiers have statistically superior ROC performance (P=0.0003 and 0.0017, respectively) from pairwise two-sided t-test. 
By integrating the information from multiple images and capturing the textural and anatomical features in tumor areas, summary
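    The per-subject ROC comparison described above can be sketched with a toy AUC computation. This is a minimal illustration, not the paper's pipeline: the AUC is computed via the Mann-Whitney formulation, and the voxel scores and labels below are invented.

```python
# Toy sketch: compare two classifiers by area under the ROC curve (AUC).
# The AUC equals the Mann-Whitney statistic: the probability that a
# randomly chosen positive sample scores higher than a random negative.

def roc_auc(scores, labels):
    """AUC via pairwise comparison of positive vs. negative scores."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical tumor-likelihood scores per voxel (invented data):
labels         = [1, 1, 1, 0, 0, 0, 0, 1]
multichannel   = [0.9, 0.8, 0.7, 0.3, 0.2, 0.4, 0.1, 0.6]  # e.g. FLD output
single_channel = [0.6, 0.4, 0.9, 0.5, 0.3, 0.8, 0.2, 0.4]  # e.g. T2W ML output

print(roc_auc(multichannel, labels))    # 1.0 (perfect separation here)
print(roc_auc(single_channel, labels))  # 0.6875
```

    In the study itself, per-subject AUCs were then compared across classifiers with pairwise two-sided t-tests.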

  5. 48 CFR 3004.470 - Security requirements for access to unclassified facilities, Information Technology resources...

    Science.gov (United States)

    2010-10-01

    ... access to unclassified facilities, Information Technology resources, and sensitive information. 3004.470... Technology resources, and sensitive information. ... ACQUISITION REGULATION (HSAR) GENERAL ADMINISTRATIVE MATTERS Safeguarding Classified and Sensitive...

  6. CLASSIFYING BENIGN AND MALIGNANT MASSES USING STATISTICAL MEASURES

    Directory of Open Access Journals (Sweden)

    B. Surendiran

    2011-11-01

    Full Text Available Breast cancer is the most common disease found in women and causes the second highest rate of death after lung cancer. A digital mammogram is an X-ray of the breast captured for analysis, interpretation and diagnosis. According to the Breast Imaging Reporting and Data System (BIRADS), benign and malignant masses can be differentiated by their shape, size and density, which is how radiologists visualize mammograms. According to BIRADS mass shape characteristics, benign masses tend to be round, oval or lobular in shape, whereas malignant masses are lobular or irregular. Measuring regular and irregular shapes mathematically is a difficult task, since there is no single measure that differentiates the various shapes. In this paper, the malignant and benign masses present in mammograms are classified using Hue, Saturation and Value (HSV) weight function based statistical measures. The weight function is robust against noise and captures the degree of gray content of a pixel. The statistical measures use the gray weight value instead of the gray pixel value to effectively discriminate masses. The 233 mammograms from the Digital Database for Screening Mammography (DDSM) benchmark dataset have been used. The PASW data mining modeler was used to construct a neural network for identifying the importance of the statistical measures. Based on the important statistical measures obtained, a C5.0 tree was constructed with a 60-40 data split. The experimental results are encouraging, and they agree with the standard specified by the American College of Radiology BIRADS system.

  7. VIRTUAL MINING MODEL FOR CLASSIFYING TEXT USING UNSUPERVISED LEARNING

    Directory of Open Access Journals (Sweden)

    S. Koteeswaran

    2014-01-01

    Full Text Available Data mining is applied across many domains in the real world, with notable results in research areas such as big data, multimedia mining and text mining. Researchers demonstrate their contributions with substantial improvements, often by means of mathematical representation. Approaches to such problems can be classified into mathematical and implementation models. A mathematical model relates straightforward rules and formulas to the problem definition of a particular domain. An implementation model, by contrast, derives knowledge from real-time decision-making behaviour, such as artificial intelligence and swarm intelligence, and involves a more complex set of rules than the mathematical model. The implementation model mines a knowledge model from a collection of datasets and attributes, and this knowledge is then applied to the problem at hand. The objective of our work is to efficiently mine knowledge from unstructured text documents. To mine textual documents, text mining, a sub-domain of data mining, is applied. We propose a Virtual Mining Model (VMM) for effective text clustering. The VMM involves learning conceptual terms, which are grouped in a Significant Term List (STL); the model is an appropriate combination of a layer-1 arch with ABI (Analysis of Bilateral Intelligence). Frequent updating of the conceptual terms in the STL is important for effective clustering. As the results show, an artificial neural network based unsupervised learning algorithm is used to learn textual patterns in the Virtual Mining Model; this paper proposes such a learning algorithm for these terminologies.

  8. A novel clinical tool to classify facioscapulohumeral muscular dystrophy phenotypes.

    Science.gov (United States)

    Ricci, Giulia; Ruggiero, Lucia; Vercelli, Liliana; Sera, Francesco; Nikolic, Ana; Govi, Monica; Mele, Fabiano; Daolio, Jessica; Angelini, Corrado; Antonini, Giovanni; Berardinelli, Angela; Bucci, Elisabetta; Cao, Michelangelo; D'Amico, Maria Chiara; D'Angelo, Grazia; Di Muzio, Antonio; Filosto, Massimiliano; Maggi, Lorenzo; Moggio, Maurizio; Mongini, Tiziana; Morandi, Lucia; Pegoraro, Elena; Rodolico, Carmelo; Santoro, Lucio; Siciliano, Gabriele; Tomelleri, Giuliano; Villa, Luisa; Tupler, Rossella

    2016-06-01

    Based on the 7-year experience of the Italian Clinical Network for FSHD, we revised the FSHD clinical form to describe, in a harmonized manner, the phenotypic spectrum observed in FSHD. The new Comprehensive Clinical Evaluation Form (CCEF) defines various clinical categories by the combination of different features. The inter-rater reproducibility of the CCEF was assessed between two examiners using kappa statistics by evaluating 56 subjects carrying the molecular marker used for FSHD diagnosis. The CCEF classifies: (1) subjects presenting facial and scapular girdle muscle weakness typical of FSHD (category A, subcategories A1-A3), (2) subjects with muscle weakness limited to scapular girdle or facial muscles (category B subcategories B1, B2), (3) asymptomatic/healthy subjects (category C, subcategories C1, C2), (4) subjects with myopathic phenotype presenting clinical features not consistent with FSHD canonical phenotype (D, subcategories D1, D2). The inter-rater reliability study showed an excellent concordance of the final four CCEF categories with a κ equal to 0.90; 95 % CI (0.71; 0.97). Absolute agreement was observed for categories C and D, an excellent agreement for categories A [κ = 0.88; 95 % CI (0.75; 1.00)], and a good agreement for categories B [κ = 0.79; 95 % CI (0.57; 1.00)]. The CCEF supports the harmonized phenotypic classification of patients and families. The categories outlined by the CCEF may assist diagnosis, genetic counseling and natural history studies. Furthermore, the CCEF categories could support selection of patients in randomized clinical trials. This precise categorization might also promote the search of genetic factor(s) contributing to the phenotypic spectrum of disease. PMID:27126453
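    The inter-rater agreement reported above is Cohen's kappa: the observed agreement between two examiners, corrected for the agreement expected by chance. A minimal sketch on invented CCEF category assignments (the ratings below are illustrative, not the study's data):

```python
# Cohen's kappa from two raters' category assignments.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    # observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented CCEF categories assigned by two examiners to eight subjects:
r1 = ["A", "A", "B", "C", "D", "A", "B", "C"]
r2 = ["A", "A", "B", "C", "D", "A", "A", "C"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.82 (excellent agreement)
```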

  9. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

    International Nuclear Information System (INIS)

    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.

  10. Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

    Science.gov (United States)

    Matoug, S.; Abdel-Dayem, A.; Passi, K.; Gross, W.; Alqarni, M.

    2012-02-01

    Alzheimer's disease (AD) is the most common form of dementia affecting seniors age 65 and over. When AD is suspected, the diagnosis is usually confirmed with behavioural assessments and cognitive tests, often followed by a brain scan. Advanced medical imaging and pattern recognition techniques are good tools to create a learning database in the first step and to predict the class label of incoming data in order to assess the development of the disease, i.e., the conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease, which is the most critical brain disease for the senior population. Advanced medical imaging such as the volumetric MRI can detect changes in the size of brain regions due to the loss of the brain tissues. Measuring regions that atrophy during the progress of Alzheimer's disease can help neurologists in detecting and staging the disease. In the present investigation, we present a pseudo-automatic scheme that reads volumetric MRI, extracts the middle slices of the brain region, performs segmentation in order to detect the region of brain's ventricle, generates a feature vector that characterizes this region, creates an SQL database that contains the generated data, and finally classifies the images based on the extracted features. For our results, we have used the MRI data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
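    The final classification stage described above can be sketched with a small linear SVM trained by sub-gradient descent on the hinge loss. This is a stand-in for the paper's classifier: the two "features" (e.g. normalized ventricle area and a perimeter ratio) and the labels are invented, whereas a real pipeline would extract them from the segmented MRI slices.

```python
# Minimal linear SVM via sub-gradient descent on the regularized hinge loss.

def train_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """y in {-1, +1}; returns weight vector w and bias b."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                      # point violates the margin
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                               # only shrink (regularize) w
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented feature vectors: [normalized ventricle area, perimeter ratio]
X = [[0.9, 0.8], [0.85, 0.9], [0.8, 0.7], [0.2, 0.1], [0.3, 0.2], [0.1, 0.3]]
y = [1, 1, 1, -1, -1, -1]      # +1 = AD-like atrophy, -1 = control
w, b = train_svm(X, y)
print([predict(w, b, x) for x in X])   # predictions on the training points
```

    In practice one would use a tested implementation (e.g. a library SVM) rather than this sketch, but the update rule above is the core of the method.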

  11. Automatic discrimination between safe and unsafe swallowing using a reputation-based classifier

    Directory of Open Access Journals (Sweden)

    Nikjoo Mohammad S

    2011-11-01

    Full Text Available Abstract Background Swallowing accelerometry has been suggested as a potential non-invasive tool for bedside dysphagia screening. Various vibratory signal features and complementary measurement modalities have been put forth in the literature for the potential discrimination between safe and unsafe swallowing. To date, automatic classification of swallowing accelerometry has exclusively involved a single axis of vibration, although a second axis is known to contain additional information about the nature of the swallow. Furthermore, the only published attempt at automatic classification in adult patients has been based on a small sample of swallowing vibrations. Methods In this paper, a large corpus of dual-axis accelerometric signals was collected from 30 older adults (aged 65.47 ± 13.4 years, 15 male) referred to videofluoroscopic examination on the suspicion of dysphagia. We invoked a reputation-based classifier combination to automatically categorize the dual-axis accelerometric signals into safe and unsafe swallows, as labeled via videofluoroscopic review. From these participants, a total of 224 swallowing samples were obtained, 164 of which were labeled as unsafe swallows (swallows where the bolus entered the airway) and 60 as safe swallows. Three separate support vector machine (SVM) classifiers and eight different features were selected for classification. Results With selected time, frequency and information theoretic features, the reputation-based algorithm distinguished between safe and unsafe swallowing with promising accuracy (80.48 ± 5.0%), high sensitivity (97.1 ± 2%) and modest specificity (64 ± 8.8%). Interpretation of the most discriminatory features revealed that, in general, unsafe swallows had lower mean vibration amplitude and faster autocorrelation decay, suggestive of decreased hyoid excursion and compromised coordination, respectively. Further, owing to its performance-based weighting of component classifiers, the static
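    The reputation-based combination can be sketched as reputation-weighted voting: each component classifier's vote is weighted by a reputation score (e.g. its past validation accuracy). The reputations and votes below are invented, and the paper's actual reputation update rule may differ; this only illustrates the combination step.

```python
# Sketch of reputation-weighted combination of binary classifiers.
# Each component classifier votes "safe" (0) or "unsafe" (1); votes are
# weighted by the classifier's reputation score.

def combine(votes, reputations):
    """Weighted vote: returns the label with the larger reputation mass."""
    mass = {0: 0.0, 1: 0.0}
    for v, r in zip(votes, reputations):
        mass[v] += r
    return max(mass, key=mass.get)

reputations = [0.80, 0.65, 0.72]        # per-classifier reputation (invented)
print(combine([1, 0, 1], reputations))  # → 1 ("unsafe": 1.52 > 0.65)
print(combine([0, 0, 1], reputations))  # → 0 ("safe": 1.45 > 0.72)
```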

  12. Pattern recognition applied to seismic signals of Llaima volcano (Chile): An evaluation of station-dependent classifiers

    Science.gov (United States)

    Curilem, Millaray; Huenupan, Fernando; Beltrán, Daniel; San Martin, Cesar; Fuentealba, Gustavo; Franco, Luis; Cardona, Carlos; Acuña, Gonzalo; Chacón, Max; Khan, M. Salman; Becerra Yoma, Nestor

    2016-04-01

    Automatic pattern recognition applied to seismic signals from volcanoes may assist seismic monitoring by reducing the workload of analysts, allowing them to focus on more challenging activities, such as producing reports, implementing models, and understanding volcanic behaviour. In a previous work, we proposed a structure for the automatic classification of seismic events at Llaima volcano, one of the most active volcanoes in the Southern Andes, located in the Araucanía Region of Chile. A database of events taken from three monitoring stations on the volcano was used to create a classification structure independent of which station provided the signal. The database included three types of volcanic events: tremor, long period, and volcano-tectonic, plus a contrast group containing other types of seismic signals. In the present work, we maintain the same classification scheme, but we consider the stations' information separately in order to assess whether the complementary information provided by different stations improves the performance of the classifier in recognising seismic patterns. This paper proposes two strategies for combining the information from the stations: i) combining the features extracted from the signals from each station and ii) combining the classifiers of each station. In the first case, the features extracted from the signals from each station are combined to form the input for a single classification structure. In the second, a decision stage combines the results of the classifiers for each station to give a unique output. The results confirm that the station-dependent strategies that combine the features and the classifiers from several stations improve the classification performance, and that combining the features provides the best performance. The results show an average improvement of 9% in classification accuracy compared with the station-independent method.
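    The two fusion strategies can be sketched on toy data: (i) concatenate the per-station feature vectors into one input for a single classifier, or (ii) let per-station classifiers decide and then combine their decisions. The station features and the stub classification rule below are invented for illustration only.

```python
# Toy sketch of feature-level vs. decision-level fusion across stations.

station_features = {
    "A": [0.2, 0.9],   # e.g. spectral features extracted at station A
    "B": [0.4, 0.7],
    "C": [0.3, 0.8],
}

# (i) feature-level fusion: one classifier sees all stations at once
fused_input = [x for feats in station_features.values() for x in feats]

# (ii) decision-level fusion: majority vote over per-station decisions
def stub_classifier(feats):
    # toy rule standing in for a trained classifier: long-period vs. tremor
    return "LP" if feats[1] > 0.75 else "TR"

decisions = [stub_classifier(f) for f in station_features.values()]
majority = max(set(decisions), key=decisions.count)

print(len(fused_input))        # 6 features feed the single classifier
print(decisions, majority)     # ['LP', 'TR', 'LP'] → majority 'LP'
```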

  13. Classified and clustered data constellation: An efficient approach of 3D urban data management

    Science.gov (United States)

    Azri, Suhaibah; Ujang, Uznir; Castro, Francesc Antón; Rahman, Alias Abdul; Mioc, Darka

    2016-03-01

    The growth of urban areas has resulted in massive urban datasets and difficulties in handling and managing issues related to urban areas. Huge and massive datasets can degrade data retrieval and information analysis performance. In addition, the urban environment is very difficult to manage because it involves various types of data, such as multiple types of zoning themes in the case of urban mixed-use development. Thus, a special technique for the efficient handling and management of urban data is necessary. This paper proposes a structure called Classified and Clustered Data Constellation (CCDC) for urban data management. CCDC operates on the basis of two filters: classification and clustering. To boost the performance of information retrieval, CCDC offers a minimal percentage of overlap among nodes and of coverage area, avoiding repetitive data entry and multipath queries. The results of tests conducted on several urban mixed-use development datasets using CCDC verify that it efficiently retrieves their semantic and spatial information. Further, comparisons conducted between CCDC and existing clustering and data constellation techniques, from the aspect of preservation of minimal overlap and coverage, confirm that the proposed structure is capable of preserving the minimum overlap and coverage area among nodes. Our overall results indicate that CCDC is efficient in handling and managing urban data, especially for urban mixed-use development applications.
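    The "minimal overlap among nodes" goal can be illustrated with a toy overlap measure for two axis-aligned bounding boxes, the kind of quantity a spatial index like CCDC aims to minimize. The boxes and coordinates below are invented; the actual CCDC node geometry is more involved.

```python
# Overlap area of two axis-aligned bounding boxes (x1, y1, x2, y2).

def overlap_area(a, b):
    w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

node1 = (0.0, 0.0, 4.0, 4.0)
node2 = (3.0, 3.0, 6.0, 6.0)   # overlaps node1 in a 1x1 corner
node3 = (5.0, 0.0, 8.0, 2.0)   # disjoint from node1

print(overlap_area(node1, node2))  # 1.0 → queries in the corner hit both nodes
print(overlap_area(node1, node3))  # 0.0 → no multipath query between these
```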

  14. Moves on the Street: Classifying Crime Hotspots Using Aggregated Anonymized Data on People Dynamics.

    Science.gov (United States)

    Bogomolov, Andrey; Lepri, Bruno; Staiano, Jacopo; Letouzé, Emmanuel; Oliver, Nuria; Pianesi, Fabio; Pentland, Alex

    2015-09-01

    The wealth of information provided by real-time streams of data has paved the way for life-changing technological advancements, improving the quality of life of people in many ways, from facilitating knowledge exchange to self-understanding and self-monitoring. Moreover, the analysis of anonymized and aggregated large-scale human behavioral data offers new possibilities to understand global patterns of human behavior and helps decision makers tackle problems of societal importance. In this article, we highlight the potential societal benefits derived from big data applications with a focus on citizen safety and crime prevention. First, we introduce the emergent new research area of big data for social good. Next, we detail a case study tackling the problem of crime hotspot classification, that is, the classification of which areas in a city are more likely to witness crimes based on past data. In the proposed approach we use demographic information along with human mobility characteristics as derived from anonymized and aggregated mobile network data. The hypothesis that aggregated human behavioral data captured from the mobile network infrastructure, in combination with basic demographic information, can be used to predict crime is supported by our findings. Our models, built on and evaluated against real crime data from London, obtain accuracy of almost 70% when classifying whether a specific area in the city will be a crime hotspot or not in the following month. PMID:27442957

  15. Mindfulness for Students Classified with Emotional/Behavioral Disorder

    Science.gov (United States)

    Malow, Micheline S.; Austin, Vance L.

    2016-01-01

    A six-week investigation utilizing a standard mindfulness for adolescents curriculum and norm-based standardized resiliency scale was implemented in a self-contained school for students with Emotional/Behavioral Disorders (E/BD). Informal integration of mindfulness activities into a classroom setting was examined for ecological appropriateness and…

  16. Comparing Latent Dirichlet Allocation and Latent Semantic Analysis as Classifiers

    Science.gov (United States)

    Anaya, Leticia H.

    2011-01-01

    In the Information Age, a proliferation of unstructured text electronic documents exists. Processing these documents by humans is a daunting task as humans have limited cognitive abilities for processing large volumes of documents that can often be extremely lengthy. To address this problem, text data computer algorithms are being developed.…

  17. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
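    The regularizer's core quantity can be sketched as the mutual information between the classifier response R and the true label Y, here estimated from joint counts on invented response/label pairs. Note the paper optimizes an entropy-based estimate inside the learning objective; this post-hoc count-based estimate only illustrates the quantity being maximized.

```python
# Mutual information I(R; Y) between discrete responses and labels,
# estimated from empirical joint and marginal frequencies.
from math import log2
from collections import Counter

def mutual_information(responses, labels):
    n = len(responses)
    p_r = Counter(responses)
    p_y = Counter(labels)
    p_ry = Counter(zip(responses, labels))
    return sum(c / n * log2((c / n) / ((p_r[r] / n) * (p_y[y] / n)))
               for (r, y), c in p_ry.items())

# A perfectly informative response carries all label information (1 bit for
# balanced binary labels); a constant response carries none.
labels = [0, 0, 1, 1]
print(mutual_information([0, 0, 1, 1], labels))  # 1.0 bit
print(mutual_information([1, 1, 1, 1], labels))  # 0.0 bits
```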

  18. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. PMID:25329326

  19. Combination of designed immune based classifiers for ERP assessment in a P300-based GKT

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Moradi

    2012-08-01

    Full Text Available Constructing a precise classifier is an important issue in pattern recognition tasks. Combining the decisions of several competing classifiers to achieve improved classification accuracy has attracted interest in many research areas. In this study, the Artificial Immune System (AIS), an effective artificial intelligence technique, was used to design several efficient classifiers. The combination of multiple immune based classifiers was tested on ERP assessment in a P300-based GKT (Guilty Knowledge Test). Experimental results showed that the proposed classifier, named Compact Artificial Immune System (CAIS), was a successful classification method and is competitive with other classifiers such as K-nearest neighbour (KNN), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM). It was also observed in the experiments that using decision fusion techniques for multiple classifier combination leads to better recognition results. The best recognition rate achieved by CAIS was 80.90%, an improvement over the other classification methods applied in our study.

  20. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation.

    Directory of Open Access Journals (Sweden)

    Sebastian Bach

    Full Text Available Understanding and interpreting the classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are very successfully solving a plethora of tasks, they have in most cases the disadvantage of acting as a black box, not providing any information about what made them arrive at a particular decision. This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers. We introduce a methodology that allows the contributions of single pixels to predictions to be visualized for kernel-based classifiers over Bag of Words features and for multilayered neural networks. These pixel contributions can be visualized as heatmaps and are provided to a human expert who can intuitively not only verify the validity of the classification decision, but also focus further analysis on regions of potential interest. We evaluate our method for classifiers trained on PASCAL VOC 2009 images, synthetic image data containing geometric shapes, the MNIST handwritten digits data set and for the pre-trained ImageNet model available as part of the Caffe open source package.
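    The conservation idea behind layer-wise relevance propagation can be sketched in its simplest setting: for a linear scoring function f(x) = Σᵢ wᵢxᵢ + b, each input "pixel" receives relevance rᵢ = wᵢxᵢ, so the relevances sum to the pre-bias score. Real LRP propagates such decompositions backwards through every layer of a nonlinear network; the weights and inputs below are invented.

```python
# Per-pixel relevance decomposition for a linear classifier.

def lrp_linear(x, w):
    """Relevance of each input: its contribution w_i * x_i to the score."""
    return [wi * xi for wi, xi in zip(w, x)]

x = [0.0, 1.0, 2.0, 0.5]       # toy "pixel" intensities
w = [0.3, -0.2, 0.8, 0.1]      # toy weights of a linear classifier
relevance = lrp_linear(x, w)

score = sum(wi * xi for wi, xi in zip(w, x))
print(relevance)                              # per-pixel heatmap values
print(abs(sum(relevance) - score) < 1e-12)    # conservation: sums to the score
```

    Positive relevances mark pixels speaking for the predicted class, negative ones against it, which is what the heatmaps visualize.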

  1. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems

    Science.gov (United States)

    2012-01-01

    Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. 
Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and application but also on the
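    The win-percentage estimate can be sketched as a Monte Carlo procedure: draw random feature sets, score every classifier on each draw, and count how often each classifier beats its peers. The per-feature-set accuracy function below is an invented stand-in for cross-validated performance on real data.

```python
# Monte Carlo estimate of win percentage over random feature sets.
import random

def win_percentage(accuracy_of, classifiers, feature_sets):
    wins = {c: 0 for c in classifiers}
    for fs in feature_sets:
        scores = {c: accuracy_of(c, fs) for c in classifiers}
        best = max(scores.values())
        for c in classifiers:
            if scores[c] == best:      # ties counted as wins for simplicity
                wins[c] += 1
    n = len(feature_sets)
    return {c: wins[c] / n for c in classifiers}

random.seed(1)
feature_sets = [tuple(random.sample(range(100), 2)) for _ in range(500)]

# Invented behaviour: "svm" is better on most feature sets, "knn" on some.
def accuracy_of(clf, fs):
    base = 0.7 if clf == "svm" else 0.65
    bonus = 0.2 if clf == "knn" and fs[0] < 20 else 0.0
    return base + bonus

wp = win_percentage(accuracy_of, ["svm", "knn"], feature_sets)
print(wp)   # knn wins only on the feature sets it happens to suit
```

    As in the paper's conclusion, no classifier "wins" outright here: each has a non-zero probability of outperforming its peer, depending on the feature set drawn.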

  2. Win percentage: a novel measure for assessing the suitability of machine classifiers for biological problems

    Directory of Open Access Journals (Sweden)

    Parry R Mitchell

    2012-03-01

    Full Text Available Abstract Background Selecting an appropriate classifier for a particular biological application poses a difficult problem for researchers and practitioners alike. In particular, choosing a classifier depends heavily on the features selected. For high-throughput biomedical datasets, feature selection is often a preprocessing step that gives an unfair advantage to the classifiers built with the same modeling assumptions. In this paper, we seek classifiers that are suitable to a particular problem independent of feature selection. We propose a novel measure, called "win percentage", for assessing the suitability of machine classifiers to a particular problem. We define win percentage as the probability a classifier will perform better than its peers on a finite random sample of feature sets, giving each classifier equal opportunity to find suitable features. Results First, we illustrate the difficulty in evaluating classifiers after feature selection. We show that several classifiers can each perform statistically significantly better than their peers given the right feature set among the top 0.001% of all feature sets. We illustrate the utility of win percentage using synthetic data, and evaluate six classifiers in analyzing eight microarray datasets representing three diseases: breast cancer, multiple myeloma, and neuroblastoma. After initially using all Gaussian gene-pairs, we show that precise estimates of win percentage (within 1%) can be achieved using a smaller random sample of all feature pairs. We show that for these data no single classifier can be considered the best without knowing the feature set. Instead, win percentage captures the non-zero probability that each classifier will outperform its peers based on an empirical estimate of performance. 
Conclusions Fundamentally, we illustrate that the selection of the most suitable classifier (i.e., one that is more likely to perform better than its peers) not only depends on the dataset and

  3. See Change: Classifying single observation transients from HST using SNCosmo

    Science.gov (United States)

    Sofiatti Nunes, Caroline; Perlmutter, Saul; Nordin, Jakob; Rubin, David; Lidman, Chris; Deustua, Susana E.; Fruchter, Andrew S.; Aldering, Greg Scott; Brodwin, Mark; Cunha, Carlos E.; Eisenhardt, Peter R.; Gonzalez, Anthony H.; Jee, Myungkook J.; Hildebrandt, Hendrik; Hoekstra, Henk; Santos, Joana; Stanford, S. Adam; Stern, Dana R.; Fassbender, Rene; Richard, Johan; Rosati, Piero; Wechsler, Risa H.; Muzzin, Adam; Willis, Jon; Boehringer, Hans; Gladders, Michael; Goobar, Ariel; Amanullah, Rahman; Hook, Isobel; Huterer, Dragan; Huang, Jiasheng; Kim, Alex G.; Kowalski, Marek; Linder, Eric; Pain, Reynald; Saunders, Clare; Suzuki, Nao; Barbary, Kyle H.; Rykoff, Eli S.; Meyers, Joshua; Spadafora, Anthony L.; Hayden, Brian; Wilson, Gillian; Rozo, Eduardo; Hilton, Matt; Dixon, Samantha; Yen, Mike

    2016-01-01

    The Supernova Cosmology Project (SCP) is executing "See Change", a large HST program to look for possible variation in dark energy using supernovae at z>1. As part of the survey, we often must make time-critical follow-up decisions based on multicolor detection at a single epoch. We demonstrate the use of the SNCosmo software package to obtain simulated fluxes in the HST filters for type Ia and core-collapse supernovae at various redshifts. These simulations allow us to compare photometric data from HST with the distribution of the simulated SNe through methods such as Random Forest, a learning method for classification, and Gaussian Kernel Estimation. The results help us make informed decisions about triggered follow-up using HST and ground-based observatories to provide the time-critical information needed about transients. Examples of this technique applied in the context of See Change are shown.

  4. 32 CFR 2001.50 - Telecommunications automated information systems and network security.

    Science.gov (United States)

    2010-07-01

    ... and network security. 2001.50 Section 2001.50 National Defense Other Regulations Relating to National Defense INFORMATION SECURITY OVERSIGHT OFFICE, NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED... network security. Each agency head shall ensure that classified information electronically...

  5. A determination of the optimum time of year for remotely classifying marsh vegetation from LANDSAT multispectral scanner data. [Louisiana

    Science.gov (United States)

    Butera, M. K. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. A technique was used to determine the optimum time for classifying marsh vegetation from computer-processed LANDSAT MSS data. The technique depended on the analysis of data derived from supervised pattern recognition by maximum likelihood theory. A dispersion index, created by the ratio of separability among the class spectral means to variability within the classes, defined the optimum classification time. Data compared from seven LANDSAT passes acquired over the same area of Louisiana marsh indicated that June and September were optimum marsh mapping times to collectively classify Baccharis halimifolia, Spartina patens, Spartina alterniflora, Juncus roemerianus, and Distichlis spicata. The same technique was used to determine the optimum classification time for individual species. April appeared to be the best month to map Juncus roemerianus; May, Spartina alterniflora; June, Baccharis halimifolia; and September, Spartina patens and Distichlis spicata. This information is important, for instance, when a single species is recognized to indicate a particular environmental condition.

  6. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  7. Multivariate models to classify Tuscan virgin olive oils by zone.

    Directory of Open Access Journals (Sweden)

    Alessandri, Stefano

    1999-10-01

    Full Text Available In order to study and classify Tuscan virgin olive oils, 179 samples were collected. They were obtained from drupes harvested during the first half of November, from three different zones of the Region. The sampling was repeated for 5 years. Fatty acids, phytol, aliphatic and triterpenic alcohols, triterpenic dialcohols, sterols, squalene and tocopherols were analyzed. A subset of variables was considered. They were selected in a preceding work as the most effective and reliable, from the univariate point of view. The analytical data were transformed (except for the cycloartenol) to compensate for annual variations: the mean for the East zone was subtracted from each value, within each year. Univariate three-class models were calculated and further variables discarded. Then multivariate three-zone models were evaluated, including phytol (which was always selected) and all the combinations of palmitic, palmitoleic and oleic acid, tetracosanol, cycloartenol and squalene. Models including from two to seven variables were studied. The best model shows by-zone classification errors less than 40%, by-zone within-year classification errors that are less than 45% and a global classification error equal to 30%. This model includes phytol, palmitic acid, tetracosanol and cycloartenol.


  8. Heterogeneous classifier fusion for ligand-based virtual screening: or, how decision making by committee can be a good thing.

    Science.gov (United States)

    Riniker, Sereina; Fechner, Nikolas; Landrum, Gregory A

    2013-11-25

    The concept of data fusion - the combination of information from different sources describing the same object with the expectation to generate a more accurate representation - has found application in a very broad range of disciplines. In the context of ligand-based virtual screening (VS), data fusion has been applied to combine knowledge from either different active molecules or different fingerprints to improve similarity search performance. Machine-learning (ML) methods based on fusion of multiple homogeneous classifiers, in particular random forests, have also been widely applied in the ML literature. The heterogeneous version of classifier fusion - fusing the predictions from different model types - has been less explored. Here, we investigate heterogeneous classifier fusion for ligand-based VS using three different ML methods, RF, naïve Bayes (NB), and logistic regression (LR), together with four 2D fingerprints: atom pairs, topological torsions, the RDKit fingerprint, and a circular fingerprint. The methods are compared using a previously developed benchmarking platform for 2D fingerprints, which is extended to ML methods in this article. The original data sets are filtered for difficulty, and a new set of challenging data sets from ChEMBL is added. Data sets were also generated for a second use case: starting from a small set of related actives instead of diverse actives. The final fused model consistently outperforms the other approaches across the broad variety of targets studied, indicating that heterogeneous classifier fusion is a very promising approach for ligand-based VS. The new data sets together with the adapted source code for ML methods are provided in the Supporting Information.
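A minimal sketch of heterogeneous classifier fusion in the spirit described above: three different model types (RF, NB, LR) fused by averaging their predicted probabilities. The data are synthetic and soft voting stands in for the paper's fusion scheme:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# Heterogeneous fusion: average the predicted probabilities of three
# different model types ("soft" voting)
fused = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=7)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",
).fit(X_tr, y_tr)
acc = fused.score(X_te, y_te)
print(acc)
```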

  9. What about the regolith, the saprolite and the bedrock? Proposals for classifying the subsolum in WRB

    Science.gov (United States)

    Juilleret, Jérôme; Dondeyne, Stefaan; Hissler, Christophe

    2014-05-01

    Since soil surveys in the past were mainly conducted in support of agriculture, soil classification tended to focus on the solum representing mainly the upper part of the soil cover that is exploited by crops; the subsolum was largely neglected. When dealing with environmental issues - such as vegetation ecology, groundwater recharge, water quality or waste disposal - an integrated knowledge of the solum to subsolum continuum is required. In the World Reference Base for soil resources (WRB), the lower boundary for soil classification is set at 2 m, including both loose parent material as well as weathered and continuous rock. With the raised concern for environmental issues and global warming, classification concepts in WRB have been widened over the last decades. Cryosols were included as a separate Reference Soil Group to account for soils affected by perennial frost; Technosols were included to account for soils dominated by technical human activity. Terms for describing and classifying the subsolum are however still lacking. Nevertheless, during soil surveys a wealth of information on the subsolum is also collected. In Luxembourg, detailed soil surveys are conducted according to a national legend which is correlated to WRB. Quantitative data on characteristics of the subsolum, such as bedding, cleavage, fracture density and dipping of the layers, are recorded for their importance in relation to subsurface hydrology. Drawing from this experience, we propose defining four "subsolum materials" which could be integrated into WRB as qualifiers. Regolithic materials are composed of soil and rock fragments deposited by water, solifluction, ice or wind; Paralithic materials consist of partly weathered rock with geogenic structural features; Saprolitic materials are formed from in situ weathering of the underlying geological deposits; Lithic materials correspond to unaltered bedrock.
We discuss how these characteristics could be integrated into WRB and how additional

  10. An efficient fully unsupervised video object segmentation scheme using an adaptive neural-network classifier architecture.

    Science.gov (United States)

    Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S

    2003-01-01

    In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).

  11. Ensemble regularized linear discriminant analysis classifier for P300-based brain-computer interface.

    Science.gov (United States)

    Onishi, Akinari; Natsume, Kiyohisa

    2013-01-01

    This paper demonstrates a better classification performance of an ensemble classifier using a regularized linear discriminant analysis (LDA) for P300-based brain-computer interface (BCI). The ensemble classifier with an LDA is sensitive to the lack of training data because covariance matrices are estimated imprecisely. One solution to the lack of training data is to employ a regularized LDA. Thus we employed the regularized LDA for the ensemble classifier of the P300-based BCI. The principal component analysis (PCA) was used for the dimension reduction. As a result, an ensemble regularized LDA classifier showed significantly better classification performance than an ensemble un-regularized LDA classifier. Therefore the proposed ensemble regularized LDA classifier is robust against the lack of training data.
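The ingredients above (PCA reduction, shrinkage-regularized LDA, an ensemble over resampled training sets) can be sketched roughly as follows; the synthetic data, component counts, and ensemble size are illustrative choices, not the paper's setup:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# High-dimensional data with deliberately few training samples
X, y = make_classification(n_samples=400, n_features=100, n_informative=10,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=100, random_state=1)

# PCA for dimension reduction, then a shrinkage-regularized LDA; bagging
# over bootstrap resamples builds the ensemble.
base = make_pipeline(PCA(n_components=20),
                     LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
ensemble = BaggingClassifier(base, n_estimators=10, random_state=1).fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(acc)
```

The `shrinkage="auto"` option regularizes the covariance estimate, which is exactly the failure mode cited above when training data are scarce.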

  12. Analysis of Parametric & Non Parametric Classifiers for Classification Technique using WEKA

    Directory of Open Access Journals (Sweden)

    Yugal kumar

    2012-07-01

    Full Text Available In the field of machine learning and data mining, a lot of work has been done to construct new classification techniques/classifiers, and much research is ongoing to construct further new classifiers with the help of nature-inspired techniques such as Genetic Algorithm, Ant Colony Optimization, Bee Colony Optimization, Neural Network, Particle Swarm Optimization, etc. Many researchers have provided comparative studies/analyses of classification techniques. But this paper deals with another form of analysis of classification techniques, i.e., analysis of parametric and non-parametric classifiers. This paper identifies parametric and non-parametric classifiers that are used in the classification process and provides a tree representation of these classifiers. For the analysis, four classifiers are used, of which two are parametric and the rest are non-parametric in nature.

  13. Classifier-ensemble incremental-learning procedure for nuclear transient identification at different operational conditions

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero, E-mail: piero.baraldi@polimi.i [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-04-15

    An important requirement for the practical implementation of empirical diagnostic systems is the capability of classifying transients in all plant operational conditions. The present paper proposes an approach based on an ensemble of classifiers for incrementally learning transients under different operational conditions. New classifiers are added to the ensemble where transients occurring in new operational conditions are not satisfactorily classified. The construction of the ensemble is made by bagging; the base classifier is a supervised Fuzzy C Means (FCM) classifier whose outcomes are combined by majority voting. The incremental learning procedure is applied to the identification of simulated transients in the feedwater system of a Boiling Water Reactor (BWR) under different reactor power levels.

  14. The Entire Quantile Path of a Risk-Agnostic SVM Classifier

    CERN Document Server

    Yu, Jin; Zhang, Jian

    2012-01-01

    A quantile binary classifier uses the rule: Classify x as +1 if P(Y = 1|X = x) >= t, and as -1 otherwise, for a fixed quantile parameter t ∈ [0, 1]. It has been shown that Support Vector Machines (SVMs) in the limit are quantile classifiers with t = 1/2. In this paper, we show that by using asymmetric cost of misclassification SVMs can be appropriately extended to recover, in the limit, the quantile binary classifier for any t. We then present a principled algorithm to solve the extended SVM classifier for all values of t simultaneously. This has two implications: First, one can recover the entire conditional distribution P(Y = 1|X = x) = t for t ∈ [0, 1]. Second, we can build a risk-agnostic SVM classifier where the cost of misclassification need not be known a priori. Preliminary numerical experiments show the effectiveness of the proposed algorithm.
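The asymmetric-cost construction can be imitated with a stock SVM by weighting the two classes: penalizing class-0 errors (false positives) by t and class-1 errors (false negatives) by (1 - t) corresponds to the Bayes threshold p >= t. A hedged sketch on synthetic data (this uses scikit-learn's `class_weight`, not the paper's path algorithm):

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=5, random_state=2)

def quantile_svm(t):
    # Asymmetric misclassification costs: weight errors on class 0 by t
    # and on class 1 by (1 - t); in the limit this recovers the rule
    # "predict +1 iff P(Y = 1 | X = x) >= t".
    return SVC(kernel="rbf", class_weight={0: t, 1: 1.0 - t}).fit(X, y)

# A larger quantile t makes the classifier more reluctant to predict +1.
frac_pos = [quantile_svm(t).predict(X).mean() for t in (0.2, 0.5, 0.8)]
print(frac_pos)
```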

  15. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang

    2016-01-01

    Full Text Available The diabetic retinopathy (DR) screening system poses a financial problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when using a small training dataset. The committee proposes to the doctor only the manual work that is necessary, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images and comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.

  16. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Science.gov (United States)

    An, Mingqiang

    2016-01-01

    The diabetic retinopathy (DR) screening system poses a financial problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when using a small training dataset. The committee proposes to the doctor only the manual work that is necessary, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images and comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.

  17. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    Energy Technology Data Exchange (ETDEWEB)

    Wurtz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kaplan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully-realized statistical classifiers rely on a comprehensive set of tools for designing, building, and implementing. PSD advances rely on improvements to the implemented algorithm, which can draw on conventional statistical-classifier or machine-learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully-designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier’s receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
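The recommended ROC-plus-GRR report can be sketched as follows: compute the ROC curve of a score-based classifier and read off the signal acceptance at a fixed gamma rejection rate. The Gaussian PSD scores here are synthetic stand-ins, not detector data:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic PSD scores: gammas near 0, neutrons shifted by two sigma
rng = np.random.default_rng(6)
gamma_scores = rng.normal(0.0, 1.0, 5000)
neutron_scores = rng.normal(2.0, 1.0, 5000)

y_true = np.r_[np.zeros(5000), np.ones(5000)]
scores = np.r_[gamma_scores, neutron_scores]
fpr, tpr, _ = roc_curve(y_true, scores)

# Neutron acceptance at a fixed gamma rejection rate (GRR):
# GRR = 1 - false positive rate for the gamma (background) class
grr = 0.999
acceptance = tpr[np.searchsorted(fpr, 1.0 - grr)]
print(acceptance)
```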

  18. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    Science.gov (United States)

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
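A toy numeric sketch of the conditional-independence assumption behind such a combination: with each test's sensitivity and specificity fixed at its operating point, the likelihood ratio of a joint outcome factorizes into per-test ratios, which can then be thresholded to form a combination rule. The numbers are invented for illustration:

```python
# Illustrative sensitivities and specificities for two tests (invented numbers)
se = (0.80, 0.70)
sp = (0.90, 0.85)

def likelihood_ratio(results):
    """P(results | diseased) / P(results | healthy), assuming the tests
    are conditionally independent given the true state."""
    lr = 1.0
    for r, s, p in zip(results, se, sp):
        lr *= (s / (1.0 - p)) if r else ((1.0 - s) / p)
    return lr

# Joint likelihood ratio for every pattern of test outcomes
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, round(likelihood_ratio(pattern), 3))
```

Ranking joint outcomes by this ratio and sweeping the threshold traces out the combined ROC curve described in the abstract.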

  19. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    Science.gov (United States)

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  20. Retrieval Architecture with Classified Query for Content Based Image Recognition

    Directory of Open Access Journals (Sweden)

    Rik Das

    2016-01-01

    Full Text Available Consumer behavior has been observed to be largely influenced by image data with the increasing familiarity of smart phones and the World Wide Web. The traditional technique of browsing through product varieties on the Internet with text keywords has been gradually replaced by easily accessible image data. The importance of image data has shown steady growth in application orientation for the business domain with the advent of different image capturing devices and social media. The paper has described a methodology of feature extraction by an image binarization technique for enhancing identification and retrieval of information using content based image recognition. The proposed algorithm was tested on two public datasets, namely, the Wang dataset and the Oliva and Torralba (OT-Scene) dataset, with 3688 images in total. It has outclassed the state-of-the-art techniques in performance measure and has shown statistical significance.

  1. Classifying Gamma-Ray Bursts with Gaussian Mixture Model

    CERN Document Server

    Yang, En-Bo; Choi, Chul-Sung; Chang, Heon-Young

    2016-01-01

    Using Gaussian Mixture Model (GMM) and the Expectation Maximization Algorithm, we perform an analysis of time duration ($T_{90}$) for CGRO/BATSE, Swift/BAT and Fermi/GBM Gamma-Ray Bursts. The $T_{90}$ distributions of 298 redshift-known Swift/BAT GRBs have also been studied in both observer and rest frames. The Bayesian Information Criterion has been used to compare different GMM models. We find that two Gaussian components are better to describe the CGRO/BATSE and Fermi/GBM GRBs in the observer frame. Also, we caution that two groups are expected for the Swift/BAT bursts in the rest frame, which is consistent with some previous results. However, Swift GRBs in the observer frame seem to show a trimodal distribution, of which the superficial intermediate class may result from the selection effect of Swift/BAT.
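The model-comparison step above (fit GMMs with different component counts, pick the one with the lowest Bayesian Information Criterion) can be sketched on a synthetic bimodal stand-in for a log $T_{90}$ sample; the population parameters are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a log10(T90) sample: two burst populations
rng = np.random.default_rng(3)
log_t90 = np.concatenate([rng.normal(-0.5, 0.50, 200),   # short bursts
                          rng.normal(1.5, 0.45, 600)])   # long bursts
log_t90 = log_t90.reshape(-1, 1)

# Fit GMMs with 1-3 components and compare with BIC (lower is better)
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(log_t90).bic(log_t90)
        for k in (1, 2, 3)}
best_k = min(bics, key=bics.get)
print(best_k)
```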

  2. Empirical Analysis of Bagged SVM Classifier for Data Mining Applications

    Directory of Open Access Journals (Sweden)

    M.Govindarajan

    2013-11-01

    Full Text Available Data mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before examining the data. The feasibility and the benefits of the proposed approaches are demonstrated by means of data mining applications like intrusion detection, direct marketing, and signature verification. A variety of techniques have been employed for analysis, ranging from traditional statistical methods to data mining approaches. Bagging and boosting are two relatively new but popular methods for producing ensembles. In this work, bagging is evaluated on real and benchmark data sets of intrusion detection, direct marketing, and signature verification in conjunction with SVM as the base learner. The proposed method is superior to the individual approach for data mining applications in terms of classification accuracy.

  3. Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets

    CERN Document Server

    Donalek, Ciro; Djorgovski, S G; Mahabal, Ashish A; Graham, Matthew J; Fuchs, Thomas J; Turmon, Michael J; Philip, N Sajeeth; Yang, Michael Ting-Chang; Longo, Giuseppe

    2013-01-01

    The amount of collected data in many scientific fields is increasing, all of it requiring a common task: extracting knowledge from massive, multi-parametric data sets as rapidly and efficiently as possible. This is especially true in astronomy, where synoptic sky surveys are enabling new research frontiers in time-domain astronomy and posing several new object classification challenges in multi-dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Surveys (CRTS) and the Kepler Mission, we illustrate a variety of feature selection strategies used to identify the subsets that give the most information and the results achieved applying these techniques to three major astronomical problems.

  4. A consensus prognostic gene expression classifier for ER positive breast cancer

    OpenAIRE

    Teschendorff, Andrew E.; Naderi, Ali; Barbosa-Morais, Nuno L.; Pinder, Sarah E; Ellis, Ian O.; Aparicio, Sam; Brenton, James D.; Caldas, Carlos

    2006-01-01

    Background A consensus prognostic gene expression classifier is still elusive in heterogeneous diseases such as breast cancer. Results Here we perform a combined analysis of three major breast cancer microarray data sets to hone in on a universally valid prognostic molecular classifier in estrogen receptor (ER) positive tumors. Using a recently developed robust measure of prognostic separation, we further validate the prognostic classifier in three external independent cohorts, confirming the...

  5. Using Multivariate Machine Learning Methods and Structural MRI to Classify Childhood Onset Schizophrenia and Healthy Controls

    OpenAIRE

    Greenstein, Deanna; Malley, James D.

    2012-01-01

    Introduction: Multivariate machine learning methods can be used to classify groups of schizophrenia patients and controls using structural magnetic resonance imaging (MRI). However, machine learning methods to date have not been extended beyond classification and contemporaneously applied in a meaningful way to clinical measures. We hypothesized that brain measures would classify groups, and that increased likelihood of being classified as a patient using regional brain measures would be posi...

  6. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach

    OpenAIRE

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan

    2010-01-01

    A cotraining-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers: Fisher's linear discriminant analysis and Bayesian linear discriminant analysis, progressively teaching each other to build a final classifier, which is robust and able to learn effectively from unlabeled data. Detailed analysis of the performance is carried out through extensive cross-validatio...

  7. Onboard Classifiers for Science Event Detection on a Remote Sensing Spacecraft

    Science.gov (United States)

    Castano, Rebecca; Mazzoni, Dominic; Tang, Nghia; Greeley, Ron; Doggett, Thomas; Cichy, Ben; Chien, Steve; Davies, Ashley

    2006-01-01

    Typically, data collected by a spacecraft is downlinked to Earth and pre-processed before any analysis is performed. We have developed classifiers that can be used onboard a spacecraft to identify high priority data for downlink to Earth, providing a method for maximizing the use of a potentially bandwidth limited downlink channel. Onboard analysis can also enable rapid reaction to dynamic events, such as flooding, volcanic eruptions or sea ice break-up. Four classifiers were developed to identify cryosphere events using hyperspectral images. These classifiers include a manually constructed classifier, a Support Vector Machine (SVM), a Decision Tree and a classifier derived by searching over combinations of thresholded band ratios. Each of the classifiers was designed to run in the computationally constrained operating environment of the spacecraft. A set of scenes was hand-labeled to provide training and testing data. Performance results on the test data indicate that the SVM and manual classifiers outperformed the Decision Tree and band-ratio classifiers with the SVM yielding slightly better classifications than the manual classifier.

  8. Classification of Cancer Gene Selection Using Random Forest and Neural Network Based Ensemble Classifier

    Directory of Open Access Journals (Sweden)

    Jogendra Kushwah

    2013-06-01

    Full Text Available The classification of cancer disease genes is a challenging job in biomedical data engineering. Various classifiers have been used to improve the classification of gene selection for cancer diseases, but the classifications of these classifiers are not validated. So an ensemble classifier is used for cancer gene classification, using a neural network classifier with a random forest tree. The random forest tree is an ensembling technique of classifiers, in which a number of classifiers are ensembled through the leaf-node classes of the classifiers. In this paper we combine a neural network with a random forest ensemble classifier for classifying cancer gene selections in the diagnostic analysis of cancer diseases. The proposed method is different from most ensemble classifier methods, which follow an input-output paradigm of neural networks, where the members of the ensemble are selected from a set of neural network classifiers; here, the number of classifiers is determined during the growing procedure of the forest. Furthermore, the proposed method produces an ensemble that is not only correct, but also diverse, ensuring the two important properties that should characterize an ensemble classifier. For the empirical evaluation of our proposed method we used UCI cancer diseases data sets for classification. Our experimental results show better results in comparison with random forest tree classification.

  9. Bagged ensemble of Fuzzy C-Means classifiers for nuclear transient identification

    Energy Technology Data Exchange (ETDEWEB)

    Baraldi, Piero; Razavi-Far, Roozbeh [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Zio, Enrico, E-mail: enrico.zio@polimi.it [Dipartimento di Energia - Sezione Ingegneria Nucleare, Politecnico di Milano, Via Ponzio 34/3, 20133 Milano (Italy); Ecole Centrale Paris-Supelec, Paris (France)

    2011-05-15

    Research highlights: > A bagged ensemble of classifiers is applied for nuclear transient identification. > Fuzzy C-Means classifiers are used as base classifiers of the ensemble. > Transients are simulated in the feedwater system of a boiling water reactor. > Ensemble is compared with a supervised, evolutionary-optimized FCM classifier. > Ensemble improves classification accuracy in cases of large or very small data sizes. - Abstract: This paper presents an ensemble-based scheme for nuclear transient identification. The approach adopted to construct the ensemble of classifiers is bagging; the novelty consists in using supervised fuzzy C-means (FCM) classifiers as base classifiers of the ensemble. The performance of the proposed classification scheme has been verified by comparison with a single supervised, evolutionary-optimized FCM classifier with respect to the task of classifying artificial datasets. The results obtained indicate that in cases of datasets of large or very small sizes and/or complex decision boundaries, bagging ensembles can improve classification accuracy. The approach has then been applied to the identification of simulated transients in the feedwater system of a boiling water reactor (BWR).
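The bagging scheme itself — bootstrap resampling of the training set, one base classifier per replicate, majority vote at prediction time — can be sketched as below. The base learner here is a stand-in 1-nearest-neighbour rule, not the paper's supervised FCM classifier, and the data is illustrative.

```python
# Minimal bagging sketch: bootstrap replicates, one base model each,
# majority vote at prediction time. Base learner is a stand-in 1-NN rule.

import random
from collections import Counter

def train_1nn(sample):
    """'Train' a 1-nearest-neighbour rule on (value, label) pairs."""
    def predict(x):
        return min(sample, key=lambda pt: abs(pt[0] - x))[1]
    return predict

def bagging_train(data, n_models=5, seed=0):
    rng = random.Random(seed)
    # Each replicate is a bootstrap sample (with replacement) of the data.
    return [train_1nn([rng.choice(data) for _ in data]) for _ in range(n_models)]

def bagging_predict(models, x):
    return Counter(m(x) for m in models).most_common(1)[0][0]

data = [(0.1, "A"), (0.2, "A"), (0.8, "B"), (0.9, "B")]
models = bagging_train(data)
```

Because each replicate sees a slightly different sample, the base classifiers disagree near the decision boundary, which is where the vote stabilizes the prediction.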

  10. Face Recognition Based on Support Vector Machine and Nearest Neighbor Classifier

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆

    2003-01-01

    Support vector machine (SVM), as a novel approach in pattern recognition, has demonstrated success in face detection and face recognition. In this paper, a face recognition approach based on the SVM classifier combined with the nearest neighbor classifier (NNC) is proposed. Principal component analysis (PCA) is used to reduce the dimension and extract features. Then a one-against-all strategy is used to train the SVM classifiers. At the testing stage, we propose an algorithm that combines the SVM classifier with the NNC to improve the correct recognition rate. We conduct experiments on the Cambridge ORL face database. The results show that our approach outperforms the standard eigenface approach and some other approaches.

  11. Construction of Classifier Based on MPCA and QSA and Its Application on Classification of Pancreatic Diseases

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method is proposed to establish a classifier that can classify pancreatic images as normal or abnormal. Firstly, the brightness feature is used to construct high-order tensors; then multilinear principal component analysis (MPCA) is used to extract the eigentensors; and finally, the classifier is constructed based on a support vector machine (SVM), with the classifier parameters optimized by a quantum simulated annealing algorithm (QSA). In order to verify the effectiveness of the proposed algorithm, the normal SVM method was chosen as the comparison algorithm. The experimental results show that the proposed method can effectively extract the eigenfeatures and improve the classification accuracy of pancreatic images.

  12. Opening the Black Box: Toward Classifying Care and Treatment for Children and Adolescents with Behavioral and Emotional Problems within and across Care Organizations

    Science.gov (United States)

    Evenboer, K. E.; Huyghen, A. M. N.; Tuinstra, J.; Reijneveld, S. A.; Knorth, E. J.

    2016-01-01

    Objective: The Taxonomy of Care for Youth was developed to gather information about the care offered to children and adolescents with behavioral and emotional problems in various care settings. The aim was to determine similarities and differences in the content of care and thereby to classify the care offered to these children and youth within…

  13. Opening the black box : Toward classifying care and treatment for children and adolescents with behavioral and emotional problems within and across care organizations

    NARCIS (Netherlands)

    Evenboer, K.E.; Huyghen, A.M.N.; Tuinstra, J.; Reijneveld, S.A.; Knorth, E.J.

    2016-01-01

    Objective: The Taxonomy of Care for Youth was developed to gather information about the care offered to children and adolescents with behavioral and emotional problems in various care settings. The aim was to determine similarities and differences in the content of care and thereby to classify the care offered to these children and youth within…

  14. China's Electronic Information Product Energy Consumption Standard

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The electronic information industry of China is facing increasingly urgent ecological challenges. This year, China will study and advance an electronic information product energy consumption standard, and establish a key list of pollution controls and a classified framework system.

  15. 15 CFR 705.6 - Confidential information.

    Science.gov (United States)

    2010-01-01

    ...) BUREAU OF INDUSTRY AND SECURITY, DEPARTMENT OF COMMERCE NATIONAL SECURITY INDUSTRIAL BASE REGULATIONS EFFECT OF IMPORTED ARTICLES ON THE NATIONAL SECURITY § 705.6 Confidential information. (a) Any... the investigation that would disclose national security classified information or...

  16. Evaluation of the efficiency of biofield diagnostic system in breast cancer detection using clinical study results and classifiers.

    Science.gov (United States)

    Subbhuraam, Vinitha Sree; Ng, E Y K; Kaw, G; Acharya U, Rajendra; Chong, B K

    2012-02-01

    The division of breast cancer cells results in regions of electrical depolarisation within the breast. These regions extend to the skin surface from where diagnostic information can be obtained through measurements of the skin surface electropotentials using sensors. This technique is used by the Biofield Diagnostic System (BDS) to detect the presence of malignancy. This paper evaluates the efficiency of BDS in breast cancer detection and also evaluates the use of classifiers for improving the accuracy of BDS. 182 women scheduled for either mammography or ultrasound or both tests participated in the BDS clinical study conducted at Tan Tock Seng hospital, Singapore. Using the BDS index obtained from the BDS examination and the level of suspicion score obtained from mammography/ultrasound results, the final BDS result was deciphered. BDS demonstrated high values for sensitivity (96.23%), specificity (93.80%), and accuracy (94.51%). Also, we have studied the performance of five supervised learning based classifiers (back propagation network, probabilistic neural network, linear discriminant analysis, support vector machines, and a fuzzy classifier), by feeding selected features from the collected dataset. The clinical study results show that BDS can help physicians to differentiate benign and malignant breast lesions, and thereby, aid in making better biopsy recommendations.

  17. Evaluation of the efficiency of biofield diagnostic system in breast cancer detection using clinical study results and classifiers.

    Science.gov (United States)

    Subbhuraam, Vinitha Sree; Ng, E Y K; Kaw, G; Acharya U, Rajendra; Chong, B K

    2012-02-01

    The division of breast cancer cells results in regions of electrical depolarisation within the breast. These regions extend to the skin surface from where diagnostic information can be obtained through measurements of the skin surface electropotentials using sensors. This technique is used by the Biofield Diagnostic System (BDS) to detect the presence of malignancy. This paper evaluates the efficiency of BDS in breast cancer detection and also evaluates the use of classifiers for improving the accuracy of BDS. 182 women scheduled for either mammography or ultrasound or both tests participated in the BDS clinical study conducted at Tan Tock Seng hospital, Singapore. Using the BDS index obtained from the BDS examination and the level of suspicion score obtained from mammography/ultrasound results, the final BDS result was deciphered. BDS demonstrated high values for sensitivity (96.23%), specificity (93.80%), and accuracy (94.51%). Also, we have studied the performance of five supervised learning based classifiers (back propagation network, probabilistic neural network, linear discriminant analysis, support vector machines, and a fuzzy classifier), by feeding selected features from the collected dataset. The clinical study results show that BDS can help physicians to differentiate benign and malignant breast lesions, and thereby, aid in making better biopsy recommendations. PMID:20703753

  18. Detection of SQL Injection Using a Genetic Fuzzy Classifier System

    Directory of Open Access Journals (Sweden)

    Christine Basta

    2016-06-01

    Full Text Available SQL Injection (SQLI) is one of the most popular vulnerabilities of web applications. The consequences of SQL injection attack include the possibility of stealing sensitive information or bypassing authentication procedures. SQL injection attacks have different forms and variations. One difficulty in detecting malicious attacks is that such attacks do not have a specific pattern. A new fuzzy rule-based classification system (FBRCS) can tackle the requirements of the current stage of security measures. This paper proposes a genetic fuzzy system for detection of SQLI where not only the accuracy is a priority, but also the learning and the flexibility of the obtained rules. To create rules having high generalization capabilities, our algorithm builds on initial rules, data-dependent parameters, and an enhancing function that modifies the rule evaluation measures. The enhancing function helps to assess the candidate rules more effectively based on the decision subspace. The proposed system has been evaluated using a number of well-known data sets. Results show a significant enhancement in the detection procedure.
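A tiny fuzzy rule-based classifier of the kind such a system builds on can be sketched as follows. Each rule carries a triangular membership function per feature, and the rule with the highest firing strength wins. The features (e.g. counts of quote characters and SQL keywords in a query) and the rule parameters are illustrative assumptions, not the paper's learned rules.

```python
# Hedged sketch of a minimal fuzzy rule-based classifier: min-combined
# triangular memberships, winner-take-all over rules.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

RULES = [
    # (label, [(a, b, c) membership per feature])
    ("benign",    [(-1, 0, 2), (-1, 0, 2)]),
    ("injection", [(1, 4, 8),  (1, 3, 8)]),
]

def classify_query(features):
    """Pick the rule with the highest firing strength (min over features)."""
    def strength(rule):
        return min(tri(x, *mf) for x, mf in zip(features, rule[1]))
    return max(RULES, key=strength)[0]
```

A genetic search, as in the abstract, would tune the (a, b, c) parameters and the rule set rather than fixing them by hand as done here.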

  19. Classifying and recovering from sensing failures in autonomous mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, R.R.; Hershberger, D. [Colorado School of Mines, Golden, CO (United States)

    1996-12-31

    This paper presents a characterization of sensing failures in autonomous mobile robots, a methodology for classification and recovery, and a demonstration of this approach on a mobile robot performing landmark navigation. A sensing failure is any event leading to defective perception, including sensor malfunctions, software errors, environmental changes, and errant expectations. The approach demonstrated in this paper exploits the ability of the robot to interact with its environment to acquire additional information for classification (i.e., active perception). A Generate and Test strategy is used to generate hypotheses to explain the symptom resulting from the sensing failure. The recovery scheme replaces the affected sensing processes with an alternative logical sensor. The approach is implemented as the Sensor Fusion Effects Exception Handling (SFX-EH) architecture. The advantages of SFX-EH are that it requires only a partial causal model of sensing failure, the control scheme strives for a fast response, tests are constructed so as to prevent confounding from collaborating sensors which have also failed, and the logical sensor organization allows SFX-EH to be interfaced with the behavioral level of existing robot architectures.

  20. Using classifier fusion to improve the performance of multiclass classification problems

    Science.gov (United States)

    Lynch, Robert; Willett, Peter

    2013-05-01

    The problem of multiclass classification is often modeled by breaking it down into a collection of binary classifiers, as opposed to jointly modeling all classes with a single primary classifier. Various methods can be found in the literature for decomposing the multiclass problem into a collection of binary classifiers. Typical algorithms studied here include each versus all remaining (EVAR), each versus all individually (EVAI), and output correction coding (OCC). With each of these methods a classifier-fusion-based decision rule is formulated, utilizing the various binary classifiers to determine the correct classification of an unknown data point. For example, with EVAR the binary classifier with maximum output is chosen. For EVAI, the correct class is chosen using a majority voting rule, and with OCC a comparison algorithm based on a minimum Hamming distance metric is used. In this paper, it is demonstrated how these various methods perform utilizing the Bayesian Reduction Algorithm (BDRA) as the primary classifier. BDRA is a discrete data classification method that quantizes and reduces the dimensionality of feature data for best classification performance. In this case, BDRA is used not only to train the appropriate binary classifier pairs, but also to train on the discrete classifier outputs to formulate the correct classification decision for unknown data points. In this way, it is demonstrated how to predict which binary classification method (i.e., EVAR, EVAI, or OCC) performs best with BDRA. Experimental results are shown with real data sets taken from the Knowledge Extraction based on Evolutionary Learning (KEEL) and University of California at Irvine (UCI) repositories of classifier databases. In general, and for the data sets considered, it is shown that the best classification method, based on performance with unlabeled test observations, can be predicted from performance on labeled training data.
Specifically, the best
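The three fusion rules named above can be sketched as follows; the scores, pairwise winners, and codebook are illustrative stand-ins, not outputs of BDRA.

```python
# Hedged sketch of the three decision rules: EVAR picks the class whose
# one-vs-rest classifier scores highest; EVAI takes a majority vote over
# pairwise winners; OCC decodes a codeword by minimum Hamming distance.

from collections import Counter

def evar(scores):
    """scores[c] = output of the 'class c vs all remaining' classifier."""
    return max(scores, key=scores.get)

def evai(pairwise_winners):
    """pairwise_winners = winning class of each one-vs-one classifier."""
    return Counter(pairwise_winners).most_common(1)[0][0]

def occ(codebook, bits):
    """Pick the class whose codeword is nearest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda c: hamming(codebook[c], bits))

scores = {"a": 0.2, "b": 0.9, "c": 0.4}
codebook = {"a": (0, 0, 1), "b": (1, 1, 0), "c": (1, 0, 1)}
```

With OCC, each bit of the codeword is produced by one binary classifier, so decoding tolerates a few individual classifier errors as long as the codewords are far apart.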

  1. Classifier Directed Data Hybridization for Geographic Sample Supervised Segment Generation

    Directory of Open Access Journals (Sweden)

    Christoff Fourie

    2014-11-01

    Full Text Available Quality segment generation is a well-known challenge and research objective within Geographic Object-based Image Analysis (GEOBIA). Although methodological avenues within GEOBIA are diverse, segmentation commonly plays a central role in most approaches, influencing and being influenced by surrounding processes. A general approach using supervised quality measures, specifically user-provided reference segments, suggests casting the parameters of a given segmentation algorithm as a multidimensional search problem. In such a sample supervised segment generation approach, spatial metrics observing the user-provided reference segments may drive the search process. The search is commonly performed by metaheuristics. A novel sample supervised segment generation approach is presented in this work, where the spectral content of provided reference segments is queried. A one-class classification process using spectral information from inside the provided reference segments is used to generate a probability image, which in turn is employed to direct a hybridization of the original input imagery. Segmentation is performed on such a hybrid image. These processes are adjustable, interdependent and form a part of the search problem. Results are presented detailing the performances of four method variants compared to the generic sample supervised segment generation approach, under various conditions in terms of resultant segment quality, required computing time and search process characteristics. Multiple metrics, metaheuristics and segmentation algorithms are tested with this approach. Using the spectral data contained within user-provided reference segments to tailor the output generally improves the results in the investigated problem contexts, but at the expense of additional required computing time.

  2. Adaptation in P300 brain-computer interfaces: A two-classifier cotraining approach

    DEFF Research Database (Denmark)

    Panicker, Rajesh C.; Sun, Ying; Puthusserypady, Sadasivan

    2010-01-01

    A cotraining-based approach is introduced for constructing high-performance classifiers for P300-based brain-computer interfaces (BCIs), which were trained from very little data. It uses two classifiers: Fisher's linear discriminant analysis and Bayesian linear discriminant analysis progressively...

  3. 48 CFR 52.227-10 - Filing of Patent Applications-Classified Subject Matter.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 2 2010-10-01 2010-10-01 false Filing of Patent... Text of Provisions and Clauses 52.227-10 Filing of Patent Applications—Classified Subject Matter. As prescribed at 27.203-2, insert the following clause: Filing of Patent Applications—Classified Subject...

  4. An expert computer program for classifying stars on the MK spectral classification system

    International Nuclear Information System (INIS)

    This paper describes an expert computer program (MKCLASS) designed to classify stellar spectra on the MK Spectral Classification system in a way similar to humans—by direct comparison with the MK classification standards. Like an expert human classifier, the program first comes up with a rough spectral type, and then refines that spectral type by direct comparison with MK standards drawn from a standards library. A number of spectral peculiarities, including barium stars, Ap and Am stars, λ Bootis stars, carbon-rich giants, etc., can be detected and classified by the program. The program also evaluates the quality of the delivered spectral type. The program currently is capable of classifying spectra in the violet-green region in either the rectified or flux-calibrated format, although the accuracy of the flux calibration is not important. We report on tests of MKCLASS on spectra classified by human classifiers; those tests suggest that over the entire HR diagram, MKCLASS will classify in the temperature dimension with a precision of 0.6 spectral subclass, and in the luminosity dimension with a precision of about one half of a luminosity class. These results compare well with human classifiers.

  5. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    Science.gov (United States)

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.
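The two partitioning schemes compared above can be sketched over training indices: naive partitioning splits them into disjoint chunks, one per ensemble member, while overlapped partitioning lets adjacent chunks share data so each member sees more examples. Chunk count and overlap fraction below are illustrative, not the paper's settings.

```python
# Sketch of naive vs. overlapped partitioning of n_items training indices
# into n_parts chunks, one chunk per ensemble member.

def naive_partition(n_items, n_parts):
    """Disjoint, equal-sized chunks of indices."""
    size = n_items // n_parts
    return [list(range(i * size, (i + 1) * size)) for i in range(n_parts)]

def overlapped_partition(n_items, n_parts, overlap):
    """Each chunk is extended into its neighbours by a fraction `overlap`."""
    size = n_items // n_parts
    extra = int(size * overlap)
    parts = []
    for i in range(n_parts):
        start = max(0, i * size - extra)
        stop = min(n_items, (i + 1) * size + extra)
        parts.append(list(range(start, stop)))
    return parts
```

Each member classifier (an LDA in the study) would then be trained on one chunk; the overlap is what lets the ensemble reach good performance from fewer total training examples.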

  6. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Akinari Onishi

    Full Text Available A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  7. 41 CFR 102-34.45 - How are passenger automobiles classified?

    Science.gov (United States)

    2010-07-01

    ... MANAGEMENT Obtaining Fuel Efficient Motor Vehicles § 102-34.45 How are passenger automobiles classified... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false How are passenger automobiles classified? 102-34.45 Section 102-34.45 Public Contracts and Property Management Federal...

  8. The iPhyClassifier, an interactive online tool for phytoplasma classification and taxonomic assignment

    Science.gov (United States)

    The iPhyClassifier is an Internet-based research tool for quick identification and classification of diverse phytoplasmas. The iPhyClassifier simulates laboratory restriction enzyme digestions and subsequent gel electrophoresis and generates virtual restriction fragment length polymorphism (RFLP) p...

  9. A NOVEL METHODOLOGY FOR CONSTRUCTING RULE-BASED NAÏVE BAYESIAN CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    Abdallah Alashqur

    2015-02-01

    Full Text Available Classification is an important data mining technique that is used by many applications. Several types of classifiers have been described in the research literature. Example classifiers are decision tree classifiers, rule-based classifiers, and neural network classifiers. Another popular classification technique is naïve Bayesian classification. Naïve Bayesian classification is a probabilistic classification approach that uses Bayes' theorem to predict the classes of unclassified records. A drawback of naïve Bayesian classification is that every time a new data record is to be classified, the entire dataset needs to be scanned in order to apply a set of equations that perform the classification. Scanning the dataset is normally a very costly step, especially if the dataset is very large. To alleviate this problem, a new approach for using naïve Bayesian classification is introduced in this study. In this approach, a set of classification rules is constructed on top of the naïve Bayesian classifier. Hence we call this approach Rule-based Naïve Bayesian Classifier (RNBC). In RNBC, the dataset is scanned only once, off-line, at the time of building the classification rule set. Subsequent scanning of the dataset is avoided. Furthermore, this study introduces a simple three-step methodology for constructing the classification rule set.
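The scan-once idea can be sketched as follows: a single pass over the dataset caches the class counts and per-class feature counts, after which new records are classified from the cached tables alone (here with simple Laplace smoothing). RNBC's rule-set construction itself is not reproduced, and the data is illustrative.

```python
# Hedged sketch: one scan builds count tables; classification afterwards
# never touches the raw dataset again.

from collections import defaultdict

def build_tables(records):
    """One scan over (features_dict, label) pairs."""
    class_counts = defaultdict(int)
    feature_counts = defaultdict(int)  # (label, attr, value) -> count
    for features, label in records:
        class_counts[label] += 1
        for attr, value in features.items():
            feature_counts[(label, attr, value)] += 1
    return class_counts, feature_counts

def classify_record(tables, features):
    """Naive Bayes from cached counts, with add-one (Laplace) smoothing."""
    class_counts, feature_counts = tables
    total = sum(class_counts.values())
    def score(label):
        p = class_counts[label] / total
        for attr, value in features.items():
            p *= (feature_counts[(label, attr, value)] + 1) / (class_counts[label] + 2)
        return p
    return max(class_counts, key=score)

data = [({"wind": "high"}, "no"), ({"wind": "high"}, "no"), ({"wind": "low"}, "yes")]
tables = build_tables(data)
```

Turning the count tables into explicit if-then rules, as RNBC does, would additionally avoid evaluating the Bayes equations per record.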

  10. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Israël, Menno; Broek, van den Egon L.; Putten, van der Peter; Khan, L.; Petrushin, V.A.

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  11. Should OCD be classified as an anxiety disorder in DSM-V?

    NARCIS (Netherlands)

    D.J. Stein; N.A. Fineberg; O.J. Bienvenu; D. Denys; C. Lochner; G. Nestadt; J.F. Leckman; S.L. Rauch; K.A. Phillips

    2010-01-01

    In DSM-III, DSM-III-R, and DSM-IV, obsessive-compulsive disorder (OCD) was classified as an anxiety disorder. In ICD-10, OCD is classified separately from the anxiety disorders, although within the same larger category as anxiety disorders (as one of the "neurotic, stress-related, and somatoform dis

  12. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    Science.gov (United States)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition being the fastest growing biometric technology has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because it may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations like expression variations. This study is motivated by the fact that any single classifier cannot claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also incorporating the question of suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, each of which performs better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.
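The weighted-sum combination strategy mentioned above can be sketched as below: each member classifier returns a per-class score vector, and the ensemble sums them under fixed weights. The weights and member outputs are illustrative stand-ins, not values from the study.

```python
# Sketch of weighted-sum classifier fusion over per-class score vectors.

def weighted_sum_combine(member_scores, weights):
    """member_scores: list of {class: score} dicts, one per classifier."""
    combined = {}
    for scores, w in zip(member_scores, weights):
        for cls, s in scores.items():
            combined[cls] = combined.get(cls, 0.0) + w * s
    return max(combined, key=combined.get)

members = [
    {"alice": 0.7, "bob": 0.3},   # e.g. scores from one subspace + metric pair
    {"alice": 0.4, "bob": 0.6},   # e.g. scores from another pair
]
```

The weights let the ensemble lean on whichever subspace/metric pair is more reliable for a given kind of image variation, which is the generality argument the abstract makes.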

  13. Classification of EEG signals using a greedy algorithm for constructing a committee of weak classifiers

    International Nuclear Information System (INIS)

    A greedy algorithm has been proposed for the construction of a committee of weak EEG classifiers, which work in the simplest one-dimensional feature spaces. It has been shown that the accuracy of classification by the committee is several times higher than the accuracy of the best weak classifier
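A greedy committee build over one-dimensional weak classifiers, in the spirit of the abstract, can be sketched as follows; each weak classifier is a threshold on a single feature, and at every step the candidate that most improves the committee's majority-vote accuracy is added. The data, candidates, and stopping rule are illustrative assumptions.

```python
# Hedged sketch of greedily assembling a committee of 1-D threshold
# classifiers, judged by majority-vote accuracy on labelled data.

def weak_predict(x, feature, threshold):
    return 1 if x[feature] > threshold else 0

def committee_predict(committee, x):
    votes = sum(weak_predict(x, f, t) for f, t in committee)
    return 1 if votes * 2 > len(committee) else 0

def accuracy(committee, data):
    return sum(committee_predict(committee, x) == y for x, y in data) / len(data)

def greedy_build(candidates, data, max_size=3):
    committee = []
    for _ in range(max_size):
        best = max(candidates, key=lambda c: accuracy(committee + [c], data))
        if committee and accuracy(committee + [best], data) <= accuracy(committee, data):
            break  # stop once no candidate improves the committee
        committee.append(best)
    return committee

candidates = [(0, 0.5), (1, 0.5)]
data = [((0.9, 0.1), 1), ((0.1, 0.9), 0), ((0.8, 0.2), 1), ((0.2, 0.1), 0)]
```

Because each weak classifier looks at a single feature, the committee as a whole can express interactions none of its members sees, which is why its accuracy can be several times that of the best member.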

  14. Application of specific classifiers in the compound numerical adjectives; Bidel’s stylistic feature

    Directory of Open Access Journals (Sweden)

    Elyas Nooraei

    2016-02-01

    Full Text Available Abstract: Bidel Dehlavi is the greatest Persian poet of India, and difficulty is the main characteristic of his style. Technical discussion of Bidel's style and the difficulty of his diction demands more attention to some syntactic cases that have remained unnoticed. One feature of the poet's way of speaking is his frequent use of compound numerical adjectives and specific classifiers in a specific and unique way. One of his particular methods of presenting different concepts to the reader's mind is the use of certain words in place of numerical classifiers, which is new and, by breaking the norms of language, surprises the reader and gives him pleasure. Understanding his ways of using compound numerical adjectives and specific classifiers therefore demands research along the following lines: (1) a survey of classifiers in terms of imagery; (2) a survey of classifiers on the basis of meaning making; (3) a survey of classifiers on the basis of ambiguity making; (4) a survey of classifiers from the point of view of concreteness or abstraction. When the classifier is material and tangible while, on the other hand, the numbered noun is abstract, the role of the classifier becomes more prominent. According to what we found in this research, it turns out that the structures of firstly "concrete classifier + abstract computed noun" and then "concrete classifier + abstract computed noun", with the highest frequency, hold the best position in Bidel's poetry. In general, the basic principles of Bidel's style are as follows: (1) the diversity and plurality of specific classifiers in compound numerical adjectives; (2) deviation from conventional linguistic norms, presented in a new and innovative way; (3) the relationship and cohesion between classifier and computed noun; (4) strength and prominence in image making; (5) strength and prominence in meaning making; (6) creating ambiguity and increasing the complexity of the word; (7

  15. Employing Neocognitron Neural Network Base Ensemble Classifiers To Enhance Efficiency Of Classification In Handwritten Digit Datasets

    Directory of Open Access Journals (Sweden)

    Neera Saxena

    2011-07-01

    Full Text Available This paper presents an ensemble of neo-cognitron neural network base classifiers to enhance the accuracy of the system, along with experimental results. The method requires less computational preprocessing than other ensemble techniques, as it avoids a separate feature extraction process before feeding the data into the base classifiers. This is achieved by the basic nature of the neo-cognitron: it is a multilayer feed-forward neural network. An ensemble of such base classifiers gives class labels for each pattern, which in turn are combined to give the final class label for that pattern. The purpose of this paper is not only to exemplify the learning behaviour of the neo-cognitron as a base classifier, but also to propose a better fashion of combining neural network based ensemble classifiers.

  16. Classifying Uncertain and Evolving Data Streams with Distributed Extreme Learning Machine

    Institute of Scientific and Technical Information of China (English)

    韩东红; 张昕; 王国仁

    2015-01-01

    Conventional classification algorithms are not well suited for the inherent uncertainty, potential concept drift, volume, and velocity of streaming data. Specialized algorithms are needed to obtain efficient and accurate classifiers for uncertain data streams. In this paper, we first introduce Distributed Extreme Learning Machine (DELM), an optimization of ELM for large matrix operations over large datasets. We then present Weighted Ensemble Classifier Based on Distributed ELM (WE-DELM), an online and one-pass algorithm for efficiently classifying uncertain streaming data with concept drift. A probability world model is built to transform uncertain streaming data into certain streaming data. Base classifiers are learned using DELM. The weights of the base classifiers are updated dynamically according to classification results. WE-DELM improves both the efficiency in learning the model and the accuracy in performing classification. Experimental results show that WE-DELM achieves better performance on different evaluation criteria, including efficiency, accuracy, and speedup.
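The dynamic weight update described above can be sketched as follows: after each labelled example arrives from the stream, members that predicted it correctly are up-weighted and the others decayed, then the weights are renormalized. The update factor is an illustrative assumption, not WE-DELM's actual rule.

```python
# Hedged sketch of dynamically re-weighting ensemble members from
# streaming feedback, with weighted voting at prediction time.

def update_weights(weights, predictions, true_label, factor=1.5):
    """Boost members that were right on the latest example, then renormalize."""
    new = [w * (factor if p == true_label else 1.0)
           for w, p in zip(weights, predictions)]
    total = sum(new)
    return [w / total for w in new]

def weighted_vote(weights, predictions):
    tally = {}
    for w, p in zip(weights, predictions):
        tally[p] = tally.get(p, 0.0) + w
    return max(tally, key=tally.get)
```

Under concept drift, repeated updates like this let recently accurate members dominate the vote while stale ones fade, which is the mechanism the abstract relies on.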

  17. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni

    2014-04-26

    Motivation: Single nucleotide polymorphisms (SNPs) are considered the most frequently occurring DNA sequence variations. Several computational methods have been proposed for the classification of missense SNPs into neutral and disease-associated. However, existing computational approaches fail to select relevant features by choosing them arbitrarily without sufficient documentation. Moreover, they are limited by the problem of missing values and imbalance between the learning datasets, and most of them do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel evolutionary embedded algorithm to locate close-to-optimal Support Vector Regression models. In its second step, these models are combined to extract a universal predictor, which is less prone to overfitting issues, systematizes the rebalancing of the learning sets and uses an internal approach for solving the missing values problem without loss of information. Confidence scores support all the predictions and the model becomes tunable by modifying the classification thresholds. An extensive study was performed for collecting the most relevant features for the problem of classifying SNPs, and a superset of 88 features was constructed. Experimental results show that the proposed framework outperforms well-known algorithms in terms of classification performance in the examined datasets. Finally, the proposed algorithmic framework was able to uncover the significant role of certain features such as the solvent accessibility feature, and the top-scored predictions were further validated by linking them with disease phenotypes. © The Author 2014.

  18. Evaluating the impact of red-edge band from Rapideye image for classifying insect defoliation levels

    Science.gov (United States)

    Adelabu, Samuel; Mutanga, Onisimo; Adam, Elhadi

    2014-09-01

    The prospect of regular assessments of insect defoliation using remote sensing technologies has increased in recent years through advances in the understanding of the spectral reflectance properties of vegetation. The aim of the present study was to evaluate the ability of the red edge channel of Rapideye imagery to discriminate different levels of insect defoliation in an African savanna by comparing the results obtained from two classifiers. Random Forest (RF) and Support Vector Machine (SVM) classification algorithms were applied using different sets of spectral analyses involving the red edge band. Results show that integrating information from the red edge band increases the classification accuracy of insect defoliation levels in all analyses performed in the study. For instance, when all 5 bands of Rapideye imagery were used for classification, the overall accuracies increased by about 19% and 21% for SVM and RF, respectively, compared with when the red edge channel was excluded. We also found that the normalized difference red-edge index yielded better accuracy than the normalized difference vegetation index. We conclude that the red-edge channel of relatively affordable and readily available high-resolution multispectral satellite data such as Rapideye has the potential to considerably improve insect defoliation classification, especially in sub-Saharan Africa where data availability is limited.
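The two indices the study compares are both normalized differences over per-pixel reflectances; only the band swapped against near-infrared changes. A minimal sketch, assuming reflectance values in [0, 1] (the study's exact preprocessing is not given in the abstract):

```python
# The two vegetation indices compared in the study, computed per pixel
# from band reflectances. In the Rapideye band layout, red is band 3,
# red edge band 4, and near-infrared (NIR) band 5.

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized difference red-edge index, which the study found
    more accurate for separating defoliation levels."""
    return (nir - red_edge) / (nir + red_edge)
```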

  19. Classifying visuomotor workload in a driving simulator using subject specific spatial brain patterns.

    Directory of Open Access Journals (Sweden)

    Chris eDijksterhuis

    2013-08-01

    A passive Brain Computer Interface (BCI) is a system that responds to the spontaneously produced brain activity of its user and could be used to develop interactive task support. A human-machine system that could benefit from brain-based task support is the driver-car interaction system. To investigate the feasibility of such a system to detect changes in visuomotor workload, 34 drivers were exposed to several levels of driving demand in a driving simulator. Driving demand was manipulated by varying driving speed and by asking the drivers to comply with individually set lane-keeping performance targets. Differences in the individual drivers' workload levels were classified by applying Common Spatial Patterns (CSP) and Fisher's linear discriminant analysis to frequency-filtered electroencephalogram (EEG) data in an offline classification study. Several frequency ranges, EEG cap configurations, and condition pairs were explored. It was found that classifications were most accurate when based on high frequencies, larger electrode sets, and the frontal electrodes. Depending on these factors, classification accuracies across participants reached about 95% on average. The association between high accuracies and high frequencies suggests that part of the underlying information did not originate directly from neuronal activity. Nonetheless, average classification accuracies of up to 75%-80% were obtained from the lower EEG ranges that are likely to reflect neuronal activity. For a system designer, this implies that a passive BCI system may use several frequency ranges for workload classifications.
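The CSP feature-extraction stage the study applies can be sketched in a standard form: spatial filters are obtained from the generalized eigenvalue problem on the two conditions' covariance matrices, and log-variances of the filtered signals become the features passed to Fisher's discriminant. The array shapes, the number of filter pairs, and the log-variance feature are conventional CSP practice assumed here, not details from the paper.

```python
import numpy as np

# Hedged sketch of Common Spatial Patterns (CSP) feature extraction for
# two workload conditions, as used before Fisher's linear discriminant.
# Epoch arrays are (epoch, channel, sample); all specifics are standard
# CSP conventions, not taken from the study.

def csp_filters(X1, X2, n_pairs=3):
    """Return 2*n_pairs spatial filters maximizing the variance ratio
    between the two conditions."""
    C1 = np.mean([np.cov(e) for e in X1], axis=0)
    C2 = np.mean([np.cov(e) for e in X2], axis=0)
    # Generalized eigenvalue problem: C1 w = lambda (C1 + C2) w
    vals, vecs = np.linalg.eig(np.linalg.solve(C1 + C2, C1))
    order = np.argsort(vals.real)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both extremes
    return vecs.real[:, picks].T

def csp_features(W, epoch):
    """Normalized log-variance of each spatially filtered signal,
    the feature vector fed to the linear discriminant."""
    Z = W @ epoch
    var = np.var(Z, axis=1)
    return np.log(var / var.sum())
```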

  20. ARC: Automated Resource Classifier for agglomerative functional classification of prokaryotic proteins using annotation texts

    Indian Academy of Sciences (India)

    Muthiah Gnanamani; Naveen Kumar; Srinivasan Ramachandran

    2007-08-01

    Functional classification of proteins is central to comparative genomics. The need for algorithms tuned to enable integrative interpretation of analytical data is felt globally. The availability of general, automated software with built-in flexibility will significantly aid this activity. We have prepared ARC (Automated Resource Classifier), an open source software package that meets the user requirements of flexibility. The default classification scheme, based on keyword matching, is agglomerative and directs entries into any of 7 basic non-overlapping functional classes: Cell wall, Cell membrane and Transporters ($\\mathcal{C}$), Cell division ($\\mathcal{D}$), Information ($\\mathcal{I}$), Translocation ($\\mathcal{L}$), Metabolism ($\\mathcal{M}$), Stress ($\\mathcal{R}$), Signal and communication ($\\mathcal{S}$), and 2 ancillary classes: Others ($\\mathcal{O}$) and Hypothetical ($\\mathcal{H}$). The keyword library of ARC was built serially by first drawing keywords from Bacillus subtilis and Escherichia coli K12. In subsequent steps, this library was further enriched by collecting terms from the archaeal representative Archaeoglobus fulgidus, Gene Ontology, and Gene Symbols. ARC is 94.04% successful on 675,663 annotated proteins from 348 prokaryotes. Three examples are provided to illuminate the current perspectives on mycobacterial physiology and costs of proteins in 333 prokaryotes. ARC is available at http://arc.igib.res.in.
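The keyword-match scheme described above reduces to routing an annotation text to the first functional class whose keyword library it matches, with unmatched entries falling through to an ancillary class. A minimal sketch: the tiny keyword lists and the single-letter class codes below are illustrative stand-ins, not ARC's actual library.

```python
# Hedged sketch of agglomerative keyword-match classification in the
# spirit of ARC. The keyword lists here are toy examples; ARC's real
# library was built from B. subtilis, E. coli K12, A. fulgidus, Gene
# Ontology, and Gene Symbols.
KEYWORDS = {
    "M": ["kinase", "dehydrogenase", "synthase"],   # Metabolism
    "C": ["transporter", "permease", "membrane"],   # Cell wall/membrane
    "I": ["polymerase", "ribosomal", "helicase"],   # Information
}

def classify_annotation(text):
    """Route an annotation text to the first class whose keyword list
    it matches; entries with no match go to Hypothetical ('H')."""
    text = text.lower()
    for cls, words in KEYWORDS.items():
        if any(w in text for w in words):
            return cls
    return "H"
```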