WorldWideScience

Sample records for proposed classification tool

  1. A Proposed Functional Abilities Classification Tool for Developmental Disorders Affecting Learning and Behaviour

    Directory of Open Access Journals (Sweden)

    Benjamin Klein

    2018-02-01

    Children with developmental disorders affecting learning and behaviour (DDALB) (e.g., attention, social communication, language, and learning disabilities) require individualized support across multiple environments to promote participation, quality of life, and developmental outcomes. Support to enhance participation is based largely on individual profiles of functioning (e.g., communication, cognitive and social skills, executive functioning), which are highly heterogeneous within medical diagnoses. Currently educators, clinicians, and parents encounter widespread difficulties in meeting children’s needs, as there is no universal classification of functioning and disability for use in school environments. Objective: a practical tool for functional classification broadly applicable to children with DDALB could facilitate collaboration, identification of points of entry for support, individual program planning, and reassessment in a transparent, equitable process based on functional need and context. We propose such a tool, the Functional Abilities Classification Tool (FACT), based on the concepts of the ICF (International Classification of Functioning, Disability and Health). FACT is intended to provide ability and participation classification that is complementary to medical diagnosis. For children presenting with difficulties, the proposed tool initially classifies participation over several environments. Then, functional abilities are classified and personal factors and environment are described. Points of entry for support are identified given an analysis of functional ability profile, personal factors, environmental features, and pattern of participation. Conclusion: case examples, use of the tool, and implications for children, agencies, and the system are described.

  2. FACET CLASSIFICATIONS OF E-LEARNING TOOLS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2013-12-01

    The article deals with the classification of e-learning tools based on the facet method, which divides a set of objects into independent, parallel classification groupings; no rigid classification structure or pre-built finite set of groups is assumed, and classification groups are formed by combining values taken from the relevant facets. An attempt to systematize the existing classifications of e-learning tools from the standpoint of classification theory is made for the first time. Modern Ukrainian and foreign facet classifications of e-learning tools are described, and their positive and negative features compared with classifications based on a hierarchical method are analyzed. An original facet classification of e-learning tools is proposed by the author.

  3. PASTEC: an automatic transposable element classification tool.

    Directory of Open Access Journals (Sweden)

    Claire Hoede

    SUMMARY: The classification of transposable elements (TEs) is a key step towards deciphering their potential impact on the genome. However, this process is often based on manual sequence inspection by TE experts. With the wealth of genomic sequences now available, this task requires automation, making it accessible to most scientists. We propose a new tool, PASTEC, which classifies TEs by searching for structural features and similarities. This tool outperforms currently available software for TE classification. The main innovation of PASTEC is the search for HMM profiles, which is useful for inferring the classification of unknown TEs on the basis of conserved functional domains of the proteins. In addition, PASTEC is the only tool providing an exhaustive spectrum of possible classifications to the order level of the Wicker hierarchical TE classification system. It can also automatically classify other repeated elements, such as SSRs (simple sequence repeats), rDNA, or potential repeated host genes. Finally, the output of this new tool is designed to facilitate manual curation by providing biologists with all the evidence accumulated for each TE consensus. AVAILABILITY: PASTEC is available as a REPET module or as standalone software (http://urgi.versailles.inra.fr/download/repet/REPET_linux-x64-2.2.tar.gz). It requires a Unix-like system. There are two standalone versions, one of which is parallelized (requiring Sun Grid Engine or Torque) and the other of which is not.

  4. [Evaluation of new and emerging health technologies. Proposal for classification].

    Science.gov (United States)

    Prados-Torres, J D; Vidal-España, F; Barnestein-Fonseca, P; Gallo-García, C; Irastorza-Aldasoro, A; Leiva-Fernández, F

    2011-01-01

    Review and develop a proposal for the classification of health technologies (HT) evaluated by the Health Technology Assessment Agencies (HTAA). Peer review by the Spanish HTA agencies (AETS) of the previously proposed classification of HT. Analysis of their input and suggestions for amendments. Construction of a new classification. Pilot study with physicians. Andalusian Public Health System. Spanish HTAA. Experts from HTAA. Tutors of family medicine residents. Update of the HT classification previously made by the research team. Peer review by Spanish HTAA. Qualitative and quantitative analysis of responses. Construction of a new classification and a pilot study based on 12 evaluation reports of the HTAA. We obtained 11 thematic categories that are classified into 6 major groups: 1, prevention technologies; 2, diagnostic technologies; 3, therapeutic technologies; 4, diagnostic and therapeutic technologies; 5, organizational technologies; and 6, knowledge management and quality of care. In the pilot study there was good concordance in the classification of 8 of the 12 reports reviewed by physicians. Experts agree on 11 thematic categories of HT. A new classification of HT with double entry (nature and purpose of the HT) is proposed. APPLICABILITY: According to the experts, the classification of the work of the HTAA may represent a useful tool to transfer and manage knowledge. Moreover, an adequate classification of the HTAA reports would help clinicians and other potential users to locate them, which can facilitate their dissemination. Copyright © 2010 SECA. Published by Elsevier España. All rights reserved.

  5. Acute pesticide poisoning: a proposed classification tool.

    Science.gov (United States)

    Thundiyil, Josef G; Stober, Judy; Besbelli, Nida; Pronczuk, Jenny

    2008-03-01

    Cases of acute pesticide poisoning (APP) account for significant morbidity and mortality worldwide. Developing countries are particularly susceptible due to poorer regulation, lack of surveillance systems, less enforcement, lack of training and inadequate access to information systems. Previous research has demonstrated wide variability in incidence rates for APP. This is possibly due to inconsistent reporting methodology and exclusion of occupational and non-intentional poisonings. The purpose of this document is to create a standard case definition to facilitate the identification and diagnosis of all causes of APP, especially at the field level, rural clinics and primary health-care systems. This document is a synthesis of existing literature and case definitions that have been previously proposed by other authors around the world. It provides a standardized case definition and classification scheme for APP into categories of probable, possible and unlikely/unknown cases. Its use is intended to be applicable worldwide to contribute to identification of the scope of existing problems and thus promote action for improved management and prevention. By enabling a field diagnosis for APP, this standardized case definition may facilitate immediate medical management of pesticide poisoning and aid in estimating its incidence.

  6. A Proposal to Develop Interactive Classification Technology

    Science.gov (United States)

    deBessonet, Cary

    1998-01-01

    Research for the first year was oriented towards: 1) the design of an interactive classification tool (ICT); and 2) the development of an appropriate theory of inference for use in ICT technology. The general objective was to develop a theory of classification that could accommodate a diverse array of objects, including events and their constituent objects. Throughout this report, the term "object" is to be interpreted in a broad sense to cover any kind of object, including living beings, non-living physical things, events, even ideas and concepts. The idea was to produce a theory that could serve as the uniting fabric of a base technology capable of being implemented in a variety of automated systems. The decision was made to employ two technologies under development by the principal investigator, namely, SMS (Symbolic Manipulation System) and SL (Symbolic Language) [see deBessonet, 1991, for detailed descriptions of SMS and SL]. The plan was to enhance and modify these technologies for use in an ICT environment. As a means of giving focus and direction to the proposed research, the investigators decided to design an interactive, classificatory tool for use in building accessible knowledge bases for selected domains. Accordingly, the proposed research was divisible into tasks that included: 1) the design of technology for classifying domain objects and for building knowledge bases from the results automatically; 2) the development of a scheme of inference capable of drawing upon previously processed classificatory schemes and knowledge bases; and 3) the design of a query/search module for accessing the knowledge bases built by the inclusive system. The interactive tool for classifying domain objects was to be designed initially for textual corpora with a view to having the technology eventually be used in robots to build sentential knowledge bases that would be supported by inference engines specially designed for the natural or man-made environments in which the

  7. Classification and optimization of training tools for NPP simulator

    International Nuclear Information System (INIS)

    Billoen, G. van

    1994-01-01

    The training cycle of nuclear power plant (NPP) operators has evolved during the last decade in parallel with the evolution of the training tools. The phases of the training cycle can be summarized as follows: (1) basic principle learning, (2) specific functional training, (3) full operating range training, and (4) detailed accident analyses. The progress in simulation technology and the man/machine interface (MMI) gives training centers new opportunities to improve their training methods and their effectiveness in the transfer of knowledge. To take advantage of these new opportunities, a significant investment in simulation tools may be required. It is therefore important to propose an optimized approach when dealing with the overall equipment program for these training centers. An overall look at the tools proposed on the international simulation market shows that there is a need for a systematic approach in this field. Classification of the different training tools needed for each phase of the training cycle is the basis for an optimized approach in terms of the hardware configuration and software specifications of the equipment to install in training centers. The 'Multi-Function Simulator' is one such approach. (orig.) (3 tabs.)

  8. New decision support tool for acute lymphoblastic leukemia classification

    Science.gov (United States)

    Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.

    2012-03-01

    In this paper, we build a new decision support tool to improve treatment intensity choice in childhood ALL. The developed system includes different methods to accurately measure cell properties in microscope blood film images. The blood images are subjected to a series of pre-processing steps, which include color correlation and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
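
The pipeline this record describes (pre-processing, K-means clustering to isolate nuclei, then shape and texture features feeding a classifier) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the image file, cluster count, the "darkest cluster" heuristic, and the SVM classifier are all assumptions.

```python
# Illustrative sketch of the described pipeline (not the authors' code):
# pre-process a blood-smear image, segment nuclei with K-means, extract simple
# shape features per nucleus, and feed them to an off-the-shelf classifier.
import numpy as np
from skimage import io, color, exposure, measure
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_nuclei(image_rgb, n_clusters=3):
    """Cluster pixel colors in Lab space and keep the darkest cluster as nuclei."""
    lab = color.rgb2lab(image_rgb)
    pixels = lab.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    # Assumption: nuclei form the cluster with the lowest mean lightness (L channel).
    darkest = np.argmin([pixels[labels == k, 0].mean() for k in range(n_clusters)])
    return (labels == darkest).reshape(image_rgb.shape[:2])

def nucleus_features(mask):
    """Simple shape descriptors for each connected nucleus region."""
    regions = measure.regionprops(measure.label(mask))
    return np.array([[r.area, r.perimeter, r.eccentricity, r.solidity] for r in regions])

# Hypothetical usage: X and y would come from labelled normal vs. ALL-affected images.
# img = exposure.rescale_intensity(io.imread("smear.png"))   # contrast enhancement
# X = nucleus_features(segment_nuclei(img))
# clf = SVC().fit(X_train, y_train)                          # classification step
```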

  9. Organizational change in quality management aspects: a quantitative proposal for classification

    Directory of Open Access Journals (Sweden)

    André Tavares de Aquino

    Periodically, organizations need to change the quality management aspects of their processes and products in order to suit the demands of their internal and external (consumer and competitor) market environments. In the context of the present study, quality management changes involve the tools, programs, methods, standards and procedures that can be applied. The purpose of this study is to help senior management to identify types of change and, consequently, determine how change should be correctly conducted within an organization. The methodology involves a classification model with multicriteria support, and three organizational change ratings were adopted (the extremes, type I and type II, as confirmed in the literature, and the intermediary, proposed herein). The multicriteria method used was ELECTRE TRI, and the model was applied to two companies of the Textile Local Productive Arrangement in Pernambuco, Brazil. The results are interesting and show the consistency and coherence of the proposed classification model.

  10. Moving research tools into practice: the successes and challenges in promoting uptake of classification tools.

    Science.gov (United States)

    Cunningham, Barbara Jane; Hidecker, Mary Jo Cooley; Thomas-Stonell, Nancy; Rosenbaum, Peter

    2018-05-01

    In this paper, we present our experiences - both successes and challenges - in implementing evidence-based classification tools into clinical practice. We also make recommendations for others wanting to promote the uptake and application of new research-based assessment tools. We first describe classification systems and the benefits of using them in both research and practice. We then present a theoretical framework from Implementation Science to report strategies we have used to implement two research-based classification tools into practice. We also illustrate some of the challenges we have encountered by reporting results from an online survey investigating 58 Speech-language Pathologists' knowledge and use of the Communication Function Classification System (CFCS), a new tool to classify children's functional communication skills. We offer recommendations for researchers wanting to promote the uptake of new tools in clinical practice. Specifically, we identify structural, organizational, innovation, practitioner, and patient-related factors that we recommend researchers address in the design of implementation interventions. Roles and responsibilities of both researchers and clinicians in making implementation science a success are presented. Implications for rehabilitation: Promoting uptake of new and evidence-based tools into clinical practice is challenging. Implementation science can help researchers to close the knowledge-to-practice gap. Using concrete examples, we discuss our experiences in implementing evidence-based classification tools into practice within a theoretical framework. Recommendations are provided for researchers wanting to implement new tools in clinical practice. Implications for researchers and clinicians are presented.

  11. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. With this in mind, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria. This classification model is thus proposed as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.
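
The SVM-based classification step described here can be sketched with scikit-learn, whose SVC estimator is fitted with an SMO-type solver (libsvm). The feature matrix and labels below are random placeholders standing in for the paper's acoustical and perceptual descriptors; the kernel and parameter values are assumptions.

```python
# Minimal sketch of an SVM soundscape classifier (illustrative only; the data,
# kernel and hyper-parameters are placeholders, not the study's configuration).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # rows: sound recordings, cols: descriptors
y = rng.integers(0, 4, size=200)        # four hypothetical soundscape categories

# scikit-learn's SVC is trained with an SMO-style solver under the hood.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```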

  12. U.S. Geological Survey ArcMap Sediment Classification tool

    Science.gov (United States)

    O'Malley, John

    2007-01-01

    The U.S. Geological Survey (USGS) ArcMap Sediment Classification tool is a custom toolbar that extends the Environmental Systems Research Institute, Inc. (ESRI) ArcGIS 9.2 Desktop application to aid in the analysis of seabed sediment classification. The tool uses as input either a point data layer with field attributes containing the percentages of gravel, sand, silt, and clay, or four raster data layers representing the percentage of sediment (0-100%) for each grain-size fraction: sand, gravel, silt, and clay. The tool is designed to analyze the percentage of sediment at a given location and classify the sediments according to either the Folk (1954, 1974) or the Shepard (1954), as modified by Schlee (1973), classification scheme. The sediment analysis tool is based upon the USGS SEDCLASS program (Poppe et al., 2004).
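
As a rough illustration of the kind of rule this record describes (mapping gravel/sand/silt/clay percentages to a textural class), here is a deliberately simplified percentage-based classifier. It is not the SEDCLASS, Folk, or Shepard logic; the thresholds and class names are invented for illustration.

```python
# Deliberately simplified percentage-based sediment classifier (NOT the USGS
# SEDCLASS algorithm; cutoffs and names are illustrative only).
def classify_sediment(gravel, sand, silt, clay):
    """Return a coarse textural class from grain-size percentages summing to ~100."""
    total = gravel + sand + silt + clay
    if not 99.0 <= total <= 101.0:
        raise ValueError(f"percentages sum to {total}, expected ~100")
    if gravel >= 30.0:                      # arbitrary cutoff for a gravelly class
        return "gravelly sediment"
    # Ternary-style logic on the sand/silt/clay end members (simplified):
    fractions = {"sand": sand, "silt": silt, "clay": clay}
    dominant = max(fractions, key=fractions.get)
    if fractions[dominant] >= 75.0:
        return dominant
    second = sorted(fractions, key=fractions.get)[-2]
    adjective = {"sand": "sandy", "silt": "silty", "clay": "clayey"}
    return f"{adjective[second]} {dominant}"    # e.g. "silty sand"

print(classify_sediment(gravel=2, sand=70, silt=20, clay=8))   # -> silty sand
```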

  13. A proposed radiological classification of childhood intra-thoracic tuberculosis

    International Nuclear Information System (INIS)

    Marais, Ben J.; Gie, Robert P.; Schaaf, H. Simon; Hesseling, Anneke C.; Donald, Peter R.; Beyers, Nulda; Starke, Jeff R.

    2004-01-01

    One of the obstacles in discussing childhood tuberculosis (TB) is the lack of standard descriptive terminology to classify the diverse spectrum of disease. Accurate disease classification is important, because the correct identification of the specific disease entity has definite prognostic significance. Accurate classification will also improve study outcome definitions and facilitate scientific communication. The aim of this paper is to provide practical guidelines for the accurate radiological classification of intra-thoracic TB in children less than 15 years of age. The proposed radiological classification is based on the underlying disease and the principles of pathological disease progression. The hope is that the proposed classification will clarify concepts and stimulate discussion that may lead to future consensus. (orig.)

  14. Building an asynchronous web-based tool for machine learning classification.

    Science.gov (United States)

    Weber, Griffin; Vinterbo, Staal; Ohno-Machado, Lucila

    2002-01-01

    Various unsupervised and supervised learning methods including support vector machines, classification trees, linear discriminant analysis and nearest neighbor classifiers have been used to classify high-throughput gene expression data. Simpler and more widely accepted statistical tools have not yet been used for this purpose, hence proper comparisons between classification methods have not been conducted. We developed free software that implements logistic regression with stepwise variable selection as a quick and simple method for initial exploration of important genetic markers in disease classification. To implement the algorithm and allow our collaborators in remote locations to evaluate and compare its results against those of other methods, we developed a user-friendly asynchronous web-based application with a minimal amount of programming using free, downloadable software tools. With this program, we show that classification using logistic regression can perform as well as other more sophisticated algorithms, and it has the advantages of being easy to interpret and reproduce. By making the tool freely and easily available, we hope to promote the comparison of classification methods. In addition, we believe our web application can be used as a model for other bioinformatics laboratories that need to develop web-based analysis tools in a short amount of time and on a limited budget.
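
The approach summarized here (logistic regression with stepwise variable selection for marker discovery) can be sketched as a greedy forward-selection loop. The scoring by cross-validated accuracy, the stopping rule, and the synthetic data below are assumptions for illustration, not the paper's web tool.

```python
# Sketch of logistic regression with greedy forward (stepwise) feature selection,
# in the spirit of this record. Data, scoring and stopping rule are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, max_features=5, cv=5):
    """Greedily add the feature (gene) that most improves cross-validated accuracy."""
    selected, best_score = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        candidates = []
        for j in remaining:
            model = LogisticRegression(max_iter=1000)
            score = cross_val_score(model, X[:, selected + [j]], y, cv=cv).mean()
            candidates.append((score, j))
        score, j_best = max(candidates)
        if score <= best_score:             # stop when no candidate improves the fit
            break
        best_score = score
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_score

# Placeholder expression matrix (samples x genes) with two informative features.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 30))
y = (X[:, 3] + X[:, 7] > 0).astype(int)
print(forward_stepwise(X, y))
```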

  15. Classification of parotidectomy: a proposed modification to the European Salivary Gland Society classification system.

    Science.gov (United States)

    Wong, Wai Keat; Shetty, Subhaschandra

    2017-08-01

    Parotidectomy remains the mainstay of treatment for both benign and malignant lesions of the parotid gland. There exists a wide range of possible surgical options in parotidectomy in terms of the extent of parotid tissue removed. There is an increasing need for uniformity of terminology resulting from growing interest in modifications of the conventional parotidectomy. A standardized classification system for describing the extent of parotidectomy is, therefore, of paramount importance. Recently, the European Salivary Gland Society (ESGS) proposed a novel classification system for parotidectomy. The aim of this study is to evaluate this system. The classification system proposed by the ESGS was critically re-evaluated and modified to increase its accuracy and its acceptability. Modifications mainly focused on subdividing Levels I and II into IA, IB, IIA, and IIB. From June 2006 to June 2016, 126 patients underwent 130 parotidectomies at our hospital. The classification system was tested in that cohort of patients. While the ESGS classification system is comprehensive, it does not cover all possibilities. The addition of Sublevels IA, IB, IIA, and IIB may help to address some of the clinical situations seen and is clinically relevant. We aim to test the modified classification system for partial parotidectomy to address some of the challenges mentioned.

  16. E-LEARNING TOOLS: STRUCTURE, CONTENT, CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Yuliya H. Loboda

    2012-05-01

    The article analyses the problems of organizing the educational process with the use of electronic means of education. It specifies the definition of e-learning tools and describes their structure and content. The didactic principles that form the basis of their creation and use are considered. Detailed characteristics of e-learning tools are given for methodological purposes. On the basis of the identified pedagogical problems in the use of electronic means of education, their classification is presented and complemented, namely: means of theoretical and technological training, means of practical training, support tools, and comprehensive facilities.

  17. A proposed data base system for detection, classification and ...

    African Journals Online (AJOL)

    A proposed data base system for detection, classification and location of fault on electricity company of Ghana electrical distribution system. Isaac Owusu-Nyarko, Mensah-Ananoo Eugine. Abstract: No abstract available. Keywords: database, classification of fault, power, distribution system, SCADA, ECG.

  18. Systematic analysis of ocular trauma by a new proposed ocular trauma classification

    Directory of Open Access Journals (Sweden)

    Bhartendu Shukla

    2017-01-01

    Purpose: The current classification of ocular trauma does not incorporate adnexal trauma, injuries that are attributable to a nonmechanical cause, or destructive globe injuries. This study proposes a new classification system of ocular trauma which is broader-based to allow for the classification of a wider range of ocular injuries not covered by the current classification. Methods: A clinic-based cross-sectional study to validate the proposed classification. We analyzed 535 cases of ocular injury from January 1, 2012 to February 28, 2012 over a 4-year period in an eye hospital in central India using our proposed classification system and compared it with the conventional classification. Results: The new classification system allowed for classification of all 535 cases of ocular injury. The conventional classification was only able to classify 364 of the 535 trauma cases. Injuries involving the adnexa, nonmechanical injuries, and destructive globe injuries could not be classified by the conventional classification, thus missing about 33% of cases. Conclusions: Our classification system shows an improvement over the existing ocular trauma classification as it allows for the classification of all types of ocular injuries and will allow for better and more specific prognostication. This system has the potential to aid communication between physicians and result in better patient care. It can also provide a more authentic, wide spectrum of ocular injuries in correlation with etiology. By including adnexal injuries and nonmechanical injuries, we have been able to classify all 535 cases of trauma. Otherwise, about 30% of cases would have been excluded from the study.

  19. Proposed International League Against Epilepsy Classification 2010: new insights.

    Science.gov (United States)

    Udani, Vrajesh; Desai, Neelu

    2014-09-01

    The International League Against Epilepsy (ILAE) Classification of Seizures (1981) and the Classification of the Epilepsies (1989) have been widely accepted the world over for the last three decades. Since then, there has been explosive growth in imaging, genetics, and other fields related to the epilepsies, which has changed many of our concepts. It was felt that a revision was in order, and hence the ILAE commissioned a group of experts who submitted the initial draft of this revised classification in 2010. This review focuses on the strengths and weaknesses of the newly proposed classification, especially in the context of a developing country.

  20. Ichthyoplankton Classification Tool using Generative Adversarial Networks and Transfer Learning

    KAUST Repository

    Aljaafari, Nura

    2018-04-15

    The study and the analysis of marine ecosystems is a significant part of marine science research. These systems are valuable resources for fisheries and for improving water quality, and can even be used in drug production. The investigation of ichthyoplankton inhabiting these ecosystems is also an important research field. Ichthyoplankton are fish in their early stages of life. In this stage, the fish have relatively similar shapes and are small in size. The currently used way of identifying them is not optimal. Marine scientists typically study such organisms by sending a team that collects samples from the sea, which are then taken to the lab for further investigation. These samples need to be studied by an expert and usually end up requiring DNA sequencing. This method is time-consuming and requires a high level of experience. The recent advances in AI have helped to solve and automate several difficult tasks, which motivated us to develop a classification tool for ichthyoplankton. We show that using machine learning techniques, such as generative adversarial networks combined with transfer learning, solves this problem with high accuracy, and that using traditional machine learning algorithms fails to solve it. We also give a general framework for creating a classification tool when the dataset used for training is limited. We aim to build a user-friendly tool that can be used by any user for the classification task, and to give researchers a guide that they can follow in creating a classification tool.
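
The transfer-learning half of the approach can be illustrated with a generic PyTorch pattern: freeze a pretrained backbone and retrain only a new classification head. This is not the thesis' actual network or its GAN-based augmentation; the class count, backbone choice, and training step are assumptions.

```python
# Generic transfer-learning sketch (PyTorch): reuse a pretrained CNN backbone and
# train only a new head for ichthyoplankton classes. Illustrative assumptions only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10                       # assumed number of larval-fish classes

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():    # freeze the pretrained feature extractor
    param.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch; real batches would come from a DataLoader
# over labelled (and possibly GAN-augmented) plankton images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```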

  1. Evaluation of a 5-tier scheme proposed for classification of sequence variants using bioinformatic and splicing assay data

    DEFF Research Database (Denmark)

    Walker, Logan C; Whiley, Phillip J; Houdayer, Claude

    2013-01-01

    Splicing assays are commonly undertaken in the clinical setting to assess the clinical relevance of sequence variants in disease predisposition genes. A 5-tier classification system incorporating both bioinformatic and splicing assay information was previously proposed as a method to provide ... BRCA1 and 176 BRCA2 unique variants, from 77 publications. At least six independent reviewers from research and/or clinical settings comprehensively examined splicing assay methods and data reported for 22 variant assays of 21 variants in four publications, and classified the variants using the 5-tier ... of results, and the lack of quantitative data for the aberrant transcripts. We propose suggestions for minimum reporting guidelines for splicing assays, and improvements to the 5-tier splicing classification system to allow future evaluation of its performance as a clinical tool.

  2. A proposal of classification for acute toxicity; Una scala di tossicità

    Energy Technology Data Exchange (ETDEWEB)

    Oddo, N. [Ecotox srl, Pregnana Milanese (Italy)]

    1998-05-01

    A classification for acute toxicity is proposed, including the effects of low-level exposures (hormesis). The criteria, the measurement units, and the correlations of the proposed classification with chronic toxicity and genotoxicity are discussed.

  3. Phonosurgery of the vocal folds : a classification proposal

    NARCIS (Netherlands)

    Remacle, M; Friedrich, G; Dikkers, FG; de Jong, F

    The Phonosurgery Committee of the European Laryngological Society (ELS) has examined the definition and technical description of phonosurgical procedures. Based on this review, the committee has proposed a working classification. The current presentation is restricted to vocal fold surgery (VFS).

  4. Proposed ICDRG Classification of the Clinical Presentation of Contact Allergy

    DEFF Research Database (Denmark)

    Pongpairoj, Korbkarn; Ale, Iris; Andersen, Klaus Ejner

    2016-01-01

    The International Contact Dermatitis Research Group proposes a classification for the clinical presentation of contact allergy. The classification is based primarily on the mode of clinical presentation. The categories are direct exposure/contact dermatitis, mimicking or exacerbation of preexisting ... /mucosal symptoms, oral contact dermatitis, erythroderma/exfoliative dermatitis, minor forms of presentation, and extracutaneous manifestations.

  5. A proposal of criteria for the classification of systemic sclerosis.

    Science.gov (United States)

    Nadashkevich, Oleg; Davis, Paul; Fritzler, Marvin J

    2004-11-01

    Sensitive and specific criteria for the classification of systemic sclerosis are required by clinicians and investigators to achieve higher quality clinical studies and approaches to therapy. A clinical study of systemic sclerosis patients in Europe and Canada led to a set of criteria that achieve high sensitivity and specificity. Both clinical and laboratory investigations of patients with systemic sclerosis, related conditions and diseases with clinical features that can be mistaken as part of the systemic sclerosis spectrum were undertaken. Laboratory investigations included the detection of autoantibodies to centromere proteins, Scl-70 (topoisomerase I), and fibrillarin (U3-RNP). Based on the investigation of 269 systemic sclerosis patients and 720 patients presenting with related and confounding conditions, the following set of criteria for the classification of systemic sclerosis was proposed: 1) autoantibodies to: centromere proteins, Scl-70 (topo I), fibrillarin; 2) bibasilar pulmonary fibrosis; 3) contractures of the digital joints or prayer sign; 4) dermal thickening proximal to the wrists; 5) calcinosis cutis; 6) Raynaud's phenomenon; 7) esophageal distal hypomotility or reflux-esophagitis; 8) sclerodactyly or non-pitting digital edema; 9) teleangiectasias. The classification of definite SSc requires at least three of the above criteria. Criteria for the classification of systemic sclerosis have been proposed. Preliminary testing has defined the sensitivity and specificity of these criteria as high as 99% and 100%, respectively. Testing and validation of the proposed criteria by other clinical centers is required.

  6. The k-Language Classification, a Proposed New Theory for Image Classification and Clustering at Pixel Level

    Directory of Open Access Journals (Sweden)

    Alwi Aslan

    2014-03-01

    This theory explores the possibility of using regular languages further in image analysis, starting from the use of strings to represent regions in an image. We do not attempt to present yet another way of generating a string from a region, since there are already many ways in which an image or region can produce a representative string; instead, we propose a way to generate a regular language, or a group of languages, that classifies the set of strings generated by a group of image regions. We begin by proving that there is always a regular language that accepts the set of strings produced by an image, and then use that language to perform the classification. The research is then extended to the pixel level, asking whether regular languages can be used for clustering pixels in an image, and we propose a systematic solution to this question. The tool used to explore regular languages is the deterministic finite automaton. In the final part of the paper, before the conclusion, we add a revised version of the theory; this revision offers another point of view and makes the method more precise and more powerful than before.
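
To make the string-acceptance idea concrete, here is a minimal deterministic finite automaton membership test; the states, alphabet, and example strings are invented for illustration and are not taken from the paper.

```python
# Minimal DFA membership test, illustrating a regular language that accepts (or
# rejects) strings such as those generated from image regions. The automaton and
# example strings below are invented for illustration.
def dfa_accepts(transitions, start, accepting, string):
    """Run a DFA given as {(state, symbol): next_state}; reject on a missing edge."""
    state = start
    for symbol in string:
        if (state, symbol) not in transitions:
            return False
        state = transitions[(state, symbol)]
    return state in accepting

# Toy DFA over {'a', 'b'} accepting exactly the strings that end in "ab".
delta = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q0",
}
for s in ["ab", "aab", "abb", "ba"]:
    print(s, dfa_accepts(delta, "q0", {"q2"}, s))
```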

  7. A Data Mining Classification Approach for Behavioral Malware Detection

    Directory of Open Access Journals (Sweden)

    Monire Norouzi

    2016-01-01

    Data mining techniques have numerous applications in malware detection, and classification is one of the most popular data mining techniques. In this paper we present a data mining classification approach to detect malware behavior. We propose different classification methods in order to detect malware based on the features and behavior of each malware sample. A dynamic analysis method is presented for identifying the malware features, and a program is presented for converting a malware behavioral execution-history XML file into a suitable input for the WEKA tool. To illustrate the performance efficiency on training and test data, we apply the proposed approaches to a real case-study data set using the WEKA tool. The evaluation results demonstrate the viability of the proposed data mining approach. Our approach is also more efficient for detecting malware, and behavioral classification of malware can be useful for detecting malware in a behavioral antivirus.

  8. Ichthyoplankton Classification Tool using Generative Adversarial Networks and Transfer Learning

    KAUST Repository

    Aljaafari, Nura

    2018-01-01

    . This method is time-consuming and requires a high level of experience. The recent advances in AI have helped to solve and automate several difficult tasks which motivated us to develop a classification tool for ichthyoplankton. We show that using machine

  9. Safety cost management in construction companies: A proposal classification.

    Science.gov (United States)

    López-Alonso, M; Ibarrondo-Dávila, M P; Rubio, M C

    2016-06-16

    Estimating health and safety costs in the construction industry presents various difficulties, including the complexity of cost allocation, the inadequacy of data available to managers and the absence of an accounting model designed specifically for safety cost management. Very often, the costs arising from accidents in the workplace are not fully identifiable due to the hidden costs involved. This paper reviews studies of occupational health and safety cost management and proposes a means of classifying these costs. We conducted an empirical study in which the health and safety costs of 40 construction worksites were estimated. A new classification of health and safety costs and their categories is proposed: safety and non-safety costs. The costs of the company's health and safety policy should be included in the information provided by the accounting system, as a starting point for analysis and control. From this perspective, a classification of health and safety costs and its categories is put forward.

  10. Contributions for classification of platelet rich plasma - proposal of a new classification: MARSPILL.

    Science.gov (United States)

    Lana, Jose Fabio Santos Duarte; Purita, Joseph; Paulus, Christian; Huber, Stephany Cares; Rodrigues, Bruno Lima; Rodrigues, Ana Amélia; Santana, Maria Helena; Madureira, João Lopo; Malheiros Luzo, Ângela Cristina; Belangero, William Dias; Annichino-Bizzacchi, Joyce Maria

    2017-07-01

    Platelet-rich plasma (PRP) has emerged as a significant therapy used in medical conditions, with heterogeneous results. There are several important classifications that attempt to standardize the PRP procedure. The aim of this report is to describe the contents of PRP by studying its cellular and molecular components, and also to propose a new classification for PRP. The main focus is on mononuclear cells, which comprise progenitor cells and monocytes. In addition, important variables related to PRP application are incorporated in this study: the harvest method, activation, red blood cells, number of spins, image guidance, leukocyte number and light activation. The other focus is the presence of progenitor cells in peripheral blood, which is of interest due to neovasculogenesis and proliferation. The function of monocytes (as tissue macrophages) is discussed here, as well as their plasticity, a potential property for regenerative medicine treatments.

  11. TFM classification and staging of oral submucous fibrosis: A new proposal.

    Science.gov (United States)

    Arakeri, Gururaj; Thomas, Deepak; Aljabab, Abdulsalam S; Hunasgi, Santosh; Rai, Kirthi Kumar; Hale, Beverley; Fonseca, Felipe Paiva; Gomez, Ricardo Santiago; Rahimi, Siavash; Merkx, Matthias A W; Brennan, Peter A

    2018-04-01

    We have evaluated the rationale of existing grading and staging schemes for oral submucous fibrosis (OSMF) based on how they are categorized, and a novel classification and staging scheme is proposed. A total of 300 OSMF patients were evaluated for agreement between functional, clinical, and histopathological staging. Bilateral biopsies were assessed in 25 patients to evaluate any differences in histopathological staging of OSMF within the same mouth. The extent of clinician agreement for categorized staging data was evaluated using Cohen's weighted kappa analysis. Cross-tabulation was performed on the categorical grading data to understand the intercorrelation, and unweighted kappa analysis was used to assess bilateral grade agreement. Probabilities of less than 0.05 were considered significant. Data were analyzed using SPSS Statistics (version 25.0, IBM, USA). Low agreement was found between all the stages, reflecting the independent nature of the trismus, clinical, and histopathological components (K = 0.312, 0.167, 0.152) in OSMF. Following this analysis, a three-component classification scheme (TFM classification) was developed that describes the severity of each component independently, grouping them using a novel three-tier staging scheme as a guide to the treatment plan. The proposed classification and staging could be useful for effective communication, categorization, recording of data and prognosis, and guiding treatment plans. Furthermore, the classification considers OSMF malignant transformation in detail. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Ductal carcinoma in situ: a proposal for a new classification

    NARCIS (Netherlands)

    Holland, R.; Peterse, J. L.; Millis, R. R.; Eusebi, V.; Faverly, D.; van de Vijver, M. J.; Zafrani, B.

    1994-01-01

    Details of a proposed new classification for ductal carcinoma in situ (DCIS) are presented. This is based, primarily, on cytonuclear differentiation and, secondarily, on architectural differentiation (cellular polarisation). Three categories are defined. First is poorly differentiated DCIS composed

  13. A Proposal for Cardiac Arrhythmia Classification using Complexity Measures

    Directory of Open Access Journals (Sweden)

    AROTARITEI, D.

    2017-08-01

    Cardiovascular diseases are one of the major problems of humanity, and therefore one of their components, arrhythmia detection and classification, has drawn increased attention worldwide. The presence of randomness in discrete time series, like those arising in electrophysiology, is firmly connected with computational complexity measures. This connection can be used, for instance, in the analysis of the RR intervals of the electrocardiographic (ECG) signal, coded as a binary string, to detect and classify arrhythmia. Our approach uses three algorithms (Lempel-Ziv, Sample Entropy and T-Code) to compute the information complexity, and a classification tree to detect 13 types of arrhythmia, with encouraging results. To overcome the computational effort required for the complexity calculus, a cloud computing solution with executable code deployment is also proposed.
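
As an illustration of one of the three complexity measures named here, the sketch below computes a Lempel-Ziv (LZ76-style) phrase count on a binary-coded RR-interval sequence; the median-threshold binarization is an assumption, not the paper's exact coding.

```python
# Illustrative sketch of one complexity measure named in this record: Lempel-Ziv
# (LZ76) phrase counting on a binary-coded RR-interval sequence. The thresholding
# used to binarize RR intervals is an assumption, not the paper's exact coding.
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Number of phrases in an LZ76-style parsing of the string s."""
    i, c, n = 0, 0, len(s)
    while i < n:
        length = 1
        # extend the current phrase while it already occurs in the preceding text
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        c += 1
        i += length
    return c

# Binarize RR intervals (ms) around their median, then measure complexity.
rr = np.array([810, 805, 795, 1220, 640, 800, 798, 1250, 630, 802])
binary = "".join("1" if x > np.median(rr) else "0" for x in rr)
print(binary, lempel_ziv_complexity(binary))
print(lempel_ziv_complexity("0001101001000101"))  # classic example, yields 6
```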

  14. 78 FR 39765 - Notice of Proposed Classification of Public Lands/Minerals for State Indemnity Selection, Colorado

    Science.gov (United States)

    2013-07-02

    ... Proposed Classification of Public Lands/Minerals for State Indemnity Selection, Colorado AGENCY: Bureau of Land Management, Interior. ACTION: Notice of Proposed Classification. SUMMARY: The Colorado State Board... public lands and mineral estate in lieu of lands to which the State was entitled but did not receive...

  15. An intelligent condition monitoring system for on-line classification of machine tool wear

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Fu; Hope, A D; Javed, M [Systems Engineering Faculty, Southampton Institute (United Kingdom)]

    1998-12-31

    The development of intelligent tool condition monitoring systems is a necessary requirement for successful automation of manufacturing processes. This presentation introduces a tool wear monitoring system for milling operations. The system utilizes power, force, acoustic emission and vibration sensors to monitor tool condition comprehensively. Features relevant to tool wear are drawn from time and frequency domain signals and a fuzzy pattern recognition technique is applied to combine the multisensor information and provide reliable classification results of tool wear states. (orig.) 10 refs.

  16. An intelligent condition monitoring system for on-line classification of machine tool wear

    Energy Technology Data Exchange (ETDEWEB)

    Fu Pan; Hope, A.D.; Javed, M. [Systems Engineering Faculty, Southampton Institute (United Kingdom)]

    1997-12-31

    The development of intelligent tool condition monitoring systems is a necessary requirement for successful automation of manufacturing processes. This presentation introduces a tool wear monitoring system for milling operations. The system utilizes power, force, acoustic emission and vibration sensors to monitor tool condition comprehensively. Features relevant to tool wear are drawn from time and frequency domain signals and a fuzzy pattern recognition technique is applied to combine the multisensor information and provide reliable classification results of tool wear states. (orig.) 10 refs.

  17. A Systematic Approach to Food Variety Classification as a Tool in ...

    African Journals Online (AJOL)

    A Systematic Approach to Food Variety Classification as a Tool in Dietary ... and food variety (count of all dietary items consumed during the recall period up to the ... This paper presents a pilot study carried out with an aim of demonstrating the ...

  18. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

    The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existent computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long

  19. HClass: Automatic classification tool for health pathologies using artificial intelligence techniques.

    Science.gov (United States)

    Garcia-Chimeno, Yolanda; Garcia-Zapirain, Begonya

    2015-01-01

    The classification of subjects' pathologies brings rigour to the treatment of certain pathologies, as doctors at times juggle so many variables that they can end up confusing some illnesses with others. Thanks to machine learning techniques applied to a health-record database, such a classification can be made using our algorithm, hClass, which performs non-linear classification of either a supervised, unsupervised or semi-supervised type. The machine is configured using other techniques such as validation of the set to be classified (cross-validation), reduction in features (PCA) and committees for assessing the various classifiers. The tool is easy to use: the sample matrix and features that one wishes to classify, the number of iterations and the subjects who are going to be used to train the machine all need to be introduced as inputs. As a result, the success rate is shown either via a classifier or via a committee if one has been formed. A 90% success rate is obtained with the AdaBoost classifier and 89.7% in the case of a committee (comprising three classifiers) when PCA is applied. This tool can be expanded to allow the user to fully characterise the classifiers by adjusting them to each classification task.
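
The ingredients listed in this record (PCA feature reduction, cross-validation, an AdaBoost classifier, and a committee of classifiers) can be combined in a short scikit-learn sketch; the dataset, component count, and committee members below are placeholders, not the hClass configuration.

```python
# Sketch of the ingredients this record describes: PCA feature reduction,
# cross-validation, an AdaBoost classifier, and a small voting committee.
# Dataset, component count and committee members are placeholders only.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in health-record-style data

committee = VotingClassifier([
    ("ada", AdaBoostClassifier(n_estimators=100)),
    ("svm", SVC(probability=True)),
    ("logreg", LogisticRegression(max_iter=1000)),
], voting="soft")

model = make_pipeline(StandardScaler(), PCA(n_components=10), committee)
scores = cross_val_score(model, X, y, cv=5)
print(f"committee cross-validated accuracy: {scores.mean():.3f}")
```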

  20. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly enhanced parallelism, and several compilers have been updated to address the evolving challenges of synchronization and threading. Appropriate program and algorithm classification will greatly benefit software engineers in identifying opportunities for effective parallelization. In the present work we investigated current approaches to species-based classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge such classification. A set of algorithms is chosen whose structure matches different issues and performs a given task. We have tested these algorithms utilizing existing automatic species-extraction tools along with the Bones compiler. We have added functionality to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented the new theory in the tool, enabling automatic characterization of program code.

  1. Aesthetics-based classification of geological structures in outcrops for geotourism purposes: a tentative proposal

    Science.gov (United States)

    Mikhailenko, Anna V.; Nazarenko, Olesya V.; Ruban, Dmitry A.; Zayats, Pavel P.

    2017-03-01

    The current growth in geotourism requires the urgent development of classifications of geological features on the basis of criteria that are relevant to tourist perceptions. It appears that structure-related patterns are especially attractive to geotourists. Consideration of the main criteria by which tourists judge beauty, together with observations made in the geodiversity hotspot of the Western Caucasus, allows us to propose a tentative aesthetics-based classification of geological structures in outcrops, with two classes and four subclasses. It is possible to distinguish between regular and quasi-regular patterns (i.e., striped and lined, and contorted patterns) and irregular and complex patterns (paysage and sculptured patterns). Typical examples of each case are found both in the study area and on a global scale. The application of the proposed classification makes it possible to emphasise features of interest to a broad range of tourists. Aesthetics-based (i.e., non-geological) classifications are necessary to take into account the visions and attitudes of visitors.

  2. Accessory cardiac bronchus: Proposed imaging classification on multidetector CT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Min; Kim, Young Tong; Han, Jong Kyu; Jou, Sung Shick [Dept. of Radiology, Soonchunhyang University College of Medicine, Cheonan Hospital, Cheonan (Korea, Republic of)]

    2016-02-15

    To propose a classification of accessory cardiac bronchus (ACB) based on imaging with multidetector computed tomography (MDCT), and to evaluate follow-up changes of ACB. This study included 58 patients diagnosed with ACB over a 9-year period using MDCT. We analyzed the types, division locations and division directions of ACB, and also evaluated changes on follow-up. We identified two main types of ACB: blind-end (51.7%) and lobule (48.3%). The blind-end ACB was further classified into three subtypes: blunt (70%), pointy (23.3%) and saccular (6.7%). The lobule ACB was also further classified into three subtypes: complete (46.4%), incomplete (28.6%) and rudimentary (25%). Division from the upper half of the bronchus intermedius (79.3%) and a medial division direction (60.3%) were the most common in all patients. The difference in division direction was statistically significant between the blind-end and lobule types (p = 0.019). Peribronchial soft tissue was found in five cases. One case of calcification was identified in the lobule type. During follow-up, ACB had disappeared in two cases of the blind-end type and in one case of the rudimentary subtype. The proposed imaging-based classification of ACB, and the follow-up CT, helped us to understand the various imaging features of ACB.

  3. The research on business rules classification and specification methods

    OpenAIRE

    Baltrušaitis, Egidijus

    2005-01-01

    The work is based on research into business rules classification and specification methods. The basics of the business rules approach are discussed. The most common business rules classification and modeling methods are analyzed. Business rules modeling techniques, and tools for supporting them in information systems, are presented. Based on the analysis results, a business rules classification method is proposed. Templates for every business rule type are presented. Business rules structuring ...

  4. Proposals for Paraphilic Disorders in the International Classification of Diseases and Related Health Problems, Eleventh Revision (ICD-11).

    Science.gov (United States)

    Krueger, Richard B; Reed, Geoffrey M; First, Michael B; Marais, Adele; Kismodi, Eszter; Briken, Peer

    2017-07-01

    The World Health Organization is currently developing the 11th revision of the International Classifications of Diseases and Related Health Problems (ICD-11), with approval of the ICD-11 by the World Health Assembly anticipated in 2018. The Working Group on the Classification of Sexual Disorders and Sexual Health (WGSDSH) was created and charged with reviewing and making recommendations for categories related to sexuality that are contained in the chapter of Mental and Behavioural Disorders in ICD-10 (World Health Organization 1992a). Among these categories was the ICD-10 grouping F65, Disorders of sexual preference, which describes conditions now widely referred to as Paraphilic Disorders. This article reviews the evidence base, rationale, and recommendations for the proposed revisions in this area for ICD-11 and compares them with DSM-5. The WGSDSH recommended that the grouping, Disorders of sexual preference, be renamed to Paraphilic Disorders and be limited to disorders that involve sexual arousal patterns that focus on non-consenting others or are associated with substantial distress or direct risk of injury or death. Consistent with this framework, the WGSDSH also recommended that the ICD-10 categories of Fetishism, Fetishistic Transvestism, and Sadomasochism be removed from the classification and new categories of Coercive Sexual Sadism Disorder, Frotteuristic Disorder, Other Paraphilic Disorder Involving Non-Consenting Individuals, and Other Paraphilic Disorder Involving Solitary Behaviour or Consenting Individuals be added. The WGSDSH's proposals for Paraphilic Disorders in ICD-11 are based on the WHO's role as a global public health agency and the ICD's function as a public health reporting tool.

  5. What should an ideal spinal injury classification system consist of? A methodological review and conceptual proposal for future classifications.

    NARCIS (Netherlands)

    Middendorp, J.J. van; Audige, L.; Hanson, B.; Chapman, J.R.; Hosman, A.J.F.

    2010-01-01

    Since Böhler published the first categorization of spinal injuries based on plain radiographic examinations in 1929, numerous classifications have been proposed. Despite all these efforts, however, only a few have been tested for reliability and validity. This methodological, conceptual review

  6. Emotions Classification for Arabic Tweets

    African Journals Online (AJOL)

    2018-03-05

    Mar 5, 2018 ... learning methods for referring to all areas of detecting, analyzing, and classifying ... In this paper, an adaptive model is proposed for emotions classification of ... WEKA data mining tool is used to implement this model and evaluate the ... defined using vector representation, storing a numerical "importance" ...

  7. Proposal of a classification system for opportunities to innovate in skin care products.

    Science.gov (United States)

    Souza, I D da S; Almeida, T L; Takahashi, V P

    2015-10-01

    What are the opportunities to innovate in a skin care product? There are certainly many opportunities and many technologies involved. In this work, we assumed the role of identifying and categorizing these opportunities to develop a comprehensive and intelligible classification system, which could be used as a tool to support decision-making in different professional contexts. Initially, we employed the Delphi method to identify, discuss and standardize the opportunities to innovate in a skin care product. Finally, we used the classification system obtained in the previous phase to label patent applications, therefore, testing the suitability and utility of the system. At the end of the process, we achieved a 10-category classification system for opportunities to innovate in skin care products, and we also illustrated how this system could be used. The resultant classification system offers a normalized terminology for cosmetic scientists interested in dealing with the particularities of incremental and radical innovations in skin care products. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  8. Proposed Terminology and Classification of Pre-Malignant Neoplastic Conditions: A Consensus Proposal

    Directory of Open Access Journals (Sweden)

    Peter Valent

    2017-12-01

    Full Text Available Cancer evolution is a step-wise non-linear process that may start early in life or later in adulthood, and includes pre-malignant (indolent and malignant phases. Early somatic changes may not be detectable or are found by chance in apparently healthy individuals. The same lesions may be detected in pre-malignant clonal conditions. In some patients, these lesions may never become relevant clinically whereas in others, they act together with additional pro-oncogenic hits and thereby contribute to the formation of an overt malignancy. Although some pre-malignant stages of a malignancy have been characterized, no global system to define and to classify these conditions is available. To discuss open issues related to pre-malignant phases of neoplastic disorders, a working conference was organized in Vienna in August 2015. The outcomes of this conference are summarized herein and include a basic proposal for a nomenclature and classification of pre-malignant conditions. This proposal should assist in the communication among patients, physicians and scientists, which is critical as genome-sequencing will soon be offered widely for early cancer-detection.

  9. Proposal of an ISO Standard: Classification of Transients and Accidents for Pressurized Water Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jong Chull [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Chung, Bub Dong; Lee, Doo-Jeong; Kim, Jong In; Yoon, Ju Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeong, Jae Jun [Pusan National Univ., Busan (Korea, Republic of); Kim, An Sup; Lee, Sang Yoon [Korea Electric Association, Seoul (Korea, Republic of)

    2016-05-15

    Classification of the events for a nuclear power plant is a fundamental basis for defining nuclear safety functions, safety systems performing those functions, and specific acceptance criteria for safety analyses. Presently, the approaches for the event classification adopted by the nuclear suppliers are different, which creates a nuclear technology trade barrier. The IAEA and WENRA are making efforts to establish general requirements or guidelines on the classification of either plant states or defence-in-depth levels for the design of nuclear power plants. However, the requirements and guidelines do not provide the details for practical application to various types of commercial PWRs. Recently, Korea proposed a new ISO standardisation project to develop a harmonized or consolidated international standard for classifying the events in PWRs and for defining (or imposing) the acceptance criteria for reactor design and/or radiation protection corresponding to each event class. This paper outlines the method and strategies for developing the standard, the various current practices of PWR event classification and acceptance criteria developed or adopted by several organizations in the USA and Europe, and a draft of the proposed standard. The proposed standard will affect all the relevant stakeholders such as reactor designers, vendors, suppliers, utilities, regulatory bodies, and the public of the leading countries in the area of nuclear industry as well as utilities, regulatory bodies, and the public of the newly entering (starting) countries. It is expected that all of the stakeholders will benefit from the proposed deliverable, which provides an internationally harmonized standard for classifying the PWR events as follows: The reactor design bases for assuring safety and related technical information can be effectively communicated and shared among them, resulting in enhancement of global nuclear safety and fosterage of global nuclear trade. The countries starting

  10. Proposal plan of classification faceted for federal universities

    Directory of Open Access Journals (Sweden)

    Renata Santos Brandão

    2017-09-01

    Full Text Available This study aims to present a faceted classification plan for the archival management of documents in the federal universities of Brazil. To this end, a literature review was conducted on archival management in Brazil, the types of classification plans and Ranganathan's theory of faceted classification, through searches in databases in the areas of Librarianship and Archivology. The classification plan used in the Federal Institutions of Higher Education was identified to represent the functional facet, and a structural classification plan was created to represent the structural facet. The two classification plans were inserted into a digital repository management system to give rise to the faceted classification plan. The system used was Tainacan, a free WordPress-based software used in digital document management. The developed faceted classification plan allows the user to choose and even combine the ways of looking for information, which ensures greater efficiency in information retrieval.

  11. The history of female genital tract malformation classifications and proposal of an updated system.

    Science.gov (United States)

    Acién, Pedro; Acién, Maribel I

    2011-01-01

    A correct classification of malformations of the female genital tract is essential to prevent unnecessary and inadequate surgical operations and to compare reproductive results. An ideal classification system should be based on aetiopathogenesis and should suggest the appropriate therapeutic strategy. We conducted a systematic review of relevant articles found in PubMed, Scopus, Scirus and ISI webknowledge, and analysis of historical collections of 'female genital malformations' and 'classifications'. Of 124 full-text articles assessed for eligibility, 64 were included because they contained original general, partial or modified classifications. All the existing classifications were analysed and grouped. The unification of terms and concepts was also analysed. Traditionally, malformations of the female genital tract have been catalogued and classified as Müllerian malformations due to agenesis, lack of fusion, the absence of resorption and lack of posterior development of the Müllerian ducts. The American Fertility Society classification of the late 1980s included seven basic groups of malformations also considering the Müllerian development and the relationship of the malformations to fertility. Other classifications are based on different aspects: functional, defects in vertical fusion, embryological or anatomical (Vagina, Cervix, Uterus, Adnex and Associated Malformation: VCUAM classification). However, an embryological-clinical classification system seems to be the most appropriate. Accepting the need for a new classification system of genitourinary malformations that considers the experience gained from the application of the current classification systems, the aetiopathogenesis and that also suggests the appropriate treatment, we proposed an update of our embryological-clinical classification as a new system with six groups of female genitourinary anomalies.

  12. A Soft Intelligent Risk Evaluation Model for Credit Scoring Classification

    Directory of Open Access Journals (Sweden)

    Mehdi Khashei

    2015-09-01

    Full Text Available Risk management is one of the most important branches of business and finance. Classification models are the most popular and widely used analytical group of data mining approaches that can greatly help financial decision makers and managers to tackle credit risk problems. However, the literature clearly indicates that, despite proposing numerous classification models, credit scoring is often a difficult task. On the other hand, there is no universal credit-scoring model in the literature that can be accurately and explanatorily used in all circumstances. Therefore, the research for improving the efficiency of credit-scoring models has never stopped. In this paper, a hybrid soft intelligent classification model is proposed for credit-scoring problems. In the proposed model, the unique advantages of the soft computing techniques are used in order to modify the performance of the traditional artificial neural networks in credit scoring. Empirical results of Australian credit card data classifications indicate that the proposed hybrid model outperforms its components, and also other classification models presented for credit scoring. Therefore, the proposed model can be considered as an appropriate alternative tool for binary decision making in business and finance, especially in high uncertainty conditions.
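
    As a concrete, hedged starting point, the sketch below trains a plain feed-forward neural network on a synthetic stand-in for credit data (690 applicants with 14 attributes, mirroring the dimensions of the Australian credit card dataset, which is not bundled here). It shows only the traditional ANN baseline that the paper's hybrid soft-computing model sets out to improve; the hybrid modifications themselves are not reproduced.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-in for applicant features and accept/reject labels.
        X, y = make_classification(n_samples=690, n_features=14, n_informative=8,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Plain feed-forward network; the paper's contribution is to modify such a
        # network with soft-computing techniques, which is not shown here.
        clf = make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                          random_state=0))
        clf.fit(X_tr, y_tr)
        print(f"baseline accuracy: {clf.score(X_te, y_te):.3f}")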

  13. Proposal of new classification of femoral trochanteric fracture by three-dimensional computed tomography and relationship to usual plain X-ray classification.

    Science.gov (United States)

    Shoda, Etsuo; Kitada, Shimpei; Sasaki, Yu; Hirase, Hitoshi; Niikura, Takahiro; Lee, Sang Yang; Sakurai, Atsushi; Oe, Keisuke; Sasaki, Takeharu

    2017-01-01

    Classification of femoral trochanteric fractures is usually based on plain X-ray findings using the Evans, Jensen, or AO/OTA classification. However, complications such as nonunion and cut out of the lag screw or blade are seen even in stable fractures. This may be due to the difficulty of exact diagnosis of the fracture pattern on plain X-rays. Computed tomography (CT) may provide more information about the fracture pattern, but such data are scarce. The present study was performed to propose a classification system for femoral trochanteric fractures using three-dimensional CT (3D-CT) and to investigate the relationship between this classification and conventional plain X-ray classification. Using three-dimensional (3D)-CT, fractures were classified as two, three, or four parts using combinations of the head, greater trochanter, lesser trochanter, and shaft. We identified five subgroups of three-part fractures according to the fracture pattern involving the greater and lesser trochanters. In total, 239 femoral trochanteric fractures (45 men, 194 women; average age, 84.4 years) treated in four hospitals were classified using our 3D-CT classification. The relationship between this 3D-CT classification and the AO/OTA, Evans, and Jensen X-ray classifications was investigated. In the 3D-CT classification, many fractures exhibited a large oblique fragment of the greater trochanter including the lesser trochanter. This fracture type was recognized as unstable in the 3D-CT classification but was often classified as stable in each X-ray classification. It is difficult to evaluate fracture patterns involving the greater trochanter, especially large oblique fragments including the lesser trochanter, using plain X-rays. The 3D-CT shows the fracture line very clearly, making it easy to classify the fracture pattern.

  14. Proposed Terminology and Classification of Pre-Malignant Neoplastic Conditions: A Consensus Proposal.

    Science.gov (United States)

    Valent, Peter; Akin, Cem; Arock, Michel; Bock, Christoph; George, Tracy I; Galli, Stephen J; Gotlib, Jason; Haferlach, Torsten; Hoermann, Gregor; Hermine, Olivier; Jäger, Ulrich; Kenner, Lukas; Kreipe, Hans; Majeti, Ravindra; Metcalfe, Dean D; Orfao, Alberto; Reiter, Andreas; Sperr, Wolfgang R; Staber, Philipp B; Sotlar, Karl; Schiffer, Charles; Superti-Furga, Giulio; Horny, Hans-Peter

    2017-12-01

    Cancer evolution is a step-wise non-linear process that may start early in life or later in adulthood, and includes pre-malignant (indolent) and malignant phases. Early somatic changes may not be detectable or are found by chance in apparently healthy individuals. The same lesions may be detected in pre-malignant clonal conditions. In some patients, these lesions may never become relevant clinically whereas in others, they act together with additional pro-oncogenic hits and thereby contribute to the formation of an overt malignancy. Although some pre-malignant stages of a malignancy have been characterized, no global system to define and to classify these conditions is available. To discuss open issues related to pre-malignant phases of neoplastic disorders, a working conference was organized in Vienna in August 2015. The outcomes of this conference are summarized herein and include a basic proposal for a nomenclature and classification of pre-malignant conditions. This proposal should assist in the communication among patients, physicians and scientists, which is critical as genome-sequencing will soon be offered widely for early cancer-detection. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  15. A hierarchical classification scheme of psoriasis images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A two-stage hierarchical classification scheme of psoriasis lesion images is proposed. These images are basically composed of three classes: normal skin, lesion and background. The scheme combines conventional tools to separate the skin from the background in the first stage, and the lesion from...

  16. A proposed United States resource classification system

    International Nuclear Information System (INIS)

    Masters, C.D.

    1980-01-01

    Energy is a world-wide problem calling for world-wide communication to resolve the many supply and distribution problems. Essential to such communication are a common definition and the comparability of the elements being communicated. The US Geological Survey, with the co-operation of the US Bureau of Mines and the US Department of Energy, has devised a classification system for all mineral resources, the principles of which, it is felt, offer the possibility of world communication. At present several other systems, extant or under development (Potential Gas Committee of the USA, United Nations Resource Committee, and the American Society of Testing and Materials) are internally consistent and provide easy communication linkage. The system in use by the uranium community in the United States of America, however, ties resource quantities to forward-cost dollar values rendering them inconsistent with other classifications and therefore not comparable. This paper develops the rationale for the new USGS resource classification and notes its benefits relative to a forward-cost classification and its relationship specifically to other current classifications. (author)

  17. A proposal for a new classification of complications in craniosynostosis surgery.

    Science.gov (United States)

    Shastin, Dmitri; Peacock, Sharron; Guruswamy, Velu; Kapetanstrataki, Melpo; Bonthron, David T; Bellew, Maggie; Long, Vernon; Carter, Lachlan; Smith, Ian; Goodden, John; Russell, John; Liddington, Mark; Chumas, Paul

    2017-06-01

    OBJECTIVE Complications have been used extensively to facilitate evaluation of craniosynostosis practice. However, description of complications tends to be nonstandardized, making comparison difficult. The authors propose a new pragmatic classification of complications that relies on prospective data collection, is geared to capture significant morbidity as well as any "near misses" in a systematic fashion, and can be used as a quality improvement tool. METHODS Data on complications for all patients undergoing surgery for nonsyndromic craniosynostosis between 2010 and 2015 were collected from a prospective craniofacial audit database maintained at the authors' institution. Information on comorbidities, details of surgery, and follow-up was extracted from medical records, anesthetic and operation charts, and electronic databases. Complications were defined as any unexpected event that resulted or could have resulted in a temporary or permanent damage to the child. RESULTS A total of 108 operations for the treatment of nonsyndromic craniosynostosis were performed in 103 patients during the 5-year study period. Complications were divided into 6 types: 0) perioperative occurrences; 1) inpatient complications; 2) outpatient complications not requiring readmission; 3) complications requiring readmission; 4) unexpected long-term deficit; and 5) mortality. These types were further subdivided according to the length of stay and time after discharge. The overall complication rate was found to be 35.9%. CONCLUSIONS The proportion of children with some sort of complication using the proposed definition was much higher than commonly reported, predominantly due to the inclusion of problems often dismissed as minor. The authors believe that these complications should be included in determining complication rates, as they will cause distress to families and may point to potential areas for improving a surgical service.

  18. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    Full Text Available The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free, user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
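
    The gap-handling issue raised above is easy to see on a toy example. The Python sketch below is not SDT's actual code; it simply scores one already-aligned pair of sequences under two common conventions, counting gap-containing columns as mismatches versus ignoring them, with invented sequences.

        def pairwise_identity(a, b, count_gaps=True):
            """Percent identity of two aligned sequences of equal length ('-' = gap).
            count_gaps=True treats columns containing a gap as mismatches;
            count_gaps=False ignores them, which generally inflates the score."""
            assert len(a) == len(b), "sequences must already be aligned"
            matches = compared = 0
            for x, y in zip(a.upper(), b.upper()):
                if x == "-" and y == "-":
                    continue                      # column is a gap in both sequences
                if not count_gaps and "-" in (x, y):
                    continue                      # optionally skip gapped columns
                compared += 1
                matches += (x == y)
            return 100.0 * matches / compared if compared else 0.0

        print(pairwise_identity("ATG-CGT", "ATGACGT", count_gaps=True))   # ~85.7
        print(pairwise_identity("ATG-CGT", "ATGACGT", count_gaps=False))  # 100.0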

  19. Gear cutting tools fundamentals of design and computation

    CERN Document Server

    Radzevich, Stephen P

    2010-01-01

    Presents the DG/K-based method of surface generation, a novel and practical mathematical method for designing gear cutting tools with optimal parameters. This book proposes a scientific classification for the various kinds of the gear machining meshes, discussing optimal designs of gear cutting tools.

  20. Adaptive phase k-means algorithm for waveform classification

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates certain waveform phase variation and is a good tool for seismic facies analysis.
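
    As a rough, simplified illustration of the idea (the published distance lets the phase vary from sample to sample and updates model traces with the best phase interference, neither of which is reproduced here), a phase-tolerant k-means over waveform traces could be sketched as follows.

        import numpy as np
        from scipy.signal import hilbert

        def phase_tolerant_distance(trace, centroid, n_angles=36):
            """Toy distance: rotate the analytic signal of `trace` by a grid of
            constant phase angles and keep the smallest Euclidean distance."""
            analytic = hilbert(trace)
            best = np.inf
            for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
                rotated = np.real(analytic * np.exp(1j * theta))
                best = min(best, np.linalg.norm(rotated - centroid))
            return best

        def phase_kmeans(traces, k, n_iter=20, seed=0):
            """k-means over an array of waveforms with shape (n_traces, n_samples)."""
            rng = np.random.default_rng(seed)
            centroids = traces[rng.choice(len(traces), k, replace=False)].copy()
            for _ in range(n_iter):
                labels = np.array([np.argmin([phase_tolerant_distance(t, c)
                                              for c in centroids]) for t in traces])
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = traces[labels == j].mean(axis=0)
            return labels, centroids

        # Usage: labels, centroids = phase_kmeans(np.asarray(trace_list), k=4)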

  1. Dysfunctional breathing: a review of the literature and proposal for classification

    Directory of Open Access Journals (Sweden)

    Richard Boulding

    2016-09-01

    Full Text Available Dysfunctional breathing is a term describing breathing disorders where chronic changes in breathing pattern result in dyspnoea and other symptoms in the absence or in excess of the magnitude of physiological respiratory or cardiac disease. We reviewed the literature and propose a classification system for the common dysfunctional breathing patterns described. The literature was searched using the terms: dysfunctional breathing, hyperventilation, Nijmegen questionnaire and thoraco-abdominal asynchrony. We have summarised the presentation, assessment and treatment of dysfunctional breathing, and propose that the following system be used for classification. 1) Hyperventilation syndrome: associated with symptoms both related to respiratory alkalosis and independent of hypocapnia. 2) Periodic deep sighing: frequent sighing with an irregular breathing pattern. 3) Thoracic dominant breathing: can often manifest in somatic disease; if occurring without disease it may be considered dysfunctional and results in dyspnoea. 4) Forced abdominal expiration: these patients utilise inappropriate and excessive abdominal muscle contraction to aid expiration. 5) Thoraco-abdominal asynchrony: where there is delay between rib cage and abdominal contraction resulting in ineffective breathing mechanics. This review highlights the common abnormalities, current diagnostic methods and therapeutic implications in dysfunctional breathing. Future work should aim to further investigate the prevalence, clinical associations and treatment of these presentations.

  2. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Science.gov (United States)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR

  3. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    Energy Technology Data Exchange (ETDEWEB)

    Fei, Baowei, E-mail: bfei@emory.edu [Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road Northeast, Atlanta, Georgia 30329 (United States); Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia 30322 (United States); Department of Mathematics and Computer Sciences, Emory University, Atlanta, Georgia 30322 (United States); Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Aarsvold, John N. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Nuclear Medicine Service, Atlanta Veterans Affairs Medical Center, Atlanta, Georgia 30033 (United States); Cervo, Morgan; Stark, Rebecca [The Medical Physics Graduate Program in the George W. Woodruff School, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Meltzer, Carolyn C. [Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia 30329 (United States); Department of Neurology and Department of Psychiatry and Behavior Sciences, Emory University School of Medicine, Atlanta, Georgia 30322 (United States)

    2012-10-15

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.

  4. MR/PET quantification tools: Registration, segmentation, classification, and MR-based attenuation correction

    International Nuclear Information System (INIS)

    Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Raghunath, Nivedita; Votaw, John R.; Aarsvold, John N.; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.

    2012-01-01

    Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study is the development of quantification tools including MR-based AC for quantification in combined MR/PET for brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered sets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomography (HRRT) PET. MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
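
    The classification step in these three records is a modified fuzzy C-means; the modification itself is not described here, so the sketch below implements only the textbook fuzzy C-means on 1-D voxel intensities as a baseline illustration. Hardening the memberships and mapping tissue classes to attenuation coefficients is left to the caller.

        import numpy as np

        def fuzzy_c_means(intensities, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
            """Textbook fuzzy C-means on 1-D data (e.g., brain MR voxel intensities)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(intensities, dtype=float).reshape(-1, 1)
            u = rng.random((len(x), n_clusters))
            u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per voxel
            centers = np.zeros((n_clusters, 1))
            for _ in range(n_iter):
                um = u ** m
                centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
                d = np.abs(x - centers.T) + 1e-12        # distance of each voxel to each center
                new_u = 1.0 / d ** (2.0 / (m - 1.0))
                new_u /= new_u.sum(axis=1, keepdims=True)
                if np.max(np.abs(new_u - u)) < tol:
                    u = new_u
                    break
                u = new_u
            return u, centers.ravel()

        # u.argmax(axis=1) gives a hard label per voxel (e.g., CSF / grey / white matter),
        # which can then be mapped to an attenuation coefficient to build the AC map.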

  5. Proposed changes in the classification of carcinogenic chemicals in the work area.

    Science.gov (United States)

    Neumann, H G; Thielmann, H W; Filser, J G; Gelbke, H P; Greim, H; Kappus, H; Norpoth, K H; Reuter, U; Vamvakas, S; Wardenbach, P; Wichmann, H E

    1997-12-01

    Carcinogenic chemicals in the work area are currently classified into three categories in Section III of the German List of MAK and BAT Values. This classification is based on qualitative criteria and reflects essentially the weight of evidence available for judging the carcinogenic potential of the chemicals. It is proposed that these Categories--IIIA1, IIIA2, and IIIB--be retained as Categories 1, 2, and 3, to conform with EU regulations. On the basis of our advancing knowledge of reaction mechanisms and the potency of carcinogens, it is now proposed that these three categories be supplemented with two additional categories. The essential feature of substances classified in the new categories is that exposure to these chemicals does not convey a significant risk of cancer to man, provided that an appropriate exposure limit (MAK value) is observed. It is proposed that chemicals known to act typically by nongenotoxic mechanisms and for which information is available that allows evaluation of the effects of low-dose exposures be classified in Category 4. Genotoxic chemicals for which low carcinogenic potency can be expected on the basis of dose-response relationships and toxicokinetics and for which risk at low doses can be assessed will be classified in Category 5. The basis for a better differentiation of carcinogens is discussed, the new categories are defined, and possible criteria for classification are described. Examples for Category 4 (1,4-dioxane) and Category 5 (styrene) are presented. The proposed changes in classifying carcinogenic chemicals in the work area are presented for further discussion.

  6. Seizure classification in EEG signals utilizing Hilbert-Huang transform

    Directory of Open Access Journals (Sweden)

    Abdulhay Enas W

    2011-05-01

    Full Text Available Abstract Background Methods capable of recognizing abnormal activities of brain functionality are either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Method Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature relying on the Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and the frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. Methods of comparison used are the t-test and the Euclidean clustering. Results The t-test results in a P-value. Conclusion An original tool for EEG signal processing giving physicians the possibility to diagnose brain functionality abnormalities is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of this indicates the usefulness of the tool and its use as an efficient diagnostic tool.

  7. Seizure classification in EEG signals utilizing Hilbert-Huang transform.

    Science.gov (United States)

    Oweis, Rami J; Abdulhay, Enas W

    2011-05-24

    Methods capable of recognizing abnormal activities of brain functionality are either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature relying on the Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and the frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. Methods of comparison used are the t-test and the Euclidean clustering. The t-test results in a P-value with respect to its fast response and ease of use. An original tool for EEG signal processing giving physicians the possibility to diagnose brain functionality abnormalities is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of this indicates the usefulness of the tool and its use as an efficient diagnostic tool.
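
    The feature described in both records, the local amplitude and frequency of each intrinsic mode function (IMF) condensed into a weighted frequency, can be sketched as follows. The empirical mode decomposition that produces the IMFs is assumed to have been performed elsewhere (SciPy does not provide EMD); only the Hilbert step and an amplitude-weighted mean frequency are shown, as an illustration rather than the authors' code.

        import numpy as np
        from scipy.signal import hilbert

        def weighted_frequency(imf, fs):
            """Amplitude-weighted mean instantaneous frequency (Hz) of one IMF."""
            analytic = hilbert(imf)
            amplitude = np.abs(analytic)                      # local amplitude envelope
            phase = np.unwrap(np.angle(analytic))             # instantaneous phase
            inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # instantaneous frequency
            weights = amplitude[:-1]
            return float(np.sum(weights * inst_freq) / np.sum(weights))

        # One weighted frequency per IMF yields a feature vector that can be compared
        # between ictal and seizure-free epochs, e.g. with a t-test or clustering.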

  8. ncRNA-class Web Tool: Non-coding RNA feature extraction and pre-miRNA classification web tool

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Theofilatos, Konstantinos A.; Papadimitriou, Stergios; Tsakalidis, Athanasios K.; Likothanassis, Spiridon D.; Mavroudi, Seferina P.

    2012-01-01

    Until recently, it was commonly accepted that most genetic information is transacted by proteins. Recent evidence suggests that the majority of the genomes of mammals and other complex organisms are in fact transcribed into non-coding RNAs (ncRNAs), many of which are alternatively spliced and/or processed into smaller products. Non-coding RNA gene analysis requires the calculation of several sequential, thermodynamical and structural features. Many independent tools have already been developed for the efficient calculation of such features but to the best of our knowledge there does not exist any integrative approach for this task. The most significant amount of existing work is related to the miRNA class of non-coding RNAs. MicroRNAs (miRNAs) are small non-coding RNAs that play a significant role in gene regulation and their prediction is a challenging bioinformatics problem. Non-coding RNA feature extraction and pre-miRNA classification Web Tool (ncRNA-class Web Tool) is a publicly available web tool ( http://150.140.142.24:82/Default.aspx ) which provides a user friendly and efficient environment for the effective calculation of a set of 58 sequential, thermodynamical and structural features of non-coding RNAs, plus a tool for the accurate prediction of miRNAs. © 2012 IFIP International Federation for Information Processing.
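
    As a small illustration of the "sequential" end of such a feature set (the thermodynamical and structural features require secondary-structure prediction, for example with RNAfold, and are not computed here), the following sketch extracts length, GC content and dinucleotide frequencies from an RNA sequence; the example sequence is arbitrary.

        from collections import Counter
        from itertools import product

        def sequential_features(rna):
            """A few simple sequential features of an RNA (or DNA) sequence."""
            rna = rna.upper().replace("T", "U")
            n = len(rna)
            feats = {"length": n,
                     "gc_content": (rna.count("G") + rna.count("C")) / n}
            pairs = Counter(rna[i:i + 2] for i in range(n - 1))
            for a, b in product("ACGU", repeat=2):
                feats[f"freq_{a}{b}"] = pairs[a + b] / max(n - 1, 1)
            return feats

        print(sequential_features("AUGGCUACGUUAGC"))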

  9. Data Center IT Equipment Energy Assessment Tools: Current State of Commercial Tools, Proposal for a Future Set of Assessment Tools

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, Ben D. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); National Univ., San Diego, CA (United States). School of Engineering

    2012-06-30

    This research project, which was conducted during the Summer and Fall of 2011, investigated some commercially available assessment tools with a focus on IT equipment to see if such tools could round out the DC Pro tool suite. In this research, the assessment capabilities of the various tools were compiled to help make “non-biased” information available to the public. This research should not be considered to be exhaustive on all existing vendor tools although a number of vendors were contacted. Large IT equipment OEMs like IBM and Dell provide their proprietary internal automated software which does not work on any other IT equipment. However, the research found two companies with products that showed promise in performing automated assessments for IT equipment from different OEM vendors. This report documents the research and provides a list of software products reviewed, contacts and websites, product details, discussions with specific companies, a set of recommendations, and next steps. As a result of this research, a simple 3-level approach to an IT assessment tool is proposed along with an example of an assessment using a simple IT equipment data collection tool (Level 1, spreadsheet). The tool has been reviewed with the Green Grid and LBNL staff. The initial feedback has been positive although further refinement to the tool will be necessary. Proposed next steps include a field trial of at least two vendors’ software in two different data centers with an objective to prove the concept, ascertain the extent of energy and computational assessment, ease of installation and opportunities for continuous improvement. Based on the discussions, field trials (or case studies) are proposed with two vendors – JouleX (expected to be completed in 2012) and Sentilla.
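
    To make the proposed "Level 1" idea concrete, here is a minimal, hypothetical sketch of the kind of arithmetic such a spreadsheet-style tool performs: summing average device power over a simple inventory into annual IT energy and cost. All field names, counts, power figures and the electricity price are invented and are not taken from the DC Pro suite or from any vendor's product.

        HOURS_PER_YEAR = 8760

        def annual_energy_kwh(devices):
            """devices: list of dicts with 'count' and 'avg_power_w' (measured or estimated)."""
            total_watts = sum(d["count"] * d["avg_power_w"] for d in devices)
            return total_watts * HOURS_PER_YEAR / 1000.0

        inventory = [
            {"name": "1U servers",       "count": 120, "avg_power_w": 250},
            {"name": "network switches", "count": 10,  "avg_power_w": 150},
            {"name": "storage arrays",   "count": 4,   "avg_power_w": 900},
        ]
        kwh = annual_energy_kwh(inventory)
        print(f"IT load: {kwh:,.0f} kWh/yr, about ${kwh * 0.10:,.0f}/yr at $0.10/kWh")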

  10. The Value of Ensari’s Proposal in Evaluating the Mucosal Pathology of Childhood Celiac Disease: Old Classification versus New Version

    Directory of Open Access Journals (Sweden)

    Gülçin Güler Şimşek

    2012-09-01

    Full Text Available Objective: Small intestinal biopsy remains the gold standard in diagnosing celiac disease (CD); however, the wide spectrum of histopathological states and differential diagnosis of CD is still a diagnostic problem for pathologists. Recently, Ensari reviewed the literature and proposed an update of the histopathological diagnosis and classification for CD. Materials and Methods: In this study, the histopathological materials of 54 children in whom CD was diagnosed at our hospital were reviewed to compare the previous Marsh and Modified Marsh-Oberhuber classifications with this new proposal. Results: In this study, we show that the Ensari classification is as accurate as the Marsh and Modified Marsh classifications in describing the consecutive states of mucosal damage seen in CD. Conclusions: Ensari's classification is simple, practical and facilitative in diagnosing and subtyping of mucosal pathology of CD.

  11. A Java-based tool for the design of classification microarrays

    Directory of Open Access Journals (Sweden)

    Broschat Shira L

    2008-08-01

    Full Text Available Abstract Background Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. Results The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. Conclusion In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays–and mixed-plasmid microarrays in particular–it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights

  12. A Java-based tool for the design of classification microarrays.

    Science.gov (United States)

    Meng, Da; Broschat, Shira L; Call, Douglas R

    2008-08-04

    Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays-and mixed-plasmid microarrays in particular-it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff). Weights generated using stepwise discriminant analysis can be stored for
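
    One step both records mention is redundancy reduction before probe selection. The sketch below is an illustrative stand-in rather than PLASMID's actual procedure (which performs the selection itself with stepwise discriminant analysis): it clusters probes by the correlation of their hybridisation profiles and keeps one representative per cluster.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        def reduce_redundant_probes(signal, threshold=0.9):
            """Keep one probe per group of probes whose profiles correlate >= threshold.
            `signal` is a (probes x samples) array of hybridisation intensities."""
            corr = np.corrcoef(signal)
            dist = np.clip(1.0 - corr, 0.0, None)          # correlation distance
            condensed = dist[np.triu_indices(len(dist), k=1)]
            z = linkage(condensed, method="average")
            labels = fcluster(z, t=1.0 - threshold, criterion="distance")
            return sorted(int(np.where(labels == c)[0][0]) for c in np.unique(labels))

        probes = np.random.default_rng(0).normal(size=(5, 12))
        probes = np.vstack([probes, probes[0] + 0.01])     # add a redundant copy of probe 0
        print(reduce_redundant_probes(probes))             # the copy collapses into probe 0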

  13. Prediction and classification of respiratory motion

    CERN Document Server

    Lee, Suk Jin

    2014-01-01

    This book describes recent radiotherapy technologies including tools for measuring target position during radiotherapy and tracking-based delivery systems. This book presents a customized prediction of respiratory motion with clustering from multiple patient interactions. The proposed method contributes to the improvement of patient treatments by considering breathing pattern for the accurate dose calculation in radiotherapy systems. Real-time tumor-tracking, where the prediction of irregularities becomes relevant, has yet to be clinically established. The statistical quantitative modeling for irregular breathing classification, in which commercial respiration traces are retrospectively categorized into several classes based on breathing pattern are discussed as well. The proposed statistical classification may provide clinical advantages to adjust the dose rate before and during the external beam radiotherapy for minimizing the safety margin. In the first chapter following the Introduction  to this book, we...

  14. Proposal of a new classification scheme for periocular injuries

    Directory of Open Access Journals (Sweden)

    Devi Prasad Mohapatra

    2017-01-01

    Full Text Available Background: Eyelids are important structures and play a role in protecting the globe from trauma, brightness, in maintaining the integrity of tear films and moving the tears towards the lacrimal drainage system and contribute to aesthetic appearance of the face. Ophthalmic trauma is an important cause of morbidity among individuals and has also been responsible for additional cost of healthcare. Periocular trauma involving eyelids and adjacent structures has been found to have increased recently probably due to increased pace of life and increased dependence on machinery. A comprehensive classification of periocular trauma would help in stratifying these injuries as well as study outcomes. Material and Methods: This study was carried out at our institute from June 2015 to Dec 2015. We searched multiple English language databases for existing classification systems for periocular trauma. We designed a system of classification of periocular soft tissue injuries based on clinico-anatomical presentations. This classification was applied prospectively to patients presenting with periocular soft tissue injuries to our department. Results: A comprehensive classification scheme was designed consisting of five types of periocular injuries. A total of 38 eyelid injuries in 34 patients were evaluated in this study. According to the System for Peri-Ocular Trauma (SPOT) classification, Type V injuries were most common. SPOT Type II injuries were more common isolated injuries among all zones. Discussion: Classification systems are necessary in order to provide a framework in which to scientifically study the etiology, pathogenesis, and treatment of diseases in an orderly fashion. The SPOT classification has taken into account the periocular soft tissue injuries, i.e., upper eyelid, lower eyelid, medial and lateral canthus injuries, based on observed clinico-anatomical patterns of eyelid injuries. Conclusion: The SPOT classification seems to be a reliable

  15. Classification of Pulse Waveforms Using Edit Distance with Real Penalty

    Directory of Open Access Journals (Sweden)

    Zhang Dongyu

    2010-01-01

    Full Text Available Abstract Advances in sensor and signal processing techniques have provided effective tools for quantitative research in traditional Chinese pulse diagnosis (TCPD). Because of the inevitable intraclass variation of pulse patterns, the automatic classification of pulse waveforms has remained a difficult problem. In this paper, by referring to the edit distance with real penalty (ERP) and the recent progress in k-nearest neighbors (KNN) classifiers, we propose two novel ERP-based KNN classifiers. Taking advantage of the metric property of ERP, we first develop an ERP-induced inner product and a Gaussian ERP kernel, then embed them into difference-weighted KNN classifiers, and finally develop two novel classifiers for pulse waveform classification. The experimental results show that the proposed classifiers are effective for accurate classification of pulse waveforms.
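
    For readers unfamiliar with ERP, a minimal dynamic-programming implementation of the standard recurrence looks roughly like this: gap elements are compared against a constant reference value g (usually 0), which is what preserves the metric property the classifiers rely on. This is a generic sketch, not the authors' classifier.

        import numpy as np

        def erp_distance(x, y, g=0.0):
            """Edit distance with Real Penalty between two 1-D sequences."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            n, m = len(x), len(y)
            d = np.zeros((n + 1, m + 1))
            d[1:, 0] = np.cumsum(np.abs(x - g))            # cost of deleting all of x
            d[0, 1:] = np.cumsum(np.abs(y - g))            # cost of deleting all of y
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d[i, j] = min(d[i - 1, j - 1] + abs(x[i - 1] - y[j - 1]),   # match
                                  d[i - 1, j] + abs(x[i - 1] - g),              # gap in y
                                  d[i, j - 1] + abs(y[j - 1] - g))              # gap in x
            return d[n, m]

        # ERP can then drive a k-nearest-neighbour rule over pulse waveforms, or be
        # turned into a Gaussian-style kernel, e.g. exp(-erp_distance(x, y)**2 / (2 * s**2)).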

  16. A Novel Algorithm for Imbalance Data Classification Based on Neighborhood Hypergraph

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2014-01-01

    Full Text Available The classification problem for imbalanced data has received increasing attention. Many significant methods have been proposed and applied in many fields, but more efficient methods are still needed. Although the hypergraph is an efficient tool for knowledge discovery, it may not be powerful enough to deal with data in the boundary region. In this paper, the neighborhood hypergraph is presented, combining rough set theory and hypergraph theory. After that, a novel classification algorithm for imbalanced data based on the neighborhood hypergraph is developed, which is composed of three steps: initialization of hyperedges, classification of the training data set, and substitution of hyperedges. After conducting an experiment of 10-fold cross validation on 18 data sets, the proposed algorithm shows higher average accuracy than the others.

  17. Android Malware Classification Using K-Means Clustering Algorithm

    Science.gov (United States)

    Hamid, Isredza Rahmi A.; Syafiqah Khalid, Nur; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Chai Wen, Chuah

    2017-08-01

    Malware is designed to gain access to or damage a computer system without the user's knowledge. Attackers also exploit malware to commit crime or fraud. This paper proposes an Android malware classification approach based on the K-Means clustering algorithm. We evaluate the proposed model in terms of accuracy using machine learning algorithms. Two datasets, Virus Total and Malgenome, were selected to demonstrate the application of the K-Means clustering algorithm. We classify the Android malware into three clusters: ransomware, scareware and goodware. Nine features were considered for each dataset: Lock Detected, Text Detected, Text Score, Encryption Detected, Threat, Porn, Law, Copyright and Moneypak. We used IBM SPSS Statistics software for data classification and the WEKA tool to evaluate the built clusters. The proposed K-Means clustering algorithm shows promising results with high accuracy when tested using the Random Forest algorithm.
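
    A hedged sketch of the clustering step is given below using scikit-learn rather than the SPSS/WEKA combination the study actually used. The nine feature columns follow the order listed in the abstract, but every value in the matrix is invented for illustration.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Columns: LockDetected, TextDetected, TextScore, EncryptionDetected, Threat,
        #          Porn, Law, Copyright, Moneypak  (all values invented).
        X = np.array([
            [1, 1, 0.9, 1, 0.8, 0, 1, 1, 1],   # ransomware-like profile
            [0, 1, 0.6, 0, 0.4, 1, 1, 0, 0],   # scareware-like profile
            [0, 0, 0.0, 0, 0.0, 0, 0, 0, 0],   # goodware-like profile
            [1, 1, 0.8, 1, 0.9, 0, 1, 1, 1],
            [0, 0, 0.1, 0, 0.1, 0, 0, 0, 0],
        ])

        km = KMeans(n_clusters=3, n_init=10, random_state=0)
        labels = km.fit_predict(StandardScaler().fit_transform(X))
        print(labels)   # cluster ids to be interpreted as ransomware / scareware / goodware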

  18. A simple working classification proposed for the latrogenic lesions of teeth and associated structures in the oral cavity.

    Science.gov (United States)

    Shamim, Thorakkal

    2013-09-01

    Iatrogenic lesions can affect both hard and soft tissues in the oral cavity, induced by the dentist's activity, manner or therapy. There is no approved simple working classification for the iatrogenic lesions of teeth and associated structures in the oral cavity in the literature. A simple working classification is proposed here for iatrogenic lesions of teeth and associated structures in the oral cavity based on its relation with dental specialities. The dental specialities considered in this classification are conservative dentistry and endodontics, orthodontics, oral and maxillofacial surgery and prosthodontics. This classification will be useful for the dental clinician who is dealing with diseases of oral cavity.

  19. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    Science.gov (United States)

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. The ultimate aims of bioinformatics are to store and manage biological data, and to develop and analyze computational tools to enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for the experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms were proposed in the past. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of the large amount of newly discovered proteins. The existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique has been proposed in order to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance measure metrics: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727
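
    The pipeline the abstract describes, reducing a very large encoded feature vector with a statistical metric before classification, can be sketched as follows. The ANOVA F-score and the SVM used here are generic stand-ins for the paper's own metric and classifier, and synthetic data replaces the encoded protein feature matrix, so the snippet is illustrative only.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Synthetic stand-in for an encoded protein-sequence feature matrix.
        X, y = make_classification(n_samples=300, n_features=400, n_informative=25,
                                   random_state=0)

        pipe = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="rbf"))
        scores = cross_val_score(pipe, X, y, cv=5)
        print(f"accuracy keeping 50 of 400 features: {scores.mean():.3f}")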

  20. Reflecting on the structure of soil classification systems: insights from a proposal for integrating subsoil data into soil information systems

    Science.gov (United States)

    Dondeyne, Stefaan; Juilleret, Jérôme; Vancampenhout, Karen; Deckers, Jozef; Hissler, Christophe

    2017-04-01

    Classification of soils in both World Reference Base for soil resources (WRB) and Soil Taxonomy hinges on the identification of diagnostic horizons and characteristics. However as these features often occur within the first 100 cm, these classification systems convey little information on subsoil characteristics. An integrated knowledge of the soil, soil-to-substratum and deeper substratum continuum is required when dealing with environmental issues such as vegetation ecology, water quality or the Critical Zone in general. Therefore, we recently proposed a classification system of the subsolum complementing current soil classification systems. By reflecting on the structure of the subsoil classification system which is inspired by WRB, we aim at fostering a discussion on some potential future developments of WRB. For classifying the subsolum we define Regolite, Saprolite, Saprock and Bedrock as four Subsolum Reference Groups each corresponding to different weathering stages of the subsoil. Principal qualifiers can be used to categorize intergrades of these Subsoil Reference Groups while morphologic and lithologic characteristics can be presented with supplementary qualifiers. We argue that adopting a low hierarchical structure - akin to WRB and in contrast to a strong hierarchical structure as in Soil Taxonomy - offers the advantage of having an open classification system avoiding the need for a priori knowledge of all possible combinations which may be encountered in the field. Just as in WRB we also propose to use principal and supplementary qualifiers as a second level of classification. However, in contrast to WRB we propose to reserve the principal qualifiers for intergrades and to regroup the supplementary qualifiers into thematic categories (morphologic or lithologic). Structuring the qualifiers in this manner should facilitate the integration and handling of both soil and subsoil classification units into soil information systems and calls for paying

  1. 78 FR 2447 - Proposed Information Collection Request (ICR) for the Worker Classification Survey; Comment Request

    Science.gov (United States)

    2013-01-11

    DEPARTMENT OF LABOR, Wage and Hour Division. Proposed Information Collection Request (ICR) for the Worker Classification Survey; Comment Request. AGENCY: Wage and Hour Division, Labor. ACTION: Notice. … minimum wage and/or overtime, as well as programs like unemployment insurance and workers' compensation …

  2. [A magnetoencephalographic study of generalised developmental disorders. A new proposal for their classification].

    Science.gov (United States)

    Muñoz Yunta, J A; Palau Baduell, M; Salvado Salvado, B; Amo, C; Fernandez Lucas, A; Maestu, F; Ortiz, T

    2004-02-01

    "Autistic spectrum disorders" (ASD) is a term that is not included in DSM-IV or ICD-10, the diagnostic tools most commonly used by clinical professionals, which can pose problems in research when it comes to finding homogeneous groups. From a neuropaediatric point of view, there is a need for a classification of the generalised disorders affecting development; for this purpose we used Wing's triad, which defines the continuum of the autistic spectrum, and the information provided by magnetoencephalography (MEG) as grouping elements. Specific generalised developmental disorders were taken to be those syndromes that partially express some autistic trait but have a character of their own, so that they can be considered specific disorders. ASD were classified as primary, cryptogenic or secondary. The primary disorders, in turn, express a continuum that ranges from Savant syndrome to Asperger's syndrome and the different degrees of early infantile autism. MEG is a functional neuroimaging technique that has enabled us to back up this classification.

  3. Primary care physicians' use of the proposed classification of common mental disorders for ICD-11

    DEFF Research Database (Denmark)

    Goldberg, David P.; Lam, Tai-Pong; Minhas, Fareed

    2017-01-01

    Background. The World Health Organization is revising the classification of common mental disorders in primary care for ICD-11. Major changes from the ICD-10 primary care version have been proposed for: (i) mood and anxiety disorders; and (ii) presentations of multiple somatic symptoms (bodily stress syndrome). This three-part field study explored the implementation of the revised classification by primary care physicians (PCPs) in five countries. Methods. Participating PCPs in Brazil, China, Mexico, Pakistan and Spain were asked to use the revised classification, first in patients that they suspected might be psychologically distressed (Part 1), and second in patients with multiple somatic symptoms causing distress or disability not wholly attributable to a known physical pathology, or with high levels of health anxiety (Part 2). Patients referred to Part 1 or Part 2 underwent a structured …

  4. Radioactive wastes: a proposal to its classification

    International Nuclear Information System (INIS)

    Domenech N, H.; Garcia L, N.; Hernandez S, A.

    1996-01-01

    On the basis of the quantities and characteristics of the radioactive wastes stored in Cuba and the IAEA waste classification system, the activity concentrations that would be used as limits for those categories are evaluated. This approach suggests a limit of 10 TBq/m3 for short-lived liquid wastes of low and intermediate level (half-lives of less than 30 years) and 5 TBq/m3 for long-lived liquid wastes (more than 30 years). For solid wastes the suggested limits are ten times lower. Taking into account the small quantities of arising wastes, and to ease their segregation, collection and disposal, a sub-classification of low level waste into three new categories, according to whether or not they may be directly discharged, is suggested. As the lower classification limit, while no specific exemption levels are established in the country, the use of a fraction of the ALI(min) is emphasized, provided the total discharged activity is no greater than 10 MBq, or 100 MBq when the discharge occurs over the whole year. (authors). 6 refs., 5 tabs
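
    The suggested limits translate directly into a small decision rule. The sketch below only encodes the numbers quoted in the abstract (10 and 5 TBq/m3 for liquid waste, ten times lower for solids); the function name and the pass/fail interpretation are assumptions, not part of the proposal.

    ```python
    # Toy encoding of the suggested upper activity-concentration limits for
    # low/intermediate-level waste quoted in the abstract above.
    def within_lil_limit(activity_tbq_per_m3, half_life_years, physical_form="liquid"):
        limit = 10.0 if half_life_years < 30 else 5.0   # liquid-waste limits (TBq/m3)
        if physical_form == "solid":
            limit /= 10.0                               # solid-waste limits are ten times lower
        return activity_tbq_per_m3 <= limit

    print(within_lil_limit(3.0, 12))            # short-lived liquid -> True
    print(within_lil_limit(0.8, 50, "solid"))   # long-lived solid   -> False (limit 0.5 TBq/m3)
    ```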

  5. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    Science.gov (United States)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / ᾱ / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the achievable classification accuracy. To address this problem, this work first focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification …

  6. POLSAR LAND COVER CLASSIFICATION BASED ON HIDDEN POLARIMETRIC FEATURES IN ROTATION DOMAIN AND SVM CLASSIFIER

    Directory of Open Access Journals (Sweden)

    C.-S. Tao

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets’ scattering mechanisms and limit the achievable classification accuracy. To address this problem, this work first focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix. Sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the rotation domain for complete interpretation. A visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. Then, a classification scheme is developed combining both the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme which only uses the roll-invariant polarimetric features, the proposed classification scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy …

  7. Differentiation of osteophyte types in osteoarthritis - proposal of a histological classification.

    Science.gov (United States)

    Junker, Susann; Krumbholz, Grit; Frommer, Klaus W; Rehart, Stefan; Steinmeyer, Jürgen; Rickert, Markus; Schett, Georg; Müller-Ladner, Ulf; Neumann, Elena

    2016-01-01

    Osteoarthritis is not only characterized by cartilage degradation but also involves subchondral bone remodeling and osteophyte formation. Osteophytes are fibrocartilage-capped bony outgrowths originating from the periosteum. The pathophysiology of osteophyte formation is not completely understood, yet different research approaches are under way. Therefore, a histological osteophyte classification was established for application to basic science research questions, in order to achieve comparable results in osteophyte research. The osteophytes were collected from knee joints of osteoarthritis patients (n=10, 94 osteophytes in total) after joint replacement surgery. Their size and origin in the respective joint were photo-documented. To develop an osteophyte classification, serial tissue sections were evaluated using histological (hematoxylin and eosin, Masson's trichrome, toluidine blue) and immunohistochemical staining (collagen type II). Based on the histological and immunohistochemical evaluation, osteophytes were categorized into four different types depending on the degree of ossification and the percentage of mesenchymal connective tissue. Size and localization of osteophytes were independent of the histological stages. This histological classification system of osteoarthritis osteophytes provides a helpful tool for analyzing and monitoring osteophyte development and for characterizing osteophyte types within a single human joint, and may therefore contribute to achieving comparable results when analyzing histological findings in osteophytes. Copyright © 2015 Société française de rhumatologie. Published by Elsevier SAS. All rights reserved.

  8. Classification and authentication of unknown water samples using machine learning algorithms.

    Science.gov (United States)

    Kundu, Palash K; Panchariya, P C; Kundu, Madhusree

    2011-07-01

    This paper describes the development of real-life water sample classification and authentication based on machine learning algorithms. The proposed techniques use experimental measurements from a pulse voltammetry method based on an electronic tongue (E-tongue) instrumentation system with silver and platinum electrodes. E-tongues include arrays of solid state ion sensors, transducers (even of different types), data collectors and data analysis tools, all oriented to the classification of liquid samples and the authentication of unknown liquid samples. The time series signal and the corresponding raw data represent the measurement from a multi-sensor system. The E-tongue system, implemented in a laboratory environment for six different ISI (Bureau of Indian Standards) certified water samples (Aquafina, Bisleri, Kingfisher, Oasis, Dolphin, and McDowell), was the data source for developing two types of machine learning algorithms, for classification and regression. A water data set consisting of six sample classes with 4402 features was considered. A PCA (principal component analysis) based classification and authentication tool was developed in this study as the machine learning component of the E-tongue system. A partial least squares (PLS) based classifier, dedicated to authenticating a specific category of water sample, evolved as an integral part of the E-tongue instrumentation system. The developed PCA and PLS based E-tongue system achieved encouraging overall authentication accuracy for the aforesaid categories of water samples. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
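
    A minimal Python sketch of the two components described above, assuming scikit-learn: a PCA-based classifier and a PLS model dedicated to authenticating one brand. The data shapes, the k-NN back-end after PCA and the 0.5 acceptance threshold are illustrative assumptions, not details from the paper.

    ```python
    # Sketch of the E-tongue pipeline: (1) PCA-based classification,
    # (2) PLS-based authentication of one target class. Simulated data only.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(180, 4402))          # voltammetric features, 6 brands x 30 samples
    y = np.repeat(np.arange(6), 30)           # brand labels 0..5

    # (1) PCA-based classification: project to a few components, then classify.
    clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=3))
    clf.fit(X, y)
    print("PCA+kNN predictions for first 5 samples:", clf.predict(X[:5]))

    # (2) PLS-based authentication of one target brand (one-vs-rest coding).
    target = 2
    y_auth = (y == target).astype(float)
    pls = PLSRegression(n_components=5).fit(X, y_auth)
    accepted = pls.predict(X).ravel() > 0.5   # 0.5 acceptance threshold is an assumption
    print("authenticated as brand %d: %d samples" % (target, accepted.sum()))
    ```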

  9. An alternative approach to the determination of scaling law expressions for the L–H transition in Tokamaks utilizing classification tools instead of regression

    International Nuclear Information System (INIS)

    Gaudio, P; Gelfusa, M; Lupelli, I; Murari, A; Vega, J

    2014-01-01

    A new approach to determine the power law expressions for the threshold between the H and L mode of confinement is presented. The method is based on two powerful machine learning tools for classification: neural networks and support vector machines. Using as inputs clear examples of the systems on either side of the transition, the machine learning tools learn the input–output mapping corresponding to the equations of the boundary separating the confinement regimes. Systematic tests with synthetic data show that the machine learning tools provide results competitive with traditional statistical regression and more robust against random noise and systematic errors. The developed tools have then been applied to the multi-machine International Tokamak Physics Activity International Global Threshold Database of validated ITER-like Tokamak discharges. The machine learning tools converge on the same scaling law parameters obtained with non-linear regression. On the other hand, the developed tools allow a reduction of 50% of the uncertainty in the extrapolations to ITER. Therefore the proposed approach can effectively complement traditional regression, since it poses much less stringent requirements on the experimental data used to determine the scaling laws: it does not require examples taken exactly at the moment of the transition. (paper)
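
    The underlying idea can be illustrated as follows: a power-law threshold P_th = C · n^a · B^b becomes a plane in log space, so a linear classifier separating L-mode from H-mode points in log coordinates yields scaling-law exponents from its normal vector. The sketch below uses synthetic data and a linear SVM only; it is not the paper's actual pipeline (which also used neural networks and the ITPA database).

    ```python
    # Sketch: recover power-law exponents from a linear decision boundary in log space.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_e = rng.uniform(1, 10, 2000)            # density (arbitrary units)
    B_t = rng.uniform(1, 5, 2000)             # toroidal field
    P   = rng.uniform(0.1, 50, 2000)          # heating power
    P_thresh = 1.5 * n_e**0.7 * B_t**0.8      # "true" threshold used to label the synthetic data
    y = (P > P_thresh).astype(int)            # 1 = H-mode side, 0 = L-mode side

    X_log = np.log(np.column_stack([P, n_e, B_t]))
    svm = SVC(kernel="linear", C=10.0).fit(X_log, y)

    w = svm.coef_[0]                          # boundary: w0*logP + w1*logn + w2*logB + b = 0
    a, b_exp = -w[1] / w[0], -w[2] / w[0]     # exponents of n_e and B_t in the recovered scaling
    print("recovered exponents: n_e^%.2f, B_t^%.2f" % (a, b_exp))
    ```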

  10. Classification of Motor Imagery EEG Signals with Support Vector Machines and Particle Swarm Optimization

    Science.gov (United States)

    Ma, Yuliang; Ding, Xiaohui; She, Qingshan; Luo, Zhizeng; Potter, Thomas; Zhang, Yingchun

    2016-01-01

    Support vector machines are powerful tools used to solve the small sample and nonlinear classification problems, but their ultimate classification performance depends heavily upon the selection of appropriate kernel and penalty parameters. In this study, we propose using a particle swarm optimization algorithm to optimize the selection of both the kernel and penalty parameters in order to improve the classification performance of support vector machines. The performance of the optimized classifier was evaluated with motor imagery EEG signals in terms of both classification and prediction. Results show that the optimized classifier can significantly improve the classification accuracy of motor imagery EEG signals. PMID:27313656
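
    A minimal sketch of the PSO-over-(C, gamma) idea with an RBF SVM and cross-validated accuracy as the fitness function. Swarm size, inertia and acceleration constants, search bounds and the simulated feature matrix are illustrative assumptions.

    ```python
    # Sketch: particle swarm optimization of the SVM penalty C and RBF width gamma.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=30, random_state=0)
    rng = np.random.default_rng(0)

    def fitness(params):                       # params = (log10 C, log10 gamma)
        clf = SVC(C=10.0 ** params[0], gamma=10.0 ** params[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    n_particles, n_iter = 10, 15
    pos = rng.uniform([-1, -4], [3, 0], size=(n_particles, 2))   # search bounds in log10 space
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, [-1, -4], [3, 0])
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()

    print("best (C, gamma): (%.3g, %.3g), CV accuracy %.3f"
          % (10.0 ** gbest[0], 10.0 ** gbest[1], pbest_val.max()))
    ```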

  11. New tools for evaluating LQAS survey designs.

    Science.gov (United States)

    Hund, Lauren

    2014-02-15

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are provided for calculating the positive and negative predictive value of a design with respect to an underlying coverage distribution and the selected design parameters. These tools are illustrated using a data example from two consecutive LQAS surveys measuring Oral Rehydration Solution (ORS) preparation. Using the survey tools, the dependence of classification accuracy on benchmark selection and the width of the 'grey region' are clarified in the context of ORS preparation across seven supervision areas. Following the completion of an LQAS survey, estimation of the distribution of coverage across areas facilitates quantifying classification accuracy and can help guide intervention decisions.
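
    The Bayesian part of the evaluation can be sketched as follows: draw true coverages from an assumed prior, simulate the LQAS decision rule, and compute the positive and negative predictive value of the design. The design parameters and prior below are illustrative, not those of the ORS example.

    ```python
    # Sketch: predictive value of an LQAS design (n, decision rule d, benchmark)
    # under an assumed prior on coverage.
    import numpy as np
    from scipy import stats

    n, d, benchmark = 19, 13, 0.70     # classify "adequate" if >= d successes out of n
    prior = stats.beta(a=4, b=2)       # assumed prior belief about true coverage p

    p = prior.rvs(size=200_000, random_state=0)          # coverages drawn from the prior
    classified_pass = stats.binom.rvs(n, p, random_state=1) >= d
    truly_adequate = p >= benchmark

    ppv = np.mean(truly_adequate[classified_pass])       # P(adequate | classified pass)
    npv = np.mean(~truly_adequate[~classified_pass])     # P(inadequate | classified fail)
    print("PPV = %.3f, NPV = %.3f" % (ppv, npv))
    ```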

  12. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    Agricultural Marketing Service, 7 CFR Part 27, [AMS-CN-13-0043], RIN 0581-AD33. Cotton Futures Classification: Optional Classification Procedure. AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed rule. SUMMARY: The … optional cotton futures classification procedure, identified and known as "registration" by the U.S. …

  13. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yu-Chi Lin

    2011-02-01

    Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like artificial neural networks (ANN) and support vector machines (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we apply rough set theory (RST), which can not only perform as a classifier but may also produce meaningful explanations or rules, to the LD diagnosis application. Our experiments indicate that the RST approach is competitive as a tool for feature selection, and it performs better in terms of prediction accuracy than other rule-based algorithms such as the decision tree and RIPPER algorithms. We also propose to mix samples collected from sources with different LD diagnosis procedures and criteria. By pre-processing these mixed samples with simple and readily available clustering algorithms, we are able to improve the quality and support of the rules generated by the RST. Overall, our study shows that the rough set approach, as a classification and knowledge discovery tool, may have great potential to play an essential role in LD diagnosis.
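
    A toy sketch of the rough-set notions the study relies on: indiscernibility classes over condition attributes and the lower/upper approximations of the "LD" decision class, from which certain and possible rules are read off. The attributes and records are invented for illustration.

    ```python
    # Sketch: indiscernibility classes and lower/upper approximations (rough sets).
    from collections import defaultdict

    # (attention, memory, reading) are condition attributes; the last field is the decision.
    records = [
        ("low",  "low",  "poor", "LD"),
        ("low",  "low",  "poor", "LD"),
        ("high", "low",  "poor", "LD"),
        ("high", "low",  "poor", "noLD"),   # conflicts with the record above
        ("high", "high", "good", "noLD"),
    ]

    blocks = defaultdict(list)                      # indiscernibility classes
    for i, r in enumerate(records):
        blocks[r[:3]].append(i)

    target = {i for i, r in enumerate(records) if r[3] == "LD"}
    lower = set().union(*(set(b) for b in blocks.values() if set(b) <= target))
    upper = set().union(*(set(b) for b in blocks.values() if set(b) & target))

    print("lower approximation (certainly LD):", sorted(lower))
    print("boundary region (possibly LD):", sorted(upper - lower))
    ```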

  14. BIOCAT: a pattern recognition platform for customizable biological image classification and annotation.

    Science.gov (United States)

    Zhou, Jie; Lamichhane, Santosh; Sterne, Gabriella; Ye, Bing; Peng, Hanchuan

    2013-10-04

    Pattern recognition algorithms are useful in bioimage informatics applications such as quantifying cellular and subcellular objects, annotating gene expressions, and classifying phenotypes. To provide effective and efficient image classification and annotation for the ever-increasing number of microscopic images, it is desirable to have tools that can combine and compare various algorithms, and build customizable solutions for different biological problems. However, current tools often offer only a limited solution for generating user-friendly and extensible tools for annotating higher dimensional images that correspond to multiple complicated categories. We develop the BIOimage Classification and Annotation Tool (BIOCAT). It is able to apply pattern recognition algorithms to two- and three-dimensional biological image sets as well as regions of interest (ROIs) in individual images for automatic classification and annotation. We also propose a 3D anisotropic wavelet feature extractor for extracting textural features from 3D images with xy-z resolution disparity. The extractor is one of the about 20 built-in algorithms of feature extractors, selectors and classifiers in BIOCAT. The algorithms are modularized so that they can be "chained" in a customizable way to form adaptive solutions for various problems, and the plugin-based extensibility gives the tool an open architecture to incorporate future algorithms. We have applied BIOCAT to classification and annotation of images and ROIs of different properties with applications in cell biology and neuroscience. BIOCAT provides a user-friendly, portable platform for pattern recognition based biological image classification of two- and three-dimensional images and ROIs. We show, via diverse case studies, that different algorithms and their combinations have different suitability for various problems. The customizability of BIOCAT is thus expected to be useful for providing effective and efficient solutions for a variety of biological …

  15. Proposal for a histopathological consensus classification of the periprosthetic interface membrane.

    Science.gov (United States)

    Morawietz, L; Classen, R-A; Schröder, J H; Dynybil, C; Perka, C; Skwara, A; Neidel, J; Gehrke, T; Frommelt, L; Hansen, T; Otto, M; Barden, B; Aigner, T; Stiehl, P; Schubert, T; Meyer-Scholten, C; König, A; Ströbel, P; Rader, C P; Kirschner, S; Lintner, F; Rüther, W; Bos, I; Hendrich, C; Kriegsmann, J; Krenn, V

    2006-06-01

    This work introduces clearly defined histopathological criteria for a standardised evaluation of the periprosthetic membrane, which can appear in cases of total joint arthroplasty revision surgery. Based on histomorphological criteria, four types of periprosthetic membrane were defined: wear particle induced type (detection of foreign body particles; macrophages and multinucleated giant cells occupy at least 20% of the area; type I); infectious type (granulation tissue with neutrophilic granulocytes, plasma cells and few, if any, wear particles; type II); combined type (aspects of type I and type II occur simultaneously; type III); and indeterminate type (neither the criteria for type I nor type II are fulfilled; type IV). The periprosthetic membranes of 370 patients (217 women, 153 men; mean age 67.6 years, mean period until revision surgery 7.4 years) were analysed according to the defined criteria. The frequency of the histopathological membrane types was: type I 54.3%, type II 19.7%, type III 5.4%, type IV 15.4%, and not assessable 5.1%. The mean period between primary arthroplasty and revision surgery was 10.1 years for type I, 3.2 years for type II, 4.5 years for type III and 5.4 years for type IV. The correlation between histopathological and microbiological diagnosis was high (89.7%), and the inter-observer reproducibility sufficient (85%). The proposed classification enables standardised typing of periprosthetic membranes and may serve as a tool for further research on the pathogenesis of the loosening of total joint replacements. The study highlights the importance of non-infectious, non-particle-induced loosening of prosthetic devices in orthopaedic surgery (membrane type IV), which was observed in 15.4% of patients.
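
    The four membrane types can be encoded as a simple rule over structured histology findings, as sketched below; the field names are invented, and the thresholds follow the criteria quoted in the abstract.

    ```python
    # Toy encoding of the four periprosthetic membrane types defined above.
    def membrane_type(wear_particles_present, macrophage_giant_cell_area_pct,
                      neutrophils_present):
        particle_induced = wear_particles_present and macrophage_giant_cell_area_pct >= 20
        infectious = neutrophils_present
        if particle_induced and infectious:
            return "type III (combined)"
        if particle_induced:
            return "type I (wear particle induced)"
        if infectious:
            return "type II (infectious)"
        return "type IV (indeterminate)"

    print(membrane_type(True, 35, False))   # -> type I (wear particle induced)
    print(membrane_type(False, 5, True))    # -> type II (infectious)
    ```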

  16. Classification of mammographic masses using geometric symmetry and fractal analysis

    Energy Technology Data Exchange (ETDEWEB)

    Guo Qi; Ruiz, V.F. [Cybernetics, School of Systems Engineering, Univ. of Reading (United Kingdom); Shao Jiaqing [Dept. of Electronics, Univ. of Kent (United Kingdom); Guo Falei [WanDe Industrial Engineering Co. (China)

    2007-06-15

    In this paper, we propose a fuzzy symmetry measure based on geometrical operations to characterise the shape irregularity of mammographic mass lesions. Group theory, a powerful tool in the investigation of geometric transformations, is employed in our work to define and describe the underlying mathematical relations. We investigate the usefulness of the fuzzy symmetry measure in combination with fractal analysis for the classification of masses. Comparative studies show that the fuzzy symmetry measure is useful for shape characterisation of mass lesions and is a good complementary feature for the benign-versus-malignant classification of masses. (orig.)
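
    A rough sketch of the two ingredients: a reflection-symmetry overlap score (a crude stand-in for the paper's fuzzy, group-theory-based measure) and a box-counting estimate of fractal dimension computed on a binary mass mask. The synthetic elliptical mask is for illustration only.

    ```python
    # Sketch: box-counting fractal dimension + a simple reflection-symmetry score.
    import numpy as np

    def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
        counts = []
        for s in sizes:
            h, w = mask.shape
            trimmed = mask[: h - h % s, : w - w % s]
            boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                    trimmed.shape[1] // s, s).any(axis=(1, 3))
            counts.append(boxes.sum())
        # slope of log(count) vs log(1/size) estimates the fractal dimension
        coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return coeffs[0]

    def reflection_symmetry(mask):
        mirrored = mask[:, ::-1]
        return (mask & mirrored).sum() / max((mask | mirrored).sum(), 1)

    yy, xx = np.mgrid[:128, :128]
    mask = ((xx - 64) ** 2 / 900 + (yy - 64) ** 2 / 400) < 1   # synthetic elliptical "mass"

    print("fractal dimension ~ %.2f, symmetry score %.2f"
          % (box_counting_dimension(mask), reflection_symmetry(mask)))
    ```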

  17. A software tool for automatic classification and segmentation of 2D/3D medical images

    International Nuclear Information System (INIS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-01-01

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and only qualitative. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided

  18. A software tool for automatic classification and segmentation of 2D/3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Strzelecki, Michal, E-mail: michal.strzelecki@p.lodz.pl [Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, 90-924 Lodz (Poland); Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur [Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, 90-924 Lodz (Poland)

    2013-02-21

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and only qualitative. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.
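
    As an illustration of the kind of texture quantification such a package performs, the sketch below computes a few first-order statistics and a gradient-energy measure per region of interest and feeds them to an SVM. The feature choice and the synthetic "tissue" patches are assumptions; MaZda's actual attribute set is far richer.

    ```python
    # Sketch: simple per-ROI texture features followed by classification.
    import numpy as np
    from sklearn.svm import SVC

    def texture_features(roi):
        hist, _ = np.histogram(roi, bins=32)
        p = hist[hist > 0] / hist.sum()
        entropy = -np.sum(p * np.log2(p))                 # histogram entropy
        gy, gx = np.gradient(roi.astype(float))
        return [roi.mean(), roi.std(), entropy, np.mean(gx**2 + gy**2)]

    rng = np.random.default_rng(0)
    rois, labels = [], []
    for k in range(60):                                   # two synthetic "tissue" classes
        patch = rng.normal(100 + 20 * (k % 2), 5 + 10 * (k % 2), size=(32, 32))
        rois.append(texture_features(patch))
        labels.append(k % 2)

    clf = SVC(kernel="rbf").fit(rois, labels)
    print("training accuracy:", clf.score(rois, labels))
    ```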

  19. Horseshoe lung - a case report with unusual bronchial and pleural anomalies and a proposed new classification

    International Nuclear Information System (INIS)

    Figa, F.H.; Yoo, S.J.; Burrows, P.E.; Turner-Gomes, S.; Freedom, R.M.

    1993-01-01

    One case of horseshoe lung with associated scimitar syndrome is presented. Unusual bronchial and pleural anomalies as delineated by CT and plain chest radiographic imaging are described. The presence of bilateral fissures led to a newly proposed classification of horseshoe lung based on pleural anatomy. (orig.)

  20. Horseshoe lung - a case report with unusual bronchial and pleural anomalies and a proposed new classification

    Energy Technology Data Exchange (ETDEWEB)

    Figa, F H [Dept. of Diagnostic Imaging and Division of Cardiology, Hospital for Sick Children, Toronto, ON (Canada); Yoo, S J; Burrows, P E [Dept. of Diagnostic Imaging and Division of Cardiology, Hospital for Sick Children, Toronto, ON (Canada); Turner-Gomes, S [McMaster Univ. Medical Center, Hamilton, ON (Canada); Freedom, R M [Dept. of Diagnostic Imaging and Division of Cardiology, Hospital for Sick Children, Toronto, ON (Canada)

    1993-03-01

    One case of horseshoe lung with associated scimitar syndrome is presented. Unusual bronchial and pleural anomalies as delineated by CT and plain chest radiographic imaging are described. The presence of bilateral fissures led to a newly proposed classification of horseshoe lung based on pleural anatomy. (orig.)

  1. Conceptual process models and quantitative analysis of classification problems in Scrum software development practices

    NARCIS (Netherlands)

    Helwerda, L.S.; Niessink, F.; Verbeek, F.J.

    2017-01-01

    We propose a novel classification method that integrates into existing agile software development practices by collecting data records generated by software and tools used in the development process. We extract features from the collected data and create visualizations that provide insights …

  2. Binary Classification Method of Social Network Users

    Directory of Open Access Journals (Sweden)

    I. A. Poryadin

    2017-01-01

    The subject of this research is a binary classification method for social network users based on analysis of the data they have posted. The relevance of the task of gaining information about a person by examining the content of his or her pages in social networks is exemplified. The most common approach to its solution is visual browsing; an order of the regional authority in our country illustrates that its use in school education is needed. The article shows the limitations of visual browsing of pupils' pages in social networks as a tool for the teacher and the school psychologist, and argues that the analysis of social network users' data should be automated. It surveys publications that describe methods for acquiring, processing and analyzing such data, and considers their advantages and disadvantages. The article also gives arguments in support of a proposal to study a classification method for social network users. One such method is credit scoring, which is used in banks and credit institutions to assess the solvency of clients. Based on the high efficiency of the method, a significant expansion of its use into other areas of society is proposed. The possibility of using logistic regression as the mathematical apparatus of the proposed binary classification method is justified. Such an approach makes it possible to take into account the different types of data extracted from social networks, among them: personal user data, information about hobbies, friends, graphic and text information, and behavioural characteristics. The article describes a number of existing data transformation methods that can be applied to solve the problem. An experiment on binary gender-based classification of social network users is described. A logistic model obtained for this example includes multiple logical variables obtained by transforming the user surnames. This experiment confirms the feasibility of the proposed method. Further work is to define a system …
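
    A minimal sketch of such a classifier with scikit-learn: heterogeneous profile data (numeric, categorical and text fields) are transformed into numeric features and fed to a logistic regression. The field names, labels and tiny example table are invented for illustration.

    ```python
    # Sketch: logistic regression over mixed social-network profile data.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    profiles = pd.DataFrame({
        "age": [15, 16, 17, 15, 16, 17],
        "n_friends": [120, 40, 300, 15, 220, 35],
        "favourite_genre": ["rock", "rap", "pop", "rap", "rock", "rap"],
        "wall_text": ["concerts and football", "dark lyrics quotes",
                      "holiday photos", "quotes about loneliness",
                      "school band rehearsal", "angry posts"],
    })
    y = [0, 1, 0, 1, 0, 1]     # hypothetical binary flag assigned by an expert

    pre = ColumnTransformer([
        ("num", "passthrough", ["age", "n_friends"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["favourite_genre"]),
        ("txt", TfidfVectorizer(), "wall_text"),
    ])
    model = make_pipeline(pre, LogisticRegression(max_iter=1000))
    model.fit(profiles, y)
    print(model.predict(profiles))
    ```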

  3. Acute pesticide poisoning: a proposed classification tool

    OpenAIRE

    Thundiyil, Josef G; Stober, Judy; Besbelli, Nida; Pronczuk, Jenny

    2008-01-01

    Cases of acute pesticide poisoning (APP) account for significant morbidity and mortality worldwide. Developing countries are particularly susceptible due to poorer regulation, lack of surveillance systems, less enforcement, lack of training and inadequate access to information systems. Previous research has demonstrated wide variability in incidence rates for APP. This is possibly due to inconsistent reporting methodology and exclusion of occupational and non-intentional poisonings. The purpose …

  4. International proposal for an acoustic classification scheme for dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2014-01-01

    Acoustic classification schemes specify different quality levels for acoustic conditions. Regulations and classification schemes for dwellings typically include criteria for airborne and impact sound insulation, façade sound insulation and service equipment noise. However, although important for quality of life, information about acoustic conditions is rarely available, neither for new nor for existing housing. Regulatory acoustic requirements will, if enforced, ensure a corresponding quality for new dwellings, but satisfactory conditions for occupants are not guaranteed. Consequently, several … classes, implying also trade barriers. Thus, a harmonized classification scheme would be useful, and the European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", running 2009-2013 with members from 32 countries, including three overseas …

  5. Surgical options in benign parotid tumors: a proposal for classification.

    Science.gov (United States)

    Quer, Miquel; Vander Poorten, Vincent; Takes, Robert P; Silver, Carl E; Boedeker, Carsten C; de Bree, Remco; Rinaldo, Alessandra; Sanabria, Alvaro; Shaha, Ashok R; Pujol, Albert; Zbären, Peter; Ferlito, Alfio

    2017-11-01

    Different surgical options are currently available for treating benign tumors of the parotid gland, and the discussion on optimal treatment continues despite several meta-analyses. These options include more limited resections (extracapsular dissection, partial lateral parotidectomy) versus more extensive and traditional options (lateral parotid lobectomy, total parotidectomy). Different schools favor one option or another based on their experience, skills and tradition. This review provides a critical analysis of the literature regarding these options. The main limitation of all the studies is the bias of selection for different surgical approaches. For this reason, we propose a staging system that could facilitate clinical decision making and the comparison of results. We propose four categories based on the size of the tumor and its location within the parotid gland. Category I includes tumors up to 3 cm, which are mobile, close to the outer surface and close to the parotid borders. Category II includes deeper tumors up to 3 cm. Category III comprises tumors greater than 3 cm involving two levels of the parotid gland, and category IV tumors are greater than 3 cm and involve more than 2 levels. For each category and for the various pathologic types, a guideline of surgical extent is proposed. The objective of this classification is to facilitate prospective multicentric studies on surgical techniques in the treatment of benign parotid tumors and to enable the comparison of results of different clinical studies.

  6. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2015-01-01

    Unsupervised data classification (or clustering analysis) is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets and compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.

  7. Classification of Osteogenesis Imperfecta revisited

    NARCIS (Netherlands)

    van Dijk, F. S.; Pals, G.; van Rijn, R. R.; Nikkels, P. G. J.; Cobben, J. M.

    2010-01-01

    In 1979 Sillence proposed a classification of Osteogenesis Imperfecta (OI) in OI types I, II, III and IV. In 2004 and 2007 this classification was expanded with OI types V-VIII because of distinct clinical features and/or different causative gene mutations. We propose a revised classification of OI

  8. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery

    Directory of Open Access Journals (Sweden)

    Ming-Der Yang

    2017-06-01

    Rice lodging identification relies on manual in situ assessment and often leads to compensation disputes in agricultural disaster assessment. Therefore, this study proposes a comprehensive and efficient classification technique for agricultural land that uses unmanned aerial vehicle (UAV) imagery. In addition to spectral information, digital surface model (DSM) and texture information of the images was obtained through image-based modeling and texture analysis. Moreover, single feature probability (SFP) values were computed to evaluate the contribution of spectral and spatial hybrid image information to classification accuracy. The SFP results revealed that texture information was beneficial for the classification of rice and water, DSM information was valuable for lodging and tree classification, and the combination of texture and DSM information was helpful in distinguishing between artificial surface and bare land. Furthermore, a decision tree classification model incorporating SFP values yielded optimal results, with an accuracy of 96.17% and a Kappa value of 0.941, compared with that of a maximum likelihood classification model (90.76%). The rice lodging ratio in paddies at the study site was successfully identified, with three paddies being eligible for disaster relief. The study demonstrated that the proposed spatial and spectral hybrid image classification technology is a promising tool for rice lodging assessment.
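
    The final classification step can be sketched as a decision tree over stacked per-pixel features (spectral bands, DSM height, texture). The synthetic data below only stands in for such a feature stack; the SFP-based feature evaluation is not reproduced.

    ```python
    # Sketch: decision tree over per-pixel spectral + DSM + texture features.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Five features standing in for (R, G, B reflectance, DSM height, texture measure);
    # six classes standing in for rice, lodged rice, water, tree, artificial surface, bare land.
    X, y = make_classification(n_samples=3000, n_features=5, n_informative=4,
                               n_redundant=1, n_classes=6, n_clusters_per_class=1,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy: %.3f" % tree.score(X_te, y_te))
    ```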

  9. An enhanced topologically significant directed random walk in cancer classification using gene expression datasets

    Directory of Open Access Journals (Sweden)

    Choon Sen Seah

    2017-12-01

    Microarray technology has become one of the elementary tools for researchers to study the genomes of organisms. As the complexity and heterogeneity of cancer is being increasingly appreciated through genomic analysis, cancer classification is an important emerging trend. Significant directed random walk is proposed as a cancer classification approach with higher sensitivity of risk-gene prediction and higher accuracy of cancer classification. In this paper, the methodology and materials used for the experiment are presented. A tuning-parameter selection method and weights as parameters are applied in the proposed approach. A gene expression dataset is used as the input, while a pathway dataset is used to build a directed graph, as reference data, to complete the bias process in the random walk approach. In addition, we demonstrate that our approach can improve prediction sensitivity with higher accuracy and biologically meaningful classification results. A comparison between significant directed random walk and directed random walk shows the improvement in terms of prediction sensitivity and cancer classification accuracy.
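
    The core idea can be sketched with a personalized PageRank-style walk over a directed pathway graph, restarted in proportion to each gene's differential-expression score, so that evidence spreads to topologically important neighbours. The toy graph and scores below are illustrative; the paper's exact weighting and tuning-parameter selection are omitted.

    ```python
    # Sketch: directed random walk with restart over a toy pathway graph.
    import networkx as nx

    pathway = nx.DiGraph([("TP53", "MDM2"), ("MDM2", "TP53"), ("EGFR", "KRAS"),
                          ("KRAS", "BRAF"), ("BRAF", "MAPK1"), ("TP53", "CDKN1A")])

    # |t-statistic|-like differential-expression scores (invented values).
    de_score = {"TP53": 3.2, "MDM2": 0.4, "EGFR": 2.1, "KRAS": 1.5,
                "BRAF": 0.9, "MAPK1": 0.2, "CDKN1A": 0.6}

    walk_weight = nx.pagerank(pathway, alpha=0.85, personalization=de_score)
    for gene, w in sorted(walk_weight.items(), key=lambda kv: -kv[1]):
        print(f"{gene:7s} {w:.3f}")       # gene weights usable as classification features
    ```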

  10. Independent Comparison of Popular DPI Tools for Traffic Classification

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Carela-Español, Valentín; Barlet-Ros, Pere

    2015-01-01

    Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification … (…, application and web service). We carefully built a labeled dataset with more than 750K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full …

  11. Image-based deep learning for classification of noise transients in gravitational wave detectors

    Science.gov (United States)

    Razzano, Massimiliano; Cuoco, Elena

    2018-05-01

    The detection of gravitational waves has inaugurated the era of gravitational astronomy and opened new avenues for the multimessenger study of cosmic sources. Thanks to their sensitivity, the Advanced LIGO and Advanced Virgo interferometers will probe a much larger volume of space and expand the capability of discovering new gravitational wave emitters. The characterization of these detectors is a primary task in order to recognize the main sources of noise and optimize the sensitivity of interferometers. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. Deep learning techniques are a promising tool for the recognition and classification of glitches. We present a classification pipeline that exploits convolutional neural networks to classify glitches starting from their time-frequency evolution represented as images. We evaluated the classification accuracy on simulated glitches, showing that the proposed algorithm can automatically classify glitches on very fast timescales and with high accuracy, thus providing a promising tool for online detector characterization.
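
    A minimal sketch of a convolutional network of the kind used for classifying glitches from their time-frequency images, written with the Keras functional API. The input size, depth, number of classes and the simulated spectrograms are illustrative assumptions, not the pipeline's actual architecture.

    ```python
    # Sketch: small CNN classifying time-frequency images of noise transients.
    import numpy as np
    from tensorflow.keras import layers, models

    n_classes = 6
    inputs = layers.Input(shape=(64, 64, 1))          # spectrogram as a single-channel image
    x = layers.Conv2D(16, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Simulated spectrograms and labels stand in for a real training set.
    X = np.random.rand(128, 64, 64, 1).astype("float32")
    y = np.random.randint(0, n_classes, size=128)
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))
    ```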

  12. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The newly proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well-known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed UCSC …

  13. Sound classification of dwellings in the Nordic countries

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Turunen-Rise, Iiris

    1997-01-01

    A draft standard INSTA 122:1997 on sound classification of dwellings is for voting as a common national standard in the Nordic countries (Denmark, Norway, Sweden, Finland, Iceland) and in Estonia. The draft standard specifies a sound classification system with four classes A, B, C and D, where class C is proposed as the future minimum requirements for new dwellings. The classes B and A define criteria for dwellings with improved or very good acoustic conditions, whereas class D may be used for older, renovated dwellings in which the acoustic quality level of a new dwelling cannot reasonably be met. The classification system is based on limit values for airborne sound insulation, impact sound pressure level, reverberation time and indoor and outdoor noise levels. The purpose of the standard is to offer a tool for specification of a standardised acoustic climate and to promote constructors …

  14. Effective Feature Selection for Classification of Promoter Sequences.

    Directory of Open Access Journals (Sweden)

    Kouser K

    Exploring novel computational methods for making sense of biological data has not only been a necessity, but also productive. Part of this trend is the search for more efficient in silico methods/tools for the analysis of promoters, which are parts of DNA sequences involved in the regulation of the expression of genes into other functional molecules. Promoter regions vary greatly in their function based on the sequence of nucleotides and the arrangement of protein-binding short regions called motifs. In fact, the regulatory nature of the promoters seems to be largely driven by the selective presence and/or the arrangement of these motifs. Here, we explore computational classification of promoter sequences based on the pattern of motif distributions, as such classification can pave a new way for functional analysis of promoters and for discovering the functionally crucial motifs. We make use of Position Specific Motif Matrix (PSMM) features to explore the possibility of accurately classifying promoter sequences using some of the popular classification techniques. The classification results on the complete feature set are low, perhaps due to the huge number of features. We propose two ways of reducing features. Our test results show improvement in the classification output after the reduction of features. The results also show that decision trees outperform SVM (Support Vector Machine), KNN (K Nearest Neighbor) and the ensemble classifier LibD3C, particularly with reduced features. The proposed feature selection methods outperform some of the popular feature transformation methods such as PCA and SVD. Also, the methods proposed are as accurate as MRMR (a feature selection method) but much faster than MRMR. Such methods could be useful for categorizing new promoters and exploring the regulatory mechanisms of gene expression in complex eukaryotic species.

  15. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    … parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions … classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.

  16. A Sieving ANN for Emotion-Based Movie Clip Classification

    Science.gov (United States)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic contents are very important for the content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against the collected human judgements, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval/indexing applications.

  17. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    … of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system … A classification system for hand eczema is proposed. Conclusions: It is suggested that this classification be used in clinical work and in clinical trials.

  18. NMD Classifier: A reliable and systematic classification tool for nonsense-mediated decay events.

    Directory of Open Access Journals (Sweden)

    Min-Kung Hsu

    Nonsense-mediated decay (NMD) degrades mRNAs that include premature termination codons to avoid the translation and accumulation of truncated proteins. This mechanism has been found to participate in gene regulation and a wide spectrum of biological processes. However, the evolutionary and regulatory origins of NMD-targeted transcripts (NMDTs) have been less studied, partly because of the complexity of analyzing NMD events. Here we report NMD Classifier, a tool for systematic classification of NMD events for either annotated or de novo assembled transcripts. This tool is based on the assumption of minimal evolution/regulation: an event that leads to the least change is the most likely to occur. Our simulation results indicate that NMD Classifier can correctly identify an average of 99.3% of the NMD-causing transcript structural changes, particularly exon inclusions/exclusions and exon boundary alterations. Researchers can apply NMD Classifier to evolutionary and regulatory studies by comparing NMD events of different biological conditions or in different organisms.

  19. Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.

    Science.gov (United States)

    Subasi, Abdulhamit

    2013-06-01

    Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model has been proposed that hybridizes particle swarm optimization (PSO) and SVM to improve EMG signal classification accuracy. This optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on the basis of EMG signals to be classified as normal, neurogenic or myopathic. In the proposed method the EMG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT), and a set of statistical features was extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results clearly validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records, against 96.75%, 95.17% and 94.08% for the SVM, k-NN and RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for the diagnosis of neuromuscular disorders. Copyright © 2013 Elsevier Ltd. All rights reserved.
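
    The feature step can be sketched as follows: each EMG epoch is decomposed into DWT sub-bands (PyWavelets) and summarized by simple statistics before SVM classification. The wavelet, decomposition level, chosen statistics and fixed (C, gamma) values are illustrative; the PSO tuning described in the abstract is omitted here.

    ```python
    # Sketch: DWT sub-band statistics as features for SVM classification of EMG epochs.
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def dwt_features(signal, wavelet="db4", level=4):
        feats = []
        for coeffs in pywt.wavedec(signal, wavelet, level=level):
            feats += [np.mean(np.abs(coeffs)), np.std(coeffs),
                      np.mean(coeffs ** 2)]                 # mean |c|, std, energy per sub-band
        return feats

    rng = np.random.default_rng(0)
    signals = rng.normal(size=(90, 1024))                   # simulated EMG epochs
    labels = np.repeat([0, 1, 2], 30)                       # normal / neurogenic / myopathic
    X = np.array([dwt_features(s) for s in signals])

    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X, labels)
    print("training accuracy: %.2f" % clf.score(X, labels))
    ```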

  20. Annual Research Review: The Nature and Classification of Reading Disorders--A Commentary on Proposals for DSM-5

    Science.gov (United States)

    Snowling, Margaret J.; Hulme, Charles

    2012-01-01

    This article reviews our understanding of reading disorders in children and relates it to current proposals for their classification in DSM-5. There are two different, commonly occurring, forms of reading disorder in children which arise from different underlying language difficulties. Dyslexia (as defined in DSM-5), or decoding difficulty, refers…

  1. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientation fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  2. Malware distributed collection and pre-classification system using honeypot technology

    Science.gov (United States)

    Grégio, André R. A.; Oliveira, Isabela L.; Santos, Rafael D. C.; Cansian, Adriano M.; de Geus, Paulo L.

    2009-04-01

    Malware has become a major threat in the last years due to the ease of spread through the Internet. Malware detection has become difficult with the use of compression, polymorphic methods and techniques to detect and disable security software. Those and other obfuscation techniques pose a problem for detection and classification schemes that analyze malware behavior. In this paper we propose a distributed architecture to improve malware collection using different honeypot technologies to increase the variety of malware collected. We also present a daemon tool developed to grab malware distributed through spam and a pre-classification technique that uses antivirus technology to separate malware in generic classes.

  3. The Frequent Unusual Headache Syndromes: A Proposed Classification Based on Lifetime Prevalence.

    Science.gov (United States)

    Valença, Marcelo M; de Oliveira, Daniella A

    2016-01-01

    There is no agreement on a single cutoff point or prevalence for regarding a given disease as rare. The concept of what is a rare headache disorder is even less clear, and the spectrum from a very frequent, frequent, occasional to rare headache syndrome is yet to be established. An attempt has been made to estimate the lifetime prevalence of each of the headache subtypes classified in the ICHD-II. Using the ICHD-II, 199 different headache subtypes were identified. The following classification was made according to the estimated lifetime prevalence of each headache disorder: very frequent (prevalence >10%); frequent (between 1 and 10%); occasional (between 0.07 and 1%); and unusual or rare (<0.07%). Of the 199 headache disorders, 7/199 (4%) were classed as very frequent, 9/199 (5%) as frequent, and 29/199 (15%) as occasional forms of headache disorder. The unusual headache syndromes do not appear to be as infrequent in clinical practice as has been generally believed. About three-fourths of the classified headache disorders found in the ICHD-II can be considered as rare. This narrative review article may be regarded as an introduction to the concept of unusual headaches and a proposed classification of all headaches (at least those listed in the ICHD-II). © 2015 American Headache Society.
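
    The proposed banding reduces to a simple threshold rule on estimated lifetime prevalence, as in the sketch below; the example prevalences are invented for illustration.

    ```python
    # Tiny sketch of the proposed lifetime-prevalence banding.
    def prevalence_class(prevalence):
        if prevalence > 0.10:        # >10%
            return "very frequent"
        if prevalence > 0.01:        # 1-10%
            return "frequent"
        if prevalence > 0.0007:      # 0.07-1%
            return "occasional"
        return "unusual/rare"        # <0.07%

    for name, prev in [("headache disorder A", 0.15),
                       ("headache disorder B", 0.002),
                       ("headache disorder C", 0.0002)]:
        print(f"{name}: {prevalence_class(prev)}")
    ```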

  4. Radon classification of building ground

    International Nuclear Information System (INIS)

    Slunga, E.

    1988-01-01

    The Laboratories of Building Technology and Soil Mechanics and Foundation Engineering at the Helsinki University of Technology in cooperation with The Ministry of the Environment have proposed a radon classification for building ground. The proposed classification is based on the radon concentration in soil pores and on the permeability of the foundation soil. The classification includes four radon classes: negligible, normal, high and very high. Depending on the radon class the radon-technical solution for structures is chosen. It is proposed that the classification be done in general terms in connection with the site investigations for the planning of land use and in more detail in connection with the site investigations for an individual house. (author)

  5. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    Science.gov (United States)

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is so large, feature selection is required. Meanwhile, designing handcrafted features is a difficult and time-consuming task that needs the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, which is based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performances of handcrafted features are compared with those of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods. The CCNN obtains a better classification performance and considerably faster convergence than a conventional convolutional neural network. The experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
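
    The central building block named above, a complex-valued convolution, can be illustrated with a short sketch. This is not the authors' implementation: the NumPy-based layer below, the toy EEG epoch and the kernel length are assumptions made purely for illustration.

```python
# Illustrative sketch only: a complex-valued 1-D convolution of the kind a
# complex-valued CNN (CCNN) is built from, written with NumPy. The data and
# kernel length are invented placeholders, not the paper's configuration.
import numpy as np

def complex_conv1d(x, w):
    """Convolve a complex signal x with a complex kernel w.

    Uses (a + ib)(c + id) = (ac - bd) + i(ad + bc), applied sample-wise.
    """
    real = (np.convolve(x.real, w.real, mode="valid")
            - np.convolve(x.imag, w.imag, mode="valid"))
    imag = (np.convolve(x.real, w.imag, mode="valid")
            + np.convolve(x.imag, w.real, mode="valid"))
    return real + 1j * imag

rng = np.random.default_rng(0)
eeg_epoch = rng.standard_normal(3000).astype(complex)   # e.g. a 30 s epoch at 100 Hz
kernel = rng.standard_normal(16) + 1j * rng.standard_normal(16)
features = complex_conv1d(eeg_epoch, kernel)
print(features.shape)                                   # (2985,)
```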

  6. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied for the preprocessing of automatic video summarization or the retrieval and classification of broadcasting video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with a high accuracy and the classification performance with multimodal features is superior to the one with unimodal features in the genre classification.
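
    As a rough illustration of the classification stage, the sketch below trains a CART-style decision tree on fixed-length vectors standing in for the extracted MPEG-7 audio-visual descriptors. The random placeholder data, the feature count and the use of scikit-learn are assumptions, not the authors' setup.

```python
# Sketch of a CART-trained genre classifier on placeholder multimodal features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

GENRES = ["cartoon", "drama", "music_video", "news", "sports"]

rng = np.random.default_rng(42)
X = rng.random((500, 24))                  # 24 hypothetical audio-visual descriptors per clip
y = rng.integers(0, len(GENRES), 500)      # placeholder genre labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", tree.score(X_te, y_te))
```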

  7. Traumatic subarachnoid pleural fistula in children: case report, algorithm and classification proposal

    Directory of Open Access Journals (Sweden)

    Moscote-Salazar Luis Rafael

    2016-06-01

    Subarachnoid pleural fistulas are rare. They have been described as complications of thoracic surgery, penetrating injuries and spinal surgery, among others. We present the case of a 3-year-old female child who suffered spinal cord trauma secondary to a car accident and subsequently developed a subarachnoid pleural fistula. To our knowledge this is the first reported case of a pediatric patient with a subarachnoid pleural fistula resulting from closed trauma requiring intensive multimodal management. We also present a management algorithm and a proposed classification. The diagnosis of this pathology is difficult when it is not associated with neurological deficit. A high degree of suspicion, multidisciplinary management and timely surgical intervention allow optimal management.

  8. Farmers prevailing perception profiles regarding GM crops: A classification proposal.

    Science.gov (United States)

    Almeida, Carla; Massarani, Luisa

    2018-04-01

    Genetically modified organisms have been at the centre of a major public controversy, involving different interests and actors. While much attention has been devoted to consumer views on genetically modified food, there have been few attempts to understand the perceptions of genetically modified technology among farmers. By investigating perceptions of genetically modified organisms among Brazilian farmers, we intend to contribute towards filling this gap and thereby add the views of this stakeholder group to the genetically modified debate. A comparative analysis of our data and data from other studies indicates there is a complex variety of views on genetically modified organisms among farmers. Despite this diversity, we found that variations in such views occur within limited parameters, concerned principally with expectations or concrete experiences regarding the advantages of genetically modified crops, perceptions of the risks associated with them, and the ethical questions they raise. We then propose a classification of prevailing profiles to represent the spectrum of perceptions of genetically modified organisms among farmers.

  9. Classification with support hyperplanes

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); J.C. Bioch (Cor); P.J.F. Groenen (Patrick)

    2006-01-01

    A new classification method is proposed, called Support Hyperplanes (SHs). To solve the binary classification task, SHs consider the set of all hyperplanes that do not make classification mistakes, referred to as semi-consistent hyperplanes. A test object is classified using

  10. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    Science.gov (United States)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim in order to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images to use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm. This implementation is a modification of the well-known CURFIL software package. The use of this type of algorithm to perform image classification is widespread today thanks to its precision and ease of training. The actual implementation of Random Forest was developed using the CUDA platform, which enables us to exploit the potential of several models of NVIDIA graphics processing units, using them to execute general-purpose computing tasks such as image classification algorithms. As well as CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool in a concurrent way. The experimental results indicate that this new algorithm widely outperforms the previous unsupervised algorithms implemented in Hypergim, both in runtime and in the precision of the actual classification of the images.
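
    The following sketch illustrates the supervised stage conceptually: a Random Forest trained on user-labelled pixel samples and then applied to every pixel of the image. scikit-learn is used here purely for illustration; the platform itself relies on a CUDA-based modification of CURFIL, and the image, labels and class count below are invented.

```python
# Conceptual sketch: train a Random Forest on labelled pixels, classify the tile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)   # placeholder RGB tile
train_rows = rng.integers(0, 512, 200)                             # pixels picked in the UI
train_cols = rng.integers(0, 512, 200)
train_labels = rng.integers(0, 4, 200)                             # e.g. water/vegetation/soil/urban

X_train = image[train_rows, train_cols].astype(float)              # (200, 3) spectral features
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
rf.fit(X_train, train_labels)

classified = rf.predict(image.reshape(-1, 3).astype(float)).reshape(512, 512)
print(classified.shape)                                            # per-pixel class map
```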

  11. Laryngeal Cysts in Adults: Simplifying Classification and Management.

    Science.gov (United States)

    Heyes, Richard; Lott, David G

    2017-12-01

    Objective Laryngeal cysts may occur at any mucosa-lined location within the larynx and account for 5% to 10% of nonmalignant laryngeal lesions. A number of proposed classifications for laryngeal cysts exist; however, no previously published classification aims to guide management. This review analyzes contemporary laryngeal cyst management and proposes a framework for the terminology and management of cystic lesions in the larynx. Data Sources PubMed/Medline. Review Methods A primary literature search of the entire Medline database was performed for all titles of publications pertaining to laryngeal cysts and reviewed for relevance. Full manuscripts were reviewed per the relevance of their titles and abstracts, and selection into this review was according to their clinical and scientific relevance. Conclusion Laryngeal cysts have been associated with rapid-onset epiglottitis, dyspnea, stridor, and death; therefore, they should not be considered of little significance. Symptoms are varied and nonspecific. Laryngoscopy is the primary initial diagnostic tool. Cross-sectional imaging may be required, and future use of endolaryngeal ultrasound and optical coherence tomography may revolutionize practice. Where possible, cysts should be completely excised, and there is growing evidence that a transoral approach is superior to transcervical excision for nearly all cysts. Histology provides definitive diagnosis, and oncocytic cysts require close follow-up. Implications for Practice A new classification system is proposed that increases clarity in terminology, with the aim of better preparing surgeons and authors for future advances in the understanding and management of laryngeal cysts.

  12. Wireless Magnetic Sensor Network for Road Traffic Monitoring and Vehicle Classification

    Directory of Open Access Journals (Sweden)

    Velisavljevic Vladan

    2016-12-01

    Efficiency of transportation of people and goods is playing a vital role in economic growth. A key component for enabling effective planning of transportation networks is the deployment and operation of autonomous monitoring and traffic analysis tools. For that reason, such systems have been developed to register and classify road traffic usage. In this paper, we propose a novel system for road traffic monitoring and classification based on highly energy efficient wireless magnetic sensor networks. We develop novel algorithms for vehicle speed and length estimation and vehicle classification that use multiple magnetic sensors. We also demonstrate that, using such a low-cost system with simplified installation and maintenance compared to current solutions, it is possible to achieve highly accurate estimation and a high rate of positive vehicle classification.
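
    The underlying speed and length estimates can be illustrated with simple kinematics, as in the sketch below. The sensor spacing, timestamps and elementary formulas are illustrative assumptions, not the estimation algorithms developed in the paper.

```python
# Back-of-the-envelope sketch: speed from two sensors a known distance apart,
# length from speed and the time the vehicle occupies a single sensor.
SENSOR_SPACING_M = 2.0        # distance between the two magnetic sensors (assumed)

def estimate_speed(t_arrival_s1, t_arrival_s2):
    """Speed from the travel time of the magnetic signature between sensors (m/s)."""
    return SENSOR_SPACING_M / (t_arrival_s2 - t_arrival_s1)

def estimate_length(speed_mps, t_enter, t_leave):
    """Vehicle length from speed and single-sensor occupancy time (m)."""
    return speed_mps * (t_leave - t_enter)

speed = estimate_speed(10.000, 10.120)             # signature detected 120 ms apart
length = estimate_length(speed, 10.000, 10.300)    # sensor occupied for 300 ms
print(f"{speed * 3.6:.1f} km/h, {length:.1f} m")   # ~60 km/h, ~5 m
```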

  13. DEPA classification: a proposal for standardising PRP use and a retrospective application of available devices.

    Science.gov (United States)

    Magalon, J; Chateau, A L; Bertrand, B; Louis, M L; Silvestre, A; Giraudo, L; Veran, J; Sabatier, F

    2016-01-01

    Significant biological differences in platelet-rich plasma (PRP) preparations have been highlighted and could explain the large variability in the clinical benefit of PRP reported in the literature. The scientific community now recommends the use of classification for PRP injection; however, these classifications are focused on platelet and leucocyte concentrations. This presents the disadvantages of (1) not taking into account the final volume of the preparation; (2) omitting the presence of red blood cells in PRP and (3) not assessing the efficiency of production. On the basis of standards classically used in the Cell Therapy field, we propose the DEPA (Dose of injected platelets, Efficiency of production, Purity of the PRP, Activation of the PRP) classification to extend the characterisation of the injected PRP preparation. We retrospectively applied this classification on 20 PRP preparations for which biological characteristics were available in the literature. Dose of injected platelets varies from 0.21 to 5.43 billion, corresponding to a 25-fold increase. Only a Magellan device was able to obtain an A score for this parameter. Assessments of the efficiency of production reveal that no device is able to recover more than 90% of platelets from the blood. Purity of the preparation reveals that a majority of the preparations are contaminated by red blood cells as only three devices reach an A score for this parameter, corresponding to a percentage of platelets compared with red blood cells and leucocytes over 90%. These findings should provide significant help to clinicians in selecting a system that meets their specific needs for a given indication.
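
    The quantities behind the DEPA letters can be illustrated with a small calculation, sketched below under assumed concentrations and volumes; the example numbers are invented, and no letter-grade cut-offs are reproduced beyond the ">90%" figures quoted above.

```python
# Illustrative computation of the dose, efficiency and purity quantities the
# DEPA classification builds on (values per microlitre, volumes in mL; all
# numbers are made-up assumptions).
def depa_quantities(prp_platelet_conc_per_ul, prp_volume_ml,
                    blood_platelet_conc_per_ul, blood_volume_ml,
                    prp_rbc_per_ul, prp_wbc_per_ul):
    injected_dose = prp_platelet_conc_per_ul * prp_volume_ml * 1e3        # platelets injected
    collected = blood_platelet_conc_per_ul * blood_volume_ml * 1e3        # platelets drawn
    efficiency = injected_dose / collected                                # fraction recovered
    purity = prp_platelet_conc_per_ul / (
        prp_platelet_conc_per_ul + prp_rbc_per_ul + prp_wbc_per_ul)       # platelets vs all cells
    return injected_dose / 1e9, efficiency, purity

dose_bn, eff, pur = depa_quantities(1_200_000, 4, 250_000, 30, 50_000, 10_000)
print(f"dose = {dose_bn:.2f} billion platelets, efficiency = {eff:.0%}, purity = {pur:.0%}")
```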

  14. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  15. Formalization of Technological Knowledge in the Field of Metallurgy using Document Classification Tools Supported with Semantic Techniques

    Directory of Open Access Journals (Sweden)

    Regulski K.

    2017-06-01

    The process of knowledge formalization is an essential part of decision support systems development. Creating a technological knowledge base in the field of metallurgy encountered problems in acquiring and codifying reusable computer artifacts based on text documents. The aim of the work was to adapt algorithms for the classification of documents and to develop a method for the semantic integration of the created repository. The authors used artificial intelligence tools: latent semantic indexing, rough sets, association rules learning, and ontologies as a tool for integration. The developed methodology allowed for the creation of a semantic knowledge base on the basis of natural-language documents in the field of metallurgy.
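
    A minimal sketch of the latent semantic indexing step is given below: documents are vectorised with TF-IDF and projected into a low-dimensional concept space. The toy metallurgical corpus, the component count and the use of scikit-learn are illustrative assumptions.

```python
# Latent semantic indexing in miniature: TF-IDF followed by truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "heat treatment of low-alloy steel castings",
    "continuous casting defects and mould powder selection",
    "austempering parameters for ductile iron",
    "slag composition control in the basic oxygen furnace",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)     # latent semantic space
doc_concepts = lsi.fit_transform(tfidf)                # documents in concept space
print(doc_concepts.round(2))                           # basis for similarity search or classification
```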

  16. Automatic earthquake detection and classification with continuous hidden Markov models: a possible tool for monitoring Las Canadas caldera in Tenerife

    Energy Technology Data Exchange (ETDEWEB)

    Beyreuther, Moritz; Wassermann, Joachim [Department of Earth and Environmental Sciences (Geophys. Observatory), Ludwig Maximilians Universitaet Muenchen, D-80333 (Germany); Carniel, Roberto [Dipartimento di Georisorse e Territorio Universitat Degli Studi di Udine, I-33100 (Italy)], E-mail: roberto.carniel@uniud.it

    2008-10-01

    A possible interaction of (volcano-) tectonic earthquakes with the continuous seismic noise recorded on the volcanic island of Tenerife was recently suggested, but existing catalogues seem to be far from self-consistent, calling for the development of automatic detection and classification algorithms. In this work we propose the adoption of a methodology based on Hidden Markov Models (HMMs), already widely used in other fields such as speech classification.
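
    The HMM-based classification idea, one model per event class with classification by maximum likelihood, can be sketched as follows. The feature sequences, model sizes and the use of the hmmlearn package are placeholder assumptions, not the detection system proposed in the paper.

```python
# Sketch of HMM classification in the speech-recognition spirit: fit one
# Gaussian HMM per class, score an unknown sequence against each model.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def train_class_model(sequences):
    """Fit one Gaussian HMM on the concatenated feature sequences of a class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

# Placeholder frame-wise feature sequences (e.g. spectral features per frame).
vt_quakes = [rng.standard_normal((80, 6)) + 1.0 for _ in range(10)]
noise = [rng.standard_normal((80, 6)) for _ in range(10)]
models = {"volcano-tectonic": train_class_model(vt_quakes),
          "noise": train_class_model(noise)}

unknown = rng.standard_normal((80, 6)) + 1.0
label = max(models, key=lambda k: models[k].score(unknown))   # highest log-likelihood wins
print(label)
```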

  17. Malware Classification Based on the Behavior Analysis and Back Propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Pan Zhi-Peng

    2016-01-01

    With the development of the Internet, malware has also spread rapidly across network systems. In order to deal with the diversity and number of variants, a number of automated behavior analysis tools have emerged as the times require. Although these tools produce detailed behavior reports of the malware, its category and criticality still need to be specified and judged manually. In this paper, we propose an automated malware classification approach based on behavior analysis. We first perform dynamic analyses to obtain detailed behavior profiles of the malware, which are then used to abstract the main features of the malware and serve as the inputs of the Back Propagation (BP) Neural Network model. The experimental results demonstrate that our classification technique is able to classify malware variants effectively and detect malware accurately.
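
    A compact sketch of the classification step is shown below: behaviour reports condensed into binary feature vectors and fed to a back-propagation-trained network. The feature names, labels and the use of scikit-learn's MLPClassifier as a stand-in for the BP network are assumptions for illustration.

```python
# Behaviour profiles -> BP-style multilayer perceptron, on invented data.
import numpy as np
from sklearn.neural_network import MLPClassifier

BEHAVIOUR_FEATURES = ["creates_autorun_key", "injects_remote_thread",
                      "contacts_irc_server", "drops_executable",
                      "disables_security_service", "encrypts_user_files"]

rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(300, len(BEHAVIOUR_FEATURES)))   # which behaviours were observed
y = rng.integers(0, 4, size=300)                              # e.g. worm/trojan/ransomware/backdoor

bp_net = MLPClassifier(hidden_layer_sizes=(16,), solver="sgd",
                       learning_rate_init=0.01, max_iter=2000, random_state=0)
bp_net.fit(X, y)
print("training accuracy:", bp_net.score(X, y))
```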

  18. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets.
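
    The cluster-then-classify idea can be sketched as follows: unlabelled clustering of the training documents, a smaller classifier per cluster, and routing of a test document to its nearest cluster. The toy corpus, the TF-IDF vectorisation and the choice of a naive Bayes classifier per cluster are illustrative assumptions, not the authors' configuration.

```python
# Cluster first (labels ignored), then train one small classifier per cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.naive_bayes import MultinomialNB

train_docs = ["transfer the funds before friday", "meeting agenda attached",
              "wire the money to this account", "quarterly sales report",
              "send account credentials urgently", "team lunch on monday"]
train_labels = [1, 0, 1, 0, 1, 0]            # 1 = suspicious, 0 = normal (toy labels)

vec = TfidfVectorizer()
X = vec.fit_transform(train_docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

cluster_models = {}
for c in range(km.n_clusters):
    idx = np.where(km.labels_ == c)[0]
    cluster_models[c] = MultinomialNB().fit(X[idx], [train_labels[i] for i in idx])

test = vec.transform(["please wire funds today"])
cluster = km.predict(test)[0]                # route to nearest cluster's model
print(cluster_models[cluster].predict(test))
```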

  19. FPGA Implementation of Blue Whale Calls Classifier Using High-Level Programming Tool

    Directory of Open Access Journals (Sweden)

    Mohammed Bahoura

    2016-02-01

    In this paper, we propose a hardware-based architecture for automatic blue whale calls classification based on short-time Fourier transform and multilayer perceptron neural network. The proposed architecture is implemented on a field programmable gate array (FPGA) using Xilinx System Generator (XSG) and the Nexys-4 Artix-7 FPGA board. This high-level programming tool allows us to design, simulate and execute the compiled design in the Matlab/Simulink environment quickly and easily. Intermediate signals obtained at various steps of the proposed system are presented for typical blue whale calls. Classification performances based on the fixed-point XSG/FPGA implementation are compared to those obtained by the floating-point Matlab simulation, using a representative database of the blue whale calls.

  20. Seismic texture classification. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Vinther, R.

    1997-12-31

    The seismic texture classification method is a seismic attribute that can both recognize general reflectivity styles and locate variations from them. The seismic texture classification performs a statistical analysis of the seismic section (or volume) aiming at describing the reflectivity. Based on a set of reference reflectivities, the seismic textures are classified. The result of the seismic texture classification is a display of seismic texture categories showing both the styles of reflectivity from the reference set and interpolations and extrapolations from these. The display is interpreted as statistical variations in the seismic data. The seismic texture classification is applied to seismic sections and volumes from the Danish North Sea representing both horizontal stratifications and salt diapirs. The attribute succeeded in recognizing both the general structure of successions and variations from these. Also, the seismic texture classification is not only able to display variations in prospective areas (1-7 sec. TWT) but can also be applied to deep seismic sections. The seismic texture classification is tested on a deep reflection seismic section (13-18 sec. TWT) from the Baltic Sea. Applied to this section, the seismic texture classification succeeded in locating the Moho, which could not be located using conventional interpretation tools. The seismic texture classification is a seismic attribute which can display general reflectivity styles and deviations from these and enhance variations not found by conventional interpretation tools. (LN)

  1. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
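
    Read as a formula, the learning problem described above combines three terms; one generic way to write it, with symbols and weighting factors that are ours rather than the authors' notation, is

    $$\min_{w} \; \sum_{i=1}^{n} \ell\bigl(y_i, f_w(x_i)\bigr) \;+\; \gamma \,\lVert w \rVert^{2} \;-\; \lambda \, \hat{I}\bigl(f_w(X);\, Y\bigr),$$

    where the first term is the classification error on the training samples, the second penalizes classifier complexity, and $\hat{I}$ denotes the entropy-based estimate of the mutual information between the classification responses and the true labels.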

  3. A scope classification of data quality requirements for food composition data.

    Science.gov (United States)

    Presser, Karl; Hinterberger, Hans; Weber, David; Norrie, Moira

    2016-02-15

    Data quality is an important issue when managing food composition data since the usage of the data can have a significant influence on policy making and further research. Although several frameworks for data quality have been proposed, general tools and measures are still lacking. As a first step in this direction, we investigated data quality requirements for an information system to manage food composition data, called FoodCASE. The objective of our investigation was to find out if different requirements have different impacts on the intrinsic data quality that must be regarded during data quality assessment and how these impacts can be described. We refer to the resulting classification with its categories as the scope classification of data quality requirements. As proof of feasibility, the scope classification has been implemented in the FoodCASE system. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. A New Tool for Climatic Analysis Using the Koppen Climate Classification

    Science.gov (United States)

    Larson, Paul R.; Lohrengel, C. Frederick, II

    2011-01-01

    The purpose of climate classification is to help make order of the seemingly endless spatial distribution of climates. The Koppen classification system in a modified format is the most widely applied system in use today. This system may not be the best nor most complete climate classification that can be conceived, but it has gained widespread…

  5. New proposals for the international classification of diseases-11 revision of pain diagnoses

    DEFF Research Database (Denmark)

    Rief, Winfried; Kaasa, Stein; Jensen, Rigmor

    2012-01-01

    The representation of pain diagnoses in current classification systems like International Classification of Diseases (ICD)-10 and Diagnostic and Statistical Manual of Mental Disorders (DSM)-IV does not adequately reflect the state of the art of pain research, and does not sufficiently support the clinical management and research programs for pain conditions. Moreover, there is an urgent need to harmonize classification of pain syndromes of special expert groups (eg, International Classification of Headache Disorders) and general classification systems (eg, ICD-11, DSM-V). Therefore, this paper...

  6. New tools for evaluating LQAS survey designs

    OpenAIRE

    Hund, Lauren

    2014-01-01

    Lot Quality Assurance Sampling (LQAS) surveys have become increasingly popular in global health care applications. Incorporating Bayesian ideas into LQAS survey design, such as using reasonable prior beliefs about the distribution of an indicator, can improve the selection of design parameters and decision rules. In this paper, a joint frequentist and Bayesian framework is proposed for evaluating LQAS classification accuracy and informing survey design parameters. Simple software tools are pr...

  7. Machine learning algorithms for mode-of-action classification in toxicity assessment.

    Science.gov (United States)

    Zhang, Yile; Wong, Yau Shu; Deng, Jian; Anton, Cristina; Gabos, Stephan; Zhang, Weiping; Huang, Dorothy Yu; Jin, Can

    2016-01-01

    Real Time Cell Analysis (RTCA) technology is used to monitor cellular changes continuously over the entire exposure period. Combined with different test concentrations, the profiles have potential for probing the mode of action (MOA) of the tested substances. In this paper, we present machine learning approaches for MOA assessment. Computational tools based on artificial neural networks (ANN) and support vector machines (SVM) are developed to analyze the time-concentration response curves (TCRCs) of human cell lines responding to tested chemicals. The techniques are capable of learning data from given TCRCs with known MOA information and then making MOA classifications for unknown toxicity. A novel data processing step based on the wavelet transform is introduced to extract important features from the original TCRC data. From the dose response curves, a time interval leading to a higher classification success rate can be selected as input to enhance the performance of the machine learning algorithm. This is particularly helpful when handling cases with limited and imbalanced data. The validation of the proposed method is demonstrated by the supervised learning algorithm applied to the exposure data of the HepG2 cell line to 63 chemicals with 11 concentrations in each test case. Classification success rates in the range of 85 to 95% are obtained using SVM for MOA classification in cases with two to four clusters. The wavelet transform is capable of capturing important features of TCRCs for MOA classification. The proposed SVM scheme incorporated with the wavelet transform has great potential for large-scale MOA classification and high-throughput chemical screening.
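
    The processing chain described above can be sketched in a few lines: wavelet decomposition of a time-concentration response curve (TCRC) into compact features, followed by an SVM assigning the mode of action. The synthetic curves, the db4 wavelet, the decomposition level and the use of PyWavelets and scikit-learn are illustrative assumptions.

```python
# Wavelet features of synthetic TCRCs fed to an SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def tcrc_features(curve, wavelet="db4", level=3):
    """Keep the coarse approximation coefficients as a low-dimensional feature vector."""
    coeffs = pywt.wavedec(curve, wavelet, level=level)
    return coeffs[0]

rng = np.random.default_rng(3)
curves = rng.standard_normal((60, 128)).cumsum(axis=1)   # 60 synthetic response curves
labels = rng.integers(0, 2, 60)                          # two placeholder MOA clusters

X = np.array([tcrc_features(c) for c in curves])
clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```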

  8. Development of Tier 1 screening tool for soil and groundwater vulnerability assessment in Korea using classification algorithm in a neural network

    Science.gov (United States)

    Shin, K. H.; Kim, K. H.; Ki, S. J.; Lee, H. G.

    2017-12-01

    The vulnerability assessment tool at a Tier 1 level, although not often used for regulatory purposes, helps establish pollution prevention and management strategies in the areas of potential environmental concern such as soil and ground water. In this study, the Neural Network Pattern Recognition Tool embedded in MATLAB was used to allow the initial screening of soil and groundwater pollution based on data compiled across about 1000 previously contaminated sites in Korea. The input variables included a series of parameters which were tightly related to downward movement of water and contaminants through soil and ground water, whereas multiple classes were assigned to the sum of concentrations of major pollutants detected. Results showed that in accordance with diverse pollution indices for soil and ground water, pollution levels in both media were strongly modulated by site-specific characteristics such as intrinsic soil and other geologic properties, in addition to pollution sources and rainfall. However, classification accuracy was very sensitive to the number of classes defined as well as the types of the variables incorporated, requiring careful selection of input variables and output categories. Therefore, we believe that the proposed methodology is used not only to modify existing pollution indices so that they are more suitable for addressing local vulnerability, but also to develop a unique assessment tool to support decision making based on locally or nationally available data. This study was funded by a grant from the GAIA project(2016000560002), Korea Environmental Industry & Technology Institute, Republic of Korea.

  9. Stellar Spectral Classification with Locality Preserving Projections ...

    Indian Academy of Sciences (India)

    With the help of computer tools and algorithms, automatic stellar spectral classification has become an area of current interest. The process of stellar spectral classification mainly includes two steps: dimension reduction and classification. As a popular dimensionality reduction technique, Principal Component Analysis (PCA) ...

  10. An Artificial Intelligence Classification Tool and Its Application to Gamma-Ray Bursts

    Science.gov (United States)

    Hakkila, Jon; Haglin, David J.; Roiger, Richard J.; Giblin, Timothy; Paciesas, William S.; Pendleton, Geoffrey N.; Mallozzi, Robert S.

    2004-01-01

    Despite being the most energetic phenomenon in the known universe, the astrophysics of gamma-ray bursts (GRBs) has still proven difficult to understand. It has only been within the past five years that the GRB distance scale has been firmly established, on the basis of a few dozen bursts with x-ray, optical, and radio afterglows. The afterglows indicate source redshifts of z=1 to z=5, total energy outputs of roughly 10(exp 52) ergs, and energy confined to the far x-ray to near gamma-ray regime of the electromagnetic spectrum. The multi-wavelength afterglow observations have thus far provided more insight on the nature of the GRB mechanism than the GRB observations; far more papers have been written about the few observed gamma-ray burst afterglows in the past few years than about the thousands of detected gamma-ray bursts. One reason the GRB central engine is still so poorly understood is that GRBs have complex, overlapping characteristics that do not appear to be produced by one homogeneous process. At least two subclasses have been found on the basis of duration, spectral hardness, and fluence (time integrated flux); Class 1 bursts are softer, longer, and brighter than Class 2 bursts (with two second durations indicating a rough division). A third GRB subclass, overlapping the other two, has been identified using statistical clustering techniques; Class 3 bursts are intermediate between Class 1 and Class 2 bursts in brightness and duration, but are softer than Class 1 bursts. We are developing a tool to aid scientists in the study of GRB properties. In the process of developing this tool, we are building a large gamma-ray burst classification database. We are also scientifically analyzing some GRB data as we develop the tool. Tool development thus proceeds in tandem with the dataset for which it is being designed. The tool invokes a modified KDD (Knowledge Discovery in Databases) process, which is described as follows.

  11. Hydrological Classification, a Practical Tool for Mangrove Restoration.

    Science.gov (United States)

    Van Loon, Anne F; Te Brake, Bram; Van Huijgevoort, Marjolein H J; Dijksma, Roel

    2016-01-01

    Mangrove restoration projects, aimed at restoring important values of mangrove forests after degradation, often fail because hydrological conditions are disregarded. We present a simple, but robust methodology to determine hydrological suitability for mangrove species, which can guide restoration practice. In 15 natural and 8 disturbed sites (i.e. disused shrimp ponds) in three case study regions in south-east Asia, water levels were measured and vegetation species composition was determined. Using an existing hydrological classification for mangroves, sites were classified into hydrological classes, based on duration of inundation, and vegetation classes, based on occurrence of mangrove species. For the natural sites hydrological and vegetation classes were similar, showing clear distribution of mangrove species from wet to dry sites. Application of the classification to disturbed sites showed that in some locations hydrological conditions had been restored enough for mangrove vegetation to establish, in some locations hydrological conditions were suitable for various mangrove species but vegetation had not established naturally, and in some locations hydrological conditions were too wet for any mangrove species (natural or planted) to grow. We quantified the effect that removal of obstructions such as dams would have on the hydrology and found that failure of planting at one site could have been prevented. The hydrological classification needs relatively little data, i.e. water levels for a period of only one lunar tidal cycle without additional measurements, and uncertainties in the measurements and analysis are relatively small. For the study locations, the application of the hydrological classification gave important information about how to restore the hydrology to suitable conditions to improve natural regeneration or to plant mangrove species, which could not have been obtained by estimating elevation only. Based on this research a number of recommendations

  13. Central Sensitization-Based Classification for Temporomandibular Disorders: A Pathogenetic Hypothesis

    Directory of Open Access Journals (Sweden)

    Annalisa Monaco

    2017-01-01

    Dysregulation of the Autonomic Nervous System (ANS) and central pain pathways in temporomandibular disorders (TMD) is supported by growing evidence. Authors include some forms of TMD among central sensitization syndromes (CSS), a group of pathologies characterized by central morphofunctional alterations. The Central Sensitization Inventory (CSI) is useful for clinical diagnosis. Clinical examination and the CSI cannot identify the central site(s) affected in these diseases. Ultralow frequency transcutaneous electrical nerve stimulation (ULFTENS) is extensively used in TMD and in dental clinical practice because of its effects on descending pain modulation pathways. The Diagnostic Criteria for TMD (DC/TMD) are the most accurate tool for diagnosis and classification of TMD. However, it includes the CSI to investigate central aspects of TMD. Preliminary data on sensory ULFTENS show it is a reliable tool for the study of central and autonomic pathways in TMD. An alternative classification based on the presence of Central Sensitization and on the individual response to sensory ULFTENS is proposed. TMD may be classified into 4 groups: (a) TMD with Central Sensitization, ULFTENS Responders; (b) TMD with Central Sensitization, ULFTENS Nonresponders; (c) TMD without Central Sensitization, ULFTENS Responders; (d) TMD without Central Sensitization, ULFTENS Nonresponders. This pathogenic classification of TMD may help to differentiate therapy and aetiology.

  14. Metadata Dictionary Database: A Proposed Tool for Academic Library Metadata Management

    Science.gov (United States)

    Southwick, Silvia B.; Lampert, Cory

    2011-01-01

    This article proposes a metadata dictionary (MDD) be used as a tool for metadata management. The MDD is a repository of critical data necessary for managing metadata to create "shareable" digital collections. An operational definition of metadata management is provided. The authors explore activities involved in metadata management in…

  15. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    Science.gov (United States)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although the sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high dimensional features and manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weight and bias. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR via minimizing the training error and the regressor value. Furthermore, the extended multi-attribute profiles (EMAPs) are utilized for extracting both the spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, the logistic regression via the variable splitting and the augmented Lagrangian (LORSAL) is adopted in the proposed framework for reducing the computational time. Experiments are conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, which have shown the fast and robust performance of the proposed ESMLR framework.
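
    The first two ideas above, projecting the spectra through a randomly generated weight and bias layer and then fitting a multinomial logistic regression on the new features, can be sketched as below. scikit-learn's dense LogisticRegression is only a stand-in for the sparse SMLR/LORSAL solver used in the paper, and the pixel counts, band counts and labels are invented.

```python
# Random (extreme-learning-machine style) projection followed by multinomial
# logistic regression on placeholder hyperspectral pixels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pixels, n_bands, n_hidden, n_classes = 1000, 200, 300, 9

spectra = rng.standard_normal((n_pixels, n_bands))    # hyperspectral pixel vectors
labels = rng.integers(0, n_classes, n_pixels)

W = rng.standard_normal((n_bands, n_hidden))          # randomly generated weights
b = rng.standard_normal(n_hidden)                     # randomly generated bias
hidden = np.tanh(spectra @ W + b)                     # projected feature space

clf = LogisticRegression(max_iter=1000).fit(hidden, labels)
print("training accuracy:", clf.score(hidden, labels))
```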

  16. A software tool for ecosystem services assessments

    Science.gov (United States)

    Riegels, Niels; Klinting, Anders; Butts, Michael; Middelboe, Anne Lise; Mark, Ole

    2017-04-01

    proposed project can be estimated to determine whether the project affects drivers, pressures, states or a combination of these. • In part III, information about impacts on drivers, pressures, and states is used to identify ESS impacted by a proposed project. Potential beneficiaries of impacted ESS are also identified. • In part IV, changes in ESS are estimated. These estimates include changes in the provision of ESS, the use of ESS, and the value of ESS. • A sustainability assessment in Part V estimates the broader impact of a proposed project according to social, environmental, governance and other criteria. The ESS evaluation software tool is designed to assist an evaluation or study leader carrying out an ESS assessment. The tool helps users move through the logic of the ESS evaluation and make sense of relationships between elements of the DPSIR framework, the CICES classification scheme, and the FEGS approach. The tool also provides links to useful indicators and assessment methods in order to help users quantify changes in ESS and ESS values. The software tool is developed in collaboration with the DESSIN user group, who will use the software to estimate changes in ESS resulting from the implementation of green technologies addressing water quality and water scarcity issues. Although the software is targeted to this user group, it will be made available for free to the public after the conclusion of the project.

  17. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  18. Classification of multiple sclerosis lesions using adaptive dictionary learning.

    Science.gov (United States)

    Deshpande, Hrishikesh; Maurel, Pierre; Barillot, Christian

    2015-12-01

    This paper presents a sparse representation and an adaptive dictionary learning based method for automated classification of multiple sclerosis (MS) lesions in magnetic resonance (MR) images. Manual delineation of MS lesions is a time-consuming task, requiring neuroradiology experts to analyze huge volume of MR data. This, in addition to the high intra- and inter-observer variability necessitates the requirement of automated MS lesion classification methods. Among many image representation models and classification methods that can be used for such purpose, we investigate the use of sparse modeling. In the recent years, sparse representation has evolved as a tool in modeling data using a few basis elements of an over-complete dictionary and has found applications in many image processing tasks including classification. We propose a supervised classification approach by learning dictionaries specific to the lesions and individual healthy brain tissues, which include white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF). The size of the dictionaries learned for each class plays a major role in data representation but it is an even more crucial element in the case of competitive classification. Our approach adapts the size of the dictionary for each class, depending on the complexity of the underlying data. The algorithm is validated using 52 multi-sequence MR images acquired from 13 MS patients. The results demonstrate the effectiveness of our approach in MS lesion classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
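
    A schematic version of class-specific dictionary learning is sketched below: one dictionary per tissue class, with a test patch assigned to the class whose dictionary reconstructs it with the smallest sparse-coding error. The patch data, dictionary sizes and the use of scikit-learn's MiniBatchDictionaryLearning are illustrative assumptions only.

```python
# One learned dictionary per class; classify a patch by lowest reconstruction error.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patch_dim = 5 * 5 * 5                                    # flattened 3-D MR patches (assumed)
classes = ["lesion", "WM", "GM", "CSF"]
train = {c: rng.standard_normal((200, patch_dim)) + i    # placeholder training patches per class
         for i, c in enumerate(classes)}

dictionaries = {c: MiniBatchDictionaryLearning(n_components=30, alpha=1.0,
                                               random_state=0).fit(X)
                for c, X in train.items()}

def classify(patch):
    errors = {}
    for c, dico in dictionaries.items():
        code = dico.transform(patch.reshape(1, -1))      # sparse code for this dictionary
        recon = code @ dico.components_
        errors[c] = np.linalg.norm(patch - recon)
    return min(errors, key=errors.get)

print(classify(rng.standard_normal(patch_dim)))          # typically "lesion" for this toy data
```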

  19. A typology of educationally focused medical simulation tools.

    Science.gov (United States)

    Alinier, Guillaume

    2007-10-01

    The concept of simulation as an educational tool in healthcare is not a new idea, but its use has really blossomed over the last few years. This enthusiasm is partly driven by an attempt to increase patient safety and also because the technology is becoming more affordable and advanced. Simulation is becoming more commonly used for initial training purposes as well as for continuing professional development, but people often have very different perceptions of the definition of the term simulation, especially in an educational context. This highlights the need for a clear classification not only of the technology available but also of the method and teaching approach employed. The aims of this paper are to discuss the current range of simulation approaches and propose a clear typology of simulation teaching aids. Commonly used simulation techniques have been identified and discussed in order to create a classification that reports simulation techniques, their usual mode of delivery, the skills they can address, the facilities required, their typical use, and their pros and cons. This paper presents a clear classification scheme of educational simulation tools and techniques with six different technological levels. They are, respectively: written simulations, three-dimensional models, screen-based simulators, standardized patients, intermediate fidelity patient simulators, and interactive patient simulators. This typology allows the accurate description of the simulation technology and the teaching methods applied. Thus valid comparisons of educational tools can be made as to their potential effectiveness and verisimilitude at different training stages. The proposed typology of simulation methodologies available for educational purposes provides a helpful guide for educators and participants, which should help them to realise the potential learning outcomes at different technological simulation levels in relation to the training approach employed. It should also be a useful

  20. 75 FR 78213 - Proposed Information Collection; Comment Request; 2012 Economic Census Classification Report for...

    Science.gov (United States)

    2010-12-15

    ... 8-digit North American Industry Classification System (NAICS) based code for use in the 2012... classification due to changes in NAICS for 2012. Collecting this classification information will ensure the... the reporting burden on sampled sectors. Proper NAICS classification data ensures high quality...

  1. Probabilistic topic modeling for the analysis and classification of genomic sequences

    Science.gov (United States)

    2015-01-01

    Background: Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mers representation and text mining techniques. Methods: The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied on DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions: We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and the Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches very similar results to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and it exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased. PMID:25916734
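
    A toy version of the pipeline is sketched below: sequences are decomposed into overlapping k-mers, treated as the "words" of a document and modelled with Latent Dirichlet Allocation, so that each sequence is represented by a topic mixture usable by any downstream classifier. The sequences, the value of k, the topic count and the use of scikit-learn are arbitrary illustrative choices.

```python
# k-mer "documents" modelled with LDA; each sequence becomes a topic mixture.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def to_kmers(seq, k=4):
    """Turn a DNA string into a space-separated list of overlapping k-mers."""
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

sequences = ["ACGTACGTGGCATTACG", "TTGACCGTAAGGCTTAC",
             "ACGTACGATGCATTACG", "TTGACCGTTAGGCTTAC"]
docs = [to_kmers(s) for s in sequences]

counts = CountVectorizer(analyzer="word", token_pattern=r"\b\w+\b").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mixtures = lda.fit_transform(counts)        # one topic distribution per sequence
print(topic_mixtures.round(2))
```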

  2. A proposed classification system for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1987-06-01

    This report presents a proposal for quantitative and generally applicable risk-based definitions of high-level and other radioactive wastes. On the basis of historical descriptions and definitions of high-level waste (HLW), in which HLW has been defined in terms of its source as waste from reprocessing of spent nuclear fuel, we propose a more general definition based on the concept that HLW has two distinct attributes: HLW is (1) highly radioactive and (2) requires permanent isolation. This concept leads to a two-dimensional waste classification system in which one axis, related to "requires permanent isolation," is associated with long-term risks from waste disposal and the other axis, related to "highly radioactive," is associated with shorter-term risks due to high levels of decay heat and external radiation. We define wastes that require permanent isolation as wastes with concentrations of radionuclides exceeding the Class-C limits that are generally acceptable for near-surface land disposal, as specified in the US Nuclear Regulatory Commission's rulemaking 10 CFR Part 61 and its supporting documentation. HLW then is waste requiring permanent isolation that also is highly radioactive, and we define "highly radioactive" as a decay heat (power density) in the waste greater than 50 W/m³ or an external radiation dose rate at a distance of 1 m from the waste greater than 100 rem/h (1 Sv/h), whichever is the more restrictive. This proposal also results in a definition of Transuranic (TRU) Waste and Equivalent as waste that requires permanent isolation but is not highly radioactive, and a definition of low-level waste (LLW) as waste that does not require permanent isolation without regard to whether or not it is highly radioactive.

  3. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since this mask is to later act as a template for considerable number of dies on wafer. Defects on the initial blank substrate, and subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductors industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical defects or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment compared to the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of Automatic Defect Classification (ADC) product at MP Mask

  4. Forensic age assessment by 3.0T MRI of the knee: proposal of a new MRI classification of ossification stages.

    Science.gov (United States)

    Vieth, Volker; Schulz, Ronald; Heindel, Walter; Pfeiffer, Heidi; Buerke, Boris; Schmeling, Andreas; Ottow, Christian

    2018-03-13

    To explore the possibility of determining majority via a morphology-based examination of the epiphyseal-diaphyseal fusion by 3.0 T magnetic resonance imaging (MRI), a prospective cross-sectional study developing and applying a new stage classification was conducted. 344 male and 350 female volunteers of German nationality between the ages of 12-24 years were scanned between May 2013 and June 2015. A 3.0 T MRI scanner was used, acquiring a T1-weighted (T1-w) turbo spin-echo sequence (TSE) and a T2-weighted (T2-w) TSE sequence with fat suppression by spectral pre-saturation with inversion recovery (SPIR). The gathered information was sifted and a five-stage classification was formulated as a hypothesis. The images were then assessed using this classification. The relevant statistics were defined, the intra- and interobserver agreements were determined, and the differences between the sexes were analysed. The application of the new classification made it possible to correctly assess majority in both sexes by the examination of the epiphyses of the knee joint. The intra- and interobserver agreement levels were very good (κ > 0.80). The Mann-Whitney-U Test implied significant sex-related differences for most stages. Applying the presented MRI classification, it is possible to determine the completion of the 18th year of life in either sex by 3.0 T MRI of the knee joint. • Based on prospective referential data a new MRI classification was formulated. • The setting allows assessment of the age of an individual's skeletal development. • The classification scheme allows the reliable determination of majority in both sexes. • The staging shows a high reproducibility for instructed and trained professional personnel. • The proposed classification is likely to be adaptable to other long bone epiphyses.

  5. [New International Classification of Chronic Pancreatitis (M-ANNHEIM multifactor classification system, 2007): principles, merits, and demerits].

    Science.gov (United States)

    Tsimmerman, Ia S

    2008-01-01

    The new International Classification of Chronic Pancreatitis (designated as M-ANNHEIM) proposed by a group of German specialists in late 2007 is reviewed. All its sections are subjected to analysis (risk group categories, clinical stages and phases, variants of clinical course, diagnostic criteria for "established" and "suspected" pancreatitis, instrumental methods and functional tests used in the diagnosis, evaluation of the severity of the disease using a scoring system, stages of elimination of pain syndrome). The new classification is compared with the earlier classification proposed by the author. Its merits and demerits are discussed.

  6. Butterfly Classification by HSI and RGB Color Models Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Jorge E. Grajales-Múnera

    2013-11-01

    Full Text Available This study aims at the classification of butterfly species through the implementation of neural networks and image processing. A total of 9 species of the genus Morpho, which has blue as a characteristic color, are processed. For butterfly segmentation we used image processing tools such as binarization, edge processing and mathematical morphology. For data processing, RGB values are obtained for every image and converted to the HSI color model to identify blue pixels and obtain the data for the proposed neural networks: back-propagation and perceptron. For analysis and verification of results, confusion matrices are built and analyzed using the results of the neural networks with the lowest error levels. We obtain error levels close to 1% in the classification of some butterfly species.

  7. Classification of hydration status using electrocardiogram and machine learning

    Science.gov (United States)

    Kaveh, Anthony; Chung, Wayne

    2013-10-01

    The electrocardiogram (ECG) has been used extensively in clinical practice for decades to non-invasively characterize the health of heart tissue; however, these techniques are limited to time domain features. We propose a machine classification system using support vector machines (SVM) that uses temporal and spectral information to classify health state beyond cardiac arrhythmias. Our method uses single lead ECG to classify volume depletion (or dehydration) without the lengthy and costly blood analysis tests traditionally used for detecting dehydration status. Our method builds on established clinical ECG criteria for identifying electrolyte imbalances and lends itself to automated, computationally efficient implementation. The method was tested on the MIT-BIH PhysioNet database to validate this purely computational method for expedient disease-state classification. The results show high sensitivity, supporting use as a cost- and time-effective screening tool.

  8. Automated radial basis function neural network based image classification system for diabetic retinopathy detection in retinal images

    Science.gov (United States)

    Anitha, J.; Vijila, C. Kezi Selva; Hemanth, D. Jude

    2010-02-01

    Diabetic retinopathy (DR) is a chronic eye disease for which early detection is highly essential to avoid severe outcomes. Image processing of retinal images emerges as a feasible tool for this early diagnosis. Digital image processing techniques involve image classification, which is a significant technique for detecting abnormality in the eye. Various automated classification systems have been developed in recent years, but most of them lack high classification accuracy. Artificial neural networks are the widely preferred artificial intelligence technique since they yield superior results in terms of classification accuracy. In this work, a Radial Basis Function (RBF) neural network based bi-level classification system is proposed to differentiate abnormal DR images from normal retinal images. The results are analyzed in terms of classification accuracy, sensitivity and specificity. A comparative analysis is performed against a probabilistic classifier, namely the Bayesian classifier, to show the superior nature of the neural classifier. Experimental results show promising performance for the neural classifier in terms of these measures.

  9. Modified Angle's Classification for Primary Dentition.

    Science.gov (United States)

    Chandranee, Kaushik Narendra; Chandranee, Narendra Jayantilal; Nagpal, Devendra; Lamba, Gagandeep; Choudhari, Purva; Hotwani, Kavita

    2017-01-01

    This study aims to propose a modification of Angle's classification for primary dentition and to assess its applicability in children from Central India (Nagpur). A modification of Angle's classification has been proposed for application in primary dentition. Small roman numerals i/ii/iii are used for primary dentition notation to represent Angle's Class I/II/III molar relationships as in the permanent dentition, respectively. To assess the applicability of the modified Angle's classification, a cross-sectional sample of 2000 preschool children from central India, 3-6 years of age and residing in the Nagpur metropolitan city of Maharashtra state, was selected randomly as per the inclusion and exclusion criteria. A majority of children (93.35%) were found to have bilateral Class i, followed by 2.5% with bilateral Class ii and 0.2% with bilateral half-cusp Class iii molar relationships as per the modified Angle's classification for primary dentition. About 3.75% of children had various combinations of Class ii relationships and 0.2% had a Class iii subdivision relationship. A modification of Angle's classification for application in primary dentition has been proposed. A cross-sectional investigation using the new classification revealed 6.25% Class ii and 0.4% Class iii molar relationship cases in a preschool children population in the metropolitan city of Nagpur. Application of the modified Angle's classification to other population groups is warranted to validate its routine application in clinical pediatric dentistry.

  10. Available Tools and Challenges Classifying Cutting-Edge and Historical Astronomical Documents

    Science.gov (United States)

    Lagerstrom, Jill

    2015-08-01

    The STScI Library assists the Science Policies Division in evaluating and choosing scientific keywords and categories for proposals for the Hubble Space Telescope mission and the upcoming James Webb Space Telescope mission. In addition we are often faced with the question “what is the shape of the astronomical literature?” However, subject classification in astronomy has not been cultivated in recent times. This talk will address the available tools and challenges of classifying cutting-edge as well as historical astronomical documents. In the process, we will give an overview of current and upcoming practices of subject classification in astronomy.

  11. Colombia: Territorial classification

    International Nuclear Information System (INIS)

    Mendoza Morales, Alberto

    1998-01-01

    The article is about the approaches to territorial classification, thematic axes, handling principles and territorial occupation, political and administrative units, and administrative regions, among other topics. Territorial classification is understood as the spatial distribution over the territory of the country of the geographical configurations, the human communities, the political-administrative units and the uses of the soil, urban and rural, existing and proposed

  12. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  13. Multivariate Approaches to Classification in Extragalactic Astronomy

    Directory of Open Access Journals (Sweden)

    Didier Fraix-Burnet

    2015-08-01

    Full Text Available Clustering objects into synthetic groups is a natural activity of any science. Astrophysics is not an exception and is now facing a deluge of data. For galaxies, the one-century old Hubble classification and the Hubble tuning fork are still largely in use, together with numerous mono- or bivariate classifications most often made by eye. However, a classification must be driven by the data, and sophisticated multivariate statistical tools are used more and more often. In this paper we review these different approaches in order to situate them in the general context of unsupervised and supervised learning. We insist on the astrophysical outcomes of these studies to show that multivariate analyses provide an obvious path toward a renewal of our classification of galaxies and are invaluable tools to investigate the physics and evolution of galaxies.

  14. Proposal of a trigger tool to assess adverse events in dental care.

    Science.gov (United States)

    Corrêa, Claudia Dolores Trierweiler Sampaio de Oliveira; Mendes, Walter

    2017-11-21

    The aim of this study was to propose a trigger tool for research of adverse events in outpatient dentistry in Brazil. The tool was elaborated in two stages: (i) to build a preliminary set of triggers, a literature review was conducted to identify the composition of trigger tools used in other areas of health and the principal adverse events found in dentistry; (ii) to validate the preliminarily constructed triggers a panel of experts was organized using the modified Delphi method. Fourteen triggers were elaborated in a tool with explicit criteria to identify potential adverse events in dental care, essential for retrospective patient chart reviews. Studies on patient safety in dental care are still incipient when compared to other areas of health care. This study intended to contribute to the research in this field. The contribution by the literature and guidance from the expert panel allowed elaborating a set of triggers to detect adverse events in dental care, but additional studies are needed to test the instrument's validity.

  15. Development of a classification system for cup anemometers - CLASSCUP

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels

    2003-01-01

    the objectives to quantify the errors associated with the use of cup anemometers, and to determine the requirements for an optimum design of a cup anemometer, and to develop a classification system for quantification of systematic errors of cup anemometers. The present report describes this proposed...... classification system. A classification method for cup anemometers has been developed, which proposes general external operational ranges to be used. A normal category range connected to ideal sites of the IEC power performance standard was made, and another extended category range for complex terrain...... was proposed. General classification indices were proposed for all types of cup anemometers. As a result of the classification, the cup anemometer will be assigned to a certain class: 0.5, 1, 2, 3 or 5 with corresponding intrinsic errors (%) as a vector instrument (3D) or as a horizontal instrument (2D...

  16. A proposed classification system for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1989-01-01

    On the basis of the definition of high-level wastes (HLW) in the Nuclear Waste Policy Act of 1982 and previous descriptions of reprocessing wastes, a definition is proposed based on the concept that HLW is any waste which is highly radioactive and requires permanent isolation. This conceptual definition of HLW leads to a two-dimensional waste classification system in which one axis, related to 'highly radioactive', is associated with shorter-term risks from waste management and disposal due to high levels of decay heat and external radiation, and the other axis, related to 'requires permanent isolation', is associated with longer-term risks from waste disposal. Wastes that are highly radioactive are defined quantitatively as wastes with a decay heat (power density) greater than 50 W/m3 or an external dose-equivalent rate greater than 100 rem/h (1 Sv/h) at a distance of 1 m from the waste, whichever is more restrictive. Wastes that require permanent isolation are defined quantitatively as wastes with concentrations of radionuclides greater than the Class-C limits that are generally acceptable for near-surface land disposal, as obtained from the Nuclear Regulatory Commission's 10 CFR Part 61 and its associated methodology. This proposal leads to similar definitions of two other waste classes: transuranic (TRU) waste and equivalent is any waste that requires permanent isolation but is not highly radioactive; and low-level waste (LLW) is any waste that does not require permanent isolation, without regard to whether or not it is highly radioactive. 31 refs.; 3 figs.; 4 tabs
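
    The two-axis rule described in this abstract can be written as a small decision function. The sketch below uses the quantitative limits quoted above (50 W/m3 decay heat, 100 rem/h at 1 m, exceedance of Class-C concentration limits) and, for illustration only, reduces the Class-C concentration test to a pre-computed boolean flag.

    # Sketch of the proposed two-axis waste classification (illustrative, not regulatory text).
    def classify_waste(decay_heat_w_m3, dose_rate_rem_h, exceeds_class_c):
        highly_radioactive = decay_heat_w_m3 > 50 or dose_rate_rem_h > 100
        requires_isolation = exceeds_class_c
        if highly_radioactive and requires_isolation:
            return "HLW"
        if requires_isolation:
            return "TRU waste and equivalent"
        return "LLW"

    print(classify_waste(decay_heat_w_m3=200, dose_rate_rem_h=500, exceeds_class_c=True))   # HLW
    print(classify_waste(decay_heat_w_m3=1, dose_rate_rem_h=0.5, exceeds_class_c=True))     # TRU waste and equivalent
    print(classify_waste(decay_heat_w_m3=1, dose_rate_rem_h=0.5, exceeds_class_c=False))    # LLW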

  17. Couinaud's classification v.s. Cho's classification. Their feasibility in the right hepatic lobe

    International Nuclear Information System (INIS)

    Shioyama, Yasukazu; Ikeda, Hiroaki; Sato, Motohito; Yoshimi, Fuyo; Kishi, Kazushi; Sato, Morio; Kimura, Masashi

    2008-01-01

    The objective of this study was to investigate whether the new classification system proposed by Cho is feasible for clinical use, compared with the classical Couinaud classification. One hundred consecutive abdominal CT examinations were studied using a 64- or 8-slice multislice CT scanner, and three-dimensional portal vein images were created for analysis on the workstation. We applied both Cho's classification and the classical Couinaud classification to each case according to their definitions. Three diagnostic radiologists assessed their feasibility on a scale from category one (unable to classify) to five (clear classification in full agreement with the original classification criteria). In each case, we also tried to judge whether Cho's or the classical Couinaud classification could more easily transmit anatomical information. Analyzers could classify portal veins clearly (category 5) in 77 to 80% of cases, and clearly (category 5) or almost clearly (category 4) in 86-93% of cases, with both classifications. In the feasibility of classification, there was no statistically significant difference between the two classifications. In 15 cases we felt that Couinaud's classification was more convenient for transmitting anatomical information to physicians than Cho's, because in these cases we noticed two large portal veins ramifying cranially and caudally from the right main portal vein, so that we could not classify P5 as a branch of the antero-ventral segment (AVS). Conversely, in 17 cases we felt Cho's classification was more convenient because we could not divide the right posterior branch into P6 and P7; in these cases the right posterior portal vein ramified into several small branches. The anterior fissure vein was clearly noticed in only 60 cases. Comparing the classical Couinaud classification and Cho's in feasibility of classification, there was no statistically significant difference. We propose we routinely report hepatic anatomy with the classical Couinaud classification and in the preoperative cases we

  18. Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm

    Directory of Open Access Journals (Sweden)

    E. Parvinnia

    2014-01-01

    Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as epilepsy, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable, due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and autoregressive (AR) model coefficients, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor classifier and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
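
    As a minimal illustration of the weighted-distance nearest-neighbour idea described above, the sketch below scales each training sample's distance by a per-sample reliability weight before taking the nearest neighbour. The feature values, labels and weighting rule are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Weighted-distance 1-NN sketch: each training sample i carries a weight w[i] that scales
    # its distance, so unreliable samples (e.g. artefact-laden EEG epochs) influence the
    # decision less.
    def wdnn_predict(X_train, y_train, w, X_test):
        preds = []
        for x in X_test:
            d = np.linalg.norm(X_train - x, axis=1) / w   # larger weight -> effectively "closer"
            preds.append(y_train[np.argmin(d)])
        return np.array(preds)

    # Toy feature vectors (e.g. fractal dimension, band power) and labels -- assumed values.
    X_train = np.array([[1.2, 0.5], [1.0, 0.4], [2.1, 1.3], [2.3, 1.1]])
    y_train = np.array([0, 0, 1, 1])          # 0 = control, 1 = patient (hypothetical)
    weights = np.array([1.0, 0.6, 1.0, 0.9])  # per-sample reliability weights (assumed)
    print(wdnn_predict(X_train, y_train, weights, np.array([[2.0, 1.2]])))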

  19. A comparison of autonomous techniques for multispectral image analysis and classification

    Science.gov (United States)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. During the last years, a variety of algorithms has been developed to work with multispectral data, whose main purpose has been to perform the correct classification of the objects in the scene. The present study introduces a brief review of some classical techniques as well as a novel technique that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, which was proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the spectral responses.
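
    A minimal sketch of the classical PCA-plus-K-means pipeline reviewed above, assuming scikit-learn is available; the band count, image size and number of clusters are illustrative placeholders, not the study's data.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Reduce a multispectral cube with PCA, then group pixels into spectrally similar classes.
    bands, rows, cols = 8, 64, 64
    cube = np.random.rand(rows, cols, bands)            # stand-in for a multispectral image
    pixels = cube.reshape(-1, bands)                    # one spectrum per pixel

    scores = PCA(n_components=3).fit_transform(pixels)  # first components highlight spectral contrast
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
    class_map = labels.reshape(rows, cols)              # thematic map of spectral classes
    print(class_map.shape, np.unique(labels))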

  20. A web-based neurological pain classifier tool utilizing Bayesian decision theory for pain classification in spinal cord injury patients

    Science.gov (United States)

    Verma, Sneha K.; Chun, Sophia; Liu, Brent J.

    2014-03-01

    Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it strongly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm using a pilot study data set consisting of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain location marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using proton beam radiotherapy for treating spinal cord injury (SCI) related neuropathic pain as an alternative to invasive surgical lesioning.
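
    To illustrate the Bayesian flavour of such a classifier, the sketch below fits a naive Bayes model over categorical form attributes. The attribute encoding, labels and the use of scikit-learn's CategoricalNB are stand-ins chosen for illustration, not the study's actual form fields or algorithm.

    from sklearn.naive_bayes import CategoricalNB

    # Hypothetical encoded form attributes: pain location code, burning sensation (0/1), allodynia (0/1).
    X = [[0, 1, 1], [1, 0, 0], [0, 1, 0], [2, 0, 0], [1, 1, 1], [2, 0, 1]]
    y = [1, 0, 1, 0, 1, 0]   # 1 = neuropathic, 0 = nociceptive (hypothetical labels)

    clf = CategoricalNB().fit(X, y)
    # Posterior class probabilities for a new patient record (illustrative input).
    print(clf.predict([[0, 1, 1]]), clf.predict_proba([[0, 1, 1]]))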

  1. Conformal radiotherapy: principles and classification

    International Nuclear Information System (INIS)

    Rosenwald, J.C.; Gaboriaud, G.; Pontvert, D.

    1999-01-01

    'Conformal radiotherapy' is the name fixed by usage and given to a new form of radiotherapy resulting from the technological improvements observed during the last ten years. While this terminology is now widely used, no precise definition can be found in the literature. Conformal radiotherapy refers to an approach in which the dose distribution is more closely 'conformed' or adapted to the actual shape of the target volume. However, the achievement of a consensus on a more specific definition is hampered by various difficulties, namely in characterizing the degree of 'conformality'. We have therefore suggested a classification scheme be established on the basis of the tools and the procedures actually used for all steps of the process, i.e., from prescription to treatment completion. Our classification consists of four levels: schematically, at level 0, there is no conformation (rectangular fields); at level 1, a simple conformation takes place, on the basis of conventional 2D imaging; at level 2, a 3D reconstruction of the structures is used for a more accurate conformation; and level 3 includes research and advanced dynamic techniques. We have used our personal experience, contacts with colleagues and data from the literature to analyze all the steps of the planning process, and to define the tools and procedures relevant to a given level. The corresponding tables have been discussed and approved at the European level within the Dynarad concerted action. It is proposed that the term 'conformal radiotherapy' be restricted to procedures where all steps are at least at level 2. (author)

  2. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis

    Science.gov (United States)

    Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, the colour information of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to unstable lighting conditions and the naked eye's limited ability to capture the exact colour distribution on the tongue, especially a tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true rate classification accuracy of the proposed two-stage classification in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds. PMID:29065640

  3. A Fast SVM-Based Tongue's Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis.

    Science.gov (United States)

    Kamarudin, Nur Diyana; Ooi, Chia Yee; Kawanabe, Tadaaki; Odaguchi, Hiroshi; Kobayashi, Fuminori

    2017-01-01

    In tongue diagnosis, the colour information of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to unstable lighting conditions and the naked eye's limited ability to capture the exact colour distribution on the tongue, especially a tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true rate classification accuracy of the proposed two-stage classification in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds.

  4. Modified Angle's classification for primary dentition

    Directory of Open Access Journals (Sweden)

    Kaushik Narendra Chandranee

    2017-01-01

    Full Text Available Aim: This study aims to propose a modification of Angle's classification for primary dentition and to assess its applicability in children from Central India (Nagpur). Methods: A modification of Angle's classification has been proposed for application in primary dentition. Small roman numerals i/ii/iii are used for primary dentition notation to represent Angle's Class I/II/III molar relationships as in the permanent dentition, respectively. To assess the applicability of the modified Angle's classification, a cross-sectional sample of 2000 preschool children from central India, 3–6 years of age and residing in the Nagpur metropolitan city of Maharashtra state, was selected randomly as per the inclusion and exclusion criteria. Results: A majority of children (93.35%) were found to have bilateral Class i, followed by 2.5% with bilateral Class ii and 0.2% with bilateral half-cusp Class iii molar relationships as per the modified Angle's classification for primary dentition. About 3.75% of children had various combinations of Class ii relationships and 0.2% had a Class iii subdivision relationship. Conclusions: A modification of Angle's classification for application in primary dentition has been proposed. A cross-sectional investigation using the new classification revealed 6.25% Class ii and 0.4% Class iii molar relationship cases in a preschool children population in the metropolitan city of Nagpur. Application of the modified Angle's classification to other population groups is warranted to validate its routine application in clinical pediatric dentistry.

  5. Cluster Validity Classification Approaches Based on Geometric Probability and Application in the Classification of Remotely Sensed Images

    Directory of Open Access Journals (Sweden)

    LI Jian-Wei

    2014-08-01

    Full Text Available On the basis of the cluster validity function based on geometric probability in the literature [1, 2], we propose a cluster analysis method based on geometric probability to process large amounts of data in a rectangular area. The basic idea is top-down stepwise refinement: first categories, then subcategories. At all clustering levels, the cluster validity function based on geometric probability is used first to determine the clusters and the gathering direction, and then the centre and the border of each cluster are determined. Through TM remote sensing image classification examples, the method is compared with the supervised and unsupervised classification in ERDAS and with the cluster analysis method based on geometric probability in a two-dimensional square proposed in literature 2. Results show that the proposed method can significantly improve the classification accuracy.

  6. Host Rock Classification (HRC) system for nuclear waste disposal in crystalline bedrock

    International Nuclear Information System (INIS)

    Hagros, A.

    2006-01-01

    A new rock mass classification scheme, the Host Rock Classification system (HRC-system), has been developed for evaluating the suitability of volumes of rock mass for the disposal of high-level nuclear waste in Precambrian crystalline bedrock. To support the development of the system, the requirements of host rock to be used for disposal have been studied in detail and the significance of the various rock mass properties has been examined. The HRC-system considers both the long-term safety of the repository and the constructability in the rock mass. The system is specific to the KBS-3V disposal concept and can be used only at sites that have been evaluated to be suitable at the site scale. By using the HRC-system, it is possible to identify potentially suitable volumes within the site at several different scales (repository, tunnel and canister scales). The selection of the classification parameters to be included in the HRC-system is based on an extensive study on the rock mass properties and their various influences on the long-term safety, the constructability and the layout and location of the repository. The parameters proposed for the classification at the repository scale include fracture zones, strength/stress ratio, hydraulic conductivity and the Groundwater Chemistry Index. The parameters proposed for the classification at the tunnel scale include hydraulic conductivity, Q' and fracture zones, and the parameters proposed for the classification at the canister scale include hydraulic conductivity, Q', fracture zones, fracture width (aperture + filling) and fracture trace length. The parameter values will be used to determine the suitability classes for the volumes of rock to be classified. The HRC-system includes four suitability classes at the repository and tunnel scales and three suitability classes at the canister scale and the classification process is linked to several important decisions regarding the location and acceptability of many components of

  7. Classification and Analysis of Computer Network Traffic

    DEFF Research Database (Denmark)

    Bujlow, Tomasz

    2014-01-01

    various classification modes (decision trees, rulesets, boosting, softening thresholds) regarding the classification accuracy and the time required to create the classifier. We showed how to use our VBS tool to obtain per-flow, per-application, and per-content statistics of traffic in computer networks...

  8. A practicable approach for periodontal classification

    Science.gov (United States)

    Mittal, Vishnu; Bhullar, Raman Preet K.; Bansal, Rachita; Singh, Karanprakash; Bhalodi, Anand; Khinda, Paramjit K.

    2013-01-01

    The diagnosis and classification of periodontal diseases has long remained a dilemma. Two distinct concepts have been used to define diseases: essentialism and nominalism. The essentialistic concept implies the real existence of disease, whereas the nominalistic concept states that the names of diseases are a convenient way of stating concisely the endpoint of a diagnostic process. It generally advances from the assessment of symptoms and signs toward knowledge of causation and gives a feasible option to name a disease whose etiology is either unknown or too complex to assess in routine clinical practice. Various classifications have been proposed by the American Academy of Periodontology (AAP) in 1986, 1989 and 1999. The AAP 1999 classification is among the most widely used classifications, but it also has demerits which impede its use in day-to-day practice. Hence a classification and diagnostic system is required which can help the clinician to assess the patient's needs and provide suitable treatment in harmony with the diagnosis for that particular case. Here is an attempt to propose a practicable classification and diagnostic system of periodontal diseases for better treatment outcomes. PMID:24379855

  9. The Classification of Romanian High-Schools

    Science.gov (United States)

    Ivan, Ion; Milodin, Daniel; Naie, Lucian

    2006-01-01

    The article tackles the issue of classifying high schools from one city, one district, or from Romania as a whole. The classification criteria are presented. The National Database of Education is also presented and the application of the criteria is illustrated. An algorithm for high-school multi-rank classification is proposed in order to build classes of…

  10. Bosniak Classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens

    2014-01-01

    Background: The Bosniak classification is a diagnostic tool for the differentiation of cystic changes in the kidney. The process of categorizing renal cysts may be challenging, involving a series of decisions that may affect the final diagnosis and clinical outcome such as surgical management.... Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as “Gold Standard” (GS), established...... to the calculated weighted κ all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts categorized as Bosniak II, IIF, and III. These results show that radiologists who evaluate complex renal cysts routinely may apply the Bosniak classification...

  11. Performance of in silico prediction tools for the classification of rare BRCA1/2 missense variants in clinical diagnostics.

    Science.gov (United States)

    Ernst, Corinna; Hahnen, Eric; Engel, Christoph; Nothnagel, Michael; Weber, Jonas; Schmutzler, Rita K; Hauke, Jan

    2018-03-27

    The use of next-generation sequencing approaches in clinical diagnostics has led to a tremendous increase in data and a vast number of variants of uncertain significance that require interpretation. Therefore, prediction of the effects of missense mutations using in silico tools has become a frequently used approach. The aim of this study was to assess the reliability of in silico prediction as a basis for clinical decision making in the context of hereditary breast and/or ovarian cancer. We tested the performance of four prediction tools (Align-GVGD, SIFT, PolyPhen-2, MutationTaster2) using a set of 236 BRCA1/2 missense variants that had previously been classified by expert committees. However, a major pitfall in the creation of a reliable evaluation set for our purpose is the generally accepted classification of BRCA1/2 missense variants using the multifactorial likelihood model, which is partially based on Align-GVGD results. To overcome this drawback we identified 161 variants whose classification is independent of any previous in silico prediction. In addition to the performance as stand-alone tools we examined the sensitivity, specificity, accuracy and Matthews correlation coefficient (MCC) of combined approaches. PolyPhen-2 achieved the lowest sensitivity (0.67), specificity (0.67), accuracy (0.67) and MCC (0.39). Align-GVGD achieved the highest values of specificity (0.92), accuracy (0.92) and MCC (0.73), but was outperformed regarding its sensitivity (0.90) by SIFT (1.00) and MutationTaster2 (1.00). All tools suffered from poor specificities, resulting in an unacceptable proportion of false positive results in a clinical setting. This shortcoming could not be bypassed by combination of these tools. In the best case scenario, 138 families would be affected by the misclassification of neutral variants within the cohort of patients of the German Consortium for Hereditary Breast and Ovarian Cancer. We show that due to low specificities state-of-the-art in silico
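
    For reference, the performance measures reported above can be computed from a confusion matrix as in the sketch below; the counts are made up, not the study's data.

    import numpy as np

    # Illustrative confusion-matrix counts: TP/TN/FP/FN are hypothetical.
    TP, TN, FP, FN = 40, 80, 10, 5

    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)
    accuracy    = (TP + TN) / (TP + TN + FP + FN)
    mcc = (TP * TN - FP * FN) / np.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    print(sensitivity, specificity, accuracy, mcc)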

  12. A proposal for a pharmacokinetic interaction significance classification system (PISCS) based on predicted drug exposure changes and its potential application to alert classifications in product labelling.

    Science.gov (United States)

    Hisaka, Akihiro; Kusama, Makiko; Ohno, Yoshiyuki; Sugiyama, Yuichi; Suzuki, Hiroshi

    2009-01-01

    Pharmacokinetic drug-drug interactions (DDIs) are one of the major causes of adverse events in pharmacotherapy, and systematic prediction of the clinical relevance of DDIs is an issue of significant clinical importance. In a previous study, total exposure changes of many substrate drugs of cytochrome P450 (CYP) 3A4 caused by coadministration of inhibitor drugs were successfully predicted by using in vivo information. In order to exploit these predictions in daily pharmacotherapy, the clinical significance of the pharmacokinetic changes needs to be carefully evaluated. The aim of the present study was to construct a pharmacokinetic interaction significance classification system (PISCS) in which the clinical significance of DDIs was considered with pharmacokinetic changes in a systematic manner. Furthermore, the classifications proposed by PISCS were compared in a detailed manner with current alert classifications in the product labelling or the summary of product characteristics used in Japan, the US and the UK. A matrix table was composed by stratifying two basic parameters of the prediction: the contribution ratio of CYP3A4 to the oral clearance of substrates (CR), and the inhibition ratio of inhibitors (IR). The total exposure increase was estimated for each cell in the table by associating CR and IR values, and the cells were categorized into nine zones according to the magnitude of the exposure increase. Then, correspondences between the DDI significance and the zones were determined for each drug group considering the observed exposure changes and the current classification in the product labelling. Substrate drugs of CYP3A4 selected from three therapeutic groups, i.e. HMG-CoA reductase inhibitors (statins), calcium-channel antagonists/blockers (CCBs) and benzodiazepines (BZPs), were analysed as representative examples. The product labelling descriptions of drugs in Japan, US and UK were obtained from the websites of each regulatory body. Among 220
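
    A sketch of the exposure-change prediction that underlies the CR/IR stratification described above. The relationship AUC ratio = 1 / (1 - CR x IR) is a commonly used approximation assumed here for illustration, and the example values are hypothetical.

    # Predicted fold-increase in substrate AUC from the CYP3A4 contribution ratio (CR) of the
    # substrate and the inhibition ratio (IR) of the coadministered inhibitor (assumed model).
    def auc_ratio(cr, ir):
        return 1.0 / (1.0 - cr * ir)

    # Illustrative values only (e.g. a highly CYP3A4-dependent statin with a strong inhibitor).
    print(auc_ratio(0.9, 0.9))   # ~5.3-fold exposure increase
    print(auc_ratio(0.5, 0.9))   # ~1.8-fold exposure increase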

  13. PolSAR Land Cover Classification Based on Roll-Invariant and Selected Hidden Polarimetric Features in the Rotation Domain

    Directory of Open Access Journals (Sweden)

    Chensong Tao

    2017-07-01

    Full Text Available Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR). Target polarimetric response is strongly dependent on its orientation. Backscattering responses of the same target with different orientations to the SAR flight path may be quite different. This target orientation diversity effect hinders PolSAR image understanding and interpretation. Roll-invariant polarimetric features such as entropy, anisotropy, mean alpha angle, and total scattering power are independent of the target orientation and are commonly adopted for PolSAR image classification. On the other hand, target orientation diversity also contains rich information which may not be sensed by roll-invariant polarimetric features. In this vein, using only the roll-invariant polarimetric features may limit the final classification accuracy. To address this problem, this work uses the recently reported uniform polarimetric matrix rotation theory and a visualization and characterization tool of polarimetric coherence patterns to investigate hidden polarimetric features in the rotation domain along the radar line of sight. Then, a feature selection scheme is established and a set of hidden polarimetric features is selected in the rotation domain. Finally, a classification method is developed using the complementary information between roll-invariant and selected hidden polarimetric features with a support vector machine (SVM)/decision tree (DT) classifier. Comparison experiments are carried out with NASA/JPL AIRSAR and multi-temporal UAVSAR data. For AIRSAR data, the overall classification accuracy of the proposed classification method is 95.37% (with SVM)/96.38% (with DT), while that of the conventional classification method is 93.87% (with SVM)/94.12% (with DT), respectively. Meanwhile, for multi-temporal UAVSAR data, the mean overall classification accuracy of the proposed method is up to 97.47% (with SVM)/99.39% (with DT), which is also higher

  14. Benthic indicators to use in Ecological Quality classification of Mediterranean soft bottom marine ecosystems, including a new Biotic Index

    Directory of Open Access Journals (Sweden)

    N. SIMBOURA

    2002-12-01

    Full Text Available A general scheme for approaching the objective of Ecological Quality Status (EcoQ) classification of zoobenthic marine ecosystems is presented. A system based on soft bottom benthic indicator species and related habitat types is suggested for testing the typological definition of a given water body in the Mediterranean. Benthic indices including the Shannon-Wiener diversity index and species richness are re-evaluated for use in classification. Ranges of values and of ecological quality categories are given for diversity and species richness in different habitat types. A new biotic index (BENTIX) is proposed, based on the relative percentages of three ecological groups of species grouped according to their sensitivity or tolerance to disturbance factors and weighted proportionately to obtain a formula rendering a five-step numerical scale of ecological quality classification. Its advantage over former biotic indices lies in the fact that it reduces the number of ecological groups involved, which makes it simpler and easier to use. The BENTIX index proposed is tested and validated with data from Greek and western Mediterranean ecosystems and examples are presented. Indicator species associated with specific habitat types and pollution indicator species, scored according to their degree of tolerance to pollution, are listed in a table. The BENTIX index is compared and evaluated against the indices of diversity and species richness for use in classification. The advantages of the BENTIX index as a classification tool for EcoQ include independence from habitat type, sample size and taxonomic effort, high discriminative power and simplicity in its use, which make it a robust, simple and effective tool for application in the Mediterranean Sea.
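
    As an illustration of how such a weighted-percentage biotic index is computed, the sketch below weights the relative abundances of sensitive and tolerant ecological groups to yield a bounded score that maps to quality classes. The weights (6 and 2) and the reading of the scale are assumptions for illustration and may not match the published BENTIX formula exactly.

    # BENTIX-like score from relative abundances of sensitive vs. tolerant groups (assumed weights).
    def bentix_like(pct_sensitive, pct_tolerant):
        return (6 * pct_sensitive + 2 * pct_tolerant) / 100.0

    score = bentix_like(pct_sensitive=70.0, pct_tolerant=30.0)   # hypothetical station data
    print(score)   # 4.8 on a roughly 2-6 scale; higher scores suggest better ecological quality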

  15. Classification of Flotation Frothers

    Directory of Open Access Journals (Sweden)

    Jan Drzymala

    2018-02-01

    Full Text Available In this paper, a scheme for the classification of flotation frothers is presented. The scheme first indicates the physical system in which a frother is present; four such systems are distinguished, i.e., the pure state, aqueous solution, aqueous solution/gas system and aqueous solution/gas/solid system. As a result, there are numerous possible classifications of flotation frothers, which can be organized into the scheme described in detail in this paper. It follows from the paper that a meaningful classification of frothers relies on choosing the physical system and then the feature, trend, parameter or parameters according to which the classification is performed. The proposed classification can play a useful role in the characterization and evaluation of flotation frothers.

  16. A convolutional neural network-based screening tool for X-ray serial crystallography.

    Science.gov (United States)

    Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K

    2018-05-01

    A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described can enable the classification of large data sets acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.
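
    A much-reduced sketch of a convolutional screen for diffraction images (hit vs. miss), assuming PyTorch; the layer sizes, input resolution and binary output are illustrative assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class SpotScreen(nn.Module):
        """Tiny CNN that scores whether a detector frame contains Bragg spots (illustrative)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 64 * 64, 2)   # assumes 256x256 input frames

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    logits = SpotScreen()(torch.randn(4, 1, 256, 256))   # batch of 4 dummy detector frames
    print(logits.shape)                                  # torch.Size([4, 2]): hit vs. miss scores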

  17. Constructing criticality by classification

    DEFF Research Database (Denmark)

    Machacek, Erika

    2017-01-01

    in the bureaucratic practice of classification: experts construct material criticality in assessments as they allot information on the materials to the parameters of the assessment framework. In so doing, they ascribe a new set of connotations to the materials, namely supply risk, and their importance to clean energy......, legitimizing a criticality discourse. Specifically, the paper introduces a typology delineating the inferences made by the experts from their produced recommendations in the classification of rare earth element criticality. The paper argues that the classification is a specific process of constructing risk. It proposes that the expert bureaucratic practice of classification legitimizes (i) the valorisation that was made in the drafting of the assessment framework for the classification, and (ii) political operationalization when enacted, which might have (non-)distributive implications for the allocation of public...

  18. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  19. Kardashev’s classification at 50+: A fine vehicle with room for improvement

    Directory of Open Access Journals (Sweden)

    Ćirković M.M.

    2015-01-01

    Full Text Available We review the history and status of the famous classification of extraterrestrial civilizations given by the great Russian astrophysicist Nikolai Semenovich Kardashev, roughly half a century after it was proposed. While Kardashev's classification (or Kardashev's scale) has often been seen as oversimplified, and multiple improvements, refinements, and alternatives to it have been suggested, it is still one of the major tools for serious theoretical investigation of SETI issues. During these 50+ years, several attempts at modifying or reforming the classification have been made; we review some of them here, together with some of the scenarios which present difficulties for the standard version. Recent results in both theoretical and observational SETI studies, especially the Ĝ infrared survey (2014-2015), have persuasively shown that the emphasis on detectability inherent in Kardashev's classification obtains new significance and freshness. Several new movements and conceptual frameworks, such as Dysonian SETI, tally extremely well with these developments. So, the apparent simplicity of the classification is highly deceptive: Kardashev's work offers a wealth of still insufficiently studied methodological and epistemological ramifications and it remains, in both letter and spirit, perhaps the worthiest legacy of the SETI “founding fathers”. [Project of the Ministry of Science of the Republic of Serbia, no. ON176021]

  20. Modeling of tool path for the CNC sheet cutting machines

    Science.gov (United States)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper, the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (generalized traveling salesman problem with additional constraints, GTSP). The formalization of some constraints for these tasks is described. For the solution of the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and on dynamic programming.
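
    The megalopolis view of the problem can be illustrated with a toy brute-force model: each contour to be cut is a set of candidate pierce points, and the tool must visit exactly one point per contour while minimising idle travel. The coordinates are made up, and the exhaustive enumeration below stands in for the dynamic-programming model cited above.

    from itertools import permutations, product
    import math

    contours = {                      # each "megalopolis" = candidate pierce points of one contour
        "A": [(0, 0), (0, 2)],
        "B": [(5, 1), (5, 3)],
        "C": [(2, 6), (4, 6)],
    }
    start = (0, -3)                   # tool home position (assumed)

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    best = None
    for order in permutations(contours):                      # order in which contours are cut
        for picks in product(*(contours[c] for c in order)):  # one pierce point per contour
            length = dist(start, picks[0]) + sum(dist(picks[i], picks[i + 1])
                                                 for i in range(len(picks) - 1))
            if best is None or length < best[0]:
                best = (length, order, picks)
    print(best)   # shortest idle-travel length, contour order, and chosen pierce points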

  1. A tool for enhancing strategic health planning: a modeled use of the International Classification of Functioning, Disability and Health.

    Science.gov (United States)

    Sinclair, Lisa Bundara; Fox, Michael H; Betts, Donald R

    2013-01-01

    This article describes use of the International Classification of Functioning, Disability and Health (ICF) as a tool for strategic planning. The ICF is the international classification system for factors that influence health, including Body Structures, Body Functions, Activities and Participation and Environmental Factors. An overview of strategic planning and the ICF are provided. Selected ICF concepts and nomenclature are used to demonstrate its utility in helping develop a classic planning framework, objectives, measures and actions. Some issues and resolutions for applying the ICF are described. Applying the ICF for strategic health planning is an innovative approach that fosters the inclusion of social ecological health determinants and broad populations. If employed from the onset of planning, the ICF can help public health organizations systematically conceptualize, organize and communicate a strategic health plan. Published 2012. This article is a US Government work and is in the public domain in the USA.

  2. A Fast SVM-Based Tongue’s Colour Classification Aided by k-Means Clustering Identifiers and Colour Attributes as Computer-Assisted Tool for Tongue Diagnosis

    Directory of Open Access Journals (Sweden)

    Nur Diyana Kamarudin

    2017-01-01

    Full Text Available In tongue diagnosis, the colour information of the tongue body carries valuable information regarding the state of disease and its correlation with the internal organs. Qualitatively, practitioners may have difficulty in their judgement due to unstable lighting conditions and the naked eye's limited ability to capture the exact colour distribution on the tongue, especially a tongue with multicolour substance. To overcome this ambiguity, this paper presents a two-stage tongue multicolour classification based on a support vector machine (SVM) whose support vectors are reduced by our proposed k-means clustering identifiers and red colour range for precise tongue colour diagnosis. In the first stage, k-means clustering is used to cluster a tongue image into four clusters: image background (black), deep red region, red/light red region, and transitional region. In the second-stage classification, red/light red tongue images are further classified into red tongue or light red tongue based on the red colour range derived in our work. Overall, the true rate classification accuracy of the proposed two-stage classification in diagnosing red, light red, and deep red tongue colours is 94%. The number of support vectors in the SVM is reduced by 41.2%, and the execution time for one image is recorded as 48 seconds.
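
    A rough sketch of the two-stage idea, assuming scikit-learn: k-means first partitions pixels into coarse colour clusters, and an SVM then separates red from light-red pixels. The pixel values, the choice of clusters of interest and the red-range labelling rule are illustrative assumptions, not the paper's data or thresholds.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    pixels = np.random.rand(5000, 3)                     # stand-in for tongue-image RGB pixels
    stage1 = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)   # background / deep red / red-light red / transitional

    # Suppose clusters 1 and 2 correspond to the red/light-red regions of interest (assumed);
    # train the second-stage SVM only on those pixels, with a hypothetical red-range rule as labels.
    roi = pixels[np.isin(stage1.labels_, [1, 2])]
    labels = (roi[:, 0] > 0.6).astype(int)               # 1 = red, 0 = light red (assumed threshold)
    stage2 = SVC(kernel="rbf").fit(roi, labels)
    print(stage2.predict(roi[:3]))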

  3. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    Science.gov (United States)

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging a hypergraph, which has been proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  4. Multiview vector-valued manifold regularization for multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Xu, Chang; Xu, Chao; Liu, Hong; Wen, Yonggang

    2013-05-01

    In computer vision, image datasets used for classification are naturally associated with multiple labels and composed of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently available tools ignore either the label relationship or the view complementarity. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV(3)MR) to integrate multiple features. MV(3)MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging but popular datasets, PASCAL VOC '07 and MIR Flickr, and validate the effectiveness of the proposed MV(3)MR for image classification.

  5. Gas Classification Using Deep Convolutional Neural Networks

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each block consisting of six layers; a pooling layer; and a fully-connected layer. Together, these various layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) methods and a Multiple Layer Perceptron (MLP). PMID:29316723

  6. Gas Classification Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. In general, the proposed gas neural network, named GasNet, consists of six convolutional blocks, each consisting of six layers, a pooling layer, and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method can provide higher classification accuracy than comparable Support Vector Machine (SVM) and Multiple Layer Perceptron (MLP) methods.

  7. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Directory of Open Access Journals (Sweden)

    Muhammad Faisal Siddiqui

    Full Text Available A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. For improving the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel, and k-fold stratified cross validation is applied to enhance the generalization of the system. The method was tested on benchmark datasets of 340 patients' T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities.
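
    A minimal sketch of the DWT, then PCA, then kernel-SVM pipeline outlined above, with a plain RBF-kernel SVC standing in for the LS-SVM; the Haar wavelet, the decomposition level, the component count, and the synthetic "scans" are assumptions.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        images = rng.random((60, 64, 64))            # stand-in for T1/T2-weighted MRI slices
        labels = rng.integers(0, 2, size=60)         # 0 = normal, 1 = abnormal (synthetic)

        def dwt_features(img, wavelet="haar", level=2):
            # Keep only the low-frequency approximation coefficients as salient features.
            approx = pywt.wavedec2(img, wavelet, level=level)[0]
            return approx.ravel()

        X = np.array([dwt_features(img) for img in images])
        clf = make_pipeline(PCA(n_components=10),    # dimensionality reduction
                            SVC(kernel="rbf", gamma="scale"))
        print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())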

  8. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm, which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.

  9. A novel Neuro-fuzzy classification technique for data mining

    Directory of Open Access Journals (Sweden)

    Soumadip Ghosh

    2014-11-01

    Full Text Available In our study, we proposed a novel Neuro-fuzzy classification technique for data mining. The inputs to the Neuro-fuzzy classification system were fuzzified by applying a generalized bell-shaped membership function. The proposed method utilized a fuzzification matrix in which the input patterns were associated with a degree of membership in different classes. Based on the value of the degree of membership, a pattern is attributed to a specific category or class. We applied our method to ten benchmark data sets from the UCI machine learning repository for classification. Our objective was to analyze the proposed method and to compare its performance with two powerful supervised classification algorithms, Radial Basis Function Neural Network (RBFNN) and Adaptive Neuro-fuzzy Inference System (ANFIS). We assessed the performance of these classification methods in terms of different performance measures such as accuracy, root-mean-square error, kappa statistic, true positive rate, false positive rate, precision, recall, and f-measure. In every respect, the proposed method proved superior to the RBFNN and ANFIS algorithms.
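
    A minimal sketch of the fuzzification step described above: each input receives a generalized bell-shaped degree of membership in every class and is attributed to the class with the highest membership. The parameters a, b, c and the two hypothetical classes are illustrative assumptions.

        import numpy as np

        def gbell(x, a, b, c):
            # Generalized bell-shaped membership: 1 / (1 + |(x - c) / a|^(2b)).
            return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

        x = np.linspace(0.0, 10.0, 5)                # a handful of input pattern values
        # Fuzzification matrix: membership of each input in two hypothetical classes.
        memberships = np.column_stack([gbell(x, a=2.0, b=2.0, c=3.0),
                                       gbell(x, a=2.0, b=2.0, c=7.0)])
        predicted_class = memberships.argmax(axis=1) # attribute each pattern to its strongest class
        print(np.round(memberships, 3))
        print("assigned classes:", predicted_class)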

  10. Deep learning for tumor classification in imaging mass spectrometry.

    Science.gov (United States)

    Behrmann, Jens; Etmann, Christian; Boskamp, Tobias; Casadonte, Rita; Kriegsmann, Jörg; Maaß, Peter

    2018-04-01

    Tumor classification using imaging mass spectrometry (IMS) data has a high potential for future applications in pathology. Due to the complexity and size of the data, automated feature extraction and classification steps are required to fully process the data. Since mass spectra exhibit certain structural similarities to image data, deep learning may offer a promising strategy for classification of IMS data as it has been successfully applied to image classification. Methodologically, we propose an adapted architecture based on deep convolutional networks to handle the characteristics of mass spectrometry data, as well as a strategy to interpret the learned model in the spectral domain based on a sensitivity analysis. The proposed methods are evaluated on two algorithmically challenging tumor classification tasks and compared to a baseline approach. Competitiveness of the proposed methods is shown on both tasks by studying the performance via cross-validation. Moreover, the learned models are analyzed by the proposed sensitivity analysis revealing biologically plausible effects as well as confounding factors of the considered tasks. Thus, this study may serve as a starting point for further development of deep learning approaches in IMS classification tasks. https://gitlab.informatik.uni-bremen.de/digipath/Deep_Learning_for_Tumor_Classification_in_IMS. jbehrmann@uni-bremen.de or christianetmann@uni-bremen.de. Supplementary data are available at Bioinformatics online.

  11. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for BSA technique is proposed. Three popular BSA techniques such as frequency ratio, weights-of-evidence, and evidential belief function models are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and is created by a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
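
    A minimal sketch of the frequency-ratio calculation that such a tool automates for one conditioning factor; the reclassified raster, the hazard mask, and the class breaks are synthetic assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        factor = rng.integers(1, 5, size=10_000)     # e.g. a reclassified slope raster (classes 1-4, assumed)
        hazard = rng.random(10_000) < 0.05           # e.g. observed landslide cells (boolean mask, synthetic)

        for cls in np.unique(factor):
            in_class = factor == cls
            pct_hazard_in_class = hazard[in_class].sum() / hazard.sum()   # share of hazard cells in this class
            pct_area_in_class = in_class.sum() / factor.size              # share of total area in this class
            print(f"class {cls}: frequency ratio = {pct_hazard_in_class / pct_area_in_class:.2f}")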

  12. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, bivariate statistical modeler (BSM), for BSA technique is proposed. Three popular BSA techniques, such as frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and created by a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.

  13. Pelvic Arterial Anatomy Relevant to Prostatic Artery Embolisation and Proposal for Angiographic Classification

    Energy Technology Data Exchange (ETDEWEB)

    Assis, André Moreira de, E-mail: andre.maa@gmail.com; Moreira, Airton Mota, E-mail: motamoreira@gmail.com; Paula Rodrigues, Vanessa Cristina de, E-mail: vanessapaular@yahoo.com.br [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil); Harward, Sardis Honoria, E-mail: sardis.harward@merit.com [The Dartmouth Center for Health Care Delivery Science (United States); Antunes, Alberto Azoubel, E-mail: antunesuro@uol.com.br; Srougi, Miguel, E-mail: srougi@usp.br [University of Sao Paulo Medical School, Urology Department (Brazil); Carnevale, Francisco Cesar, E-mail: fcarnevale@uol.com.br [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil)

    2015-08-15

    Purpose: To describe and categorize the angiographic findings regarding prostatic vascularization, propose an anatomic classification, and discuss its implications for the PAE procedure. Methods: Angiographic findings from 143 PAE procedures were reviewed retrospectively, and the origin of the inferior vesical artery (IVA) was classified into five subtypes as follows: type I: IVA originating from the anterior division of the internal iliac artery (IIA), from a common trunk with the superior vesical artery (SVA); type II: IVA originating from the anterior division of the IIA, inferior to the SVA origin; type III: IVA originating from the obturator artery; type IV: IVA originating from the internal pudendal artery; and type V: less common origins of the IVA. Incidences were calculated by percentage. Results: Two hundred eighty-six pelvic sides (n = 286) were analyzed, and 267 (93.3 %) were classified into types I–IV. Among them, the most common origin was type IV (n = 89, 31.1 %), followed by type I (n = 82, 28.7 %), type III (n = 54, 18.9 %), and type II (n = 42, 14.7 %). Type V anatomy was seen in 16 cases (5.6 %). Double vascularization, defined as two independent prostatic branches on one pelvic side, was seen in 23 cases (8.0 %). Conclusions: Despite the large number of possible anatomical variations of the male pelvis, four main patterns corresponded to almost 95 % of the cases. Evaluation of the anatomy in a systematic fashion, following a standard classification, will make PAE a faster, safer, and more effective procedure.

  14. NEURAL NETWORKS AS A CLASSIFICATION TOOL FOR BIOTECHNOLOGICAL SYSTEMS (FOR EXAMPLE, FLOUR PRODUCTION)

    Directory of Open Access Journals (Sweden)

    V. K. Bitykov

    2015-01-01

    Full Text Available Summary. To date, artificial intelligence systems are the most common means of classifying objects of differing quality. The proposed modeling technology for predicting the quality of flour products with artificial neural networks makes it possible to analyse the factors that determine product quality. Interest in artificial neural networks has grown because they can change their behavior depending on the external environment: after input signals are presented (possibly together with the desired outputs), the networks self-configure to provide the desired response. A set of training algorithms has been developed, each with its own strengths and weaknesses. Classification is one of the most important applications of neural networks; it is the problem of attributing a sample to one of several non-intersecting sets. To solve this problem, algorithms for synthesising a neural network with nonlinear activation functions and algorithms for training the network were developed. Training involves determining the weights of the layers of neurons. Training is supervised, that is, the network is given the values of both the input and the desired output signals, and it adjusts the weights of its synaptic connections according to an internal algorithm. In this work an artificial neural network, a multilayer perceptron, was built. Correlation analysis of the total sample revealed that the selected traits correlate with bread quality grade at the 0.01 significance level. The classification accuracy exceeds 90%.

  15. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via database OLAP-SQL. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy, 96.59%, was achieved by NBCC, which is better than earlier methods.
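
    A minimal sketch of pushing the Naïve-Bayes counting into SQL, in the spirit of the tightly coupled database approach described above; SQLite stands in for the OLAP/SQL back end, and the table schema, the single feature, and the toy rows are assumptions.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE echo (label TEXT, wall_motion TEXT)")
        conn.executemany("INSERT INTO echo VALUES (?, ?)",
                         [("normal", "good"), ("normal", "good"), ("normal", "poor"),
                          ("abnormal", "poor"), ("abnormal", "poor"), ("abnormal", "good")])

        # Class-conditional counts via GROUP BY, turned into likelihoods P(wall_motion | label).
        counts = conn.execute(
            "SELECT label, wall_motion, COUNT(*) FROM echo GROUP BY label, wall_motion").fetchall()
        totals = dict(conn.execute("SELECT label, COUNT(*) FROM echo GROUP BY label").fetchall())
        for label, feature, n in counts:
            print(f"P(wall_motion={feature} | {label}) = {n / totals[label]:.2f}")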

  16. On the Feature Selection and Classification Based on Information Gain for Document Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Asriyanti Indah Pratiwi

    2018-01-01

    Full Text Available Sentiment analysis of movie reviews is a need of today's lifestyle. Unfortunately, the enormous number of features makes sentiment analysis slow and less sensitive. Finding the optimum feature selection and classification is still a challenge. In order to handle an enormous number of features and provide better sentiment classification, an information-based feature selection and classification scheme is proposed. The proposed method removes more than 90% of unnecessary features, while the proposed classification scheme achieves 96% accuracy in sentiment classification. From the experimental results, it can be concluded that the combination of the proposed feature selection and classification achieves the best performance so far.
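
    A minimal sketch of information-based feature selection in front of a classifier, as described above; scikit-learn's mutual information stands in for information gain, and the tiny corpus, the value of k, and the Naïve-Bayes classifier are assumptions.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        docs = ["great acting and a moving story", "dull plot and terrible acting",
                "moving performances, great film", "terrible pacing, dull and boring"]
        labels = [1, 0, 1, 0]                        # 1 = positive, 0 = negative (toy labels)

        clf = make_pipeline(CountVectorizer(),
                            SelectKBest(mutual_info_classif, k=5),  # keep only the most informative terms
                            MultinomialNB())
        clf.fit(docs, labels)
        print(clf.predict(["great story", "boring and dull"]))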

  17. A new tool in the classification of rational conformal field theories

    International Nuclear Information System (INIS)

    Christe, P.; Ravanini, F.

    1988-10-01

    The fact that in any rational conformal field theory (RCFT) 4-point functions on the sphere must satisfy an ordinary differential equation gives a simple condition on the conformal dimensions of primary fields. We discuss how this can help in the classification program of RCFT. As an example, all associative fusion rules with less than four non-trivial primary fields and N_ijk ≤ 1 are discussed. Another application to the classification of chiral algebras is briefly mentioned. (orig.)

  18. The Importance of Classification to Business Model Research

    OpenAIRE

    Susan Lambert

    2015-01-01

    Purpose: To bring to the fore the scientific significance of classification and its role in business model theory building. To propose a method by which existing classifications of business models can be analyzed and new ones developed. Design/Methodology/Approach: A review of the scholarly literature relevant to classifications of business models is presented along with a brief overview of classification theory applicable to business model research. Existing business model classification...

  19. Paper Tools and Periodic Tables: Newlands and Mendeleev Draw Grids.

    Science.gov (United States)

    Gordin, Michael D

    2018-02-01

    This essay elaborates on Ursula Klein's methodological concept of "paper tools" by drawing on several examples from the history of the periodic table. Moving from John A. R. Newlands's "Law of Octaves," to Dmitrii Mendeleev's first drafts of his periodic system in 1869, to Mendeleev's chemical speculations on the place of the ether within his classification, one sees that the ways in which the scientists presented the balance between empirical data and theoretical manipulation proved crucial for the chemical community's acceptance or rejection of their proposed innovations. This negotiated balance illustrates an underemphasised feature of Klein's conceptualisation of the ways in which a paper tool generates new knowledge.

  20. AN OBJECT-BASED METHOD FOR CHINESE LANDFORM TYPES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Ding

    2016-06-01

    Full Text Available Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, hazard prediction, etc. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1-km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference. The overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification and 15.7% higher than that of the traditional object-based classification method.
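
    A minimal sketch of the factor-importance step described above, ranking terrain factors with a random forest; the factor names, the synthetic raster-cell table, and the toy landform label are assumptions standing in for real DEM derivatives.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        factor_names = ["relief", "slope", "elevation", "roughness"]
        X = rng.random((500, len(factor_names)))     # per-cell terrain factors (synthetic)
        y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # toy landform label driven by relief and slope

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        # Rank the factors by their random-forest importance scores.
        for name, score in sorted(zip(factor_names, rf.feature_importances_), key=lambda t: -t[1]):
            print(f"{name}: importance = {score:.2f}")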

  1. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Full Text Available Background: Computerized lung sound analysis involves recording lung sounds via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All processing stages were implemented in MATLAB. The classification was performed comparatively using both the artificial neural network (ANN) and the adaptive neuro-fuzzy inference system (ANFIS) toolboxes. The methods were applied to 10 different respiratory sounds for classification. Results: The ANN was superior to the ANFIS system and returned better performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. These values compare favourably with many recent approaches. Conclusions: The proposed method is an efficient and fast tool for the intended purpose, as reflected in the performance parameters, specifically accuracy, specificity, and sensitivity. Furthermore, utilizing the autocorrelation function in the feature extraction stage of such applications results in enhanced performance and avoids undesired computational complexity compared to other techniques.
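
    A minimal sketch of autocorrelation-based feature extraction feeding a small neural network, in the spirit of the pipeline above; the synthetic two-class "sounds", the number of lags, and the network size are assumptions, and scikit-learn's MLP stands in for the MATLAB ANN toolbox.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(4)

        def autocorr_features(signal, n_lags=20):
            # Normalised autocorrelation at the first n_lags lags.
            s = signal - signal.mean()
            acf = np.correlate(s, s, mode="full")[len(s) - 1:]
            return acf[:n_lags] / acf[0]

        freqs = rng.choice([5, 25], size=80)         # two hypothetical sound classes (synthetic)
        signals = [np.sin(2 * np.pi * f * np.linspace(0, 1, 400)) + 0.2 * rng.standard_normal(400)
                   for f in freqs]
        X = np.array([autocorr_features(s) for s in signals])
        y = (freqs == 25).astype(int)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
        print("training accuracy:", clf.score(X, y))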

  2. Proposal of rock mass behavior classification based on convergence measurement in shaft sinking through sedimentary soft rocks

    International Nuclear Information System (INIS)

    Tsusaka, Kimikazu

    2010-01-01

    Japan Atomic Energy Agency has been excavating deep shafts through sedimentary soft rocks in Horonobe, Hokkaido. From the viewpoint of the observational construction, site engineers need a practical guide to evaluate the field measurements conducted with shaft sinking. The author analyzed the relationship among initial deformation rate, observed deformation, the ratio of the modulus of elasticity of rock mass to the initial stress, and the magnitude of inelastic behavior of rock based on convergence measurements and investigation of rock mass properties on shaft walls. As a result, the rock mass behavior classification for shaft sinking which consists of three classes was proposed. (author)

  3. Semi-Supervised Classification for Fault Diagnosis in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Ma, Jian Ping; Jiang, Jin

    2014-01-01

    Pattern classification methods have become important tools for fault diagnosis in industrial systems. However, it is normally difficult to obtain reliable labeled data to train a supervised pattern classification model for applications in a nuclear power plant (NPP). In contrast, unlabeled data easily become available through the increased deployment of supervisory, control, and data acquisition (SCADA) systems. In this paper, a fault diagnosis scheme based on a semi-supervised classification (SSC) method is developed with specific applications to NPPs. In this scheme, newly measured plant data are treated as unlabeled data. They are integrated with selected labeled data to train an SSC model which is then used to estimate the labels of the new data. Compared with exclusively supervised approaches, the proposed scheme requires a significantly smaller number of labeled data to train a classifier. Furthermore, it is shown that a higher degree of uncertainty in the labeled data can be tolerated. The developed scheme has been validated using data generated from a desktop NPP simulator and also from a physical NPP simulator using a graph-based SSC algorithm. Two case studies have been used in the validation process. In the first case study, three faults have been simulated on the desktop simulator. These faults have all been classified successfully with only four labeled data points per fault case. In the second case, six types of fault are simulated on the physical NPP simulator. All faults have been successfully diagnosed. The results demonstrate that SSC is a promising tool for fault diagnosis.
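
    A minimal sketch of graph-based semi-supervised classification with only a few labels per class, as in the scheme above; scikit-learn's LabelSpreading stands in for the paper's graph-based SSC algorithm, and the three synthetic "fault" clusters and the number of labeled points are assumptions.

        import numpy as np
        from sklearn.datasets import make_blobs
        from sklearn.semi_supervised import LabelSpreading

        X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)   # three fault classes (synthetic)
        y = np.full_like(y_true, -1)                 # -1 marks unlabeled plant measurements
        labeled_idx = np.random.default_rng(5).choice(len(y), size=12, replace=False)
        y[labeled_idx] = y_true[labeled_idx]         # only a handful of labeled points overall

        model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y)
        print("accuracy over all points:", (model.transduction_ == y_true).mean())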

  4. Classification of the lymphatic drainage status of a primary tumor: a proposal

    International Nuclear Information System (INIS)

    Munz, D.L.; Maza, S.; Ivancevic, V.; Geworski, L.

    2000-01-01

    Aim: Creation of a classification of the lymphatic drainage status of a primary tumour. It shall enable comparison of different approaches, standardisation and quality control. Methods: Identification and topographic localisation of the sentinel node(s) using lymphatic radionuclide gamma camera imaging and/or gamma probe detection and/or vital dye mapping. Results: A classification comprising four classes (D-Class I-IV) and distinct subclasses (A-E) proved to be simple to learn and apply as well as reliably reproducible. It is based on the number of sentinel lymph nodes and their locations and can be combined with the pathological and molecular biological lymph node status. D-classes/subclasses obtained in 420 patients with malignant melanoma of the skin are presented. Conclusions: The classification is applicable to different approaches. Its diagnostic, therapeutic and prognostic value should be studied prospectively in those primary tumours which preferably metastasise via their draining lymphatic vessels. (orig.) [de

  5. Alignment of ICNP® 2.0 Ontology and a proposed INCP® Brazilian Ontology

    OpenAIRE

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; da Nóbrega, Maria Miriam Lima

    2014-01-01

    OBJECTIVE: to align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed INCP® Brazilian Ontology. METHOD: a document-based, exploratory and descriptive study, the empirical basis of which was provided by the ICNP® 2.0 Ontology and the INCP® Brazilian Ontology. The ontology alignment was performed using a computer tool with algorithms to identify correspondences between concepts, which were organized and analyzed according to their presence or absence...

  6. Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2014-11-01

    Full Text Available Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring, in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) are selected as base classifiers and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and the tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, a homogeneous ensemble learning model and a majority voting strategy are also adopted for comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability.
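
    A minimal sketch of a stacked heterogeneous ensemble along the lines described above; scikit-learn has no HMM classifier, so a k-nearest-neighbour model stands in for it, an MLP stands in for the RBF network, and the force-signal features and the three wear states are synthetic assumptions.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                                   n_classes=3, random_state=0)   # three wear states (synthetic)

        stack = StackingClassifier(
            estimators=[("svm", SVC(probability=True)),
                        ("knn", KNeighborsClassifier()),                  # stand-in for the HMM learner
                        ("rbf_net", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000))],
            final_estimator=LogisticRegression(max_iter=1000))            # the stacking meta-classifier
        print("stacked cross-validated accuracy:", cross_val_score(stack, X, y, cv=5).mean())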

  7. The Periodic Table and the Philosophy of Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2011-01-01

    This paper discusses some problems in the philosophy of classification based on a discussion of the periodic system of chemistry and physics. The emerging interdisciplinary field ‘philosophy of classification’ is briefly introduced and related to the field of knowledge organization (KO) within...... Library and Information Science (LIS). It is argued that KO needs to be better integrated with the broader field of classification theory and research. The paper considers some core issues such as whether classifications are pragmatic human tools or neutral reflections of nature, how classifications...

  8. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    Science.gov (United States)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling and estimation of the tool wear state given the cutting force features. For a noise-robust approach, the HMM outputs are connected via a median filter to smooth the estimated tool state before entry into the next state, given the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  9. Can we improve accuracy and reliability of MRI interpretation in children with optic pathway glioma? Proposal for a reproducible imaging classification

    Energy Technology Data Exchange (ETDEWEB)

    Lambron, Julien; Frampas, Eric; Toulgoat, Frederique [University Hospital, Department of Radiology, Nantes (France); Rakotonjanahary, Josue [University Hospital, Department of Pediatric Oncology, Angers (France); University Paris Diderot, INSERM CIE5 Robert Debre Hospital, Assistance Publique-Hopitaux de Paris (AP-HP), Paris (France); Loisel, Didier [University Hospital, Department of Radiology, Angers (France); Carli, Emilie de; Rialland, Xavier [University Hospital, Department of Pediatric Oncology, Angers (France); Delion, Matthieu [University Hospital, Department of Neurosurgery, Angers (France)

    2016-02-15

    Magnetic resonance (MR) images from children with optic pathway glioma (OPG) are complex. We initiated this study to evaluate the accuracy of MR imaging (MRI) interpretation and to propose a simple and reproducible imaging classification for MRI. We randomly selected 140 MRIs from among 510 MRIs performed on 104 children diagnosed with OPG in France from 1990 to 2004. These images were reviewed independently by three radiologists (F.T., 15 years of experience in neuroradiology; D.L., 25 years of experience in pediatric radiology; and J.L., 3 years of experience in radiology) using a classification derived from the Dodge and modified Dodge classifications. Intra- and interobserver reliabilities were assessed using the Bland-Altman method and the kappa coefficient. These reviews allowed the definition of reliable criteria for MRI interpretation. The reviews showed intraobserver variability and large discrepancies among the three radiologists (kappa coefficient varying from 0.11 to 1). These variabilities were too large for the interpretation to be considered reproducible over time or among observers. A consensual analysis, taking into account all observed variabilities, allowed the development of a definitive interpretation protocol. Using this revised protocol, we observed consistent intra- and interobserver results (kappa coefficient varying from 0.56 to 1). The mean interobserver difference for the solid portion of the tumor with contrast enhancement was 0.8 cm³ (limits of agreement = -16 to 17). We propose simple and precise rules for improving the accuracy and reliability of MRI interpretation for children with OPG. Further studies will be necessary to investigate the possible prognostic value of this approach. (orig.)

  10. Semantic Document Image Classification Based on Valuable Text Pattern

    Directory of Open Access Journals (Sweden)

    Hossein Pourghassem

    2011-01-01

    Full Text Available Knowledge extraction from detected document images is a complex problem in the field of information technology. This problem becomes more intricate when we know that only a negligible percentage of the detected document images are valuable. In this paper, a segmentation-based classification algorithm is used to analyse the document image. In this algorithm, using a two-stage segmentation approach, regions of the image are detected and then classified into document and non-document (pure region) regions in a hierarchical classification. In this paper, a novel definition of "valuable" is proposed to classify document images into valuable or invaluable categories. The proposed algorithm is evaluated on a database consisting of document and non-document images obtained from the Internet. Experimental results show the efficiency of the proposed algorithm in semantic document image classification. The proposed algorithm provides an accuracy rate of 98.8% for the valuable and invaluable document image classification problem.

  11. Hydrologic Landscape Classification to Estimate Bristol Bay Watershed Hydrology

    Science.gov (United States)

    The use of hydrologic landscapes has proven to be a useful tool for broad scale assessment and classification of landscapes across the United States. These classification systems help organize larger geographical areas into areas of similar hydrologic characteristics based on cl...

  12. Discrimination between authentic and adulterated liquors by near-infrared spectroscopy and ensemble classification

    Science.gov (United States)

    Chen, Hui; Tan, Chao; Wu, Tong; Wang, Li; Zhu, Wanping

    2014-09-01

    Chinese liquor is one of the famous distilled spirits, and counterfeit liquor is becoming a serious problem in the market. In particular, aged liquor faces a crisis of confidence because it is difficult for consumers to verify the marked age, which prompts unscrupulous traders to pass off low-grade liquors as high-grade liquors. An ideal method for authenticity confirmation of liquors should be non-invasive, non-destructive and timely. The combination of near-infrared spectroscopy with chemometrics proves to be a good way to meet these requirements. A new strategy is proposed for classification and verification of the adulteration of liquors by using NIR spectroscopy and chemometric classification, i.e., ensemble support vector machines (SVM). Three measures, i.e., accuracy, sensitivity and specificity, were used for performance evaluation. The results confirmed that the strategy can serve as a screening tool to verify adulteration of the liquor, that is, a prior step used to condition the sample for a deeper analysis only when a positive result for adulteration is obtained by the proposed methodology.
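
    A minimal sketch of an ensemble-SVM screen reporting the three measures named above (accuracy, sensitivity, specificity); the synthetic "spectra", the bagging ensemble, and the small class shift are assumptions, not the paper's data or exact ensemble scheme.

        import numpy as np
        from sklearn.ensemble import BaggingClassifier
        from sklearn.metrics import confusion_matrix
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        X = rng.random((300, 100))                   # stand-in for NIR absorbance spectra
        y = rng.integers(0, 2, size=300)             # 1 = authentic, 0 = adulterated (synthetic)
        X[y == 1] += 0.05                            # give the two classes a small separable shift

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        ens = BaggingClassifier(SVC(), n_estimators=15, random_state=0).fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, ens.predict(X_te)).ravel()
        print(f"accuracy={(tp + tn) / len(y_te):.2f}  "
              f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")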

  13. Extension classification method for low-carbon product cases

    Directory of Open Access Journals (Sweden)

    Yanwei Zhao

    2016-05-01

    Full Text Available In low-carbon product design, intelligent decision systems integrated with certain classification algorithms recommend existing design cases to designers. However, these systems mostly depend on prior experience, and product designers not only expect to get a satisfactory case from an intelligent system but also hope to receive assistance in modifying unsatisfactory cases. In this article, we propose a new categorization method composed of static and dynamic classification based on extension theory. This classification method can be integrated into a case-based reasoning system to get accurate classification results and to inform designers of detailed information about unsatisfactory cases. First, we establish the static classification model for cases by a dependent function in a hierarchical structure. Then, for dynamic classification, we transform cases based on the case model, attributes, attribute values, and dependent function, so that cases can undergo qualitative changes. Finally, the applicability of the proposed method is demonstrated through a case study of screw air compressor cases.

  14. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  15. Integration of heterogeneous features for remote sensing scene classification

    Science.gov (United States)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.

  16. Object Classification in Semi Structured Enviroment Using Forward-Looking Sonar

    Directory of Open Access Journals (Sweden)

    Matheus dos Santos

    2017-09-01

    Full Text Available The use of robots for submarine exploration has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment. Object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by a Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel searching and intensity-peak analysis techniques. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between the Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper.

  17. Evaluation and Classification of Syntax Usage in Determining Short-Text Semantic Similarity

    Directory of Open Access Journals (Sweden)

    V. Batanović

    2014-06-01

    Full Text Available This paper outlines and categorizes ways of using syntactic information in a number of algorithms for determining the semantic similarity of short texts. We consider the use of word order information, part-of-speech tagging, parsing and semantic role labeling. We analyze and evaluate the effects of syntax usage on algorithm performance by utilizing the results of a paraphrase detection test on the Microsoft Research Paraphrase Corpus. We also propose a new classification of algorithms based on their applicability to languages with scarce natural language processing tools.

  18. Automotive System for Remote Surface Classification.

    Science.gov (United States)

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. The features are extracted from backscattered signals, and then the procedures of principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct classification accuracy of 95%. The obtained results thereby demonstrate that the use of the proposed system architecture and statistical methods allows for reliable discrimination of various road surfaces in real conditions.

  19. Effective Exchange Rate Classifications and Growth

    OpenAIRE

    Justin M. Dubas; Byung-Joo Lee; Nelson C. Mark

    2005-01-01

    We propose an econometric procedure for obtaining de facto exchange rate regime classifications which we apply to study the relationship between exchange rate regimes and economic growth. Our classification method models the de jure regimes as outcomes of a multinomial logit choice problem conditional on the volatility of a country's effective exchange rate, a bilateral exchange rate and international reserves. An `effective' de facto exchange rate regime classification is then obtained by as...

  20. Proposing a Hybrid Model Based on Robson's Classification for Better Impact on Trends of Cesarean Deliveries.

    Science.gov (United States)

    Hans, Punit; Rohatgi, Renu

    2017-06-01

    To construct a hybrid classification model for cesarean section (CS) deliveries based on woman-characteristics (Robson's classification with additional layers of indications for CS, keeping in view the low-resource settings available in India). This is a cross-sectional study conducted at Nalanda Medical College, Patna. All the women who delivered from January 2016 to May 2016 in the labor ward were included. Results obtained were compared with the values obtained for India from a secondary analysis of the WHO multi-country survey (2010-2011) by Joshua Vogel and colleagues' study published in "The Lancet Global Health." The three classifications (indication-based, Robson's and the hybrid model) were applied for categorization of the cesarean deliveries from the same sample of data, and a semiqualitative evaluation was done considering the main characteristics, strengths and weaknesses of each classification system. The total number of women who delivered during the study period was 1462, of which 471 were CS deliveries. The overall CS rate calculated for NMCH hospital in this period was 32.21% (p = 0.001). The hybrid model scored 23/23, while the scores of the Robson classification and the indication-based classification were 21/23 and 10/23, respectively. A single study centre and referral bias are the limitations of the study. Given the flexibility of the classifications, we constructed a hybrid model based on the woman-characteristics system with additional layers of the other classifications. Indication-based classification answers why, Robson classification answers on whom, while through our hybrid model we get to know why and on whom cesarean deliveries are being performed.

  1. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  2. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    Science.gov (United States)

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  3. Evaluation of classification systems for nonspecific idiopathic orbital inflammation

    NARCIS (Netherlands)

    Bijlsma, Ward R.; van 't Hullenaar, Fleur C.; Mourits, Maarten P.; Kalmann, Rachel

    2012-01-01

    To systematically analyze existing classification systems for idiopathic orbital inflammation (IOI) and propose and test a new best practice classification system. A systematic literature search was conducted to find all studies that described and applied a classification system for IOI.

  4. An Online Multisensor Data Fusion Framework for Radar Emitter Classification

    Directory of Open Access Journals (Sweden)

    Dongqing Zhou

    2016-01-01

    Full Text Available Radar emitter classification is a special application of data clustering for classifying unknown radar emitters in airborne electronic support systems. In this paper, a novel online multisensor data fusion framework is proposed for radar emitter classification against the background of network-centric warfare. The framework is composed of local processing and multisensor fusion processing, from which rough and precise classification results are obtained, respectively. Moreover, the proposed algorithm needs neither prior knowledge nor a training process; it can dynamically update the number of clusters and the cluster centers when new pulses arrive. Finally, the experimental results show that the proposed framework is an efficacious way to solve the radar emitter classification problem in networked warfare.

  5. Global classification of human facial healthy skin using PLS discriminant analysis and clustering analysis.

    Science.gov (United States)

    Guinot, C; Latreille, J; Tenenhaus, M; Malvy, D J

    2001-04-01

    Today's classifications of healthy skin are predominantly based on a very limited number of skin characteristics, such as skin oiliness or susceptibility to sun exposure. The aim of the present analysis was to set up a global classification of healthy facial skin using mathematical models. This classification is based on clinical and biophysical skin characteristics and self-reported information related to the skin, as well as the results of a theoretical skin classification assessed separately for the frontal and malar zones of the face. In order to maximize the predictive power of the models with a minimum of variables, the Partial Least Squares (PLS) discriminant analysis method was used. The resulting PLS components were subjected to clustering analyses to identify the plausible number of clusters and to group the individuals according to their proximities. Using this approach, four PLS components could be constructed and six clusters were found relevant. Thus, from the 36 hypothetical combinations of the theoretical skin type classification, we arrived at a strengthened six-class proposal. Our data suggest that the association of PLS discriminant analysis and clustering methods leads to a valid and simple way to classify healthy human skin and represents a potentially useful tool for cosmetic and dermatological research.

  6. Joint Feature Selection and Classification for Multilabel Learning.

    Science.gov (United States)

    Huang, Jun; Li, Guorong; Huang, Qingming; Wu, Xindong

    2018-03-01

    Multilabel learning deals with examples having multiple class labels simultaneously. It has been applied to a variety of applications, such as text categorization and image annotation. A large number of algorithms have been proposed for multilabel learning, most of which concentrate on multilabel classification problems and only a few of them are feature selection algorithms. Current multilabel classification models are mainly built on a single data representation composed of all the features which are shared by all the class labels. Since each class label might be decided by some specific features of its own, and the problems of classification and feature selection are often addressed independently, in this paper, we propose a novel method which can perform joint feature selection and classification for multilabel learning, named JFSC. Different from many existing methods, JFSC learns both shared features and label-specific features by considering pairwise label correlations, and builds the multilabel classifier on the learned low-dimensional data representations simultaneously. A comparative study with state-of-the-art approaches manifests a competitive performance of our proposed method both in classification and feature selection for multilabel learning.

  7. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin

    2017-01-01

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  8. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    KAUST Repository

    Najibi, Seyed Morteza

    2017-02-08

    Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  9. Maxillectomy defects: a suggested classification scheme.

    Science.gov (United States)

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects were sourced from the internet through Google, Scopus and PubMed using the search terms maxillectomy defects classification. A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  10. Design of a hybrid model for cardiac arrhythmia classification based on Daubechies wavelet transform.

    Science.gov (United States)

    Rajagopal, Rekha; Ranganathan, Vidhyapriya

    2018-06-05

    Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor classifier (KNN) and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57: 1998 classification standard. Level 1 of the proposed model involves classification using the KNN and the classifier is trained with examples from all classes. Level 2 involves classification using an SVM and is trained specifically to classify overlapped classes. The final classification of a test heartbeat pertaining to a particular class is done using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model is able to achieve a high classification accuracy.
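
    A minimal sketch of the two-level decision described above, assuming synthetic heartbeat segments, the 'db4' wavelet, and an arbitrary set of "overlapped" classes; it illustrates the general recipe, not the authors' implementation.

```python
# Hedged sketch of the two-level KNN/SVM decision with Daubechies-wavelet
# statistics as features. Heartbeat data, 'db4', and the "overlapped" class
# set are assumptions for illustration, not the authors' implementation.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def wavelet_features(beat, wavelet="db4", level=4):
    """Mean, standard deviation and energy of each wavelet sub-band."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)
    stats = []
    for c in coeffs:
        stats += [np.mean(c), np.std(c), np.sum(c ** 2)]
    return np.array(stats)

rng = np.random.default_rng(0)
beats = rng.normal(size=(200, 256))        # stand-in heartbeat segments
labels = rng.integers(0, 5, size=200)      # 5 AAMI-style classes (assumed)
X = np.array([wavelet_features(b) for b in beats])

knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)   # level 1: all classes
overlapped = {1, 2}                                        # assumed overlapped set
mask = np.isin(labels, list(overlapped))
svm = SVC().fit(X[mask], labels[mask])                     # level 2: overlapped only

def classify(beat):
    f = wavelet_features(beat).reshape(1, -1)
    first = knn.predict(f)[0]
    return svm.predict(f)[0] if first in overlapped else first

print("predicted class:", classify(beats[0]))
```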

  11. A framework for product description classification in e-commerce

    NARCIS (Netherlands)

    Vandic, D.; Frasincar, F.; Kaymak, U.

    We propose the Hierarchical Product Classification (HPC) framework for the purpose of classifying products using a hierarchical product taxonomy. The framework uses a classification system with multiple classification nodes, each residing on a different level of the taxonomy. The innovative part of

  12. Stream Classification Tool User Manual: For Use in Applications in Hydropower-Related Evironmental Mitigation

    Energy Technology Data Exchange (ETDEWEB)

    McManamay, Ryan A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Troia, Matthew J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); DeRolph, Christopher R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Samu, Nicole M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-01-01

    Stream classifications are an inventory of different types of streams. Classifications help us explore similarities and differences among different types of streams, make inferences regarding stream ecosystem behavior, and communicate the complexities of ecosystems. We developed a nested, layered, and spatially contiguous stream classification to characterize the biophysical settings of stream reaches within the Eastern United States (~ 900,000 reaches). The classification is composed of five natural characteristics (hydrology, temperature, size, confinement, and substrate) along with several disturbance regime layers, each selected because of its relevance to hydropower mitigation. We developed the classification at the stream reach level using the National Hydrography Dataset Plus Version 1 (1:100k scale). The stream classification is useful to environmental mitigation for hydropower dams in multiple ways. First, it makes the regulatory process more efficient by providing an objective and data-rich means to identify meaningful mitigation actions. Second, the Stream Classification Tool (SCT) addresses data gaps by quickly providing an inventory of hydrology, temperature, morphology, and ecological communities not only for the immediate project area but also for surrounding streams. This includes identifying potential reference streams as those that are proximate to the hydropower facility and fall within the same class. These streams can potentially be used to identify ideal environmental conditions or desired ecological communities. In doing so, the classification provides context for how streams may function and respond to dam regulation, along with an overview of specific mitigation needs. Herein, we describe the methodology for developing each stream classification layer and provide a tutorial to guide applications of the classification (and associated data) in regulatory settings, such as hydropower (re)licensing.

  13. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  14. Quality-Oriented Classification of Aircraft Material Based on SVM

    Directory of Open Access Journals (Sweden)

    Hongxia Cai

    2014-01-01

    Full Text Available Existing material classifications were proposed to improve inventory management. However, different materials have different quality-related attributes, especially in the aircraft industry. In order to reduce cost without sacrificing quality, we propose a quality-oriented material classification system that considers material quality characteristics, quality cost, and quality influence. The Analytic Hierarchy Process supports feature selection and classification decisions. We use an improved Kraljic Portfolio Matrix to establish the three-dimensional classification model. The aircraft materials can be divided into eight types, including general type, key type, risk type, and leveraged type. To improve the classification accuracy for various materials, the Support Vector Machine (SVM) algorithm is introduced. Finally, we compare the SVM and a BP neural network in the application. The results show that the SVM algorithm is more efficient and accurate and that the quality-oriented material classification is valuable.

  15. Acute leukemia classification by ensemble particle swarm model selection.

    Science.gov (United States)

    Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Jesús A; Gómez-Gil, Pilar; Altamirano, Leopoldo; Reyes, Carlos A; Reta, Carolina; Rosales, Alejandro

    2012-07-01

    Acute leukemia is a malignant disease that affects a large proportion of the world population. Different types and subtypes of acute leukemia require different treatments. In order to assign the correct treatment, a physician must identify the leukemia type or subtype. Advanced and precise methods are available for identifying leukemia types, but they are very expensive and not available in most hospitals in developing countries. Thus, alternative methods have been proposed. An option explored in this paper is based on the morphological properties of bone marrow images, where features are extracted from medical images and standard machine learning techniques are used to build leukemia type classifiers. This paper studies the use of ensemble particle swarm model selection (EPSMS), which is an automated tool for the selection of classification models, in the context of acute leukemia classification. EPSMS is the application of particle swarm optimization to the exploration of the search space of ensembles that can be formed by heterogeneous classification models in a machine learning toolbox. EPSMS does not require prior domain knowledge and it is able to select highly accurate classification models without user intervention. Furthermore, specific models can be used for different classification tasks. We report experimental results for acute leukemia classification with real data and show that EPSMS outperformed the best results obtained using manually designed classifiers with the same data. The highest performance using EPSMS was of 97.68% for two-type classification problems and of 94.21% for more than two types problems. To the best of our knowledge, these are the best results reported for this data set. Compared with previous studies, these improvements were consistent among different type/subtype classification tasks, different features extracted from images, and different feature extraction regions. The performance improvements were statistically significant

  16. Scientific and General Subject Classifications in the Digital World

    CERN Document Server

    De Robbio, Antonella; Marini, A

    2001-01-01

    In the present work we discuss opportunities, problems, tools and techniques encountered when interconnecting discipline-specific subject classifications, primarily organized as search devices in bibliographic databases, with general classifications originally devised for book shelving in public libraries. We first state the fundamental distinction between topical (or subject) classifications and object classifications. Then we trace the structural limitations that have constrained subject classifications since their library origins, and the devices that were used to overcome the gap with genuine knowledge representation. After recalling some general notions on structure, dynamics and interferences of subject classifications and of the objects they refer to, we sketch a synthetic overview on discipline-specific classifications in Mathematics, Computing and Physics, on one hand, and on general classifications on the other. In this setting we present The Scientific Classifications Page, which collects groups of...

  17. Classification of lung sounds using higher-order statistics: A divide-and-conquer approach.

    Science.gov (United States)

    Naves, Raphael; Barbosa, Bruno H G; Ferreira, Danton D

    2016-06-01

    Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training. If the physician does not have the proper training, he/she will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds. We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used the cross-validation procedure to analyze the classifiers' performance and Tukey's Honestly Significant Difference criterion to compare the results. Our results showed that the Genetic Algorithms outperformed the Fisher's Discriminant Ratio for feature selection. Moreover, each lung class had a different signature pattern according to its cumulants, showing that HOS is a promising feature extraction tool for lung sounds. In addition, the proposed divide-and-conquer approach can accurately classify different types of lung sounds. The classification accuracy obtained by the best tree-based classifier was 98.1% on training data and 94.6% on validation data. The proposed approach achieved good results even using only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
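
    As a hedged illustration of the feature-extraction step, the sketch below computes zero-lag second-, third- and fourth-order cumulants of synthetic frames and feeds them to a k-NN classifier; the GA/FDR selection stage and the tree of classifiers described in the abstract are omitted.

```python
# Illustration only: zero-lag cumulant features on synthetic frames plus k-NN;
# the paper's GA/FDR selection and tree-structured classifiers are omitted.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def cumulant_features(x):
    """Second-, third- and fourth-order cumulants at zero lag."""
    x = x - x.mean()
    c2 = np.mean(x ** 2)
    c3 = np.mean(x ** 3)
    c4 = np.mean(x ** 4) - 3.0 * c2 ** 2
    return np.array([c2, c3, c4])

rng = np.random.default_rng(1)
signals = rng.normal(size=(150, 1024))     # stand-in lung-sound frames
labels = rng.integers(0, 5, size=150)      # normal, crackles, wheezes, ... (assumed)
X = np.array([cumulant_features(s) for s in signals])

scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```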

  18. A New Method for Solving Supervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2014-01-01

    Full Text Available Supervised data classification is one of the techniques used to extract nontrivial information from data. Classification is a widely used technique in various fields, including data mining, industry, medicine, science, and law. This paper considers a new algorithm for supervised data classification problems associated with cluster analysis. The mathematical formulation of this algorithm is based on nonsmooth, nonconvex optimization. A new algorithm is used to solve this optimization problem; it employs a derivative-free technique that is both robust and efficient. To improve classification performance and the efficiency of generating the classification model, a new feature selection algorithm based on convex programming techniques is suggested. The proposed methods are tested on real-world datasets. Results of numerical experiments are presented that demonstrate the effectiveness of the proposed algorithms.

  19. Exploring different approaches for music genre classification

    Directory of Open Access Journals (Sweden)

    Antonio Jose Homsi Goulart

    2012-07-01

    Full Text Available In this letter, we present different approaches for music genre classification. The proposed techniques, which are composed of a feature extraction stage followed by a classification procedure, explore both the variations of parameters used as input and the classifier architecture. Tests were carried out with three styles of music, namely blues, classical, and lounge, which are considered informally by some musicians as being “big dividers” among music genres, showing the efficacy of the proposed algorithms and establishing a relationship between the relevance of each set of parameters for each music style and each classifier. In contrast to other works, entropies and fractal dimensions are the features adopted for the classifications.

  20. Automatic Classification of Attacks on IP Telephony

    Directory of Open Access Journals (Sweden)

    Jakub Safarik

    2013-01-01

    Full Text Available This article proposes an algorithm for automatic analysis of attack data in an IP telephony network with a neural network. Data for the analysis are gathered from various monitoring applications running in the network. These monitoring systems are a typical part of today's networks, but the information they provide is usually used only after an attack. Automatic classification of IP telephony attacks enables near real-time classification and the countering or mitigation of potential attacks. The classification uses the proposed neural network, and the article covers the design of the neural network and its practical implementation. It also describes methods for neural network learning and the data-gathering functions of the honeypot application.

  1. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    Science.gov (United States)

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  2. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Science.gov (United States)

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
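
    A rough sketch of the overall recipe, assuming a tiny synthetic cube: a distance-weighted (Gaussian) neighbourhood aggregation stands in for the data-field potential, the spatial and spectral features are fused linearly, and a random forest performs the final classification. The fusion weight and all data are assumptions.

```python
# Illustrative sketch only: Gaussian neighbourhood weighting as a stand-in
# for the data-field potential, linear spectral-spatial fusion, RF classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, B = 40, 40, 30                       # small synthetic hyperspectral cube
cube = rng.normal(size=(H, W, B))
labels = rng.integers(0, 4, size=(H, W))   # 4 assumed land-cover classes

def spatial_feature(cube, sigma=1.0, radius=2):
    """Distance-weighted aggregation of each pixel's neighbourhood."""
    out = np.zeros_like(cube)
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))
            out += w * np.roll(cube, shift=(dy, dx), axis=(0, 1))
            total += w
    return out / total

alpha = 0.6                                # assumed fusion weight
fused = alpha * cube + (1.0 - alpha) * spatial_feature(cube)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused.reshape(-1, B), labels.ravel())
print("training accuracy:", clf.score(fused.reshape(-1, B), labels.ravel()))
```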

  3. Towards secondary fingerprint classification

    CSIR Research Space (South Africa)

    Msiza, IS

    2011-07-01

    Full Text Available an accuracy figure of 76.8%. This small difference between the two figures is indicative of the validity of the proposed secondary classification module. Keywords: fingerprint core; fingerprint delta; primary classification; secondary classification ..., namely, the fingerprint core and the fingerprint delta. Forensically, a fingerprint core is defined as the innermost turning point where the fingerprint ridges form a loop, while the fingerprint delta is defined as the point where these ridges form a...

  4. Lauren classification and individualized chemotherapy in gastric cancer.

    Science.gov (United States)

    Ma, Junli; Shen, Hong; Kapesa, Linda; Zeng, Shan

    2016-05-01

    Gastric cancer is one of the most common malignancies worldwide. During the last 50 years, the histological classification of gastric carcinoma has been largely based on Lauren's criteria, in which gastric cancer is classified into two major histological subtypes, namely intestinal type and diffuse type adenocarcinoma. This classification was introduced in 1965, and remains currently widely accepted and employed, since it constitutes a simple and robust classification approach. The two histological subtypes of gastric cancer proposed by the Lauren classification exhibit a number of distinct clinical and molecular characteristics, including histogenesis, cell differentiation, epidemiology, etiology, carcinogenesis, biological behaviors and prognosis. Gastric cancer exhibits varied sensitivity to chemotherapy drugs and significant heterogeneity; therefore, the disease may be a target for individualized therapy. The Lauren classification may provide the basis for individualized treatment for advanced gastric cancer, which is increasingly gaining attention in the scientific field. However, few studies have investigated individualized treatment that is guided by pathological classification. The aim of the current review is to analyze the two major histological subtypes of gastric cancer, as proposed by the Lauren classification, and to discuss the implications of this for personalized chemotherapy.

  5. Unsupervised classification of variable stars

    Science.gov (United States)

    Valenzuela, Lucas; Pichara, Karim

    2018-03-01

    During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a lot of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating insufficient training sets compared with the large amounts of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an untraditional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.

  6. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is still poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose to use the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is critically important to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistic, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
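
    A minimal sketch of the FAIR idea for two classes on simulated Gaussian data: features are ranked by the two-sample t-statistic, the top m are kept, and the independence rule (nearest centroid with diagonal variances) classifies a new point. The choice m = 20 is an assumption rather than the paper's error-bound criterion.

```python
# Hedged sketch of the FAIR recipe (assumptions: Gaussian toy data, m = 20).
import numpy as np

rng = np.random.default_rng(0)
p, n = 1000, 60                                 # many features, few samples
X0 = rng.normal(0.0, 1.0, size=(n, p))          # class 0
X1 = rng.normal(0.0, 1.0, size=(n, p))          # class 1
X1[:, :15] += 1.0                               # only 15 features truly differ

def t_statistics(X0, X1):
    """Two-sample t-statistic for every feature."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    v0, v1 = X0.var(axis=0, ddof=1), X1.var(axis=0, ddof=1)
    return (m1 - m0) / np.sqrt(v0 / len(X0) + v1 / len(X1))

m = 20                                          # assumed number of kept features
selected = np.argsort(-np.abs(t_statistics(X0, X1)))[:m]

def independence_rule(x, X0, X1, idx):
    """Nearest centroid with diagonal (pooled) variances on selected features."""
    x, A0, A1 = x[idx], X0[:, idx], X1[:, idx]
    pooled_var = 0.5 * (A0.var(axis=0, ddof=1) + A1.var(axis=0, ddof=1))
    d0 = np.sum((x - A0.mean(axis=0)) ** 2 / pooled_var)
    d1 = np.sum((x - A1.mean(axis=0)) ** 2 / pooled_var)
    return int(d1 < d0)

test = rng.normal(0.0, 1.0, size=p)
test[:15] += 1.0                                # a class-1-like test point
print("predicted class:", independence_rule(test, X0, X1, selected))
```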

  7. Churn classification model for local telecommunication company ...

    African Journals Online (AJOL)

    ... model based on the Rough Set Theory to classify customer churn. The results of the study show that the proposed Rough Set classification model outperforms the existing models and contributes to significant accuracy improvement. Keywords: customer churn; classification model; telecommunication industry; data mining;

  8. Optimal ABC inventory classification using interval programming

    NARCIS (Netherlands)

    Rezaei, J.; Salimi, N.

    2015-01-01

    Inventory classification is one of the most important activities in inventory management, whereby inventories are classified into three or more classes. Several inventory classifications have been proposed in the literature, almost all of which have two main shortcomings in common. That is, the

  9. CLASSIFICATION ALGORITHMS FOR BIG DATA ANALYSIS, A MAP REDUCE APPROACH

    Directory of Open Access Journals (Sweden)

    V. A. Ayma

    2015-03-01

    Full Text Available For many years, the scientific community has been concerned with how to increase the accuracy of different classification methods, and major achievements have been made so far. Besides this issue, the increasing amount of data that is being generated every day by remote sensors raises more challenges to be overcome. In this work, a tool within the scope of the InterIMAGE Cloud Platform (ICP), which is an open-source, distributed framework for automatic image interpretation, is presented. The tool, named ICP: Data Mining Package, is able to perform supervised classification procedures on huge amounts of data, usually referred to as big data, on a distributed infrastructure using Hadoop MapReduce. The tool has four classification algorithms implemented, taken from WEKA's machine learning library, namely: Decision Trees, Naïve Bayes, Random Forest and Support Vector Machines (SVM). The results of an experimental analysis using an SVM classifier on data sets of different sizes for different cluster configurations demonstrate the potential of the tool, as well as aspects that affect its performance.
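
    The ICP package itself runs WEKA learners on Hadoop MapReduce; purely as an illustration of the map/partition-then-reduce/aggregate pattern, the sketch below mimics it in-process with scikit-learn SVMs trained on data partitions and combined by majority vote. It is not the ICP code.

```python
# Illustration of the map/reduce pattern only; the actual tool runs WEKA
# learners on Hadoop MapReduce rather than in-process scikit-learn models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, y_train = X[:2500], y[:2500]
X_test, y_test = X[2500:], y[2500:]

def map_phase(X, y, n_partitions=5):
    """Each 'mapper' trains one base classifier on its data partition."""
    models = []
    for Xc, yc in zip(np.array_split(X, n_partitions),
                      np.array_split(y, n_partitions)):
        models.append(SVC().fit(Xc, yc))
    return models

def reduce_phase(models, X):
    """The 'reducer' aggregates the mappers' predictions by majority vote."""
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

models = map_phase(X_train, y_train)
pred = reduce_phase(models, X_test)
print("held-out accuracy:", (pred == y_test).mean())
```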

  10. CCM: A Text Classification Method by Clustering

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, a new Cluster based Classification Model (CCM) for suspicious email detection and other text classification tasks is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on a terrorism domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...

  11. A risk informed safety classification for a Nordic NPP

    International Nuclear Information System (INIS)

    Jaenkaelae, K.

    2002-01-01

    The report describes a study to develop a safety classification proposal or classification recommendations based on risks for selected equipment of a nuclear power plant. The application plant in this work is Loviisa NPP unit 1. The safety classification proposals are to be considered as an exercise in this pilot study and do not necessarily represent final proposals in a real situation. Comparisons to the original safety classifications and technical specifications were made. The study concludes that it is possible to change safety classes, or the safety significances considered in technical specifications and in-service inspections, in both directions without endangering safety, and in some cases even improving it. (au)

  12. Image Processing Tools for Improved Visualization and Analysis of Remotely Sensed Images for Agriculture and Forest Classifications

    OpenAIRE

    SINHA G. R.

    2017-01-01

    This paper suggests Image Processing tools for improved visualization and better analysis of remotely sensed images. Methods are already available in the literature for this purpose, but the most important limitation is their lack of robustness. We propose an optimal method for image enhancement using fuzzy-based approaches and a few optimization tools. The segmented images subsequently obtained after de-noising will be classified into distinct information and th...

  13. Land Cover Classification from Multispectral Data Using Computational Intelligence Tools: A Comparative Study

    Directory of Open Access Journals (Sweden)

    André Mora

    2017-11-01

    Full Text Available This article discusses how computational intelligence techniques are applied to fuse spectral images into a higher level image of land cover distribution for remote sensing, specifically for satellite image classification. We compare a fuzzy-inference method with two other computational intelligence methods, decision trees and neural networks, using a case study of land cover classification from satellite images. Further, an unsupervised approach based on k-means clustering has been also taken into consideration for comparison. The fuzzy-inference method includes training the classifier with a fuzzy-fusion technique and then performing land cover classification using reinforcement aggregation operators. To assess the robustness of the four methods, a comparative study including three years of land cover maps for the district of Mandimba, Niassa province, Mozambique, was undertaken. Our results show that the fuzzy-fusion method performs similarly to decision trees, achieving reliable classifications; neural networks suffer from overfitting; while k-means clustering constitutes a promising technique to identify land cover types from unknown areas.

  14. Classification of pyodestructive pulmonary diseases

    International Nuclear Information System (INIS)

    Muromskij, Yu.A.; Semivolkov, V.I.; Shlenova, L.A.

    1993-01-01

    A classification of pyodestructive lung diseases, their complications and outcomes is proposed, which makes it possible for physicians engaged in studying respiratory organ pathology to orient themselves in problems of diagnosis and treatment tactics. The classification is developed on the basis of studying the disease anamnesis and its clinical course, as well as roentgenological and morphological findings, in more than 10,000 patients.

  15. Social Media Text Classification by Enhancing Well-Formed Text Trained Model

    Directory of Open Access Journals (Sweden)

    Phat Jotikabukkana

    2016-09-01

    Full Text Available Social media are a powerful communication tool in our era of digital information. The large amount of user-generated data is a useful novel source of data, even though it is not easy to extract the treasures from this vast and noisy trove. Since classification is an important part of text mining, many techniques have been proposed to classify this kind of information. We developed an effective technique of social media text classification by semi-supervised learning utilizing an online news source consisting of well-formed text. The computer first automatically extracts news categories, well-categorized by publishers, as classes for topic classification. A bag of words taken from news articles provides the initial keywords related to their category in the form of word vectors. The principal task is to retrieve a set of new productive keywords. Term Frequency-Inverse Document Frequency weighting (TF-IDF and Word Article Matrix (WAM are used as main methods. A modification of WAM is recomputed until it becomes the most effective model for social media text classification. The key success factor was enhancing our model with effective keywords from social media. A promising result of 99.50% accuracy was achieved, with more than 98.5% of Precision, Recall, and F-measure after updating the model three times.
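
    A hedged sketch of the pipeline on toy documents: TF-IDF category profiles learned from well-formed news text act as a simple stand-in for the Word Article Matrix, social-media posts are labelled by cosine similarity, and one self-training round folds the most confident post back into the model. The documents and category names are assumptions.

```python
# Hedged sketch: TF-IDF category profiles (a stand-in for the Word Article
# Matrix), cosine-similarity labelling of posts, one self-training update.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

news = ["the team won the championship game",
        "parliament passed the budget law",
        "the striker scored twice in the game",
        "the minister announced new taxes"]
news_labels = np.array([0, 1, 0, 1])       # 0 = sport, 1 = politics (assumed)
posts = ["what a game last night!!", "taxes going up again smh"]

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(news).toarray()

def category_profiles(M, labels):
    """Mean TF-IDF vector per category."""
    return np.vstack([M[labels == c].mean(axis=0) for c in np.unique(labels)])

profiles = category_profiles(doc_matrix, news_labels)
P = vec.transform(posts).toarray()
sims = cosine_similarity(P, profiles)
pred = sims.argmax(axis=1)

# Self-training update: fold the most confident post back into its category.
best = sims.max(axis=1).argmax()
doc_matrix = np.vstack([doc_matrix, P[best]])
news_labels = np.append(news_labels, pred[best])
profiles = category_profiles(doc_matrix, news_labels)
print("post labels:", pred)
```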

  16. The Improved Methods of Critical Component Classification for the SSCs of New NPP

    International Nuclear Information System (INIS)

    Lee, Sang Dae; Yeom, Dong Un; Hyun, Jin Woo

    2010-01-01

    Functional Importance Determination (FID) process classifies the components of a plant into four groups: Critical A, Critical B, Minor and No Impact. The output of FID can be used as the decision-making tool for maintenance work priority and the input data for preventive maintenance implementation. FID applied to new Nuclear Power Plant (NPP) can be accomplished by utilizing the function analysis results and safety significance determination results of Maintenance Rule (MR) program. Using Shin-Kori NPP as an example, this paper proposes the advanced critical component classification methods for FID utilizing MR scoping results

  17. Classification of Noisy Data: An Approach Based on Genetic Algorithms and Voronoi Tessellation

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Knudsen, Torben

    Classification is one of the major constituents of the data-mining toolkit. The well-known methods for classification are built on either the principle of logic or statistical/mathematical reasoning for classification. In this article we propose: (1) a different strategy, which is based on the po...

  18. An Active Learning Framework for Hyperspectral Image Classification Using Hierarchical Segmentation

    Science.gov (United States)

    Zhang, Zhou; Pasolli, Edoardo; Crawford, Melba M.; Tilton, James C.

    2015-01-01

    Augmenting spectral data with spatial information for image classification has recently gained significant attention, as classification accuracy can often be improved by extracting spatial information from neighboring pixels. In this paper, we propose a new framework in which active learning (AL) and hierarchical segmentation (HSeg) are combined for spectral-spatial classification of hyperspectral images. The spatial information is extracted from a best segmentation obtained by pruning the HSeg tree using a new supervised strategy. The best segmentation is updated at each iteration of the AL process, thus taking advantage of informative labeled samples provided by the user. The proposed strategy incorporates spatial information in two ways: 1) concatenating the extracted spatial features and the original spectral features into a stacked vector and 2) extending the training set using a self-learning-based semi-supervised learning (SSL) approach. Finally, the two strategies are combined within an AL framework. The proposed framework is validated with two benchmark hyperspectral datasets. Higher classification accuracies are obtained by the proposed framework with respect to five other state-of-the-art spectral-spatial classification approaches. Moreover, the effectiveness of the proposed pruning strategy is also demonstrated relative to the approaches based on a fixed segmentation.

  19. Vehicle Maneuver Detection with Accelerometer-Based Classification

    Directory of Open Access Journals (Sweden)

    Javier Cervantes-Villanueva

    2016-09-01

    Full Text Available In the mobile computing era, smartphones have become instrumental tools to develop innovative mobile context-aware systems. In that sense, their usage in the vehicular domain eases the development of novel and personal transportation solutions. In this frame, the present work introduces an innovative mechanism to perceive the current kinematic state of a vehicle on the basis of the accelerometer data from a smartphone mounted in the vehicle. Unlike previous proposals, the introduced architecture targets the computational limitations of such devices to carry out the detection process following an incremental approach. For its realization, we have evaluated different classification algorithms to act as agents within the architecture. Finally, our approach has been tested with a real-world dataset collected by means of the ad hoc mobile application developed.

  20. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM has shown huge potential for face recognition since it first emerged. Linear regression classification (LRC method and collaborative representation classification (CRC method are two well-known RBCMs. LRC and CRC exploit training samples of each class and all the training samples to represent the testing sample, respectively, and subsequently conduct classification on the basis of the representation residual. LRC method can be viewed as a “locality representation” method because it just uses the training samples of each class to represent the testing sample and it cannot embody the effectiveness of the “globality representation.” On the contrary, it seems that CRC method cannot own the benefit of locality of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
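
    A minimal numpy sketch of the two residuals being combined: the class-wise least-squares residual of LRC ("locality") and the collaborative ridge-regression residual of CRC ("globality"), fused here by a simple weighted sum. The weight lam and the toy data are assumptions; the paper's actual integration scheme may differ.

```python
# Sketch of combining LRC (class-wise) and CRC (collaborative) residuals.
# Fusion weight lam and synthetic data are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 10, 50
train = [rng.normal(loc=c, size=(dim, per_class)) for c in range(n_classes)]
test = rng.normal(loc=1, size=dim)               # should resemble class 1

def lrc_residuals(test, train):
    """Represent the test sample with each class's training samples only."""
    res = []
    for Xc in train:
        beta, *_ = np.linalg.lstsq(Xc, test, rcond=None)
        res.append(np.linalg.norm(test - Xc @ beta))
    return np.array(res)

def crc_residuals(test, train, mu=0.01):
    """Represent the test sample collaboratively with all training samples."""
    X = np.hstack(train)
    rho = np.linalg.solve(X.T @ X + mu * np.eye(X.shape[1]), X.T @ test)
    res, start = [], 0
    for Xc in train:
        part = rho[start:start + Xc.shape[1]]
        res.append(np.linalg.norm(test - Xc @ part))
        start += Xc.shape[1]
    return np.array(res)

lam = 0.5                                        # assumed locality/globality weight
scores = lam * lrc_residuals(test, train) + (1 - lam) * crc_residuals(test, train)
print("predicted class:", scores.argmin())
```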

  1. Sow-activity classification from acceleration patterns

    DEFF Research Database (Denmark)

    Escalante, Hugo Jair; Rodriguez, Sara V.; Cordero, Jorge

    2013-01-01

    This paper describes a supervised learning approach to sow-activity classification from accelerometer measurements. In the proposed methodology, pairs of accelerometer measurements and activity types are considered as labeled instances of a usual supervised classification task. Under this scenario, sow-activity classification can be approached with standard machine learning methods for pattern classification. Individual predictions for elements of time series of arbitrary length are combined to classify each series as a whole. An extensive comparison of representative learning algorithms, including neural networks, support vector machines, and ensemble methods, is presented. Experimental results are reported using a data set for sow-activity classification collected in a real production herd. The data set, which has been widely used in related works, includes measurements from active (Feeding...

  2. Asynchronous data-driven classification of weapon systems

    International Nuclear Information System (INIS)

    Jin, Xin; Mukherjee, Kushal; Gupta, Shalabh; Ray, Asok; Phoha, Shashi; Damarla, Thyagaraju

    2009-01-01

    This communication addresses real-time weapon classification by analysis of asynchronous acoustic data, collected from microphones on a sensor network. The weapon classification algorithm consists of two parts: (i) feature extraction from time-series data using symbolic dynamic filtering (SDF), and (ii) pattern classification based on the extracted features using the language measure (LM) and support vector machine (SVM). The proposed algorithm has been tested on field data, generated by firing of two types of rifles. The results of analysis demonstrate high accuracy and fast execution of the pattern classification algorithm with low memory requirements. Potential applications include simultaneous shooter localization and weapon classification with soldier-wearable networked sensors. (rapid communication)
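
    As a simplified illustration of the feature-extraction idea, the sketch below symbolizes each time series by uniform amplitude partitioning and uses the symbol-occurrence probability vector as the SVM feature; real SDF employs maximum-entropy partitioning and state-transition machines, and the acoustic data here are synthetic stand-ins.

```python
# Simplified stand-in for SDF feature extraction: uniform amplitude partitioning
# and symbol-occurrence probabilities as features for an SVM. Signals are
# synthetic; real SDF uses maximum-entropy partitioning and state machines.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def symbol_probabilities(x, alphabet_size=8):
    """Symbolise the series and return the symbol probability vector."""
    edges = np.linspace(x.min(), x.max(), alphabet_size + 1)[1:-1]
    symbols = np.digitize(x, edges)
    return np.bincount(symbols, minlength=alphabet_size) / len(x)

rng = np.random.default_rng(2)
class0 = rng.normal(size=(60, 2000))       # stand-in signatures, weapon type A
class1 = rng.laplace(size=(60, 2000))      # stand-in signatures, weapon type B
X = np.array([symbol_probabilities(s) for s in np.vstack([class0, class1])])
y = np.array([0] * 60 + [1] * 60)

print("cv accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```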

  3. Artificial intelligence tools decision support systems in condition monitoring and diagnosis

    CERN Document Server

    Galar Pascual, Diego

    2015-01-01

    Artificial Intelligence Tools: Decision Support Systems in Condition Monitoring and Diagnosis discusses various white- and black-box approaches to fault diagnosis in condition monitoring (CM). This indispensable resource: Addresses nearest-neighbor-based, clustering-based, statistical, and information theory-based techniques Considers the merits of each technique as well as the issues associated with real-life application Covers classification methods, from neural networks to Bayesian and support vector machines Proposes fuzzy logic to explain the uncertainties associated with diagnostic processes Provides data sets, sample signals, and MATLAB® code for algorithm testing Artificial Intelligence Tools: Decision Support Systems in Condition Monitoring and Diagnosis delivers a thorough evaluation of the latest AI tools for CM, describing the most common fault diagnosis techniques used and the data acquired when these techniques are applied.

  4. Cerebro-costo-mandibular syndrome: prognosis and proposal for classification.

    Science.gov (United States)

    Nagasawa, Hiroyuki; Yamamoto, Yutaka; Kohno, Yoshinori

    2010-09-01

    Cerebro-costo-mandibular syndrome (CCMS) is a very rare syndrome characterized by micrognathia and posterior rib gaps, with a poor prognosis. To date, only 75 cases have been reported worldwide. The overall survival rate for patients with this disorder has not been reported, and a classification of the patients on the basis of prognosis is not yet available. The present study analyzed the findings and prognoses of past patients and documented a new case of CCMS. Previously published case reports and personal communications were used to establish the prognosis and classification of CCMS. The occurrence ratios of rib gap defects and of missing ribs were examined. Patients were divided into the following three groups according to their life span: lethal type, where the patients died before 1 month; severe type, where the patients lived for 1-12 months; and mild type, where they survived for more than 1 year. A comparison was made of the number of rib gaps, missing ribs, and the rib gap ratio (defined as the number of rib gaps divided by the number of all existing ribs) among these three groups. A significant difference in the number of rib defects between the lethal type and the other types was noted. The shorter life span of severe-type patients, compared with mild-type patients, was attributed to severe respiratory infections. CCMS can be classified into three categories--lethal, severe, and mild--according to the severity of the symptoms and prognosis.

  5. New guidelines for dam safety classification

    International Nuclear Information System (INIS)

    Dascal, O.

    1999-01-01

    Elements of recommended new guidelines for the safety classification of dams are outlined. Arguments are provided for the view that dam classification should comprise more than one system, as follows: (a) classification for selection of design criteria, operation procedures and emergency measures plans, based on the potential consequences of a dam failure - the hazard classification of water retaining structures; (b) classification for establishment of surveillance activities and for safety evaluation of dams, based on the probability and consequences of failure - the risk classification of water retaining structures; and (c) classification for establishment of water management plans, for safety evaluation of the entire project, for preparation of emergency measures plans, for definition of the frequency and extent of maintenance operations, and for evaluation of changes and modifications required - the hazard classification of the project. The hazard classification of the dam considers, as consequences, mainly the loss of life or persons in jeopardy and the property damage to third parties. The difficulty in determining the risk classification of the dam lies in the fact that no tool exists to evaluate the probability of the dam's failure. To overcome this, the probability of failure can be substituted by a set of dam characteristics that express the failure potential of the dam and its foundation. The hazard classification of the entire project is based on the probable consequences of dam failure influencing: loss of life, persons in jeopardy, property and environmental damage. The classification scheme is illustrated for dam-threatening events such as earthquakes and floods. 17 refs., 5 tabs

  6. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Directory of Open Access Journals (Sweden)

    Lev V. Utkin

    2012-01-01

    Full Text Available A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  7. Dense Iterative Contextual Pixel Classification using Kriging

    DEFF Research Database (Denmark)

    Ganz, Melanie; Loog, Marco; Brandt, Sami

    2009-01-01

    In medical applications, segmentation has become an ever more important task. One of the competitive schemes to perform such segmentation is by means of pixel classification. Simple pixel-based classification schemes can be improved by incorporating contextual label information. Various methods have been proposed to this end, e.g., iterative contextual pixel classification, iterated conditional modes, and other approaches related to Markov random fields. A problem of these methods, however, is their computational complexity, especially when dealing with high-resolution images in which relatively long range interactions may play a role. We propose a new method based on Kriging that makes it possible to include such long range interactions, while keeping the computations manageable when dealing with large medical images.

  8. The decision tree approach to classification

    Science.gov (United States)

    Wu, C.; Landgrebe, D. A.; Swain, P. H.

    1975-01-01

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both the classification accuracy and the computation efficiency. Dimensionality in pattern recognition is discussed and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.

  9. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Directory of Open Access Journals (Sweden)

    Wei Jin

    2016-12-01

    Full Text Available Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis is one of the research hotspots recently. Since satellites sense the clouds remotely from space, and different cloud types often overlap and convert into each other, there must be some fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noises, while traditional cloud classification methods are sensitive to noises and outliers; it is hard for traditional cloud classification methods to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC, atoms in training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experiment results on FY-2G satellite cloud image show that, the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  10. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    Science.gov (United States)

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis is one of the research hotspots recently. Since satellites sense the clouds remotely from space, and different cloud types often overlap and convert into each other, there must be some fuzziness and uncertainty in satellite cloud imagery. Satellite observation is susceptible to noises, while traditional cloud classification methods are sensitive to noises and outliers; it is hard for traditional cloud classification methods to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effective combination of the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experiment results on FY-2G satellite cloud image show that, the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  11. A Novel Feature Level Fusion for Heart Rate Variability Classification Using Correntropy and Cauchy-Schwarz Divergence.

    Science.gov (United States)

    Goshvarpour, Ateke; Goshvarpour, Atefeh

    2018-04-30

    Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature-level fusion approach was proposed. First, using information theory, two similarity indicators of the signal were extracted, namely correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbors (kNN), the performance of each index in classifying the HRV signals of meditators and non-meditators was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on statistical p-values. The performance of HRV classification using the combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability and proficiency of the division and weighted sum rules in improving classifier accuracy.
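
    A hedged sketch of the two similarity features and the p-value-driven weighted-sum fusion; the reference template, kernel width, histogram binning, and the exact p-value-to-weight mapping are assumptions, and only the general recipe follows the abstract.

```python
# Hedged sketch: correntropy and Cauchy-Schwarz divergence against an assumed
# reference template, fused by a weighted sum with p-value-derived weights.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier

def correntropy(x, y, sigma=1.0):
    """Mean Gaussian-kernel similarity between two equal-length series."""
    return np.mean(np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2)))

def cauchy_schwarz_divergence(x, y, bins=20):
    """CS divergence between the two empirical (histogram) distributions."""
    lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
    p, _ = np.histogram(x, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    return -np.log(np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q)) + 1e-12)

rng = np.random.default_rng(3)
reference = rng.normal(size=500)                 # assumed HRV template
signals = [rng.normal(loc=0.2 * (i % 2), size=500) for i in range(80)]
labels = np.array([i % 2 for i in range(80)])    # meditator / non-meditator

f1 = np.array([correntropy(s, reference) for s in signals])
f2 = np.array([cauchy_schwarz_divergence(s, reference) for s in signals])

# Weighted-sum fusion; each weight is derived from that feature's p-value.
w = np.array([1.0 - ttest_ind(f[labels == 0], f[labels == 1]).pvalue
              for f in (f1, f2)])
fused = (w[0] * f1 + w[1] * f2).reshape(-1, 1)

knn = KNeighborsClassifier(n_neighbors=3).fit(fused, labels)
print("training accuracy:", knn.score(fused, labels))
```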

  12. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns in automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we proposed the multi-scale energy distribution (MSED) for object shape description and explored the relationship between an object's shape and its boundary energy distribution at multiple scales for shape extraction. MSED captures not only the dominant energy at lower scales, which represents the primary shape information, but also the subordinate energy at higher scales, which represents local shape information. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We addressed the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction; 2) shape resampling and shape feature normalization; and 3) energy decomposition by wavelets and classification by a BP neural network. In the resampling step, 256 boundary pixels are resampled from a curve that approximates the original boundary using a cubic spline, in order to obtain uniform raw data. A probability function was defined and an effective method for selecting a start point through maximum expectation was given, which overcomes the inconvenience of traditional methods and gives the descriptor rotation invariance. The experimental results show that the method separates normal citrus from seriously abnormal fruit relatively well, with a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.
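
    A rough sketch of the descriptor on a synthetic outline: the closed boundary is resampled to 256 points with a periodic cubic spline, and the wavelet energy per scale of the centroid-distance signature serves as the multi-scale energy feature. The start-point normalization and the BP neural network classifier from the abstract are omitted, and the outline is an assumption.

```python
# Sketch of the boundary descriptor on a synthetic outline; start-point
# normalisation and the BP neural network classifier are not included.
import numpy as np
import pywt
from scipy.interpolate import splprep, splev

def boundary_energy_features(xs, ys, n_points=256, wavelet="db2", level=5):
    """Resample a closed boundary with a periodic cubic spline and return
    the wavelet energy per scale of its centroid-distance signature."""
    tck, _ = splprep([xs, ys], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_points, endpoint=False)
    bx, by = splev(u, tck)
    r = np.hypot(bx - np.mean(bx), by - np.mean(by))
    coeffs = pywt.wavedec(r, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic "citrus" outline: a slightly deformed circle, explicitly closed.
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
xs = np.cos(t) * (1.0 + 0.05 * np.cos(3.0 * t))
ys = np.sin(t) * (1.0 + 0.05 * np.cos(3.0 * t))
xs = np.append(xs, xs[0])                  # close the contour exactly
ys = np.append(ys, ys[0])
print(boundary_energy_features(xs, ys))
```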

  13. A proposed new classification for the renal collecting system of cattle.

    Science.gov (United States)

    Pereira-Sampaio, Marco A; Bagetti Filho, Helio J S; Carvalho, Francismar S; Sampaio, Francisco J B; Henry, Robert W

    2010-11-01

    To evaluate the intrarenal anatomy of kidneys obtained from cattle and to propose a new classification for the renal collecting system of cattle. 37 kidneys from 20 adult male mixed-breed cattle. Intrarenal anatomy was evaluated by the use of 3-D endocasts made of the kidneys. The number of renal lobes and minor renal calyces in each kidney and each renal region (cranial pole, caudal pole, and hilus) was quantified. The renal pelvis was evident in all casts and was classified into 2 types (nondilated [28/37 {75.7%}] or dilated [9/37 {24.3%}]). All casts had a major renal calyx associated with the cranial pole and the caudal pole. The number of minor renal calices per kidney ranged from 13 to 64 (mean, 22.7). There was a significant correlation between the number of renal lobes and the number of minor renal calices for the entire kidney, the cranial pole region, and the hilus region; however, there was not a similar significant correlation for the caudal pole region. Major and minor renal calices were extremely narrow, compared with major and minor renal calices in pigs and humans. The renal collecting system of cattle, with a renal pelvis and 2 major renal calices connected to several minor renal calices by an infundibulum, differed substantially from the renal collecting system of pigs and humans. From a morphological standpoint, the kidneys of cattle were not suitable for use as a model in endourologic research and training.

  14. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons.

    Science.gov (United States)

    Steinbiss, Sascha; Kastens, Sascha; Kurtz, Stefan

    2012-11-07

    Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining the output of software for predicting LTR retrotransposons.

  15. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    Science.gov (United States)

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation.
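
    The sketch below illustrates the general sparse representation classification idea with a naive unsupervised dictionary update, in which a confidently classified test trial is appended to the dictionary. The OMP solver, the residual-margin confidence rule, and all parameters are assumptions of this sketch, not the exact scheme evaluated in the paper.

    ```python
    # Hedged sketch of SRC with a simple unsupervised dictionary update.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def src_classify(D, D_labels, x, n_nonzero=10):
        """D: (n_features, n_atoms) dictionary; x: (n_features,) test sample."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
        omp.fit(D, x)
        alpha = omp.coef_
        classes = np.unique(D_labels)
        residuals = np.array(
            [np.linalg.norm(x - D[:, D_labels == c] @ alpha[D_labels == c]) for c in classes]
        )
        return classes[residuals.argmin()], residuals

    def update_dictionary(D, D_labels, x, pred, residuals, margin=0.2):
        """Append x to its predicted class when the residual margin is large enough."""
        r = np.sort(residuals)
        if (r[1] - r[0]) / r[1] > margin:
            D = np.column_stack([D, x / np.linalg.norm(x)])
            D_labels = np.append(D_labels, pred)
        return D, D_labels
    ```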

  16. Classification of Building Object Types

    DEFF Research Database (Denmark)

    Jørgensen, Kaj Asbjørn

    2011-01-01

    made. This is certainly the case in the Danish development. Based on the theories about these abstraction mechanisms, the basic principles for classification systems are presented and the observed misconceptions are analyses and explained. Furthermore, it is argued that the purpose of classification...... systems has changed and that new opportunities should be explored. Some proposals for new applications are presented and carefully aligned with IT opportunities. Especially, the use of building modelling will give new benefits and many of the traditional uses of classification systems will instead...... be managed by software applications and on the basis of building models. Classification systems with taxonomies of building object types have many application opportunities but can still be beneficial in data exchange between building construction partners. However, this will be performed by new methods...

  17. Acoustic classification of housing according to ISO/CD 19488 compared with VDI 4100 and DEGA Recommendation 103

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2017-01-01

    and for further development of design tools. Due to the high diversity in Europe, the European COST Action TU0901 ”Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions” was established in 2009 with preparation of a proposal for an acoustic classification scheme...... and impact sound insulation between dwellings, facade sound insulation and service equipment noise. The schemes have been implemented and revised gradually since the 1990es. However, due to lack of coordination, there are significant discrepancies, implying obstacles for exchange of experience......In Europe, national acoustic classification schemes for housing exist in about ten countries. The schemes specify a number of quality classes, reflecting different levels of acoustic protection, and include class criteria concerning several acoustic aspects, main criteria being about airborne...

  18. Quantum computing for pattern classification

    OpenAIRE

    Schuld, Maria; Sinayskiy, Ilya; Petruccione, Francesco

    2014-01-01

    It is well known that for certain tasks, quantum computing outperforms classical computing. A growing number of contributions try to use this advantage in order to improve or extend classical machine learning algorithms by methods of quantum information theory. This paper gives a brief introduction into quantum machine learning using the example of pattern classification. We introduce a quantum pattern classification algorithm that draws on Trugenberger's proposal for measuring the Hamming distance ...

  19. Promoting consistent use of the communication function classification system (CFCS).

    Science.gov (United States)

    Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley

    2016-01-01

    We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario, Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP. The intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The Communication Function Classification System (CFCS) is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function. There is uncertainty and inconsistent practice in the field about the methods for using this tool. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. The intervention effectively increased clinicians' understanding of the methods for using the CFCS, their ability to make correct classifications, and their intentions to use the tool in the standardized way.

  20. Strategy proposed by Electricite de France in the development of automatic tools

    Energy Technology Data Exchange (ETDEWEB)

    Castaing, C.; Cazin, B. [Electricite de France, Noisy le grand (France)

    1995-03-01

    The strategy proposed by EDF for developing means to limit individual and collective dosimetry is recent. It follows a policy that consisted of developing remote-operation means for inspection and maintenance activities on the reactor, pool bottoms, steam generators (SGs) and reactor building valves, activities targeted because of their high dosimetric cost. One of the main duties of the UTO (Technical Support Department), within EDF, is the maintenance of Pressurized Water Reactors in French Nuclear Power Plant Operations (consisting of 54 units) and the development and monitoring of specialized tools. To achieve this, the UTO has started a national think-tank on the implementation of the ALARA process in its field of activity and created an ALARA Committee responsible for running and monitoring it, as well as a policy for developing tools. This point will be illustrated in the second part, on reactor vessel heads.

  1. Hyperspectral image classification based on local binary patterns and PCANet

    Science.gov (United States)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification has been well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, the spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features at a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
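
    The band-selection (LPE) and PCANet stages are not reproduced here, but the sketch below illustrates the feature-construction step described above: uniform LBP codes computed on a few selected bands are stacked with each pixel's spectral vector. The LBP radius, number of neighbors, and band indices are illustrative assumptions.

    ```python
    # Hedged sketch: stacking spectral values with per-band LBP texture codes.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_spectral_features(cube, selected_bands, radius=1, n_points=8):
        """cube: (H, W, B) hyperspectral image; returns (H*W, B + len(bands)) array."""
        h, w, b = cube.shape
        texture = [local_binary_pattern(cube[:, :, band], n_points, radius,
                                        method="uniform").reshape(-1, 1)
                   for band in selected_bands]
        spectral = cube.reshape(-1, b)
        return np.hstack([spectral] + texture)
    ```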

  2. Rule-guided human classification of Volunteered Geographic Information

    Science.gov (United States)

    Ali, Ahmed Loai; Falomir, Zoe; Schmid, Falko; Freksa, Christian

    2017-05-01

    During the last decade, web technologies and location-sensing devices have evolved, generating a form of crowdsourcing known as Volunteered Geographic Information (VGI). VGI acts as a platform for spatial data collection, in particular when a group of public participants is involved in collaborative mapping activities: they work together to collect, share, and use information about geographic features. VGI exploits participants' local knowledge to produce rich data sources. However, the resulting data inherit problematic classification. In VGI projects, the challenges of data classification are due to the following: (i) the data are prone to subjective classification, (ii) most projects rely on remote contributions and flexible contribution mechanisms, and (iii) spatial data are uncertain and geographic features lack strict definitions. These factors lead to various forms of problematic classification: inconsistent, incomplete, and imprecise data classification. This research addresses classification appropriateness. Whether the classification of an entity is appropriate or inappropriate is related to quantitative and/or qualitative observations. Small differences between observations may not be recognizable, particularly by non-expert participants. Hence, in this paper, the problem is tackled by developing a rule-guided classification approach. This approach exploits the data mining technique of Association Classification (AC) to extract descriptive (qualitative) rules for specific geographic features. The rules are extracted based on the investigation of qualitative topological relations between target features and their context. Afterwards, the extracted rules are used to develop a recommendation system able to guide participants to the most appropriate classification. The approach proposes two scenarios to guide participants towards enhancing the quality of data classification. An empirical study is conducted to investigate the classification of grass

  3. Performance of rapid subtyping tools used for the classification of ...

    African Journals Online (AJOL)

    HIV-1 genetic diversity in sub-Saharan Africa is broad and the AIDS epidemic is driven predominantly by recombinants in Central and West Africa. The classification of HIV-1 strains is therefore necessary to understand diagnostic efficiency, individual treatment responses as well as options for designing vaccines and ...

  4. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x\\_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider...... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models....

  5. Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

    Science.gov (United States)

    Gatos, Ilias; Tsantis, Stavros; Karamesini, Maria; Spiliopoulos, Stavros; Karnabatidis, Dimitris; Hazle, John D; Kagadis, George C

    2017-07-01

    To automatically segment and classify focal liver lesions (FLLs) on nonenhanced T2-weighted magnetic resonance imaging (MRI) scans using a computer-aided diagnosis (CAD) algorithm. 71 FLLs (30 benign lesions, 19 hepatocellular carcinomas, and 22 metastases) on T2-weighted MRI scans were delineated by the proposed CAD scheme. The FLL segmentation procedure involved wavelet multiscale analysis to extract accurate edge information and mean intensity values for consecutive edges computed using horizontal and vertical analysis that were fed into the subsequent fuzzy C-means algorithm for final FLL border extraction. Texture information for each extracted lesion was derived using 42 first- and second-order textural features from grayscale value histogram, co-occurrence, and run-length matrices. Twelve morphological features were also extracted to capture any shape differentiation between classes. Feature selection was performed with stepwise multilinear regression analysis that led to a reduced feature subset. A multiclass Probabilistic Neural Network (PNN) classifier was then designed and used for lesion classification. PNN model evaluation was performed using the leave-one-out (LOO) method and receiver operating characteristic (ROC) curve analysis. The mean overlap between the automatically segmented FLLs and the manual segmentations performed by radiologists was 0.91 ± 0.12. The highest classification accuracies in the PNN model for the benign, hepatocellular carcinoma, and metastatic FLLs were 94.1%, 91.4%, and 94.1%, respectively, with sensitivity/specificity values of 90%/97.3%, 89.5%/92.2%, and 90.9%/95.6% respectively. The overall classification accuracy for the proposed system was 90.1%. Our diagnostic system using sophisticated FLL segmentation and classification algorithms is a powerful tool for routine clinical MRI-based liver evaluation and can be a supplement to contrast-enhanced MRI to prevent unnecessary invasive procedures.

  6. Polarimetric SAR image classification based on discriminative dictionary learning model

    Science.gov (United States)

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learning an overcomplete dictionary have shown great potential to solve such problems. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.

  7. Severity of Airflow Obstruction in Chronic Obstructive Pulmonary Disease (COPD): Proposal for a New Classification.

    Science.gov (United States)

    Coton, Sonia; Vollmer, William M; Bateman, Eric; Marks, Guy B; Tan, Wan; Mejza, Filip; Juvekar, Sanjay; Janson, Christer; Mortimer, Kevin; P A, Mahesh; Buist, A Sonia; Burney, Peter G J

    2017-10-01

    Current classifications of Chronic Obstructive Pulmonary Disease (COPD) severity are complex and do not grade levels of obstruction. Obstruction is a simpler construct and independent of ethnicity. We constructed an index of obstruction severity based on the FEV1/FVC ratio, with cut-points dividing the Burden of Obstructive Lung Disease (BOLD) study population into four similarly sized strata to those created by the GOLD criteria that use FEV1. We measured the agreement between classifications and the validity of the FEV1-based classification in identifying the level of obstruction as defined by the new groupings. We compared the strengths of association of each classification with quality of life (QoL), MRC dyspnoea score and the self-reported exacerbation rate. Agreement between classifications was only fair. FEV1-based criteria for moderate COPD identified only 79% of those with moderate obstruction and misclassified half of the participants with mild obstruction as having more severe COPD. Both scales were equally strongly associated with QoL, exertional dyspnoea and respiratory exacerbations. Severity assessed using the FEV1/FVC ratio is only in moderate agreement with the severity assessed using FEV1 but is equally strongly associated with other outcomes. Severity assessed using the FEV1/FVC ratio is likely to be independent of ethnicity.

  8. AIRPORTS CLASSIFICATION AND PRIORITY OF THEIR RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    K. V. Marintseva

    2014-03-01

    Full Text Available Purpose. It is important for Ukraine to have a network of airports that would meet the current and long-term air transportation needs of the population and the economy. This study aims to establish criteria for airport classification in order to determine the role of airports in the development of the air transport system of Ukraine. Methodology. Methods of statistical analysis were used for processing data on airport performance categories, and a geographic information system was used for data visualization. Findings. It is established that the existing division of Ukrainian airports into international and domestic, as well as into coordinated and non-coordinated ones, is not relevant for determining the role of an airport in the development of the country's air transport system and, accordingly, for the priority in financing its modernization. An approach to airport classification using the analysis of performance categories was developed. Originality. Classification criteria for the airports of Ukraine are proposed: by type of activity and by the maintenance of the scheduled route network. By type of activity, it is proposed to classify airports as primary commercial, commercial, cargo primary commercial, cargo commercial, and general aviation. According to the maintenance of the scheduled route network, it is proposed to classify airports as primary, non-primary, and auxiliary hubs. An example of classification by the given criteria is provided. Practical value. The value of the obtained results lies in the possibility of using the proposed classification in the task of determining the priorities for financing the country's airports. As opposed to the practice of directed funding within the framework of the state program of airport development, it is proposed to take into account the fact that the resumption of the functioning of the airport and/or its modernization should be as a response to

  9. False-positive reduction in CAD mass detection using a competitive classification strategy

    International Nuclear Information System (INIS)

    Li Lihua; Zheng Yang; Zhang Lei; Clark, Robert A.

    2001-01-01

    A high false-positive (FP) rate remains one of the major problems to be solved in CAD studies, because too many falsely cued signals can degrade the performance of detecting true-positive regions and increase the call-back rate in a CAD environment. In this paper, we proposed a novel classification method for FP reduction, in which the conventional 'hard' decision classifier is cascaded with a 'soft' decision classification, with the objective of reducing false positives in cases with multiple FPs retained after the 'hard' decision classification. The 'soft' classification takes a competitive classification strategy in which only the 'best' regions are selected from the pre-classified suspicious regions as the true mass in each case. A neural network structure is designed to implement the proposed competitive classification. Comparative studies of FP reduction on a database of 79 images by a 'hard' decision classification and a combined 'hard'-'soft' classification method demonstrated the efficiency of the proposed classification strategy. For example, in the high-FP sub-database, which has only 31.7% of the total images but accounts for 63.5% of all FPs generated by the single 'hard' classification, the FPs can be reduced by 56% (from 8.36 to 3.72 per image) using the proposed method at the cost of a 1% TP loss (from 69% to 68%) over the whole database, whereas they can only be reduced by 27% (from 8.36 to 6.08 per image) by simply increasing the threshold of the 'hard' classifier, with a TP loss as high as 14% (from 69% to 55%). On average over the whole database, the FP reduction by the hybrid 'hard'-'soft' classification is 1.58 per image, compared to 1.11 by the 'hard' classification, at the TP costs described above. Because cases with dense tissue carry a higher risk of cancer incidence and of false-negative detection in mammogram screening, and usually generate more FPs in CAD detection, the method proposed in this paper will be very helpful in improving

  10. Adaptive SVM for Data Stream Classification

    Directory of Open Access Journals (Sweden)

    Isah A. Lawal

    2017-07-01

    Full Text Available In this paper, we address the problem of learning an adaptive classifier for the classification of continuous streams of data. We present a solution based on incremental extensions of the Support Vector Machine (SVM learning paradigm that updates an existing SVM whenever new training data are acquired. To ensure that the SVM effectiveness is guaranteed while exploiting the newly gathered data, we introduce an on-line model selection approach in the incremental learning process. We evaluated the proposed method on real world applications including on-line spam email filtering and human action classification from videos. Experimental results show the effectiveness and the potential of the proposed approach.
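
    The incremental flavour of this approach can be imitated with a linear SVM trained by stochastic gradient descent and updated chunk by chunk, as sketched below. This is a stand-in analogue under stated assumptions, not the paper's exact incremental SVM update or its on-line model selection procedure.

    ```python
    # Hedged sketch: chunk-wise updates of a hinge-loss (linear-SVM-like) classifier.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(loss="hinge", alpha=1e-4)    # hinge loss ~ linear SVM
    classes = np.array([0, 1])                       # class labels known in advance

    def process_chunk(clf, X_chunk, y_chunk, first=False):
        """Update the classifier with a newly acquired chunk of the data stream."""
        if first:
            clf.partial_fit(X_chunk, y_chunk, classes=classes)
        else:
            clf.partial_fit(X_chunk, y_chunk)
        return clf
    ```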

  11. Using Machine Learning for Land Suitability Classification

    African Journals Online (AJOL)

    User

    West African Journal of Applied Ecology, vol. ... evidence for the utility of machine learning methods in land suitability classification especially MCS methods. ... Artificial intelligence tools. ..... Numerical values of index for the various classes.

  12. Classification of peacock feather reflectance using principal component analysis similarity factors from multispectral imaging data.

    Science.gov (United States)

    Medina, José M; Díaz, José A; Vukusic, Pete

    2015-04-20

    Iridescent structural colors in biology exhibit sophisticated spatially-varying reflectance properties that depend on both the illumination and viewing angles. The classification of such spectral and spatial information in iridescent structurally colored surfaces is important to elucidate the functional role of irregularity and to improve understanding of color pattern formation at different length scales. In this study, we propose a non-invasive method for the spectral classification of spatial reflectance patterns at the micron scale based on the multispectral imaging technique and the principal component analysis similarity factor (PCASF). We demonstrate the effectiveness of this approach and its component methods by detailing its use in the study of the angle-dependent reflectance properties of Pavo cristatus (the common peacock) feathers, a species of peafowl very well known to exhibit bright and saturated iridescent colors. We show that multispectral reflectance imaging and PCASF approaches can be used as effective tools for spectral recognition of iridescent patterns in the visible spectrum and provide meaningful information for spectral classification of the irregularity of the microstructure in iridescent plumage.
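
    As an illustration of the PCASF used above, the sketch below scores the similarity of the leading principal subspaces of two reflectance data matrices as the mean squared cosine of their principal angles. The number of retained components is an assumption of the sketch.

    ```python
    # Hedged sketch of a PCA similarity factor (PCASF) between two datasets.
    import numpy as np

    def pcasf(X1, X2, k=3):
        """X1, X2: (n_pixels, n_channels) reflectance matrices; result lies in [0, 1]."""
        def principal_directions(X, k):
            Xc = X - X.mean(axis=0)
            _, _, vt = np.linalg.svd(Xc, full_matrices=False)
            return vt[:k].T                       # (n_channels, k) orthonormal basis
        U1 = principal_directions(X1, k)
        U2 = principal_directions(X2, k)
        # Squared singular values of U1^T U2 are the squared cosines of the
        # principal angles between the two subspaces.
        s = np.linalg.svd(U1.T @ U2, compute_uv=False)
        return float(np.sum(s ** 2) / k)
    ```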

  13. Exploring repetitive DNA landscapes using REPCLASS, a tool that automates the classification of transposable elements in eukaryotic genomes.

    Science.gov (United States)

    Feschotte, Cédric; Keswani, Umeshkumar; Ranganathan, Nirmal; Guibotsy, Marcel L; Levine, David

    2009-07-23

    Eukaryotic genomes contain large amounts of repetitive DNA, most of which is derived from transposable elements (TEs). Progress has been made in developing computational tools for ab initio identification of repeat families, but there is an urgent need for tools to automate the annotation of TEs in genome sequences. Here we introduce REPCLASS, a tool that automates the classification of TE sequences. Using control repeat libraries, we show that the program can accurately classify virtually any known TE type. Combining REPCLASS with ab initio repeat finding in the genomes of Caenorhabditis elegans and Drosophila melanogaster allowed us to recover the contrasting TE landscapes characteristic of these species. Unexpectedly, REPCLASS also uncovered several novel TE families in both genomes, augmenting the TE repertoire of these model species. When applied to the genomes of distant Caenorhabditis and Drosophila species, the approach revealed a remarkable conservation of TE composition profiles within each genus, despite substantial interspecific variation in genome size and in the number of TEs and TE families. Lastly, we applied REPCLASS to analyze 10 fungal genomes from a wide taxonomic range, most of which have not previously been analyzed for TE content. The results showed that TE diversity varies widely across the fungal "kingdom" and appears to correlate positively with genome size, in particular for DNA transposons. Together, these data validate REPCLASS as a powerful tool to explore the repetitive DNA landscapes of eukaryotes and to shed light on the evolutionary forces shaping TE diversity and genome architecture.

  14. Featureless classification of light curves

    Science.gov (United States)

    Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.

    2015-08-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features, which can be derived directly from the density, e.g. as moments. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper bound in terms of the information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
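
    A toy version of the featureless idea is sketched below: each light curve is represented by a kernel density estimate evaluated on a common grid, pairwise distances between these densities form a distance matrix, and a k-NN classifier operates on the precomputed distances. The Gaussian kernel, the Euclidean distance, and the omission of measurement errors are simplifying assumptions of the sketch.

    ```python
    # Hedged sketch: density representation of light curves + k-NN on distances.
    import numpy as np
    from scipy.stats import gaussian_kde
    from sklearn.neighbors import KNeighborsClassifier

    def density_representation(light_curves, grid):
        """light_curves: list of 1-D magnitude arrays (irregular lengths allowed)."""
        return np.array([gaussian_kde(lc)(grid) for lc in light_curves])

    def distance_matrix(densities_a, densities_b):
        diff = densities_a[:, None, :] - densities_b[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    # Hypothetical usage with training curves/labels and a magnitude grid:
    # grid = np.linspace(10.0, 20.0, 200)
    # train_d = density_representation(train_curves, grid)
    # knn = KNeighborsClassifier(n_neighbors=5, metric="precomputed")
    # knn.fit(distance_matrix(train_d, train_d), y_train)
    # test_d = density_representation(test_curves, grid)
    # y_pred = knn.predict(distance_matrix(test_d, train_d))
    ```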

  15. CLASS-PAIR-GUIDED MULTIPLE KERNEL LEARNING OF INTEGRATING HETEROGENEOUS FEATURES FOR CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-10-01

    Full Text Available In recent years, many studies on remote sensing image classification have shown that using multiple features from different data sources can effectively improve the classification accuracy. As a powerful learning method, multiple kernel learning (MKL) can conveniently incorporate a variety of features. The conventional combined kernel learned by MKL can be regarded as a compromise over all basic kernels for all classes in the classification: it is the best for the whole, but not optimal for each specific class. To address this problem, this paper proposes a class-pair-guided MKL method to integrate the heterogeneous features (HFs) from multispectral image (MSI) and light detection and ranging (LiDAR) data. In particular, the one-against-one strategy is adopted, which converts the multiclass classification problem into a set of two-class classification problems. Then, for each class pair, the best kernel is selected from a pre-constructed set of basic kernels by kernel alignment (KA) during classification. The advantage of the proposed method is that only the best kernel for the classification of any two classes is retained, which leads to greatly enhanced discriminability. Experiments are conducted on two real data sets, and the experimental results show that the proposed method achieves the best classification accuracy when integrating the HFs, compared with several state-of-the-art algorithms.
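
    The kernel-selection step can be illustrated with the standard kernel-target alignment score: for each class pair, the basic kernel whose Gram matrix is best aligned with the ideal target matrix y y^T is retained. The candidate kernels and the absence of centring are assumptions of this sketch.

    ```python
    # Hedged sketch: choosing the best basic kernel per class pair by alignment.
    import numpy as np

    def kernel_alignment(K, y):
        """K: (n, n) Gram matrix on one class pair; y: labels in {-1, +1}, length n."""
        Y = np.outer(y, y)
        return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

    def select_kernel_for_pair(kernels, y):
        """kernels: list of (n, n) Gram matrices restricted to the two classes."""
        scores = [kernel_alignment(K, y) for K in kernels]
        return int(np.argmax(scores)), scores
    ```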

  16. Building the United States National Vegetation Classification

    Science.gov (United States)

    Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.

    2012-01-01

    The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.

  17. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
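
    The MYCIN-style combination of evidence mentioned above merges certainty factors contributed by independent rules (for example, motion and colour cues voting for the same class); a compact sketch follows, with purely hypothetical example values.

    ```python
    # MYCIN certainty-factor combination (classic formulation).
    def combine_cf(cf1, cf2):
        """Combine two certainty factors in [-1, 1]."""
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # e.g. motion evidence 0.6 and colour evidence 0.5 for "basketball" combine to 0.8
    assert abs(combine_cf(0.6, 0.5) - 0.8) < 1e-9
    ```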

  18. Application of the PredictAD Software Tool to Predict Progression in Patients with Mild Cognitive Impairment

    DEFF Research Database (Denmark)

    Simonsen, Anja H; Mattila, Jussi; Hejl, Anne-Mette

    2012-01-01

    of incremental data presentation using the software tool. A 5th phase was done with all available patient data presented on paper charts. Classifications by the clinical raters were compared to the clinical diagnoses made by the Alzheimer's Disease Neuroimaging Initiative investigators. Results: A statistically significant trend of improving classification accuracy (from 62.6 to 70.0%) was found when using the PredictAD tool during the stepwise procedure. When the same data were presented on paper, classification accuracy of the raters dropped significantly from 70.0 to 63.2%. Conclusion: The best classification accuracy was achieved by the clinical raters when using the tool for decision support, suggesting that the tool can add value in diagnostic classification when large amounts of heterogeneous data are presented....

  19. Using SaudiVeg Ecoinformatics in assessment, monitoring and proposing environmental restoration tools in central Saudi Arabia

    Science.gov (United States)

    El-Sheikh, Mohamed; Hennekens, Stephan; Alfarhan, Ahmed; Thomas, Jacob; Schaminee, Joop; El-Keblawy, Ali

    2017-04-01

    Successful restoration of degraded habitats requires information about the history of these habitats and the factors that led to their deterioration. This study analyzed SaudiVeg Ecoinformatics, a large phytosociological database on plant communities and the environmental factors affecting them in the Najd-Central Region of Saudi Arabia. A phytosociological survey with more than 3000 vegetation relevés was conducted during 2013. The data were used to correlate plant community attributes, such as abundance and species diversity in natural and ruderal habitats, with environmental factors such as human impacts, soil physical and chemical properties, and land use. The data were subjected to multivariate analyses using programs such as TWINSPAN, DCA and CCA, via the Juice package. Fourteen vegetation associations were described under a provisional classification of the central Saudi Arabian deserts. These associations were broadly grouped into desert vegetation types. One alliance group, Haloxylonion salicornici, is the most widespread and contains four associations on the wadis and desert plains. Three associations are dominant in the depression habitats (raudhas), and two associations of Tamarixidetum spp. occur in the wetland and salt-pan habitats. Four associations inhabit the man-made and abandoned-field habitats, and one association, the Neurado procumbentis-Heliotropietum digyni, dominates the overgrazed sandy dunes. As human impact is large and increasing, the vegetation ecoinformatics of the present study form a baseline description that could be used as a vital tool for future monitoring and for proposing environmental restoration processes in central Saudi Arabia. It could also help both governmental and non-governmental organizations (NGOs) in formulating strategies and on-ground plans for the protection, management and restoration of the natural vegetation.

  20. Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems

    Directory of Open Access Journals (Sweden)

    Sang-Il Oh

    2017-01-01

    Full Text Available To understand driving environments effectively, it is important to achieve accurate detection and classification of the objects detected by sensor-based intelligent vehicle systems, both of which are important tasks. Object detection is performed to localize objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers for 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs that use more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated by object proposal generation, realizing color flattening and semantic grouping for the charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset.

  1. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  2. Music genre classification using temporal domain features

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. Jay

    2004-10-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. In addition to other features, the temporal-domain features of a music signal are exploited in this research to increase the classification rate. Three temporal techniques are examined in depth. First, the hidden Markov model (HMM) is used to emulate the time-varying properties of music signals. Second, to further increase the classification rate, we propose another feature set that focuses on the residual part of music signals. Third, the overall classification rate is enhanced by classifying smaller segments from a test material individually and making the decision via majority voting. Experimental results are given to demonstrate the performance of the proposed techniques.

  3. Transportation Modes Classification Using Sensors on Smartphones

    Directory of Open Access Journals (Sweden)

    Shih-Hau Fang

    2016-08-01

    Full Text Available This paper investigates transportation and vehicular mode classification using big data from smartphone sensors. The three types of sensors used in this paper are the accelerometer, magnetometer, and gyroscope. This study proposes improved features and uses three machine learning algorithms, including decision trees, k-nearest neighbor, and support vector machine, to classify the user's transportation and vehicular modes. In the experiments, we discussed and compared the performance from different perspectives, including the accuracy for both modes, the execution time, and the model size. Results show that the proposed features enhance the accuracy; the support vector machine provides the best classification accuracy but consumes the largest prediction time. This paper also investigates vehicular mode classification and compares the results with those of the transportation modes.

  4. Binary classification posed as a quadratically constrained quadratic ...

    Indian Academy of Sciences (India)

    Binary classification is posed as a quadratically constrained quadratic problem and solved using the proposed method. Each class in the binary classification problem is modeled as a multidimensional ellipsoid to form a quadratic constraint in the problem. Particle swarms help in determining the optimal hyperplane or ...

  5. Classifications of Patterned Hair Loss: A Review.

    Science.gov (United States)

    Gupta, Mrinal; Mysore, Venkataram

    2016-01-01

    Patterned hair loss is the most common cause of hair loss seen in both the sexes after puberty. Numerous classification systems have been proposed by various researchers for grading purposes. These systems vary from the simpler systems based on recession of the hairline to the more advanced multifactorial systems based on the morphological and dynamic parameters that affect the scalp and the hair itself. Most of these preexisting systems have certain limitations. Currently, the Hamilton-Norwood classification system for males and the Ludwig system for females are most commonly used to describe patterns of hair loss. In this article, we review the various classification systems for patterned hair loss in both the sexes. Relevant articles were identified through searches of MEDLINE and EMBASE. Search terms included but were not limited to androgenic alopecia classification, patterned hair loss classification, male pattern baldness classification, and female pattern hair loss classification. Further publications were identified from the reference lists of the reviewed articles.

  6. Classifications of patterned hair loss: a review

    Directory of Open Access Journals (Sweden)

    Mrinal Gupta

    2016-01-01

    Full Text Available Patterned hair loss is the most common cause of hair loss seen in both the sexes after puberty. Numerous classification systems have been proposed by various researchers for grading purposes. These systems vary from the simpler systems based on recession of the hairline to the more advanced multifactorial systems based on the morphological and dynamic parameters that affect the scalp and the hair itself. Most of these preexisting systems have certain limitations. Currently, the Hamilton-Norwood classification system for males and the Ludwig system for females are most commonly used to describe patterns of hair loss. In this article, we review the various classification systems for patterned hair loss in both the sexes. Relevant articles were identified through searches of MEDLINE and EMBASE. Search terms included but were not limited to androgenic alopecia classification, patterned hair loss classification, male pattern baldness classification, and female pattern hair loss classification. Further publications were identified from the reference lists of the reviewed articles.

  7. A new classification system for congenital laryngeal cysts.

    Science.gov (United States)

    Forte, Vito; Fuoco, Gabriel; James, Adrian

    2004-06-01

    A new classification system for congenital laryngeal cysts based on the extent of the cyst and on the embryologic tissue of origin is proposed. Retrospective chart review. The charts of 20 patients with either congenital or acquired laryngeal cysts that were treated surgically between 1987 and 2002 at the Hospital for Sick Children, Toronto were retrospectively reviewed. Clinical presentation, radiologic findings, surgical management, histopathology, and outcome were recorded. A new classification system is proposed to better appreciate the origin of these cysts and to guide in their successful surgical management. Fourteen of the supraglottic and subglottic simple mucous retention cysts posed no diagnostic or therapeutic challenge and were treated successfully by a single endoscopic excision or marsupialization. The remaining six patients with congenital cysts in the study were deemed more complex, and all required open surgical procedures for cure. On the basis of the analysis of the data of these patients, a new classification of congenital laryngeal cysts is proposed. Type I cysts are confined to the larynx, the cyst wall composed of endodermal elements only, and can be managed endoscopically. Type II cysts extend beyond the confines of the larynx and require an external approach. The Type II cysts are further subclassified histologically on the basis of the embryologic tissue of origin: IIa, composed of endoderm only and IIb, containing endodermal and mesodermal elements (epithelium and cartilage) in the wall of the cyst. A new classification system for congenital laryngeal cysts is proposed on the basis of the extent of the cyst and the embryologic tissue of origin. This classification can help guide the surgeon with initial management and help us better understand the origin of these cysts.

  8. Agent Collaborative Target Localization and Classification in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sheng Wang

    2007-07-01

    Full Text Available Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.

  9. Toward an Attention-Based Diagnostic Tool for Patients With Locked-in Syndrome.

    Science.gov (United States)

    Lesenfants, Damien; Habbal, Dina; Chatelle, Camille; Soddu, Andrea; Laureys, Steven; Noirhomme, Quentin

    2018-03-01

    Electroencephalography (EEG) has been proposed as a supplemental tool for reducing clinical misdiagnosis in severely brain-injured populations, helping to distinguish conscious from unconscious patients. We studied the use of spectral entropy as a measure of focal attention in order to develop a motor-independent, portable, and objective diagnostic tool for patients with locked-in syndrome (LIS), addressing the issues of accuracy and training requirements. Data from 20 healthy volunteers, 6 LIS patients, and 10 patients with a vegetative state/unresponsive wakefulness syndrome (VS/UWS) were included. Spectral entropy was computed during a gaze-independent 2-class (attention vs rest) paradigm, and compared with EEG rhythms (delta, theta, alpha, and beta) classification. Spectral entropy classification during the attention-rest paradigm showed 93% and 91% accuracy in healthy volunteers and LIS patients respectively. VS/UWS patients were at chance level. EEG rhythms classification reached a lower accuracy than spectral entropy. Resting-state EEG spectral entropy could not distinguish individual VS/UWS patients from LIS patients. The present study provides evidence that an EEG-based measure of attention could detect command-following in patients with severe motor disabilities. The entropy system could detect a response to command in all healthy subjects and LIS patients, while none of the VS/UWS patients showed a response to command using this system.
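
    The spectral entropy measure referred to above can be computed, in its common normalised form, from the Welch power spectrum of an EEG epoch; a sketch is given below. The frequency band, epoch handling, and windowing are illustrative assumptions rather than the study's exact settings.

    ```python
    # Hedged sketch of normalised spectral entropy of an EEG epoch.
    import numpy as np
    from scipy.signal import welch

    def spectral_entropy(epoch, fs, band=(1.0, 40.0)):
        """epoch: 1-D EEG samples; fs: sampling rate in Hz; returns value in [0, 1]."""
        freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * int(fs)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        p = psd[mask] / psd[mask].sum()
        return float(-np.sum(p * np.log(p + 1e-12)) / np.log(len(p)))
    ```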

  10. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    Science.gov (United States)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in an enormous amount. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.
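
    A minimal sketch of the joint-graph idea follows: a spectral graph and a spatial graph are built over the pixels, their Laplacians are merged with a weighting parameter, and the few available labels are propagated with the standard harmonic-function solution. The graph construction details (kernel width, neighbourhoods, the merging weight) and the use of harmonic propagation are assumptions of this sketch, not the paper's exact design.

    ```python
    # Hedged sketch: merging spectral and spatial graph Laplacians for label propagation.
    import numpy as np

    def rbf_adjacency(X, sigma):
        """X: (n, d) feature matrix (spectral vectors or spatial coordinates)."""
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        W = np.exp(-d2 / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        return W

    def laplacian(W):
        return np.diag(W.sum(axis=1)) - W

    def propagate(W_spectral, W_spatial, y_labeled, labeled_mask, beta=0.5):
        """y_labeled: integer labels of the labeled pixels; labeled_mask: boolean array."""
        L = beta * laplacian(W_spectral) + (1.0 - beta) * laplacian(W_spatial)
        n_classes = int(y_labeled.max()) + 1
        l, u = labeled_mask, ~labeled_mask
        Y = np.eye(n_classes)[y_labeled]                     # one-hot labels
        F_u = np.linalg.solve(L[np.ix_(u, u)], -L[np.ix_(u, l)] @ Y)
        pred = np.empty(len(labeled_mask), dtype=int)
        pred[l], pred[u] = y_labeled, F_u.argmax(axis=1)
        return pred
    ```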

  11. GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    N. Jamshidpour

    2017-09-01

    Full Text Available Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, there are still some problems that need specific attention. For example, the lack of enough labeled samples and the high dimensionality problem are two of the most important issues which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in an enormous amount. In this paper, we propose a graph-based semi-supervised classification method, which uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationship among pixels in spectral and spatial spaces respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set is too scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL, for the AVIRIS Indian Pine and Pavia University data sets respectively.

  12. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to reach a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
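
    The first-stage soft classification can be sketched with one Gaussian mixture model per genre: each model is fitted to the training frames of its genre, and a test song is scored by the summed frame log-likelihood under each model, which the second stage would then fuse. The number of mixture components and the frame features are assumptions of this sketch.

    ```python
    # Hedged sketch: per-genre GMMs producing soft (log-likelihood) scores.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_genre_models(features_by_genre, n_components=8):
        """features_by_genre: dict genre -> (n_frames, n_dims) training features."""
        return {g: GaussianMixture(n_components=n_components, random_state=0).fit(X)
                for g, X in features_by_genre.items()}

    def soft_scores(models, song_frames):
        """Per-genre summed log-likelihoods for one song, to be fused at stage two."""
        return {g: float(m.score_samples(song_frames).sum()) for g, m in models.items()}

    def classify(models, song_frames):
        scores = soft_scores(models, song_frames)
        return max(scores, key=scores.get), scores
    ```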

  13. Simple Fully Automated Group Classification on Brain fMRI

    International Nuclear Information System (INIS)

    Honorio, J.; Goldstein, R.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-01-01

    We propose a simple, well-grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise level, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averaging across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results on two block-design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.

  14. Simple Fully Automated Group Classification on Brain fMRI

    Energy Technology Data Exchange (ETDEWEB)

    Honorio, J.; Goldstein, R.; Honorio, J.; Samaras, D.; Tomasi, D.; Goldstein, R.Z.

    2010-04-14

    We propose a simple, well grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, high noise level, high subject variability, imperfect registration, and that capture subtle cognitive effects. We propose the threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.

  15. CREST--classification resources for environmental sequence tags.

    Directory of Open Access Journals (Sweden)

    Anders Lanzén

    Full Text Available Sequencing of taxonomic or phylogenetic markers is becoming a fast and efficient method for studying environmental microbial communities. This has resulted in a steadily growing collection of marker sequences, most notably of the small-subunit (SSU) ribosomal RNA gene, and an increased understanding of microbial phylogeny, diversity and community composition patterns. However, to utilize these large datasets together with new sequencing technologies, a reliable and flexible system for taxonomic classification is critical. We developed CREST (Classification Resources for Environmental Sequence Tags), a set of resources and tools for generating and utilizing custom taxonomies and reference datasets for classification of environmental sequences. CREST uses an alignment-based classification method with the lowest common ancestor algorithm. It also uses explicit rank similarity criteria to reduce false positives and identify novel taxa. We implemented this method in a web server, a command line tool and the graphical user interface program MEGAN. Further, we provide the SSU rRNA reference database and taxonomy SilvaMod, derived from the publicly available SILVA SSURef, for classification of sequences from bacteria, archaea and eukaryotes. Using cross-validation and environmental datasets, we compared the performance of CREST and SilvaMod to the RDP Classifier. We also utilized Greengenes as a reference database, both with CREST and the RDP Classifier. These analyses indicate that CREST performs better than alignment-free methods, with a higher recall rate (sensitivity) as well as precision, and with the ability to accurately identify most sequences from novel taxa. Classification using SilvaMod performed better than with Greengenes, particularly when applied to environmental sequences. CREST is freely available under a GNU General Public License (v3) from http://apps.cbu.uib.no/crest and http://lcaclassifier.googlecode.com.

  16. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  17. Collaborative classification of hyperspectral and visible images with convolutional neural network

    Science.gov (United States)

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2017-10-01

    Recent advances in remote sensing technology have made multisensor data available for the same area, and it is well known that remote sensing data processing and analysis often benefit from multisource data fusion. Specifically, the low spatial resolution of hyperspectral imagery (HSI) degrades the quality of the subsequent classification task, while using visible (VIS) images with high spatial resolution enables high-fidelity spatial analysis. A collaborative classification framework is proposed to fuse HSI and VIS images for finer classification. First, a convolutional neural network model is employed to extract deep spectral features for HSI classification. Second, effective binarized statistical image features are learned as contextual basis vectors for the high-resolution VIS image, followed by a classifier. The proposed approach employs diversified data in a decision fusion, leading to an integration of the rich spectral information, spatial information, and statistical representation information. In particular, the proposed approach eliminates the potential problems of the curse of dimensionality and excessive computation time. The experiments evaluated on two standard data sets demonstrate the better classification performance offered by this framework.

  18. A Confidence Paradigm for Classification Systems

    Science.gov (United States)

    2008-09-01

    methodology to determine how much confidence one should have in a classifier output. This research proposes a framework to determine the level of ... theoretical framework that attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or ... operating point. An algorithm is developed that minimizes a "confidence" measure called Binned Error in the Posterior (BEP). Then, we prove that training a

  19. Development of a revised radiotoxicity hazard classification [for radionuclides]

    International Nuclear Information System (INIS)

    Carter, M.W.; Burns, P.; Munslow-Davies, L.

    1993-01-01

    Publication of ICRP 60 and 61 has rendered previous radiotoxicity classification lists obsolete. Past classifications have been examined and possible bases for such classifications have been considered. A revised radiotoxicity hazard classification list, based on data in ICRP Publication 61, has been produced for use by Australian regulatory authorities and is described in this paper. The authors propose that the appropriate basis for this new list is a combination of the most restrictive inhalation ALI and the specific activity. (author)

  20. Deep Learning for ECG Classification

    Science.gov (United States)

    Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.

    2017-10-01

    The importance of ECG classification is very high now due to the many current medical applications where this problem arises. Currently, there are many machine learning (ML) solutions that can be used for analyzing and classifying ECG data. However, the main disadvantage of these ML approaches is their use of heuristic, hand-crafted or engineered features with shallow feature-learning architectures. The problem lies in the possibility of not finding the most appropriate features, which would give high classification accuracy for this ECG problem. One proposed solution is to use deep learning architectures, where the first layers of convolutional neurons behave as feature extractors and, at the end, some fully-connected (FCN) layers are used for making the final decision about the ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented and some classification results are shown.
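    A hedged PyTorch sketch of this family of architectures (layer widths, kernel sizes and the 1000-sample segment length are illustrative choices, not the authors' exact network):

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_classes=5, seg_len=1000):
        super().__init__()
        # 1-D convolutional layers act as the learned feature extractor.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(4),
        )
        with torch.no_grad():                      # infer the flattened feature size
            flat = self.features(torch.zeros(1, 1, seg_len)).numel()
        # Fully-connected layers make the final decision about the ECG class.
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):                          # x: (batch, 1, seg_len)
        return self.classifier(self.features(x))

logits = ECGNet()(torch.randn(8, 1, 1000))         # -> (8, 5) class scores
```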

  1. Experimental Study of Real-Time Classification of 17 Voluntary Movements for Multi-Degree Myoelectric Prosthetic Hand

    Directory of Open Access Journals (Sweden)

    Trongmun Jiralerspong

    2017-11-01

    Full Text Available The myoelectric prosthetic hand is a powerful tool developed to help people with upper limb loss restore the functions of a biological hand. Recognizing multiple hand motions from only a few electromyography (EMG) sensors is one of the requirements for the development of prosthetic hands with a high level of usability. This task is highly challenging because both the classification rate and the misclassification rate worsen with additional hand motions. This paper presents a signal processing technique that uses spectral features and an artificial neural network to classify 17 voluntary movements from EMG signals. The main highlight is the use of a small set of low-cost EMG sensors for the classification of a reasonably large number of hand movements. The aim of this work is to extend the capability to recognize and produce multiple movements beyond what is currently feasible. This work also shows and discusses how tailoring the number of hand motions to a specific task can help develop a more reliable prosthetic hand system. Online classification experiments were conducted on seven male and five female participants to evaluate the validity of the proposed method. The proposed algorithm achieves an overall correct classification rate of up to 83%, thus demonstrating the potential to classify 17 movements from 6 EMG sensors. Furthermore, classifying 9 motions using this method could achieve an accuracy of up to 92%. These results show that if the prosthetic hand is intended for a specific task, limiting the number of motions can significantly increase the performance and usability.

  2. Learning classification models with soft-label information.

    Science.gov (United States)

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin induced thrombocytopenia. The experiments are conducted on the data of 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
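    One common way to exploit probabilistic soft labels, shown here purely as an illustration (it is not necessarily the authors' formulation), is to duplicate each training example with weights p and 1-p and fit a weighted logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_with_soft_labels(X, soft_p):
    """X: (n, d) features; soft_p[i]: expert's belief (0..1) that example i is positive."""
    X_dup = np.vstack([X, X])
    y_dup = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
    w_dup = np.concatenate([soft_p, 1.0 - soft_p])   # each copy weighted by the belief
    return LogisticRegression(max_iter=1000).fit(X_dup, y_dup, sample_weight=w_dup)

# Usage sketch (evaluation with AUC, as in the study):
#   from sklearn.metrics import roc_auc_score
#   clf = fit_with_soft_labels(X_train, expert_confidence)
#   auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```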

  3. DANNP: an efficient artificial neural network pruning tool

    KAUST Repository

    Alshahrani, Mona

    2017-11-06

    Background Artificial neural networks (ANNs) are a robust class of machine learning models and are a frequent choice for solving classification problems. However, determining the structure of the ANNs is not trivial as a large number of weights (connection links) may lead to overfitting the training data. Although several ANN pruning algorithms have been proposed for the simplification of ANNs, these algorithms are not able to efficiently cope with intricate ANN structures required for complex classification problems. Methods We developed DANNP, a web-based tool, that implements parallelized versions of several ANN pruning algorithms. The DANNP tool uses a modified version of the Fast Compressed Neural Network software implemented in C++ to considerably enhance the running time of the ANN pruning algorithms we implemented. In addition to the performance evaluation of the pruned ANNs, we systematically compared the set of features that remained in the pruned ANN with those obtained by different state-of-the-art feature selection (FS) methods. Results Although the ANN pruning algorithms are not entirely parallelizable, DANNP was able to speed up the ANN pruning up to eight times on a 32-core machine, compared to the serial implementations. To assess the impact of the ANN pruning by DANNP tool, we used 16 datasets from different domains. In eight out of the 16 datasets, DANNP significantly reduced the number of weights by 70%–99%, while maintaining a competitive or better model performance compared to the unpruned ANN. Finally, we used a naïve Bayes classifier derived with the features selected as a byproduct of the ANN pruning and demonstrated that its accuracy is comparable to those obtained by the classifiers trained with the features selected by several state-of-the-art FS methods. The FS ranking methodology proposed in this study allows the users to identify the most discriminant features of the problem at hand. To the best of our knowledge, DANNP (publicly

  4. Issues surrounding the classification of accounting information

    Directory of Open Access Journals (Sweden)

    Huibrecht Van der Poll

    2011-06-01

    Full Text Available The act of classifying information created by accounting practices is ubiquitous in the accounting process; from recording to reporting, it has almost become second nature. The classification has to correspond to the requirements and demands of the changing environment in which it is practised. Evidence suggests that the current classification of items in financial statements is not keeping pace with the needs of users and the new financial constructs generated by the industry. This study addresses the issue of classification in two ways: by means of a critical analysis of classification theory and practices and by means of a questionnaire that was developed and sent to compilers and users of financial statements. A new classification framework for accounting information in the balance sheet and income statement is proposed.

  5. LDA boost classification: boosting by topics

    Science.gov (United States)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features. In this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving accuracy by integrating the weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the base classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost, performing categorization in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
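    A condensed sketch of the "latent topics as features" stage with scikit-learn; the boosting and Cute Integration stages are omitted, and a Gaussian Naive Bayes stands in for the improved NB weak learner:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Documents -> term counts -> LDA topic proportions -> Naive Bayes classifier.
topic_nb = make_pipeline(
    CountVectorizer(stop_words="english"),
    LatentDirichletAllocation(n_components=50, random_state=0),  # latent topics as features
    GaussianNB(),                                                 # weak learner on topic space
)
# topic_nb.fit(train_texts, train_labels)
# predictions = topic_nb.predict(test_texts)
```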

  6. A real-time classification algorithm for EEG-based BCI driven by self-induced emotions.

    Science.gov (United States)

    Iacoviello, Daniela; Petracca, Andrea; Spezialetti, Matteo; Placidi, Giuseppe

    2015-12-01

    classification results are encouraging, with success rates that average above 90% across the whole set of examined subjects. Ongoing work applies the proposed procedure to map a larger set of emotions with EEG and to establish the EEG headset with the minimal number of channels that allows the recognition of a significant range of emotions, both in the field of affective computing and in the development of auxiliary communication tools for subjects affected by severe disabilities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Classification Systems, their Digitization and Consequences for Data-Driven Decision Making

    DEFF Research Database (Denmark)

    Stein, Mari-Klara; Newell, Sue; Galliers, Robert D.

    2013-01-01

    Classification systems are foundational in many standardized software tools. This digitization of classification systems gives them a new ‘materiality’ that, jointly with the social practices of information producers/consumers, has significant consequences on the representational quality of such [...] -narration and meta-narration), and three different information production/consumption situations. We contribute to the relational theorization of representational quality and extend classification systems research by drawing explicit attention to the importance of ‘materialization’ of classification systems [...] and the foundational role of representational quality in understanding the success and consequences of data-driven decision-making.

  8. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

    Full Text Available Electroencephalogram (EEG) recordings provide an important means of brain-computer communication, but the accuracy of their classification is very limited due to unforeseeable signal variations relating to artifacts. In this paper, we propose a classification method for time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject’s mental tasks by extracting useful information from the highly multivariate non-invasive recordings of brain activity. To test the improvement in EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). Finally, the results of the experiment showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate was 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.
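    A minimal backpropagation-network baseline for this kind of P300 task, assuming epochs have already been filtered and flattened into fixed-length feature vectors (the layer sizes are illustrative, not the authors' network):

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Backpropagation-trained multilayer perceptron on standardized EEG epoch features.
p300_clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), solver="adam",
                  max_iter=500, random_state=0),
)
# p300_clf.fit(X_epochs, y_target_vs_nontarget)
# accuracy = p300_clf.score(X_test, y_test)
```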

  9. Whewell on classification and consilience.

    Science.gov (United States)

    Quinn, Aleta

    2017-08-01

    In this paper I sketch William Whewell's attempts to impose order on classificatory mineralogy, which was in Whewell's day (1794-1866) a confused science of uncertain prospects. Whewell argued that progress was impeded by the crude reductionist assumption that all macroproperties of crystals could be straightforwardly explained by reference to the crystals' chemical constituents. By comparison with biological classification, Whewell proposed methodological reforms that he claimed would lead to a natural classification of minerals, which in turn would support advances in causal understanding of the properties of minerals. Whewell's comparison to successful biological classification is particularly striking given that classificatory biologists did not share an understanding of the causal structure underlying the natural classification of life (the common descent with modification of all organisms). Whewell's key proposed methodological reform is consideration of multiple, distinct principles of classification. The most powerful evidence in support of a natural classificatory claim is the consilience of claims arrived at through distinct lines of reasoning, rooted in distinct conceptual approaches to the target objects. Mineralogists must consider not only elemental composition and chemical affinities, but also symmetry and polarity. Geometrical properties are central to what makes an individual mineral the type of mineral that it is. In Whewell's view, function and organization jointly define life, and so are the keys to understanding what makes an organism the type of organism that it is. I explain the relationship between Whewell's teleological account of life and his natural theology. I conclude with brief comments about the importance of Whewell's classificatory theory for the further development of his philosophy of science and in particular his account of consilience. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
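    The sketch below illustrates the "reduce, then sparsely code, then pick the class with the smallest reconstruction error" pipeline; PCA stands in for LPP (which scikit-learn does not ship) and Lasso provides the sparse code, so this is a hedged approximation rather than the proposed method itself:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

def src_classify(X_train, y_train, X_test, n_dims=20, alpha=1e-3):
    """Dimensionality reduction (PCA as a stand-in for LPP) + sparse-representation classification."""
    pca = PCA(n_components=n_dims).fit(X_train)
    D, T = pca.transform(X_train), pca.transform(X_test)   # rows are samples
    classes = np.unique(y_train)
    preds = []
    for x in T:
        # Sparse code of the test pixel over all training samples (columns of D.T).
        code = Lasso(alpha=alpha, max_iter=5000).fit(D.T, x).coef_
        # Class whose training atoms reconstruct the pixel with the smallest residual.
        residuals = [np.linalg.norm(x - D[y_train == c].T @ code[y_train == c])
                     for c in classes]
        preds.append(classes[np.argmin(residuals)])
    return np.array(preds)
```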

  11. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that the atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and may even belong to the same material (class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary, and that incorporating the majority voting scheme generally achieves improved prediction performance.

  12. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520

  13. Use of information criterion for classification of measurement data ...

    African Journals Online (AJOL)

    ... measurement data for the purpose of identification and authentication of users during online network activity. The proposed method increases the accuracy of classification of signals in authorization systems. Keywords: analysis and classification of signals, identification and authentications of user, access control system ...

  14. The Big Data Tools Impact on Development of Simulation-Concerned Academic Disciplines

    Directory of Open Access Journals (Sweden)

    A. A. Sukhobokov

    2015-01-01

    Full Text Available The article gives a definition of Big Data on the basis of the 5Vs (Volume, Variety, Velocity, Veracity, Value) and shows examples of tasks that require Big Data tools in a diversity of areas, namely health, education, financial services, industry, agriculture, logistics, retail, information technology, telecommunications and others. An overview of Big Data tools is delivered, including open source products and the IBM Bluemix and SAP HANA platforms. Examples of architectures of corporate data processing and management systems using Big Data tools are shown for big Internet companies and for enterprises in traditional industries. Within the overview, a classification of Big Data tools is proposed that fills gaps in previously developed similar classifications. The new classification contains 19 classes and embraces several hundred existing and emerging products. The rise and use of Big Data tools, in addition to solving practical problems, affects the development of scientific disciplines concerned with the simulation of technical, natural or socioeconomic systems and the solution of practical problems based on the developed models. New schools arise in these disciplines. These new schools address tasks peculiar to each discipline, but for systems with a much larger number of internal elements and connections between them. The characteristics of the problems to be solved within the new schools do not always meet the criteria for Big Data. It is suggested that Big Data itself be identified as a part of the theory of sorting and searching algorithms. In other disciplines the new schools are named by analogy with Big Data: Big Calculation in numerical methods, Big Simulation in simulation modeling, Big Management in the management of socio-economic systems, and Big Optimal Control in optimal control theory. The paper shows examples of tasks and methods to be developed within the new schools. The observed tendency is not limited to the considered disciplines: there are

  15. A proposal for a classification of product-related dependencies in development of mechatronic products

    DEFF Research Database (Denmark)

    Torry-Smith, Jonas; Mortensen, Niels Henrik; Achiche, Sofiane

    2014-01-01

    to the classification of product-related dependencies. Traditionally these dependencies have been described as appearing between the following product attributes: function, properties and structure. By analysing three mechatronic projects from industry we identified and classified 13 types of product-related dependencies. Each product-related dependency is described and illustrated using the practical examples from the industrial projects. The value of the classification is evaluated by applying it to an industrial development setting not used for the analysis. The evaluation shows that delays in the project

  16. Automatic Hierarchical Color Image Classification

    Directory of Open Access Journals (Sweden)

    Jing Huang

    2003-02-01

    Full Text Available Organizing images into semantic categories can be extremely useful for content-based image retrieval and image annotation. Grouping images into semantic classes is a difficult problem, however. Image classification attempts to solve this hard problem by using low-level image features. In this paper, we propose a method for hierarchical classification of images via supervised learning. This scheme relies on using a good low-level feature and subsequently performing feature-space reconfiguration using singular value decomposition to reduce noise and dimensionality. We use the training data to obtain a hierarchical classification tree that can be used to categorize new images. Our experimental results suggest that this scheme not only performs better than standard nearest-neighbor techniques, but also has both storage and computational advantages.

  17. Classification of solid industrial waste based on ecotoxicology tests using Daphnia magna: an alternative

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2005-11-01

    Full Text Available The adequate treatment and final disposal of solid industrial wastes depend on their classification into class I or II. This classification is proposed by NBR 10.004; however, it is complex and time-consuming. With a view to facilitating this classification, the use of assays with Daphnia magna is proposed. These assays make possible the identification of toxic chemicals in the leachate, which denotes the presence of one of the characteristics described by NBR 10.004, toxicity, and is a sufficient argument to place the waste in class I. Ecotoxicological tests were carried out with ten samples of solid wastes of frequent production and, on the basis of the EC(I)50/48h results of those samples in comparison with the official classification of NBR 10.004, limits were established for the classification of wastes into class I or II. A coincidence in the classification of 50% of the analyzed samples was observed. In cases in which there is no coherence between the methods, the method proposed in this work classifies the waste into class I. These data are preliminary, but they reveal that the classification system proposed here is promising because of its quickness and economic viability.

  18. Correlation of Estradiol Serum Levels with Classification of Osteoporosis Risk OSTA (Osteoporosis Self-Assessment Tools for Asian) in Menopausal Women

    Directory of Open Access Journals (Sweden)

    Eva Maya Puspita

    2017-01-01

    Full Text Available Background: In postmenopausal women, a decrease in estrogen levels is a marker of ovarian dysfunction. A hypoestrogenic state is known to increase the risk of osteoporosis. Objective: To determine the correlation between estradiol serum levels and the OSTA (Osteoporosis Self-Assessment Tools for Asian) classification of osteoporosis risk in menopausal women. Methods: This was a case series study that examined serum estradiol in menopausal women by ELISA and assessed osteoporosis risk using the OSTA osteoporosis risk classification. A total of 47 samples was collected at Dr. H. Adam Malik, Dr. Pirngadi, and RSU Networking in Medan. The research was conducted from May to December 2016. Data were statistically analyzed and presented with the Spearman test. Results: In this study, we found that the mean level of estradiol in menopausal women was 18.62 ± 16.85 ng/ml, with an OSTA osteoporosis risk score of 2.09 ± 2.45. There was a significant positive correlation between estradiol and OSTA osteoporosis risk, with correlation coefficient r = 0.825 and p < 0.05. Conclusion: There is a strong positive correlation between serum estradiol levels and the OSTA osteoporosis risk assessment in menopausal women.

  19. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of the CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and the LAStools software, the proposed method reduces the total error and the type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
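    A toy version of the "convert the whole cloud into a single image" step; the cell size and the choice of minimum elevation per cell are illustrative, and the paper's actual feature channels may differ:

```python
import numpy as np

def rasterize(points, cell=1.0):
    """points: (N, 3) array of x, y, z LIDAR returns -> 2-D minimum-elevation grid."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    cols, rows = np.ceil((xy.max(axis=0) - origin) / cell).astype(int) + 1
    grid = np.full((rows, cols), np.nan)
    cidx, ridx = ((xy - origin) / cell).astype(int).T
    for r, c, z in zip(ridx, cidx, points[:, 2]):
        if np.isnan(grid[r, c]) or z < grid[r, c]:
            grid[r, c] = z            # keep the lowest return per cell
    return grid                       # this single image is what the FCN would consume
```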

  20. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    Science.gov (United States)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, as an important direction of remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and missed points, so the final classification accuracy is not high. In this paper, we selected Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (i.e., Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform), and then select the best fused image for the classification experiment. In the classification process, we choose four image classification algorithms (i.e., Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a contrast experiment. We use overall classification precision and the Kappa coefficient as the classification accuracy evaluation criteria, and the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image has the best applicability to Support Vector Machine classification; the overall classification precision is 94.01 % and the Kappa coefficient is 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only has more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial to improving the accuracy and stability of remote sensing image classification.

  1. Real-time network traffic classification technique for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    Network traffic, or data traffic, in a Wireless Local Area Network (WLAN) is the amount of network packets moving across the wireless network from each wireless node to another, which determines the sampling load in the wireless network. WLAN network traffic is the main component for network traffic measurement, network traffic control and simulation. Traffic classification is an essential tool for improving the Quality of Service (QoS) in different wireless networks and in complex applications such as local area networks, wireless local area networks, wireless personal area networks, wireless metropolitan area networks, and wide area networks. Network traffic classification is also an essential component of products for QoS control in different wireless network systems and applications. Classifying network traffic in a WLAN allows one to see what kinds of traffic are present in each part of the network, to organize the various kinds of network traffic on each path into different classes, and to generate a network traffic matrix in order to identify and organize network traffic, which is an important key to improving QoS. To achieve effective network traffic classification, a Real-time Network Traffic Classification (RNTC) algorithm for WLANs based on Compressed Sensing (CS) is presented in this paper. The fundamental goal of this algorithm is to solve difficult wireless network management problems. The proposed architecture allows reducing the False Detection Rate (FDR) to 25% and the Packet Delay (PD) to 15%. The proposed architecture also increases the accuracy of wireless transmission by 10%, which provides a good basis for establishing high quality wireless local area networks.

  2. Sentiment classification technology based on Markov logic networks

    Science.gov (United States)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is a growing concern with the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which feature a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of the sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  3. GLOBAL LAND COVER CLASSIFICATION USING MODIS SURFACE REFLECTANCE PRODUCTS

    Directory of Open Access Journals (Sweden)

    K. Fukue

    2016-06-01

    Full Text Available The objective of this study is to develop a high accuracy land cover classification algorithm on a global scale by using multi-temporal MODIS land reflectance products. In this study, a time-domain co-occurrence matrix was introduced as a classification feature which provides the time-series signature of land covers. Further, a non-parametric minimum distance classifier was introduced for the time-domain co-occurrence matrix, which performs multi-dimensional pattern matching between the time-domain co-occurrence matrices of a classification target pixel and each classification class. Global land cover classification experiments were conducted by applying the proposed classification method using 46 multi-temporal (within one year) SR (Surface Reflectance) and NBAR (Nadir BRDF-Adjusted Reflectance) products, respectively. The IGBP 17 land cover categories were used in our classification experiments. As a result, the SR and NBAR products showed a similar classification accuracy of 99%.
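    A simplified sketch of the two ingredients named above: a time-domain co-occurrence matrix built from quantised reflectance pairs of consecutive dates, and a minimum-distance match against per-class reference matrices (the quantisation and the Frobenius distance are illustrative choices, and reflectance is assumed to be scaled to [0, 1]):

```python
import numpy as np

def cooccurrence(series, levels=16):
    """series: (n_dates,) reflectance values in [0, 1] -> levels x levels transition matrix."""
    q = np.clip((series * levels).astype(int), 0, levels - 1)
    M = np.zeros((levels, levels))
    for a, b in zip(q[:-1], q[1:]):
        M[a, b] += 1                          # count consecutive-date reflectance pairs
    return M / M.sum()

def classify_pixel(series, class_references):
    """class_references: dict class_name -> reference co-occurrence matrix."""
    M = cooccurrence(series)
    # Non-parametric minimum-distance decision over the reference matrices.
    return min(class_references, key=lambda c: np.linalg.norm(M - class_references[c]))
```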

  4. Convolutional neural network with transfer learning for rice type classification

    Science.gov (United States)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

    Presently, rice type is identified manually by humans, which is time consuming and error prone. Therefore, there is a need to do this by machine, which makes it faster with greater accuracy. This paper proposes a deep learning based method for the classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning in which the weights of a pretrained network are reused to achieve better accuracy. Our approach can also be used for the classification of rice grains as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that, despite having distinct rice images, our architecture pretrained on ImageNet data boosts classification accuracy significantly.
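    A hedged Keras sketch of the transfer-learning variant: a frozen ImageNet-pretrained VGG16 backbone with a small trainable head for the 5 rice classes (the input size and head layout are illustrative, not the authors' exact setup):

```python
import tensorflow as tf

# Reuse ImageNet-pretrained convolutional weights and freeze them.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 rice types
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```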

  5. Critical Evaluation of Headache Classifications.

    Science.gov (United States)

    Özge, Aynur

    2013-08-01

    Transforming a subjective sensation like headache into an objective state, and establishing a common language for this complaint, which can be both a symptom and a disease in itself, have kept investigators busy for years. Each proposed recommendation has brought along a set of patients who do not meet the criteria. While work on an almost ideal and most comprehensive classification continued, criticisms arose that it was withdrawing from daily practice. In this article, the classification adventure of scientists who work in the area of headache will be summarized. More specifically, the two classifications made by the International Headache Society (IHS) and the point reached in relation to the third classification, which is still being worked on, will be discussed together with headache subtypes. It is presented with the wish and belief that it will contribute to readers and young investigators who are interested in this subject.

  6. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level radioactive waste (HLW) as: (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel....that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission....determines....requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph (B). The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). A basic waste classification scheme results from the quantitative definitions

  7. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g., the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
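    For orientation, the sketch below shows the plain ridge-regularised (Tikhonov) coding baseline that GRR generalises: code the test sample over all training samples and assign it to the class whose samples reconstruct it with the smallest residual. The prior-information and pixel-weight terms of GRR itself are not included:

```python
import numpy as np

def crc_classify(X_train, y_train, x, lam=0.01):
    """X_train: (n, d) rows are samples; x: (d,) test sample; lam: Tikhonov weight."""
    D = X_train.T                                   # columns are training samples
    # Ridge-regularised coding: code = (D^T D + lam I)^-1 D^T x
    code = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    classes = np.unique(y_train)
    # Class-wise reconstruction residuals decide the label.
    residuals = [np.linalg.norm(x - D[:, y_train == c] @ code[y_train == c])
                 for c in classes]
    return classes[np.argmin(residuals)]
```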

  8. Waste classification and methods applied to specific disposal sites

    International Nuclear Information System (INIS)

    Rogers, V.C.

    1979-01-01

    An adequate definition of the classes of radioactive wastes is necessary to regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with the methodology in order to gain insights into the classification of radioactive wastes. Also presented is the analysis of ocean dumping as it applies to waste classification. 5 refs

  9. Gradient Evolution-based Support Vector Machine Algorithm for Classification

    Science.gov (United States)

    Zulvia, Ferani E.; Kuo, R. J.

    2018-03-01

    This paper proposes a classification algorithm based on support vector machine (SVM) and gradient evolution (GE) algorithms. The SVM algorithm has been widely used in classification. However, its result is significantly influenced by its parameters. Therefore, this paper aims to propose an improvement of the SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
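    The division of labour can be sketched as follows, with a plain random search standing in for the gradient evolution optimiser: the outer loop proposes (C, gamma) candidates and cross-validated SVM accuracy scores each one:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def tune_svm(X, y, n_trials=50, seed=0):
    rng = np.random.default_rng(seed)
    best_score, best_params = -np.inf, None
    for _ in range(n_trials):
        # Candidate parameters sampled log-uniformly (the global optimiser's role).
        params = {"C": 10 ** rng.uniform(-2, 3), "gamma": 10 ** rng.uniform(-4, 1)}
        # Cross-validated accuracy plays the role of the fitness function.
        score = cross_val_score(SVC(kernel="rbf", **params), X, y, cv=5).mean()
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```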

  10. Formalized classification of European fen vegetation at the alliance level

    DEFF Research Database (Denmark)

    Peterka, Tomáš; Hájek, Michal; Jiroušek, Martin

    2017-01-01

    Aims Phytosociological classification of fen vegetation (Scheuchzerio palustris-Caricetea fuscae class) differs among European countries. Here we propose a unified vegetation classification of European fens at the alliance level, provide unequivocal assignment rules for individual vegetation plot...

  11. Land-Use and Land-Cover Mapping Using a Gradable Classification Method

    Directory of Open Access Journals (Sweden)

    Keigo Kitada

    2012-05-01

    Full Text Available Conventional spectral-based classification methods have significant limitations in the digital classification of urban land-use and land-cover classes from high-resolution remotely sensed data because of the lack of consideration given to the spatial properties of images. To recognize the complex distribution of urban features in high-resolution image data, texture information consisting of a group of pixels should be considered. Lacunarity is an index used to characterize different texture appearances. It is often reported that the land-use and land-cover in urban areas can be effectively classified using the lacunarity index with high-resolution images. However, the applicability of the maximum-likelihood approach for hybrid analysis has not been reported. A more effective approach that employs both the original spectral data and the lacunarity index can be expected to improve the accuracy of the classification. A new classification procedure referred to as the “gradable classification method” is proposed in this study. This method improves the classification accuracy in incremental steps. The proposed classification approach integrates several classification maps created from the original images and lacunarity maps, which consist of lacunarity values, to create a new classification map. The results of this study confirm the suitability of the gradable classification approach, which produced a higher overall accuracy (68%) and kappa coefficient (0.64) than those (65% and 0.60, respectively) obtained with the maximum-likelihood approach.
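    For reference, a gliding-box lacunarity estimate for a single box size on a binary image can be written as below; in a gradable-style workflow such values, computed per window, would form the lacunarity maps combined with the spectral classifications (the implementation is a generic textbook estimator, not the paper's exact one):

```python
import numpy as np

def lacunarity(binary_img, box=8):
    """Gliding-box lacunarity Lambda(box) = E[m^2] / E[m]^2, m = box mass.
    Assumes the image contains at least one foreground pixel."""
    h, w = binary_img.shape
    masses = np.asarray(
        [binary_img[r:r + box, c:c + box].sum()
         for r in range(h - box + 1) for c in range(w - box + 1)],
        dtype=float,
    )
    return masses.var() / masses.mean() ** 2 + 1.0   # identical to E[m^2] / E[m]^2
```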

  12. Observation versus classification in supervised category learning.

    Science.gov (United States)

    Levering, Kimery R; Kurtz, Kenneth J

    2015-02-01

    The traditional supervised classification paradigm encourages learners to acquire only the knowledge needed to predict category membership (a discriminative approach). An alternative that aligns with important aspects of real-world concept formation is learning with a broader focus to acquire knowledge of the internal structure of each category (a generative approach). Our work addresses the impact of a particular component of the traditional classification task: the guess-and-correct cycle. We compare classification learning to a supervised observational learning task in which learners are shown labeled examples but make no classification response. The goals of this work sit at two levels: (1) testing for differences in the nature of the category representations that arise from two basic learning modes; and (2) evaluating the generative/discriminative continuum as a theoretical tool for understand learning modes and their outcomes. Specifically, we view the guess-and-correct cycle as consistent with a more discriminative approach and therefore expected it to lead to narrower category knowledge. Across two experiments, the observational mode led to greater sensitivity to distributional properties of features and correlations between features. We conclude that a relatively subtle procedural difference in supervised category learning substantially impacts what learners come to know about the categories. The results demonstrate the value of the generative/discriminative continuum as a tool for advancing the psychology of category learning and also provide a valuable constraint for formal models and associated theories.

  13. Robust tissue classification for reproducible wound assessment in telemedicine environments

    Science.gov (United States)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting-condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.

  14. REAL-TIME INTELLIGENT MULTILAYER ATTACK CLASSIFICATION SYSTEM

    Directory of Open Access Journals (Sweden)

    T. Subbhulakshmi

    2014-01-01

    Full Text Available Intrusion Detection Systems (IDS) take the lion’s share of the current security infrastructure. Detection of intrusions is vital for initiating defensive procedures. Intrusion detection has traditionally been performed by statistical and distance-based methods. A threshold value is used in these methods to indicate the level of normalcy; when the network traffic crosses this level, it is flagged as anomalous. When new intrusion events occur, which are increasingly a key concern for system security, statistical techniques cannot detect them. To overcome this issue, learning techniques are used, which help in identifying new intrusion activities in a computer system. The objective of the system proposed in this paper is to classify intrusions using an Intelligent Multi-Layered Attack Classification System (IMLACS), which helps in detecting and classifying intrusions with improved classification accuracy. The intelligent multi-layered approach contains three intelligent layers. The first layer involves binary Support Vector Machine classification for detecting normal versus attack traffic. The second layer involves neural network classification to classify the attacks into classes of attacks. The third layer involves a fuzzy inference system to classify the attacks into various subclasses. The proposed IMLACS is able to detect the intrusion behavior of networks since the system contains a three-layer intelligent classification and a better set of rules. Feature selection is also used to improve the time of detection. The experimental results show that IMLACS achieves a classification rate of 97.31%.
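    A hedged scikit-learn sketch of the layered idea: a binary SVM first flags traffic as normal or attack, and a neural network then assigns flagged records to an attack class; the third, fuzzy sub-class layer is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

class TwoLayerIDS:
    def fit(self, X, y_binary, y_attack_class):
        """y_binary: 1 = attack, 0 = normal; y_attack_class: attack class per record."""
        self.detector = SVC(kernel="rbf").fit(X, y_binary)            # layer 1: normal vs. attack
        attack = y_binary == 1
        self.classifier = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(
            X[attack], y_attack_class[attack])                        # layer 2: attack class
        return self

    def predict(self, X):
        flagged = self.detector.predict(X) == 1
        out = np.full(len(X), "normal", dtype=object)
        if flagged.any():
            out[flagged] = self.classifier.predict(X[flagged])
        return out
```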

  15. An Extended Spectral-Spatial Classification Approach for Hyperspectral Data

    Science.gov (United States)

    Akbari, D.

    2017-11-01

    In this paper, an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different kinds of dimension reduction methods are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction methods including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) a genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and watershed segmentation algorithms. To evaluate the proposed approach, the Pavia University hyperspectral data set is used. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.

  16. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Full Text Available Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some Local Binary Pattern (LBP) drawbacks. The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than LBP, the LBP’s weaknesses may appear in LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared to the CLBP and CLBC descriptors.
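    The basic LTP coding step (8 neighbours at radius 1, split into the usual upper and lower binary codes) can be sketched as below; the rotation-invariant mapping and the additional magnitude/centre components that make the descriptor "completed" are not shown:

```python
import numpy as np

def ltp_codes(gray, t=5):
    """gray: 2-D grayscale image -> (upper, lower) LTP code maps for the inner pixels."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    centre = gray[1:-1, 1:-1].astype(int)
    upper = np.zeros((h - 2, w - 2), dtype=int)
    lower = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dr, dc) in enumerate(offs):
        neigh = gray[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc].astype(int)
        upper |= ((neigh - centre) > t).astype(int) << bit     # ternary value +1
        lower |= ((neigh - centre) < -t).astype(int) << bit    # ternary value -1
    return upper, lower   # histograms of these codes form the texture descriptor
```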

  17. LTRsift: a graphical user interface for semi-automatic classification and postprocessing of de novo detected LTR retrotransposons

    Directory of Open Access Journals (Sweden)

    Steinbiss Sascha

    2012-11-01

    Full Text Available Abstract Background Long terminal repeat (LTR) retrotransposons are a class of eukaryotic mobile elements characterized by a distinctive sequence similarity-based structure. Hence they are well suited for computational identification. Current software allows for a comprehensive genome-wide de novo detection of such elements. The obvious next step is the classification of newly detected candidates resulting in (super-)families. Such a de novo classification approach based on sequence-based clustering of transposon features has been proposed before, resulting in a preliminary assignment of candidates to families as a basis for subsequent manual refinement. However, such a classification workflow is typically split across a heterogeneous set of glue scripts and generic software (for example, spreadsheets), making it tedious for a human expert to inspect, curate and export the putative families produced by the workflow. Results We have developed LTRsift, an interactive graphical software tool for semi-automatic postprocessing of de novo predicted LTR retrotransposon annotations. Its user-friendly interface offers customizable filtering and classification functionality, displaying the putative candidate groups, their members and their internal structure in a hierarchical fashion. To ease manual work, it also supports graphical user interface-driven reassignment, splitting and further annotation of candidates. Export of grouped candidate sets in standard formats is possible. In two case studies, we demonstrate how LTRsift can be employed in the context of a genome-wide LTR retrotransposon survey effort. Conclusions LTRsift is a useful and convenient tool for semi-automated classification of newly detected LTR retrotransposons based on their internal features. Its efficient implementation allows for convenient and seamless filtering and classification in an integrated environment. Developed for life scientists, it is helpful in postprocessing and refining de novo LTR retrotransposon predictions.

  18. Design and implementation based on the classification protection vulnerability scanning system

    International Nuclear Information System (INIS)

    Wang Chao; Lu Zhigang; Liu Baoxu

    2010-01-01

    With the application and spread of classification protection, network security vulnerability scanning should consider both efficiency and functional expansion. This paper proposes a system vulnerability classification oriented to classification protection, and elaborates the design and implementation of a vulnerability scanning system based on vulnerability-classification plug-in technology and oriented to classification protection. According to the experiments, the system shows good adaptability and scalability in the application of classification protection, and the efficiency of scanning is also verified. (authors)

  19. Applicability of the ICD-11 proposal for PTSD: a comparison of prevalence and comorbidity rates with the DSM-IV PTSD classification in two post-conflict samples.

    Science.gov (United States)

    Stammel, Nadine; Abbing, Eva M; Heeke, Carina; Knaevelsrud, Christine

    2015-01-01

    The World Health Organization recently proposed significant changes to the posttraumatic stress disorder (PTSD) diagnostic criteria in the 11th edition of the International Classification of Diseases (ICD-11). The present study investigated the impact of these changes in two different post-conflict samples. Prevalence and rates of concurrent depression and anxiety, socio-demographic characteristics, and indicators of clinical severity according to ICD-11 in 1,075 Cambodian and 453 Colombian civilians exposed to civil war and genocide were compared to those according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). Results indicated significantly lower prevalence rates under the ICD-11 proposal (8.1% Cambodian sample and 44.4% Colombian sample) compared to the DSM-IV (11.2% Cambodian sample and 55.0% Colombian sample). Participants meeting a PTSD diagnosis only under the ICD-11 proposal had significantly lower rates of concurrent depression and a lower concurrent total score (depression and anxiety) compared to participants meeting only DSM-IV diagnostic criteria. There were no significant differences in socio-demographic characteristics and indicators of clinical severity between these two groups. The lower prevalence of PTSD according to the ICD-11 proposal in our samples of persons exposed to a high number of traumatic events may counter criticism of previous PTSD classifications to overuse the PTSD diagnosis in populations exposed to extreme stressors. Also another goal, to better distinguish PTSD from comorbid disorders could be supported with our data.

  20. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    Schemes for the classification of dwellings according to different building performances have been proposed in the last years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms … exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.

  1. Global Optimization Ensemble Model for Classification Methods

    Science.gov (United States)

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  2. Global Optimization Ensemble Model for Classification Methods

    Directory of Open Access Journals (Sweden)

    Hina Anwar

    2014-01-01

    Full Text Available Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of classifier while solving a supervised learning problem, like bias-variance tradeoff, dimensionality of input space, and noise in the input data space. All these problems affect the accuracy of classifier and are the reason that there is no global optimal method for classification. There is not any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.

  3. BIOPHARMACEUTICS CLASSIFICATION SYSTEM: A STRATEGIC TOOL FOR CLASSIFYING DRUG SUBSTANCES

    OpenAIRE

    Rohilla Seema; Rohilla Ankur; Marwaha RK; Nanda Arun

    2011-01-01

    The biopharmaceutical classification system (BCS) is a scientific approach for classifying drug substances based on their dose/solubility ratio and intestinal permeability. The BCS has been developed to allow prediction of in vivo pharmacokinetic performance of drug products from measurements of permeability and solubility. Moreover, drugs can be categorized into four BCS classes on the basis of permeability and solubility, namely: high permeability-high solubility (Class I), high permeability-low solubility (Class II), low permeability-high solubility (Class III), and low permeability-low solubility (Class IV).
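
    Because the class boundaries are a simple two-factor rule, the mapping can be written directly; the boolean inputs below stand in for the formal high-solubility and high-permeability determinations.

    # Toy rule-based BCS assignment; whether a drug counts as "highly" soluble or permeable
    # is a regulatory determination in practice, so this is only an illustrative mapping.
    def bcs_class(high_solubility: bool, high_permeability: bool) -> str:
        if high_permeability and high_solubility:
            return "Class I (high permeability, high solubility)"
        if high_permeability and not high_solubility:
            return "Class II (high permeability, low solubility)"
        if not high_permeability and high_solubility:
            return "Class III (low permeability, high solubility)"
        return "Class IV (low permeability, low solubility)"

    print(bcs_class(high_solubility=False, high_permeability=True))  # -> Class II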

  4. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Science.gov (United States)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. Computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence matrix (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their capability to generate efficient and differentiable feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33% for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show a good capability of the HOG feature extraction technique towards non-destructive quality inspection, with an appreciably low false alarm rate as compared to the other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competent industrial growth.
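
    The best-performing branch of the comparison, HOG features feeding an ANN, can be sketched as follows with skimage and scikit-learn; the image size, labels and network size are assumptions rather than the paper's configuration.

    # Sketch of the HOG-feature + neural-network branch of the comparison.
    import numpy as np
    from skimage.feature import hog
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    images = rng.random((200, 64, 64))          # stand-in MMW image patches
    labels = rng.integers(0, 5, size=200)       # 4 crack types + non-faulty (assumed coding)

    # One HOG descriptor per image.
    X = np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2)) for im in images])

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", ann.score(X_te, y_te))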

  5. Alignment of ICNP® 2.0 ontology and a proposed INCP® Brazilian ontology.

    Science.gov (United States)

    Carvalho, Carina Maris Gaspar; Cubas, Marcia Regina; Malucelli, Andreia; Nóbrega, Maria Miriam Lima da

    2014-01-01

    Objective: to align the International Classification for Nursing Practice (ICNP®) Version 2.0 ontology and a proposed INCP® Brazilian Ontology. Method: document-based, exploratory and descriptive study, the empirical basis of which was provided by the ICNP® 2.0 Ontology and the INCP® Brazilian Ontology. The ontology alignment was performed using a computer tool with algorithms to identify correspondences between concepts, which were organized and analyzed according to their presence or absence, their names, and their sibling, parent, and child classes. Results: there were 2,682 concepts present in the ICNP® 2.0 Ontology that were missing in the Brazilian Ontology; 717 concepts present in the Brazilian Ontology were missing in the ICNP® 2.0 Ontology; and there were 215 pairs of matching concepts. Conclusion: it is believed that the correspondences identified in this study might contribute to the interoperability between the representations of nursing practice elements in ICNP®, thus allowing the standardization of nursing records based on this classification system.

  6. Classification and its applications for drug-target interaction identification

    OpenAIRE

    Mei, Jian-Ping; Kwoh, Chee-Keong; Yang, Peng; Li, Xiao-Li

    2015-01-01

    Classification is one of the most popular and widely used supervised learning tasks, which categorizes objects into predefined classes based on known knowledge. Classification has been an important research topic in machine learning and data mining. Different classification methods have been proposed and applied to deal with various real-world problems. Unlike unsupervised learning such as clustering, a classifier is typically trained with labeled data before being used to make prediction, an...

  7. Structure-based classification and ontology in chemistry

    Directory of Open Access Journals (Sweden)

    Hastings Janna

    2012-04-01

    Full Text Available Abstract Background Recent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies. Results We analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches. Conclusion Systems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational

  8. Event Classification using Concepts

    NARCIS (Netherlands)

    Boer, M.H.T. de; Schutte, K.; Kraaij, W.

    2013-01-01

    The semantic gap is one of the challenges in the GOOSE project. In this paper a Semantic Event Classification (SEC) system is proposed as an initial step in tackling the semantic gap challenge in the GOOSE project. This system uses semantic text analysis, multiple feature detectors using the BoW

  9. Toward the establishment of standardized in vitro tests for lipid-based formulations, part 4: proposing a new lipid formulation performance classification system.

    Science.gov (United States)

    Williams, Hywel D; Sassene, Philip; Kleberg, Karen; Calderone, Marilyn; Igonin, Annabel; Jule, Eduardo; Vertommen, Jan; Blundell, Ross; Benameur, Hassan; Müllertz, Anette; Porter, Christopher J H; Pouton, Colin W

    2014-08-01

    The Lipid Formulation Classification System Consortium looks to develop standardized in vitro tests and to generate much-needed performance criteria for lipid-based formulations (LBFs). This article highlights the value of performing a second, more stressful digestion test to identify LBFs near a performance threshold and to facilitate lead formulation selection in instances where several LBF prototypes perform adequately under standard digestion conditions (but where further discrimination is necessary). Stressed digestion tests can be designed based on an understanding of the factors that affect LBF performance, including the degree of supersaturation generated on dispersion/digestion. Stresses evaluated included decreasing LBF concentration (↓LBF), increasing bile salt, and decreasing pH. Their capacity to stress LBFs was dependent on LBF composition and drug type: ↓LBF was a stressor to medium-chain glyceride-rich LBFs, but not more hydrophilic surfactant-rich LBFs, whereas decreasing pH stressed tolfenamic acid LBFs, but not fenofibrate LBFs. Lastly, a new Performance Classification System, that is, LBF composition independent, is proposed to promote standardized LBF comparisons, encourage robust LBF development, and facilitate dialogue with the regulatory authorities. This classification system is based on the concept that performance evaluations across three in vitro tests, designed to subject a LBF to progressively more challenging conditions, will enable effective LBF discrimination and performance grading. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  10. The Spinal Cord Injury-Interventions Classification System

    NARCIS (Netherlands)

    van Langeveld, A.H.B.

    2010-01-01

    Title: The Spinal Cord Injury-Interventions Classification System: development and evaluation of a documentation tool to record therapy to improve mobility and self-care in people with spinal cord injury. Background: Many rehabilitation researchers have emphasized the need to examine the actual

  11. A new qualitative pattern classification of shear wave elastography for solid breast mass evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Cong, Rui, E-mail: congrui2684@163.com; Li, Jing, E-mail: lijing@sj-hospital.org; Guo, Song, E-mail: 21751735@qq.com

    2017-02-15

    Highlights: • Qualitative SWE classification proposed here was significantly better than quantitative SWE parameters. • Qualitative classification proposed here was better than the classification proposed before. • Qualitative classification proposed here could obtain higher specificity without a loss of sensitivity. - Abstract: Objectives: To examine the efficacy of qualitative shear wave elastography (SWE) in the classification and evaluation of solid breast masses, and to compare this method with conventional ultrasonograghy (US), quantitative SWE parameters and qualitative SWE classification proposed before. Methods: From April 2015 to March 2016, 314 consecutive females with 325 breast masses who decided to undergo core needle biopsy and/or surgical biopsy were enrolled. Conventional US and SWE were previously performed in all enrolled subjects. Each mass was classified by two different qualitative classifications. One was established in our study, herein named the Qual1. Qual1 could classify the SWE images into five color patterns by the visual evaluations: Color pattern 1 (homogeneous pattern); Color pattern 2 (comparative homogeneous pattern); Color pattern 3 (irregularly heterogeneous pattern); Color pattern 4 (intralesional echo pattern); and Color pattern 5 (the stiff rim sign pattern). The second qualitative classification was named Qual2 here, and included a four-color overlay pattern classification (Tozaki and Fukuma, Acta Radiologica, 2011). The Breast Imaging Reporting and Data System (BI-RADS) assessment and quantitative SWE parameters were recorded. Diagnostic performances of conventional US, SWE parameters, and combinations of US and SWE parameters were compared. Results: With pathological results as the gold standard, of the 325 examined breast masses, 139 (42.77%) samples were malignant and 186 (57.23%) were benign. The Qual1 showed a higher Az value than the Qual2 and quantitative SWE parameters (all P < 0.05). When applying Qual1

  12. A new qualitative pattern classification of shear wave elastography for solid breast mass evaluation

    International Nuclear Information System (INIS)

    Cong, Rui; Li, Jing; Guo, Song

    2017-01-01

    Highlights: • Qualitative SWE classification proposed here was significantly better than quantitative SWE parameters. • Qualitative classification proposed here was better than the classification proposed before. • Qualitative classification proposed here could obtain higher specificity without a loss of sensitivity. - Abstract: Objectives: To examine the efficacy of qualitative shear wave elastography (SWE) in the classification and evaluation of solid breast masses, and to compare this method with conventional ultrasonograghy (US), quantitative SWE parameters and qualitative SWE classification proposed before. Methods: From April 2015 to March 2016, 314 consecutive females with 325 breast masses who decided to undergo core needle biopsy and/or surgical biopsy were enrolled. Conventional US and SWE were previously performed in all enrolled subjects. Each mass was classified by two different qualitative classifications. One was established in our study, herein named the Qual1. Qual1 could classify the SWE images into five color patterns by the visual evaluations: Color pattern 1 (homogeneous pattern); Color pattern 2 (comparative homogeneous pattern); Color pattern 3 (irregularly heterogeneous pattern); Color pattern 4 (intralesional echo pattern); and Color pattern 5 (the stiff rim sign pattern). The second qualitative classification was named Qual2 here, and included a four-color overlay pattern classification (Tozaki and Fukuma, Acta Radiologica, 2011). The Breast Imaging Reporting and Data System (BI-RADS) assessment and quantitative SWE parameters were recorded. Diagnostic performances of conventional US, SWE parameters, and combinations of US and SWE parameters were compared. Results: With pathological results as the gold standard, of the 325 examined breast masses, 139 (42.77%) samples were malignant and 186 (57.23%) were benign. The Qual1 showed a higher Az value than the Qual2 and quantitative SWE parameters (all P < 0.05). When applying Qual1

  13. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on very high point density LIDAR point clouds, resulting in 4.02% total error, 2.15% type I error and 6.14% type II error.
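
    The key efficiency idea, converting the whole point cloud into one raster before classification, can be illustrated with a simple minimum-height gridding step; the cell size and the min-height channel are assumptions, and the FCN itself is not shown.

    # Sketch of the "whole point cloud to one image" conversion: points are binned into a
    # 2-D grid and each cell stores the minimum height, giving a raster an FCN could consume.
    import numpy as np

    def rasterize_min_height(points, cell_size=1.0):
        """points: (N, 3) array of x, y, z. Returns a 2-D grid of per-cell minimum z."""
        xy = points[:, :2]
        mins = xy.min(axis=0)
        idx = np.floor((xy - mins) / cell_size).astype(int)
        shape = idx.max(axis=0) + 1
        grid = np.full(shape, np.nan)
        for (ix, iy), z in zip(idx, points[:, 2]):
            if np.isnan(grid[ix, iy]) or z < grid[ix, iy]:
                grid[ix, iy] = z
        return grid

    pts = np.random.default_rng(4).random((5000, 3)) * [100, 100, 10]
    print(rasterize_min_height(pts).shape)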

  14. A New Feature Ensemble with a Multistage Classification Scheme for Breast Cancer Diagnosis

    Directory of Open Access Journals (Sweden)

    Idil Isikli Esener

    2017-01-01

    Full Text Available A new and effective feature ensemble with a multistage classification scheme is proposed to be implemented in a computer-aided diagnosis (CAD) system for breast cancer diagnosis. A publicly available mammogram image dataset collected during the Image Retrieval in Medical Applications (IRMA) project is utilized to verify the suggested feature ensemble and multistage classification. In building the CAD system, feature extraction is performed on the mammogram region of interest (ROI) images, which are preprocessed by applying histogram equalization followed by nonlocal means filtering. The proposed feature ensemble is formed by concatenating the local configuration pattern-based, statistical, and frequency domain features. The classification of these features is implemented in three cases: a one-stage study, a two-stage study, and a three-stage study. Eight well-known classifiers are used in all cases of this multistage classification scheme. Additionally, the results of the classifiers that provide the top three performances are combined via a majority voting technique to improve the recognition accuracy in both the two- and three-stage studies. Maximum classification accuracies of 85.47%, 88.79%, and 93.52% are attained by the one-, two-, and three-stage studies, respectively. The proposed multistage classification scheme is more effective than single-stage classification for breast cancer diagnosis.
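
    The majority-voting step over the top three classifiers can be sketched with scikit-learn's VotingClassifier; the three estimators and the synthetic feature matrix below are placeholders, not the eight classifiers or IRMA features used in the paper.

    # Sketch of combining the top three classifiers by a hard majority vote.
    import numpy as np
    from sklearn.ensemble import VotingClassifier, RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 30))          # stand-in concatenated feature ensemble
    y = rng.integers(0, 2, size=400)        # benign vs. malignant (assumed binary coding)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    vote = VotingClassifier(
        estimators=[("svm", SVC()), ("rf", RandomForestClassifier()), ("knn", KNeighborsClassifier())],
        voting="hard",                      # plain majority vote over the three predictions
    )
    vote.fit(X_tr, y_tr)
    print("majority-vote accuracy:", vote.score(X_te, y_te))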

  15. Optimized extreme learning machine for urban land cover classification using hyperspectral imagery

    Science.gov (United States)

    Su, Hongjun; Tian, Shufang; Cai, Yue; Sheng, Yehua; Chen, Chen; Najafian, Maryam

    2017-12-01

    This work presents a new urban land cover classification framework using the firefly algorithm (FA) optimized extreme learning machine (ELM). FA is adopted to optimize the regularization coefficient C and Gaussian kernel σ for kernel ELM. Additionally, effectiveness of spectral features derived from an FA-based band selection algorithm is studied for the proposed classification task. Three sets of hyperspectral databases were recorded using different sensors, namely HYDICE, HyMap, and AVIRIS. Our study shows that the proposed method outperforms traditional classification algorithms such as SVM and reduces computational cost significantly.

  16. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In high resolution data classification, combining texture features with spectral bands can effectively improve the classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing the overall accuracy of textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images with fixed window sizes from 3×3 to 15×15 and the comparison of all the posterior probability values for every pixel; the class with the largest probability value is then assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves the classification accuracy compared to methods based on a fixed window size for textural classification.
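
    The fusion rule, keeping for each pixel the class with the highest posterior probability across all window sizes, reduces to a small array operation; the per-scale probability maps below are synthetic stand-ins for the per-window classification outputs.

    # Sketch of the multiscale fusion rule: classify with several window sizes, then keep,
    # for each pixel, the class whose posterior probability is highest across all scales.
    import numpy as np

    rng = np.random.default_rng(6)
    n_scales, n_pixels, n_classes = 7, 1000, 5
    proba = rng.random((n_scales, n_pixels, n_classes))          # posterior maps per scale
    proba /= proba.sum(axis=2, keepdims=True)                    # normalise per pixel

    best_scale = proba.max(axis=2).argmax(axis=0)                # scale with highest posterior
    fused_label = proba[best_scale, np.arange(n_pixels)].argmax(axis=1)
    print(fused_label[:10])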

  17. Support Vector Machine Based Tool for Plant Species Taxonomic Classification

    OpenAIRE

    Manimekalai .K; Vijaya.MS

    2014-01-01

    Plant species are living things and are generally categorized in terms of Domain, Kingdom, Phylum, Class, Order, Family, Genus and name of Species in a hierarchical fashion. This paper formulates the taxonomic leaf categorization problem as the hierarchical classification task and provides a suitable solution using a supervised learning technique namely support vector machine. Features are extracted from scanned images of plant leaves and trained using SVM. Only class, order, family of plants...

  18. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    Science.gov (United States)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopy technique that enables both conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis directly from endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires considerable experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. At the first step, we classify an input image by support vector machine. We forward the image to the second step if the confidence of the first classification is low. At the second step, we classify the forwarded image by convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
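
    The confidence-gated cascade with rejection can be sketched as follows; an MLP stands in for the CNN of the second stage, and the thresholds, features and labels are assumptions for illustration only.

    # Sketch of a two-stage cascade with rejection: stage 1 answers when it is confident,
    # otherwise the sample is forwarded to stage 2, which may itself reject the sample.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(7)
    X = rng.normal(size=(600, 50))          # stand-in image feature vectors
    y = rng.integers(0, 2, size=600)        # 0 = non-neoplastic, 1 = neoplastic (assumed)

    svm = SVC(probability=True).fit(X[:400], y[:400])
    cnn_like = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X[:400], y[:400])

    def cascade_predict(x, t1=0.8, t2=0.6):
        p1 = svm.predict_proba(x.reshape(1, -1))[0]
        if p1.max() >= t1:                  # stage 1 is confident enough
            return int(p1.argmax())
        p2 = cnn_like.predict_proba(x.reshape(1, -1))[0]
        if p2.max() >= t2:                  # stage 2 is confident enough
            return int(p2.argmax())
        return "reject"                     # leave the decision to the physician

    print([cascade_predict(x) for x in X[400:410]])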

  19. Applicability of the International Classification of Functioning, Disability and Health (ICF) for evaluation of children with cerebral palsy: a systematic review

    Directory of Open Access Journals (Sweden)

    Lílian de Fátima Dornelas

    2014-12-01

    Full Text Available Objective: To examine and synthesize the knowledge available in the literature on the instruments used from the perspective of functionality in children with cerebral palsy (CP), and to review the literature evaluating the instruments used for the implementation of the International Classification of Functioning, Disability and Health (ICF) in children with CP. Method: The search was conducted in the electronic databases Google Scholar, PubMed, Lilacs and Medline, for articles published between January 2006 and December 2012, using the following keywords: cerebral palsy, child and assessment, combined with ICF. Ten articles were selected for analysis according to pre-established criteria. Results: The authors proposed tools that could standardize the assessment for classification of the components "Structure and function of the body", "Activities and Participation" and "Environmental Factors", proposing instruments such as the Gross Motor Function Measure (GMFM), Pediatric Evaluation of Disability Inventory (PEDI), Goal Attainment Scaling (GAS), Manual Ability Classification System (MACS), Gross Motor Function Classification System (GMFCS), Physicians Rating Scale (PRS), Vineland Adaptive Behavior Scale (VABS), Pediatric Functional Independence Measure (Wee FIM), Gillette Functional Assessment Questionnaire (FAQ), Pediatric Quality of Life Inventory (PedsQL), Pediatric Outcomes Data Collection Instrument (PODCI), Gillette Gait Index (GGI), Energy Expenditure Index (EEI), and Vécu et Santé Perçue de l'Adolescent (VSP-A). Conclusion: The domains "Structure and function of the body" and "Activities and Participation" are often classified according to the ICF in children with CP, and a variety of instruments are available for the applicability of the classification.

  20. Ship Detection and Classification on Optical Remote Sensing Images Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Liu Ying

    2017-01-01

    Full Text Available Ship detection and classification is critical for national maritime security and national defense. Although some SAR (Synthetic Aperture Radar) image-based ship detection approaches have been proposed and used, they are not able to satisfy the requirements of real-world applications, as the number of SAR sensors is limited, the resolution is low, and the revisit cycle is long. As massive optical remote sensing images of high resolution are available, ship detection and classification on these images is becoming a promising technique, and has attracted great attention in applications including maritime security and traffic control. Some digital image processing methods have been proposed to detect ships in optical remote sensing images, but most of them face difficulties in terms of accuracy, performance and complexity. Recently, an autoencoder-based deep neural network with extreme learning machine was proposed, but it cannot meet the requirements of real-world applications as it only works with simple and small-scale data sets. Therefore, in this paper, we propose a novel ship detection and classification approach which utilizes a deep convolutional neural network (CNN) as the ship classifier. The performance of our proposed ship detection and classification approach was evaluated on a set of images downloaded from Google Earth at a resolution of 0.5 m. A detection accuracy of 99% and a classification accuracy of 95% were achieved. In model training, a 75× speedup was achieved on one Nvidia Titan X GPU.

  1. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

    This study introduces a novel supervised algorithm for spiking neurons that takes into consideration synapse delays and axonal delays associated with weights. It can be utilized for both classification and association and uses several biologically influenced properties, such as axonal and synaptic delays. The algorithm also takes into consideration spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results have shown that the proposed learning method greatly improves classification accuracy when compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.

  2. Classification of Broken Rice Kernels using 12D Features

    Directory of Open Access Journals (Sweden)

    SUNDER ALI KHOWAJA

    2016-07-01

    Full Text Available Integrating technological methods into the assessment of rice quality is much needed in Asian markets, where rice is one of the major exports. Methods based on image analysis have been proposed for automated quality assessment by taking into account some textural features. These features classify well when rice grains are scanned in a controlled environment, but this is not suitable for practical implementation: rice grains are placed randomly on the scanner, so neither uniformity in intensity regions nor an ideal placement strategy is maintained, resulting in false classification of grains. The aim of this research is to propose a method for extracting a set of features which can overcome these issues. This paper uses morphological features along with gray-level and Hough-transform-based features to overcome the false classifications of the existing methods. An RBF (Radial Basis Function) classifier is used as the classification mechanism to distinguish complete grains from broken grains. Furthermore, the broken grains are classified into two classes, i.e., acceptable grains and non-acceptable grains. This research also applies an image enhancement technique based on top-hat transformation prior to the feature extraction and classification process. The proposed method has been simulated in MATLAB to visually analyze and validate the results.

  3. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, different kinds of modification are considered as classes, and a classification algorithm is used to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  4. Scientific and methodological tools of cost management of enterprises: the main approaches and proposals

    Directory of Open Access Journals (Sweden)

    M. S. Dikunova

    2017-01-01

    Full Text Available The economic nature of expenses at industrial and defense enterprises is studied through the definition of a conceptual framework and the harmonization of terms such as cost, costs and expenses, which are treated as equivalent in this study. In theory there are various classifications of the costs of producing and selling products, some of which are not used in practice. Based on an analysis of the legislative division into cost elements, it is proposed to complement the legal classification with an element for the costs of supply and distribution. The need for such categorization becomes obvious after reviewing the existing classifications of expenditures across the whole manufacturing process: this study shows that they do not solve all cost-control tasks. The problem can be solved by relating costs and revenues to the actions of the persons responsible for the use of resources, that is, by introducing so-called responsibility centers. The most promising methods include cost-benefit analysis, zero-based planning, and strategic cost management in the organization. A comparative analysis provided the basis for their combined use in a cost-control mechanism, an approach that has recently been widely adopted by progressive enterprises, although its implementation faces problems in the current economic situation of the state. The law regulates the supply and marketing policy of companies. An analysis of the classifications used in the supply and marketing systems of enterprises made it possible to propose cost groups; their elements are described in detail, and a delimitation of production costs is carried out, a division that does not exist in the practical activities of enterprises. To confirm this and to clarify further research directions, we considered the actual state of the expenses for the manufacture and sale of products.

  5. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...

  6. Improving Classification of Airborne Laser Scanning Echoes in the Forest-Tundra Ecotone Using Geostatistical and Statistical Measures

    Directory of Open Access Journals (Sweden)

    Nadja Stumberg

    2014-05-01

    Full Text Available The vegetation in the forest-tundra ecotone zone is expected to be highly affected by climate change and requires effective monitoring techniques. Airborne laser scanning (ALS) has been proposed as a tool for the detection of small pioneer trees over such vast areas using laser height and intensity data. The main objective of the present study was to assess a possible improvement in the performance of classifying tree and nontree laser echoes from high-density ALS data. The data were collected along a 1000 km long transect stretching from southern to northern Norway. Different geostatistical and statistical measures derived from laser height and intensity values were used to extend and potentially improve simpler models that ignore the spatial context. Generalised linear models (GLM) and support vector machines (SVM) were employed as classification methods. Total accuracies and Cohen's kappa coefficients were calculated and compared to those of simpler models from a previous study. For both classification methods, all models revealed total accuracies similar to the results of the simpler models. Concerning classification performance, however, the comparison of the kappa coefficients indicated a significant improvement for some models using both GLM and SVM, with classification accuracies >94%.

  7. Semantic aspects of the International Classification of Functioning, Disability and Health: towards sharing knowledge and unifying information.

    Science.gov (United States)

    Andronache, Adrian Stefan; Simoncello, Andrea; Della Mea, Vincenzo; Daffara, Carlo; Francescutti, Carlo

    2012-02-01

    During the last decade, under the World Health Organization's direction, the International Classification of Functioning, Disability and Health (ICF) has become a reference tool for monitoring and developing various policies addressing people with disability. This article presents three steps to increase the semantic interoperability of ICF: first, the representation of ICF using ontology tools; second, the alignment to upper-level ontologies; and third, the use of these tools to implement semantic mappings between ICF and other tools, such as disability assessment instruments, health classifications, and at least partially formalized terminologies.

  8. SoFoCles: feature filtering for microarray classification based on gene ontology.

    Science.gov (United States)

    Papachristoudis, Georgios; Diplaris, Sotiris; Mitkas, Pericles A

    2010-02-01

    Marker gene selection has been an important research topic in the classification analysis of gene expression data. Current methods try to reduce the "curse of dimensionality" by using statistical intra-feature set calculations, or classifiers that are based on the given dataset. In this paper, we present SoFoCles, an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external, well-defined knowledge retrieved from the Gene Ontology. The notion of semantic similarity is used to derive genes that are involved in the same biological path during the microarray experiment, by enriching a feature set that has been initially produced with legacy methods. Among its other functionalities, SoFoCles offers a large repository of semantic similarity methods that are used in order to derive feature sets and marker genes. The structure and functionality of the tool are discussed in detail, as well as its ability to improve classification accuracy. Through experimental evaluation, SoFoCles is shown to outperform other classification schemes in terms of classification accuracy in two real datasets using different semantic similarity computation approaches.

  9. The newly proposed clinical and post-neoadjuvant treatment staging classifications for gastric adenocarcinoma for the American Joint Committee on Cancer (AJCC) staging.

    Science.gov (United States)

    In, Haejin; Ravetch, Ethan; Langdon-Embry, Marisa; Palis, Bryan; Ajani, Jaffer A; Hofstetter, Wayne L; Kelsen, David P; Sano, Takeshi

    2018-01-01

    New stage grouping classifications for clinical (cStage) and post-neoadjuvant treatment (ypStage) stage for gastric adenocarcinoma have been proposed for the eighth edition of the AJCC manual. This article summarizes the analysis for these stages. Gastric adenocarcinoma patients diagnosed in 2004-2009 were identified from the National Cancer Database (NCDB). The cStage cohort included both surgical and nonsurgical cases, and the ypStage cohort included only patients who had chemotherapy or radiation therapy before surgery. Survival differences between the stage groups were determined by the log-rank test and prognostic accuracy was assessed by concordance index. Analysis was performed using SAS 9.4 (SAS, Cary, NC, USA). Five strata for cStage and four strata for ypStage were developed. The 5-year survival rates for cStages were 56.77%, 47.39%, 33.1%, 25.9%, and 5.0% for stages I, IIa, IIb, III, and IV, respectively, and the rates for ypStage were 74.2%, 46.3%, 19.2%, and 11.6% for stages I, II, III, and IV, respectively. The log-rank test showed that survival differences were well stratified and stage groupings were ordered and distinct (p < 0.0001). The proposed cStage and ypStage classification was sensitive and specific and had high prognostic accuracy (cStage: c index = 0.81, 95% CI, 0.79-0.83; ypStage: c index = 0.80, 95% CI, 0.73-0.87). The proposed eighth edition establishes two new staging schemata that provide essential prognostic data for patients before treatment and for patients who have undergone surgery following neoadjuvant therapy. These additions are a significant advance to the AJCC staging manual and will provide critical guidance to clinicians in making informed decisions throughout the treatment course.

  10. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    Science.gov (United States)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
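
    A generic version of the out-of-sample step, fitting a radial basis function map from the original space to the learned embedding and applying it to novel points, can be sketched with scipy; the regularized, direction-dependent objective and the joint label estimation of the paper are not reproduced here.

    # Sketch of an out-of-sample extension: fit an RBF interpolator from the original
    # high-dimensional points to their low-dimensional embedding, then map unseen points.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(8)
    X_train = rng.normal(size=(200, 20))        # training samples in the original space
    Y_train = rng.normal(size=(200, 2))         # their embedding from some supervised manifold learner

    interp = RBFInterpolator(X_train, Y_train, kernel="thin_plate_spline", smoothing=1e-3)

    X_new = rng.normal(size=(5, 20))            # unseen test samples
    print(interp(X_new))                        # embeddings for the novel points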

  11. Understanding about the classification of pulp inflammation

    Directory of Open Access Journals (Sweden)

    Trijoedani Widodo

    2007-03-01

    Full Text Available Although most authors now use the reversible pulpitis and irreversible pulpitis classification, many dentists still do not implement these new classifications. A descriptive study was conducted by distributing a questionnaire to dentists from various dental clinics; 22 dentists participated. All respondents use a diagnosis sheet during their examinations of patients, but it could not be determined which diagnosis card was used, and most of the dentists still use the old classification. Concerning responses given towards the new classification: a) the new classification had been heard of, but was not clearly understood (36.3%); b) the new classification had never been heard of at all (63.6%). Concerning whether new developments are important to follow up: a) information about new developments is considered very important (27.2%); b) it is considered important to have new information (68.3%); c) new information is considered unimportant (8%). It is concluded that information concerning the development of the classification of pulp inflammation did not reach the dentists.

  12. KNN BASED CLASSIFICATION OF DIGITAL MODULATED SIGNALS

    Directory of Open Access Journals (Sweden)

    Sajjad Ahmed Ghauri

    2016-11-01

    Full Text Available Demodulating a signal without knowledge of its modulation scheme requires Automatic Modulation Classification (AMC). When the receiver has limited information about the received signal, AMC becomes an essential process. AMC plays an important role in many civil and military fields, such as modern electronic warfare, interfering source recognition, frequency management, and link adaptation. In this paper we explore the use of the K-nearest neighbor (KNN) classifier for modulation classification with different distance measurement methods. Five modulation schemes are used for classification: Binary Phase Shift Keying (BPSK), Quadrature Phase Shift Keying (QPSK), Quadrature Amplitude Modulation (QAM), 16-QAM and 64-QAM. Higher order cumulants (HOC) are used as the input feature set to the classifier. Simulation results show that the proposed classification method provides better results for the considered modulation formats.
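
    A minimal version of the pipeline, cumulant features plus KNN with several distance metrics, might look as follows; the modulation set is reduced to BPSK and QPSK, and the SNR and feature choice are assumptions.

    # Sketch of cumulant-feature KNN modulation classification on synthetic noisy bursts.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(9)

    def burst(mod, n=512, snr_db=10):
        if mod == "BPSK":
            s = rng.choice([1, -1], size=n).astype(complex)
        else:  # QPSK
            s = (rng.choice([1, -1], size=n) + 1j * rng.choice([1, -1], size=n)) / np.sqrt(2)
        noise = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
        return s + noise * 10 ** (-snr_db / 20)

    def cumulant_features(x):
        # Standard second- and fourth-order cumulant estimates used as AMC features.
        c20 = np.mean(x**2)
        c21 = np.mean(np.abs(x) ** 2)
        c40 = np.mean(x**4) - 3 * c20**2
        c42 = np.mean(np.abs(x) ** 4) - np.abs(c20) ** 2 - 2 * c21**2
        return [np.abs(c20), np.abs(c40), np.abs(c42)]

    mods = ["BPSK", "QPSK"]
    X = np.array([cumulant_features(burst(m)) for m in mods for _ in range(200)])
    y = np.array([m for m in mods for _ in range(200)])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for metric in ["euclidean", "manhattan", "chebyshev"]:
        knn = KNeighborsClassifier(n_neighbors=5, metric=metric).fit(X_tr, y_tr)
        print(metric, knn.score(X_te, y_te))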

  13. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in the classification of hyperspectral images. By using unlabeled samples, which are often available in unlimited numbers, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As the hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled selected samples in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  14. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), High-spatial resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  15. Feature-Based Classification of Amino Acid Substitutions outside Conserved Functional Protein Domains

    Directory of Open Access Journals (Sweden)

    Branislava Gemovic

    2013-01-01

    Full Text Available There are more than 500 amino acid substitutions in each human genome, and bioinformatics tools contribute irreplaceably to the determination of their functional effects. We have developed a feature-based algorithm for the detection of mutations outside conserved functional domains (CFDs) and compared its classification efficacy with the most commonly used phylogeny-based tools, PolyPhen-2 and SIFT. The new algorithm is based on the informational spectrum method (ISM), a feature-based technique, and statistical analysis. Our dataset contained neutral polymorphisms and mutations associated with myeloid malignancies from the epigenetic regulators ASXL1, DNMT3A, EZH2, and TET2. PolyPhen-2 and SIFT had significantly lower accuracies in predicting the effects of amino acid substitutions outside CFDs than expected, with especially low sensitivity. On the other hand, only the ISM algorithm showed statistically significant classification of these sequences. It outperformed PolyPhen-2 and SIFT by 15% and 13%, respectively. These results suggest that feature-based methods, like the ISM, are more suitable for the classification of amino acid substitutions outside CFDs than phylogeny-based tools.

  16. Modern Methods of Multidimensional Data Visualization: Analysis, Classification, Implementation, and Applications in Technical Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2016-01-01

    Full Text Available The article deals with theoretical and practical aspects of solving the problem of visualization of multidimensional data as an effective means of multivariate analysis of systems. Several classifications of visualization techniques are proposed, according to data types, visualization objects, and the method of transformation of coordinates and data; the classifications are presented as charts with links to the relevant work. The article also proposes two classifications of modern trends in display technology, including the integration of visualization techniques as one of the modern development trends, along with the introduction of interactive technologies and the dynamics of development processes. It describes several approaches to the visualization problem that address needs generated by tasks such as information retrieval in global networks, the development of bioinformatics, the study and control of business processes, and the development of regions. The article highlights modern visualization tools that are capable of improving the efficiency of multivariate analysis and the search for solutions in the multi-objective optimization of technical systems, but are not very actively used for such studies, such as horizontal graphs and quantile-quantile plots. The paper proposes using choropleth maps, traditionally used in cartography, for the simultaneous presentation of the distribution of several criteria in space. It notes that visualizations of graphs in network applications could be used more actively to describe control systems. The article suggests using heat maps to provide a graphical representation of the sensitivity of system quality criteria under variations of options (multivariate analysis of technical systems), and also notes that it is useful to extend such heat maps to the task of estimating identification quality when constructing system models.

  17. Phenotype classification of zebrafish embryos by supervised learning.

    Directory of Open Access Journals (Sweden)

    Nathalie Jeanray

    Full Text Available Zebrafish is increasingly used to assess biological properties of chemical substances and thus is becoming a specific tool for toxicological and pharmacological studies. The effects of chemical substances on embryo survival and development are generally evaluated manually through microscopic observation by an expert and documented by several typical photographs. Here, we present a methodology to automatically classify brightfield images of wildtype zebrafish embryos according to their defects by using an image analysis approach based on supervised machine learning. We show that, compared to manual classification, automatic classification results in 90 to 100% agreement with consensus voting of biological experts in nine out of eleven considered defects in 3 days old zebrafish larvae. Automation of the analysis and classification of zebrafish embryo pictures reduces the workload and time required for the biological expert and increases the reproducibility and objectivity of this classification.

  18. Discriminative Bayesian Dictionary Learning for Classification.

    Science.gov (United States)

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of the Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along with the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition, and for object and scene-category classification, using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.
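
    For readers unfamiliar with the generic pipeline this abstract builds on, the sketch below shows the sparse-code-then-classify idea using scikit-learn's (non-Bayesian) mini-batch dictionary learner and a logistic-regression classifier on synthetic data; it is only a stand-in for the Beta-process model and the Bayesian classifier proposed in the paper.

        # Sketch of "encode over a learned dictionary, then classify the codes".
        from sklearn.datasets import make_classification
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=400, n_features=64, n_informative=20,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        dico = MiniBatchDictionaryLearning(n_components=48, alpha=1.0, random_state=0)
        codes_tr = dico.fit_transform(X_tr)   # sparse codes over the learned atoms
        codes_te = dico.transform(X_te)       # test samples encoded over the same atoms

        clf = LogisticRegression(max_iter=1000).fit(codes_tr, y_tr)
        print("test accuracy:", clf.score(codes_te, y_te))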

  19. Applicability of the ICD-11 proposal for PTSD: a comparison of prevalence and comorbidity rates with the DSM-IV PTSD classification in two post-conflict samples

    Directory of Open Access Journals (Sweden)

    Nadine Stammel

    2015-05-01

    Full Text Available Background: The World Health Organization recently proposed significant changes to the posttraumatic stress disorder (PTSD) diagnostic criteria in the 11th edition of the International Classification of Diseases (ICD-11). Objective: The present study investigated the impact of these changes in two different post-conflict samples. Method: Prevalence and rates of concurrent depression and anxiety, socio-demographic characteristics, and indicators of clinical severity according to ICD-11 in 1,075 Cambodian and 453 Colombian civilians exposed to civil war and genocide were compared to those according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV). Results: Results indicated significantly lower prevalence rates under the ICD-11 proposal (8.1% Cambodian sample and 44.4% Colombian sample) compared to the DSM-IV (11.2% Cambodian sample and 55.0% Colombian sample). Participants meeting a PTSD diagnosis only under the ICD-11 proposal had significantly lower rates of concurrent depression and a lower concurrent total score (depression and anxiety) compared to participants meeting only DSM-IV diagnostic criteria. There were no significant differences in socio-demographic characteristics and indicators of clinical severity between these two groups. Conclusions: The lower prevalence of PTSD according to the ICD-11 proposal in our samples of persons exposed to a high number of traumatic events may counter the criticism that previous PTSD classifications overused the PTSD diagnosis in populations exposed to extreme stressors. Another goal, to better distinguish PTSD from comorbid disorders, is also supported by our data.

  20. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
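
    The level-1 audio cues named above, short-time energy and zero crossing rate, are straightforward to compute; the sketch below does so over fixed-length frames of a synthetic signal and is not tied to the paper's cricket data or thresholds.

        # Short-time energy and zero crossing rate over fixed-length frames.
        import numpy as np

        def frame_energy_zcr(x, frame_len=1024, hop=512):
            energies, zcrs = [], []
            for start in range(0, len(x) - frame_len + 1, hop):
                frame = x[start:start + frame_len]
                energies.append(np.sum(frame ** 2))
                zcrs.append(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
            return np.array(energies), np.array(zcrs)

        sr = 16000
        t = np.arange(sr) / sr
        audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(sr)  # stand-in clip
        energy, zcr = frame_energy_zcr(audio)
        # A simple level-1 event detector would threshold these two series jointly.
        print(energy[:3], zcr[:3])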

  1. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, using which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques, but most of them did not place much emphasis on the misclassified instances and storage space. In this paper, a classification model is proposed that takes into account the misclassified instances and storage space. The classification model is efficiently developed using a tree structure for reducing the storage complexity and uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied on the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare instances. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiments. Experimental analysis shows that the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes Classifier, Support Vector Machine Classifier, Instance Based Classifier, ID3 Classifier, J48 Classifier, etc.
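
    The boosting step described above, re-weighting instances misclassified in earlier rounds, can be illustrated with scikit-learn's AdaBoostClassifier; the default shallow-tree base learner and scikit-learn's copy of the UCI Wisconsin diagnostic breast cancer data are stand-ins for the paper's CCFIS/LCS classifier and its exact dataset.

        # AdaBoost: each round increases the weights of previously misclassified instances.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        X, y = load_breast_cancer(return_X_y=True)   # stand-in for the UCI breast cancer data
        boosted = AdaBoostClassifier(n_estimators=100, random_state=0)
        print("cross-validated accuracy:", cross_val_score(boosted, X, y, cv=5).mean())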

  2. AN EXTENDED SPECTRAL–SPATIAL CLASSIFICATION APPROACH FOR HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    D. Akbari

    2017-11-01

    Full Text Available In this paper an extended classification approach for hyperspectral imagery based on both spectral and spatial information is proposed. The spatial information is obtained by an enhanced marker-based minimum spanning forest (MSF) algorithm. Three different methods of dimension reduction are first used to obtain the subspace of hyperspectral data: (1) unsupervised feature extraction methods including principal component analysis (PCA), independent component analysis (ICA), and minimum noise fraction (MNF); (2) supervised feature extraction including decision boundary feature extraction (DBFE), discriminant analysis feature extraction (DAFE), and nonparametric weighted feature extraction (NWFE); (3) genetic algorithm (GA). The spectral features obtained are then fed into the enhanced marker-based MSF classification algorithm. In the enhanced MSF algorithm, the markers are extracted from the classification maps obtained by both the SVM and the watershed segmentation algorithm. To evaluate the proposed approach, the Pavia University hyperspectral data is tested. Experimental results show that the proposed approach using GA achieves an overall accuracy approximately 8% higher than the original MSF-based algorithm.
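
    A minimal sketch of the spectral half of this pipeline, PCA reduction of labelled pixels followed by an SVM, is shown below; the file names are hypothetical and the spatial MSF refinement is only noted in a comment.

        # Spectral stage only: PCA subspace + pixel-wise SVM on a hyperspectral cube.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        cube = np.load("pavia_university.npy")        # hypothetical (rows, cols, bands) array
        labels = np.load("pavia_university_gt.npy")   # hypothetical label map

        X = cube.reshape(-1, cube.shape[-1])
        y = labels.ravel()
        mask = y > 0                                  # keep labelled pixels only
        X, y = X[mask], y[mask]

        X_pca = PCA(n_components=15).fit_transform(X)
        X_tr, X_te, y_tr, y_te = train_test_split(X_pca, y, test_size=0.8,
                                                  stratify=y, random_state=0)
        svm = SVC(kernel="rbf", C=100, gamma="scale").fit(X_tr, y_tr)
        print("overall accuracy:", svm.score(X_te, y_te))
        # The enhanced MSF step in the paper then regularises this pixel-wise map
        # spatially, using markers taken from the SVM and watershed outputs.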

  3. Overfitting Reduction of Text Classification Based on AdaBELM

    Directory of Open Access Journals (Sweden)

    Xiaoyue Feng

    2017-07-01

    Full Text Available Overfitting is an important problem in machine learning. Several algorithms, such as the extreme learning machine (ELM), suffer from this issue when facing high-dimensional sparse data, e.g., in text classification. One common issue is that the extent of overfitting is not well quantified. In this paper, we propose a quantitative measure of overfitting referred to as the rate of overfitting (RO) and a novel model, named AdaBELM, to reduce the overfitting. With RO, the overfitting problem can be quantitatively measured and identified. The newly proposed model can achieve high performance on multi-class text classification. To evaluate the generalizability of the new model, we designed experiments based on three datasets, i.e., the 20 Newsgroups, Reuters-21578, and BioMed corpora, which represent balanced, unbalanced, and real application data, respectively. Experimental results demonstrate that AdaBELM can reduce overfitting and outperform classical ELM, decision tree, random forests, and AdaBoost on all three text-classification datasets; for example, it can achieve 62.2% higher accuracy than ELM. Therefore, the proposed model has good generalizability.
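
    The abstract does not reproduce the formal definition of RO, so the sketch below only illustrates the underlying idea with the simplest proxy: the gap between training and test accuracy of a linear text classifier on the 20 Newsgroups corpus.

        # Train/test accuracy gap as a crude proxy for the extent of overfitting.
        from sklearn.datasets import fetch_20newsgroups
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import SGDClassifier
        from sklearn.model_selection import train_test_split

        data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
        X = TfidfVectorizer(max_features=20000).fit_transform(data.data)
        X_tr, X_te, y_tr, y_te = train_test_split(X, data.target, random_state=0)

        clf = SGDClassifier(random_state=0).fit(X_tr, y_tr)   # linear SVM-style classifier
        train_acc, test_acc = clf.score(X_tr, y_tr), clf.score(X_te, y_te)
        print(f"train={train_acc:.3f} test={test_acc:.3f} gap={train_acc - test_acc:.3f}")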

  4. Subsurface event detection and classification using Wireless Signal Networks.

    Science.gov (United States)

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T

    2012-11-05

    Subsurface environment sensing and monitoring applications such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by them in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events within the event-occurring regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in the laboratory. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
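
    A minimum-distance classifier of the kind described, which assigns a window of received-signal-strength readings to the geo-event class with the nearest mean vector, can be sketched in a few lines; the class names and training data below are synthetic placeholders.

        # Minimum-distance classification of RSS windows (synthetic placeholder data).
        import numpy as np

        rng = np.random.default_rng(0)
        classes = ["dry", "water_intrusion", "soil_movement"]
        train = {c: rng.normal(loc=i, scale=0.5, size=(50, 8)) for i, c in enumerate(classes)}
        means = {c: samples.mean(axis=0) for c, samples in train.items()}

        def classify_window(rss_window):
            """Assign an 8-node RSS window to the class with the nearest mean vector."""
            return min(means, key=lambda c: np.linalg.norm(rss_window - means[c]))

        print(classify_window(rng.normal(loc=1, scale=0.5, size=8)))  # likely "water_intrusion"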

  5. Chromatographic profiles of Phyllanthus aqueous extracts samples: a proposition of classification using chemometric models.

    Science.gov (United States)

    Martins, Lucia Regina Rocha; Pereira-Filho, Edenir Rodrigues; Cass, Quezia Bezerra

    2011-04-01

    Taking into consideration the global analysis of complex samples proposed by the metabolomic approach, the chromatographic fingerprint provides an attractive chemical characterization of herbal medicines. Thus, it can be used as a tool in quality control analysis of phytomedicines. The generated multivariate data are better evaluated by chemometric analyses, and they can be modeled by classification methods. "Stone breaker" is a popular Brazilian plant of the Phyllanthus genus, used worldwide to treat renal calculus, hepatitis, and many other diseases. In this study, gradient elution under reversed-phase conditions with detection in the ultraviolet region was used to obtain chemical profiles (fingerprints) of botanically identified samples of six Phyllanthus species. The obtained chromatograms, at 275 nm, were organized in data matrices, and the time shifts of peaks were adjusted using the Correlation Optimized Warping algorithm. Principal Component Analyses were performed to evaluate similarities among cultivated and uncultivated samples and the discrimination among the species and, after that, the samples were used to compose three classification models using Soft Independent Modeling of Class Analogy, K-Nearest Neighbor, and Partial Least Squares for Discriminant Analysis. The ability of the classification models was discussed after their successful application for authenticity evaluation of 25 commercial samples of "stone breaker."

  6. GPGPU Accelerated Deep Object Classification on a Heterogeneous Mobile Platform

    Directory of Open Access Journals (Sweden)

    Syed Tahir Hussain Rizvi

    2016-12-01

    Full Text Available Deep convolutional neural networks achieve state-of-the-art performance in image classification. The computational and memory requirements of such networks are, however, huge, and that is an issue on embedded devices due to their constraints. Most of this complexity derives from the convolutional layers and in particular from the matrix multiplications they entail. This paper proposes a complete approach to image classification providing common layers used in neural networks. Namely, the proposed approach relies on a heterogeneous CPU-GPU scheme for performing convolutions in the transform domain. The Compute Unified Device Architecture (CUDA)-based implementation of the proposed approach is evaluated over three different image classification networks on a Tegra K1 CPU-GPU mobile processor. Experiments show that the presented heterogeneous scheme boasts a 50× speedup over the CPU-only reference and outperforms a GPU-based reference by 2×, while slashing the power consumption by nearly 30%.
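
    The transform-domain idea can be demonstrated on the CPU alone: an FFT-based convolution produces the same result as direct spatial convolution, with fewer multiplications for large kernels. The sketch below uses NumPy/SciPy only and is not the paper's heterogeneous CUDA implementation.

        # Direct vs. transform-domain 2-D convolution give (numerically) the same output.
        import numpy as np
        from scipy.signal import convolve2d, fftconvolve

        image = np.random.rand(224, 224).astype(np.float32)
        kernel = np.random.rand(11, 11).astype(np.float32)

        direct = convolve2d(image, kernel, mode="same")     # spatial-domain convolution
        spectral = fftconvolve(image, kernel, mode="same")  # FFT-based convolution
        print("max abs difference:", np.max(np.abs(direct - spectral)))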

  7. American College of Rheumatology classification criteria for Sjögren's syndrome

    DEFF Research Database (Denmark)

    Shiboski, S C; Shiboski, C H; Criswell, L A

    2012-01-01

    We propose new classification criteria for Sjögren's syndrome (SS), which are needed considering the emergence of biologic agents as potential treatments and their associated comorbidity. These criteria target individuals with signs/symptoms suggestive of SS.

  8. Transport of cohesive sediments : Classification and requirements for turbulence modelling

    NARCIS (Netherlands)

    Bruens, A.W.

    1999-01-01

    This report describes a classification of sediment-laden flows, which gives an overview of the different transport forms of fine sediment and the interactions of the different processes as acting in an estuary. At the outset of the proposed classification a distinction in physical states of

  9. Non-invasive classification of severe sepsis and systemic inflammatory response syndrome using a nonlinear support vector machine: a preliminary study

    International Nuclear Information System (INIS)

    Tang, Collin H H; Savkin, Andrey V; Chan, Gregory S H; Middleton, Paul M; Bishop, Sarah; Lovell, Nigel H

    2010-01-01

    Sepsis has been defined as the systemic response to infection in critically ill patients, with severe sepsis and septic shock representing increasingly severe stages of the same disease. Based on non-invasive cardiovascular spectrum analysis, this paper presents a pilot study on the potential use of the nonlinear support vector machine (SVM) in the classification of the sepsis continuum into severe sepsis and systemic inflammatory response syndrome (SIRS) groups. 28 consecutive eligible patients attending the emergency department with presumptive diagnoses of sepsis syndrome participated in this study. Through principal component analysis (PCA), the first three principal components were used to construct the SVM feature space. The SVM classifier with a fourth-order polynomial kernel was found to have a better overall performance compared with the other SVM classifiers, showing the following classification results: sensitivity = 94.44%, specificity = 62.50%, positive predictive value = 85.00%, negative predictive value = 83.33% and accuracy = 84.62%. Our classification results suggested that the combinatory use of cardiovascular spectrum analysis and the proposed SVM classification of autonomic neural activity is a potentially useful clinical tool to classify the sepsis continuum into two distinct pathological groups of varying sepsis severity.
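
    A sketch of the classification stage as described, three principal components feeding a fourth-order polynomial-kernel SVM with sensitivity and specificity read off the confusion matrix, is given below; the feature matrix is synthetic because the study's cardiovascular spectral features are not reproduced here.

        # PCA(3) + 4th-order polynomial SVM, evaluated with sensitivity/specificity.
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.metrics import confusion_matrix
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=28, n_features=12, n_informative=6,
                                   weights=[0.3, 0.7], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

        model = make_pipeline(StandardScaler(), PCA(n_components=3),
                              SVC(kernel="poly", degree=4))
        model.fit(X_tr, y_tr)

        tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))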

  10. Single-cultivar extra virgin olive oil classification using a potentiometric electronic tongue.

    Science.gov (United States)

    Dias, Luís G; Fernandes, Andreia; Veloso, Ana C A; Machado, Adélio A S C; Pereira, José A; Peres, António M

    2014-10-01

    Label authentication of monovarietal extra virgin olive oils is of great importance. A novel approach based on a potentiometric electronic tongue is proposed to classify oils obtained from single olive cultivars (Portuguese cvs. Cobrançosa, Madural, Verdeal Transmontana; Spanish cvs. Arbequina, Hojiblanca, Picual). A meta-heuristic simulated annealing algorithm was applied to select the most informative sets of sensors to establish predictive linear discriminant models. Olive oils were correctly classified according to olive cultivar (sensitivities greater than 97%) and each Spanish olive oil was satisfactorily discriminated from the Portuguese ones with the exception of cv. Arbequina (sensitivities from 61% to 98%). Also, the discriminant ability was related to the polar compound contents of olive oils and so, indirectly, to organoleptic properties like bitterness, astringency or pungency. Therefore the proposed E-tongue can be foreseen as a useful auxiliary tool for trained sensory panels for the classification of monovarietal extra virgin olive oils. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Classification via Clustering for Predicting Final Marks Based on Student Participation in Forums

    Science.gov (United States)

    Lopez, M. I.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes a classification via clustering approach to predict the final marks in a university course on the basis of forum data. The objective is twofold: to determine if student participation in the course forum can be a good predictor of the final marks for the course and to examine whether the proposed classification via clustering…

  12. 75 FR 21592 - Proposed Information Collection; Comment Request; Business and Professional Classification Report

    Science.gov (United States)

    2010-04-26

    AGENCY: U.S. Census Bureau, Commerce. ACTION: Notice. Form Number: SQ-CLASS(00). Type of Review: Regular. Affected Public: Businesses and other organizations. Information is collected about the business in such areas as: primary business activity, company structure, size, and business operations...

  13. Towards an integrated phylogenetic classification of the Tremellomycetes.

    Science.gov (United States)

    Liu, X-Z; Wang, Q-M; Göker, M; Groenewald, M; Kachalkin, A V; Lumbsch, H T; Millanes, A M; Wedin, M; Yurkov, A M; Boekhout, T; Bai, F-Y

    2015-06-01

    Families and genera assigned to Tremellomycetes have been mainly circumscribed by morphology and for the yeasts also by biochemical and physiological characteristics. This phenotype-based classification is largely in conflict with molecular phylogenetic analyses. Here a phylogenetic classification framework for the Tremellomycetes is proposed based on the results of phylogenetic analyses from a seven-genes dataset covering the majority of tremellomycetous yeasts and closely related filamentous taxa. Circumscriptions of the taxonomic units at the order, family and genus levels recognised were quantitatively assessed using the phylogenetic rank boundary optimisation (PRBO) and modified general mixed Yule coalescent (GMYC) tests. In addition, a comprehensive phylogenetic analysis on an expanded LSU rRNA (D1/D2 domains) gene sequence dataset covering as many as available teleomorphic and filamentous taxa within Tremellomycetes was performed to investigate the relationships between yeasts and filamentous taxa and to examine the stability of undersampled clades. Based on the results inferred from molecular data and morphological and physiochemical features, we propose an updated classification for the Tremellomycetes. We accept five orders, 17 families and 54 genera, including seven new families and 18 new genera. In addition, seven families and 17 genera are emended and one new species name and 185 new combinations are proposed. We propose to use the term pro tempore or pro tem. in abbreviation to indicate the species names that are temporarily maintained.

  14. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    Science.gov (United States)

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds a global optimal solution rapidly and more precisely. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than Elastic Net SVM and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM
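
    For concreteness, the penalties being combined can be written out directly; the sketch below implements the standard SCAD penalty (with the usual a = 3.7) and adds a ridge term on top, as the abstract describes for Elastic SCAD. It is only a penalty-evaluation helper, not the authors' penalized-SVM solver.

        # SCAD penalty (Fan & Li form) plus a ridge term, as in "Elastic SCAD".
        import numpy as np

        def scad_penalty(beta, lam, a=3.7):
            b = np.abs(beta)
            linear = lam * b                                           # |beta| <= lam
            quadratic = (2 * a * lam * b - b ** 2 - lam ** 2) / (2 * (a - 1))  # lam < |beta| <= a*lam
            constant = (a + 1) * lam ** 2 / 2                          # |beta| > a*lam
            return np.where(b <= lam, linear, np.where(b <= a * lam, quadratic, constant))

        def elastic_scad_penalty(beta, lam1, lam2, a=3.7):
            # The SCAD part keeps sparsity; the ridge part stabilises correlated,
            # non-sparse feature sets.
            return scad_penalty(beta, lam1, a) + lam2 * np.asarray(beta) ** 2

        betas = np.linspace(-4, 4, 9)
        print(elastic_scad_penalty(betas, lam1=1.0, lam2=0.1))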

  15. R and D proposals to improve outages operation. Methods, practices and tools

    International Nuclear Information System (INIS)

    Dionis, Francois

    2014-01-01

    This paper deals with outage operation improvement. It offers a number of avenues concerning the interactions between operation and maintenance activities, with a methodological perspective and proposals concerning the information system. From the methodological point of view, careful modelling of plant systems may make it possible to represent the characteristics needed to optimize tagouts, alignment procedures and the schedule. Tools must be taken into account for new tagout practices such as tag sharing. It is possible to take advantage of 2D drawings integrated into the information system in order to improve data checks and to visualize operation activities. An integrated set of mobile applications should allow field operators to connect to the information system for better and safer performance. (author)

  16. Clinically orientated classification incorporating shoulder balance for the surgical treatment of adolescent idiopathic scoliosis.

    Science.gov (United States)

    Elsebaie, H B; Dannawi, Z; Altaf, F; Zaidan, A; Al Mukhtar, M; Shaw, M J; Gibson, A; Noordeen, H

    2016-02-01

    The achievement of shoulder balance is an important measure of successful scoliosis surgery. No previously described classification system has taken shoulder balance into account. We propose a simple classification system for AIS based on two components which include the curve type and shoulder level. Altogether, three curve types have been defined according to the size and location of the curves, each curve pattern is subdivided into type A or B depending on the shoulder level. This classification was tested for interobserver reproducibility and intraobserver reliability. A retrospective analysis of the radiographs of 232 consecutive cases of AIS patients treated surgically between 2005 and 2009 was also performed. Three major types and six subtypes were identified. Type I accounted for 30 %, type II 28 % and type III 42 %. The retrospective analysis showed three patients developed a decompensation that required extension of the fusion. One case developed worsening of shoulder balance requiring further surgery. This classification was tested for interobserver and intraobserver reliability. The mean kappa coefficients for interobserver reproducibility ranged from 0.89 to 0.952, while the mean kappa value for intraobserver reliability was 0.964 indicating a good-to-excellent reliability. The treatment algorithm guides the spinal surgeon to achieve optimal curve correction and postoperative shoulder balance whilst fusing the smallest number of spinal segments. The high interobserver reproducibility and intraobserver reliability makes it an invaluable tool to describe scoliosis curves in everyday clinical practice.

  17. Research on quality assurance classification methodology for domestic AP1000 nuclear power projects

    International Nuclear Information System (INIS)

    Bai Jinhua; Jiang Huijie; Li Jingyan

    2012-01-01

    To meet the quality assurance classification requirements of domestic nuclear safety codes and standards, this paper analyzes the quality assurance classification methodology of domestic AP1000 nuclear power projects at present, and proposes the quality assurance classification methodology for subsequent AP1000 nuclear power projects. (authors)

  18. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities (so-called individuals) is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For the validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement in classification speed, up to 80,000 times faster than the Protégé-based approach.

  19. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A Proposal of Methods and Tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In R. Koper & K. Stefanov (Eds.). Proceedings of International Workshop in Learning Networks for Lifelong Competence

  20. Classification of EEG-P300 Signals Extracted from Brain Activities in BCI Systems Using ν-SVM and BLDA Algorithms

    Directory of Open Access Journals (Sweden)

    Ali MOMENNEZHAD

    2014-06-01

    Full Text Available In this paper, a linear predictive coding (LPC) model is used to improve classification accuracy, convergence speed to maximum accuracy, and maximum bitrates in a brain computer interface (BCI) system based on extracting EEG-P300 signals. First, the EEG signal is filtered in order to eliminate high frequency noise. Then, the parameters of the filtered EEG signal are extracted using the LPC model. Finally, the samples are reconstructed from the LPC coefficients and two classifiers, (a) Bayesian linear discriminant analysis (BLDA) and (b) the ν-support vector machine (ν-SVM), are applied for classification. The performance of the proposed algorithm is compared with Fisher linear discriminant analysis (FLDA). Results show that our algorithm is much more efficient in improving classification accuracy and convergence speed to maximum accuracy. For example, with the 8-electrode configuration for subject S1, the proposed BLDA-with-LPC and ν-SVM-with-LPC algorithms improve total classification accuracy by 9.4% and 1.7%, respectively. Moreover, for subject 7 the LPC+BLDA and LPC+ν-SVM algorithms converged to maximum accuracy after the 11th block, whereas the FLDA algorithm did not converge to maximum accuracy with the same configuration. So, it can be used as a promising tool in designing BCI systems.
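
    LPC parameter extraction itself is compact: the predictor coefficients solve a Toeplitz (Yule-Walker) system built from the signal's autocorrelation. The sketch below shows this with SciPy on a synthetic stand-in for a filtered EEG-P300 epoch; the model order is an arbitrary example.

        # LPC coefficients via the autocorrelation (Yule-Walker) method.
        import numpy as np
        from scipy.linalg import solve_toeplitz

        def lpc_coefficients(x, order):
            x = np.asarray(x, dtype=float) - np.mean(x)
            r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
            # Solve T a = r[1:], with T the symmetric Toeplitz autocorrelation matrix,
            # so that x[n] is approximated by sum_k a[k] * x[n - k].
            return solve_toeplitz(r[:order], r[1:order + 1])

        rng = np.random.default_rng(0)
        epoch = rng.standard_normal(256)      # stand-in for a filtered EEG-P300 epoch
        coeffs = lpc_coefficients(epoch, order=10)
        # These coefficients (or the signal reconstructed from them) become the
        # feature vector fed to the BLDA or nu-SVM classifier.
        print(coeffs)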

  1. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  2. Trends and concepts in fern classification.

    Science.gov (United States)

    Christenhusz, Maarten J M; Chase, Mark W

    2014-03-01

    Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called 'fern allies' (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is sister to all other vascular plants, whereas

  3. Influence of nuclei segmentation on breast cancer malignancy classification

    Science.gov (United States)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on a co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with the previously extracted features. The compared classifiers are Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% across the four classifiers. The best performance was recorded for the multilayer perceptron with an error rate of 3.07% using fuzzy c-means segmentation.

  4. Development of an intelligent ultrasonic welding defect classification software

    International Nuclear Information System (INIS)

    Song, Sung Jin; Kim, Hak Joon; Jeong, Hee Don

    1997-01-01

    Ultrasonic pattern recognition is the most effective approach to the problem of discriminating types of flaws in weldments based on ultrasonic flaw signals. In spite of significant progress in the research on this methodology, it has not been widely used in many practical ultrasonic inspections of weldments in industry. Hence, for the convenient application of this approach in many practical situations, we develop intelligent ultrasonic signature classification software which can discriminate types of flaws in weldments based on their ultrasonic signals, using various tools in artificial intelligence such as neural networks. This software shows excellent performance in an experimental problem where flaws in weldments are classified into the two categories of cracks and non-cracks. This performance demonstrates the strong potential of this software as a practical tool for ultrasonic flaw classification in weldments.

  5. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and enlarge the scientific returns, both in investigation on Earth and planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that the learned features can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed class. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  6. Multiview Discriminative Geometry Preserving Projection for Image Classification

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2014-01-01

    Full Text Available In many image classification applications, it is common to extract multiple visual features from different views to describe an image. Since different visual features have their own specific statistical properties and discriminative powers for image classification, the conventional solution for multiple view data is to concatenate these feature vectors as a new feature vector. However, this simple concatenation strategy not only ignores the complementary nature of different views, but also ends up with “curse of dimensionality.” To address this problem, we propose a novel multiview subspace learning algorithm in this paper, named multiview discriminative geometry preserving projection (MDGPP for feature extraction and classification. MDGPP can not only preserve the intraclass geometry and interclass discrimination information under a single view, but also explore the complementary property of different views to obtain a low-dimensional optimal consensus embedding by using an alternating-optimization-based iterative algorithm. Experimental results on face recognition and facial expression recognition demonstrate the effectiveness of the proposed algorithm.

  7. MRI Brain Images Healthy and Pathological Tissues Classification with the Aid of Improved Particle Swarm Optimization and Neural Network

    Science.gov (United States)

    Sheejakumari, V.; Sankara Gomathi, B.

    2015-01-01

    The advantages of magnetic resonance imaging (MRI) over other diagnostic imaging modalities are its higher spatial resolution and its better discrimination of soft tissue. In a previous tissue classification method, healthy and pathological tissues were classified from MRI brain images using HGANN, but that method performs inadequately in terms of sensitivity and accuracy. To avoid these drawbacks, a new tissue classification method is proposed in this paper, which uses an improved particle swarm optimization (IPSO) technique to classify the healthy and pathological tissues from the given MRI images. Our proposed classification method includes the same four stages, namely, tissue segmentation, feature extraction, heuristic feature selection, and tissue classification. The method is implemented and the results are analyzed in terms of various statistical performance measures. The results show the effectiveness of the proposed classification method in classifying the tissues and the improvement achieved in the sensitivity and accuracy measures. Furthermore, the performance of the proposed technique is evaluated by comparing it with other segmentation methods. PMID:25977706

  8. Classification of technogenic impacts on the geological medium

    International Nuclear Information System (INIS)

    Trofimov, V.T.; Korolev, V.A.; Gerasimova, A.S.

    1995-01-01

    The available systems for classifying technology-induced impacts on the geological environment are analyzed, and a classification elaborated by the authors is presented that allows the integrated impact to be broken into individual components for subsequent analysis, evaluation and representation in cartographic models. This classification assumes the division of technology-induced impacts into classes and subclasses. The first class (impacts of physical nature) includes a subclass of radioactive impacts in which, in turn, two types of impacts are distinguished: radioactive contamination and radiation decontamination of the components of the geological environment. The proposed classification can serve as the basis for developing standards and regulations for the typification and evaluation of technology-induced impacts on the geological environment. 27 refs., 1 tab

  9. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    Science.gov (United States)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.

  10. Mapping the rehabilitation interventions of a community stroke team to the extended International Classification of Functioning, Disability and Health Core Set for Stroke.

    Science.gov (United States)

    Evans, Melissa; Hocking, Clare; Kersten, Paula

    2017-12-01

    The aim of this study was to evaluate whether the Extended International Classification of Functioning, Disability and Health Core Set for Stroke captured the interventions of a community stroke rehabilitation team situated in a large city in New Zealand. It was proposed that the results would identify the contribution of each discipline, and the gaps and differences in service provision to Māori and non-Māori. Applying the Extended International Classification of Functioning, Disability and Health Core Set for Stroke in this way would also inform whether this core set should be adopted in New Zealand. Interventions were retrospectively extracted from 18 medical records and linked to the International Classification of Functioning, Disability and Health and the Extended International Classification of Functioning, Disability and Health Core Set for Stroke. The frequencies of linked interventions and the health discipline providing the intervention were calculated. Analysis revealed that 98.8% of interventions provided by the rehabilitation team could be linked to the Extended International Classification of Functioning, Disability and Health Core Set for Stroke, with more interventions for body function and structure than for activities and participation; no interventions for emotional concerns; and limited interventions for community, social and civic life. Results support previous recommendations for additions to the EICSS. The results support the use of the Extended International Classification of Functioning, Disability and Health Core Set for Stroke in New Zealand and demonstrate its use as a quality assurance tool that can evaluate the scope and practice of a rehabilitation service. Implications for Rehabilitation The Extended International Classification of Functioning Disability and Health Core Set for Stroke appears to represent the stroke interventions of a community stroke rehabilitation team in New Zealand. As a result, researchers and clinicians may have

  11. Advances in the classification and treatment of mastocytosis

    DEFF Research Database (Denmark)

    Valent, Peter; Akin, Cem; Hartmann, Karin

    2017-01-01

    Mastocytosis is a term used to denote a heterogeneous group of conditions defined by the expansion and accumulation of clonal (neoplastic) tissue mast cells in various organs. The classification of the World Health Organization (WHO) divides the disease into cutaneous mastocytosis, systemic...... leukemia. The clinical impact and prognostic value of this classification has been confirmed in numerous studies, and its basic concept remains valid. However, refinements have recently been proposed by the consensus group, the WHO, and the European Competence Network on Mastocytosis. In addition, new...... of mastocytosis, with emphasis on classification, prognostication, and emerging new treatment options in advanced systemic mastocytosis....

  12. Classification of solid industrial waste based on ecotoxicology tests using Daphnia magna: an alternative

    OpenAIRE

    William Gerson Matias; Vanessa Guimarães Machado; Cátia Regina Silva de Carvalho-Pinto; Débora Monteiro Brentano; Letícia Flohr

    2005-01-01

    The adequate treatment and final disposal of solid industrial wastes depends on their classification into class I or II. This classification is proposed by NBR 10.004; however, it is complex and time-consuming. With a view to facilitating this classification, the use of assays with Daphnia magna is proposed. These assays make possible the identification of toxic chemicals in the leach, which denotes the presence of one of the characteristics described by NBR 10.004, the toxicity, which is a s...

  13. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR compared to state-of-the-art algorithms.

  14. Substantiation of Biological Assets Classification Indexes for Enhancing Their Accounting Efficiency

    OpenAIRE

    Rayisa Tsyhan; Olha Chubka

    2013-01-01

    Present-day national agricultural companies sell their products in both domestic and foreign markets, which has a significant impact on the specifics of biological assets accounting. The article reviews the biological assets classification provided in the Practical Guide to Accounting for Biological Assets as well as classifications proposed by various scientists. Based on this analysis, the biological assets classification has been supplemented with new classification factors and their appropriateness ha...

  15. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enabled us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
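
    The final selection-and-classification stage can be sketched with scikit-learn: a univariate Fisher-style score ranks precomputed scatter-pattern features and a linear SVM classifies the strains. The synthetic feature matrix below stands in for the Zernike/Chebyshev moments and Haralick textures, which are not computed here.

        # Rank features with a univariate F-score, then classify with a linear SVM.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1000, n_features=200, n_informative=30,
                                   n_classes=10, n_clusters_per_class=1, random_state=0)

        pipe = make_pipeline(StandardScaler(),
                             SelectKBest(f_classif, k=40),   # keep the most promising features
                             SVC(kernel="linear"))
        print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())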

  16. Failure diagnosis using deep belief learning based health state classification

    International Nuclear Information System (INIS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2013-01-01

    Effective health diagnosis provides multifarious benefits such as improved safety, improved reliability and reduced costs for operation and maintenance of complex engineered systems. This paper presents a novel multi-sensor health diagnosis method using deep belief network (DBN). DBN has recently become a popular approach in machine learning for its promised advantages such as fast inference and the ability to encode richer and higher order network structures. The DBN employs a hierarchical structure with multiple stacked restricted Boltzmann machines and works through a layer by layer successive learning process. The proposed multi-sensor health diagnosis methodology using DBN based state classification can be structured in three consecutive stages: first, defining health states and preprocessing sensory data for DBN training and testing; second, developing DBN based classification models for diagnosis of predefined health states; third, validating DBN classification models with testing sensory dataset. Health diagnosis using DBN based health state classification technique is compared with four existing diagnosis techniques. Benchmark classification problems and two engineering health diagnosis applications: aircraft engine health diagnosis and electric power transformer health diagnosis are employed to demonstrate the efficacy of the proposed approach
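
    As a rough, non-equivalent approximation of the three-stage recipe, the sketch below stacks two scikit-learn restricted Boltzmann machines in front of a logistic-regression classifier and validates on held-out data; the digits dataset is only a stand-in for the multi-sensor health snapshots, and the result is not a fully fine-tuned DBN.

        # Greedy layer-wise RBM feature learning followed by a supervised classifier.
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import BernoulliRBM
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import MinMaxScaler

        X, y = load_digits(return_X_y=True)          # stand-in for sensor snapshots
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        dbn_like = Pipeline([
            ("scale", MinMaxScaler()),               # RBMs expect inputs in [0, 1]
            ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, random_state=0)),
            ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=0)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        dbn_like.fit(X_tr, y_tr)                     # stage 2: train the stacked model
        print("held-out state accuracy:", dbn_like.score(X_te, y_te))  # stage 3: validate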

  17. Classification of radiolarian images with hand-crafted and deep features

    Science.gov (United States)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology still remains a low-cost and one of the most convenient ways to date deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies. Automated image classification will allow these analyses to be made promptly. In this study, a method for automatic radiolarian image classification is proposed on Scanning Electron Microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features, such as invariant moments, wavelet moments, Gabor features and basic morphological features, and deep features obtained from a pre-trained Convolutional Neural Network (CNN). Feature selection is applied over the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative compared to hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.
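
    The deep-feature branch can be sketched with an ImageNet-pretrained ResNet-18 (torchvision 0.13+ API assumed): the classifier head is dropped, the 512-dimensional penultimate activations are kept, and a selection-plus-SVM pipeline is defined for them. The image paths and labels below are hypothetical.

        # Extract penultimate-layer features from a pretrained CNN for later classification.
        import torch
        from torchvision import models
        from PIL import Image
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        weights = models.ResNet18_Weights.DEFAULT
        backbone = models.resnet18(weights=weights)
        backbone.fc = torch.nn.Identity()            # drop the ImageNet classifier head
        backbone.eval()
        preprocess = weights.transforms()

        def deep_features(path):
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                return backbone(img).squeeze(0).numpy()   # 512-dimensional feature vector

        paths = ["sem/rad_001.png", "sem/rad_002.png"]    # hypothetical image paths
        X = [deep_features(p) for p in paths]
        # With a full labelled image set, these vectors feed a selection + SVM pipeline:
        clf = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="linear"))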

  18. Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.

    Science.gov (United States)

    Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck

    2018-04-20

    Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oils according to their corresponding color histograms. For this, we applied color histograms in different color spaces such as red green blue (RGB), grayscale, and hue saturation intensity (HSI) in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated by using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Here, two common solutions for solving a multiclass classification problem were applied: (1) transformation to a binary classification problem using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite good performances by the SVM classification model, QDA and LDA provided better results, up to 92% for RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
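
    The feature side of this approach is easy to reproduce: per-channel colour histograms are concatenated into one vector per image and handed to an LDA classifier. The sketch below uses RGB plus HSV (as an approximation of the HSI space named above); the image paths and labels are hypothetical.

        # Per-channel colour histograms as features for an LDA classifier.
        import numpy as np
        from PIL import Image
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def colour_histogram(path, bins=32):
            rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
            hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)  # HSI stand-in
            feats = []
            for img in (rgb, hsv):
                for ch in range(3):
                    hist, _ = np.histogram(img[..., ch], bins=bins, range=(0, 255), density=True)
                    feats.append(hist)
            return np.concatenate(feats)          # 6 channels x 32 bins = 192 features

        paths = ["oils/brandA_01.jpg", "oils/brandB_01.jpg"]   # hypothetical paths
        labels = ["brandA", "brandB"]                          # hypothetical labels
        X = np.vstack([colour_histogram(p) for p in paths])
        lda = LinearDiscriminantAnalysis()
        # lda.fit(X, labels)   # fit once a full labelled image set has been collected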

  19. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A proposal of methods and tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. B. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In E. J. R. Koper & K. Stefanov (Eds.), Proceedings of International Workshop on Learning Networks for Lifelong

  20. Revised Soil Classification System for Coarse-Fine Mixtures

    KAUST Repository

    Park, Junghee; Santamarina, Carlos

    2017-01-01

    Soil classification systems worldwide capture great physical insight and enable geotechnical engineers to anticipate the properties and behavior of soils by grouping them into similar response categories based on their index properties. Yet gravimetric analysis and data trends summarized from published papers reveal critical limitations in the soil group boundaries adopted in current systems. In particular, current classification systems fail to capture the dominant role of fines on the mechanical and hydraulic properties of soils. A revised soil classification system (RSCS) for coarse-fine mixtures is proposed herein. Definitions of classification boundaries use the low and high void ratios that gravel, sand, and fines may attain. This research adopts emax and emin for gravels and sands, and three distinctive void ratio values for fines: soft eF|10 kPa and stiff eF|1 MPa for mechanical response (at effective stress 10 kPa and 1 MPa, respectively), and viscous λ⋅eF|LL for fluid flow control, where λ=2log(LL−25) and eF|LL is the void ratio at the liquid limit. For classification purposes, these void ratios can be estimated from index properties such as particle shape, the coefficient of uniformity, and the liquid limit. Analytically computed and data-adjusted boundaries are soil-specific, in contrast with the Unified Soil Classification System (USCS). Threshold fractions for mechanical control and for flow control are quite distinct in the proposed system. Therefore, the RSCS uses a two-name nomenclature whereby the first letters identify the component(s) that controls mechanical properties, followed by a letter (shown in parentheses) that identifies the component that controls fluid flow. Sample charts in this paper and a Microsoft Excel spreadsheet facilitate the implementation of this revised classification system.
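
    A small worked example of the fines terms quoted above, assuming the logarithm in λ = 2 log(LL − 25) is base 10 and that the void ratio at the liquid limit can be approximated by e ≈ Gs · LL/100 for a saturated specimen; both assumptions are illustrative only, and the full RSCS boundary equations are given in the cited paper.

```python
import math

def viscous_void_ratio(LL, e_F_LL):
    """Return lambda and the 'viscous' flow-control term lambda * e_F|LL."""
    lam = 2.0 * math.log10(LL - 25.0)     # assumed base-10 logarithm
    return lam, lam * e_F_LL

LL = 60.0                      # liquid limit in percent (hypothetical fines)
e_F_LL = 2.65 * LL / 100.0     # e ~ Gs * w at full saturation, with Gs = 2.65 (assumption)
lam, visc = viscous_void_ratio(LL, e_F_LL)
print(f"lambda = {lam:.2f}, lambda * e_F|LL = {visc:.2f}")
```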

  2. An intelligent temporal pattern classification system using fuzzy ...

    Indian Academy of Sciences (India)

    In this paper, we propose a new pattern classification system by combining Temporal features with Fuzzy Min–Max (TFMM) neural network based classifier for effective decision support in medical diagnosis. Moreover, a Particle Swarm Optimization (PSO) algorithm based rule extractor is also proposed in this work for ...

  3. Vessel-guided airway segmentation based on voxel classification

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau; Sporring, Jon; Ashraf, Haseem

    2008-01-01

    This paper presents a method for improving airway tree segmentation using vessel orientation information. We use the fact that an airway branch is always accompanied by an artery, with both structures having similar orientations. This work is based on a voxel classification airway segmentation method proposed previously. The probability of a voxel belonging to the airway, obtained from the voxel classification method, is augmented with an orientation similarity measure as a criterion for region growing. The orientation similarity measure of a voxel indicates how similar the orientation of the voxel's surroundings, estimated based on a tube model, is to that of a neighboring vessel. The proposed method is tested on 20 CT images from different subjects selected randomly from a lung cancer screening study. The lengths of the airway branches obtained with the proposed method are significantly...

  4. Modular playware as a playful diagnosis tool for autistic children

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop

    2009-01-01

    children. Using artificial neural networks for automatic classification of the individual construction practices, we may compare this classification with the diagnosis of the children, and possibly obtain a supplementary diagnosis tool based on the autistic children's free play with the modular...

  5. Machine learning classification with confidence: application of transductive conformal predictors to MRI-based diagnostic and prognostic markers in depression.

    Science.gov (United States)

    Nouretdinov, Ilia; Costafreda, Sergi G; Gammerman, Alexander; Chervonenkis, Alexey; Vovk, Vladimir; Vapnik, Vladimir; Fu, Cynthia H Y

    2011-05-15

    There is rapidly accumulating evidence that the application of machine learning classification to neuroimaging measurements may be valuable for the development of diagnostic and prognostic prediction tools in psychiatry. However, current methods do not produce a measure of the reliability of the predictions. Knowing the risk of the error associated with a given prediction is essential for the development of neuroimaging-based clinical tools. We propose a general probabilistic classification method to produce measures of confidence for magnetic resonance imaging (MRI) data. We describe the application of transductive conformal predictor (TCP) to MRI images. TCP generates the most likely prediction and a valid measure of confidence, as well as the set of all possible predictions for a given confidence level. We present the theoretical motivation for TCP, and we have applied TCP to structural and functional MRI data in patients and healthy controls to investigate diagnostic and prognostic prediction in depression. We verify that TCP predictions are as accurate as those obtained with more standard machine learning methods, such as support vector machine, while providing the additional benefit of a valid measure of confidence for each prediction. Copyright © 2010 Elsevier Inc. All rights reserved.
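
    A minimal sketch of a transductive conformal predictor with a 1-nearest-neighbour nonconformity measure, using synthetic feature vectors in place of the MRI-derived features: for each candidate label the test object is temporarily added to the training set, nonconformity scores are recomputed, and the p-value is the fraction of scores at least as large as the test object's.

```python
import numpy as np

def nonconformity(X, y):
    """alpha_i = distance to nearest same-class point / distance to nearest other-class point."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.array([d[i, y == y[i]].min() / d[i, y != y[i]].min() for i in range(len(y))])

def tcp_p_values(X_train, y_train, x_test, labels):
    """Transductive step: one p-value per candidate label for a single test object."""
    p = {}
    for lab in labels:
        X_aug = np.vstack([X_train, x_test])
        y_aug = np.append(y_train, lab)
        alphas = nonconformity(X_aug, y_aug)
        p[lab] = np.mean(alphas >= alphas[-1])    # test object included in the count
    return p

rng = np.random.default_rng(3)
X_train = np.vstack([rng.normal(-1.0, 1.0, (30, 5)),     # e.g. healthy controls (synthetic)
                     rng.normal(+1.0, 1.0, (30, 5))])    # e.g. patients (synthetic)
y_train = np.array([0] * 30 + [1] * 30)

p_vals = tcp_p_values(X_train, y_train, rng.normal(1.0, 1.0, 5), labels=(0, 1))
prediction = max(p_vals, key=p_vals.get)                 # most likely label
confidence = 1.0 - sorted(p_vals.values())[-2]           # 1 minus the second-largest p-value
print(p_vals, prediction, round(confidence, 3))
```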

  6. A Neural-Network-Based Approach to White Blood Cell Classification

    Directory of Open Access Journals (Sweden)

    Mu-Chun Su

    2014-01-01

    Full Text Available This paper presents a new white blood cell classification system for the recognition of five types of white blood cells. We propose a new segmentation algorithm for the segmentation of white blood cells from smear images. The core idea of the proposed segmentation algorithm is to find a discriminating region of white blood cells in the HSI color space. Pixels with color lying in the discriminating region, described by an ellipsoidal region, are regarded as the nucleus and granules of the cytoplasm of a white blood cell. Then, through a further morphological process, we can segment a white blood cell from a smear image. Three kinds of features (i.e., geometrical features, color features, and LDP-based texture features) are extracted from the segmented cell. These features are fed into three different kinds of neural networks to recognize the types of the white blood cells. To test the effectiveness of the proposed white blood cell classification system, a total of 450 white blood cell images were used. The highest overall recognition rate reached 99.11%. Simulation results showed that the proposed white blood cell classification system is very competitive with some existing systems.
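
    A minimal sketch of the "ellipsoidal discriminating region" idea, with HSV standing in for the HSI space and scikit-image assumed: an ellipsoid is fitted to example nucleus/cytoplasm colours through their mean and covariance, and pixels whose Mahalanobis distance falls inside the ellipsoid are kept as the cell mask. The image, the sample colours, and the radius are placeholders.

```python
import numpy as np
from skimage.color import rgb2hsv

def ellipsoid_mask(img_rgb, sample_colors_hsv, radius=2.0):
    """Boolean mask of pixels whose HSV colour lies inside the fitted ellipsoid."""
    mu = sample_colors_hsv.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample_colors_hsv, rowvar=False) + 1e-6 * np.eye(3))
    diff = rgb2hsv(img_rgb).reshape(-1, 3) - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distance
    return (d2 <= radius ** 2).reshape(img_rgb.shape[:2])

rng = np.random.default_rng(4)
img = rng.random(size=(64, 64, 3))                                      # synthetic smear image
samples = rng.normal(loc=[0.7, 0.5, 0.4], scale=0.05, size=(200, 3))    # example nucleus colours
print("pixels kept:", int(ellipsoid_mask(img, samples).sum()))
```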

  7. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    Science.gov (United States)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data sets into their respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To obtain the optimum results, optimization tools can be applied to those data. Classification and optimization have been used by researchers throughout their work, and the outcomes have been very encouraging. Here, the authors share what they have experienced in three different areas of applied research.

  8. A Two-Level Sound Classification Platform for Environmental Monitoring

    Directory of Open Access Journals (Sweden)

    Stelios A. Mitilineos

    2018-01-01

    Full Text Available STORM is an ongoing European research project that aims at developing an integrated platform for monitoring, protecting, and managing cultural heritage sites through technical and organizational innovation. Part of the scheduled preventive actions for the protection of cultural heritage is the development of wireless acoustic sensor networks (WASNs) that will be used for assessing the impact of human-generated activities as well as for monitoring potentially hazardous environmental phenomena. Collected sound samples will be forwarded to a central server where they will be automatically classified in a hierarchical manner; anthropogenic and environmental activity will be monitored, and stakeholders will be alerted in the case of potential malevolent behavior or natural phenomena like excess rainfall, fire, gale, high tides, and waves. Herein, we present an integrated platform that includes sound sample denoising using wavelets, feature extraction from sound samples, Gaussian mixture modeling of these features, and a powerful two-layer neural network for automatic classification. We contribute to previous work by extending the proposed classification platform to perform low-level classification too, i.e., to classify sounds into further subclasses that include airplane, car, and pistol sounds for the anthropogenic sound class; bird, dog, and snake sounds for the biophysical sound class; and fire, waterfall, and gale for the geophysical sound class. Classification results exhibit outstanding accuracy at both the high and low levels, demonstrating the feasibility of the proposed approach.
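
    A minimal sketch of the two-level scheme: a first neural network assigns a sound to a high-level class (anthropogenic, biophysical, or geophysical), and a per-class second-level network then picks the subclass. The acoustic feature vectors below are random placeholders for the wavelet/GMM features described above, and the network sizes are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

HIGH = {"anthropogenic": ["airplane", "car", "pistol"],
        "biophysical": ["bird", "dog", "snake"],
        "geophysical": ["fire", "waterfall", "gale"]}

rng = np.random.default_rng(5)
X = rng.normal(size=(900, 20))                                  # placeholder acoustic features
sub = rng.choice([s for subs in HIGH.values() for s in subs], size=900)
top = np.array([next(h for h, subs in HIGH.items() if s in subs) for s in sub])

top_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, top)
sub_clfs = {h: MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
               .fit(X[top == h], sub[top == h]) for h in HIGH}

def classify(x):
    h = top_clf.predict(x.reshape(1, -1))[0]                    # level 1: high-level class
    return h, sub_clfs[h].predict(x.reshape(1, -1))[0]          # level 2: subclass

print(classify(X[0]))
```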

  9. A new classification of geological resources

    International Nuclear Information System (INIS)

    Mata Perello, Josep M; Mata Lleonart, Roger; Vintro Sanchez, Carla

    2011-01-01

    The traditional definition of the geological resource term excludes all those elements or processes of the physical environment that show a scientific, didactic, or cultural interest but do not offer, in principle, an economic potential. The so-called cultural geo-resources have traditionally not been included within a classification that puts them in the same hierarchical and semantic ranking as the rest of the resources, and there has been no attempt to define a classification of these resources under a more didactic and modern perspective. Hence, in order to catalogue all those geological elements that show a cultural, patrimonial, scientific, or didactic interest as a resource, this paper proposes a new classification in which geo-resources stand in the same hierarchical and semantic ranking as the rest of the resources traditionally catalogued as such.

  10. Dissimilarity Representations in Lung Parenchyma Classification

    DEFF Research Database (Denmark)

    Sørensen, Lauge Emil Borch Laurs; de Bruijne, Marleen

    2009-01-01

    ...are built in this representation. This is also the general trend in lung parenchyma classification in computed tomography (CT) images, where the features often are measures on feature histograms. Instead, we propose to build normal density based classifiers in dissimilarity representations for lung parenchyma classification. This allows for the classifiers to work on dissimilarities between objects, which might be a more natural way of representing lung parenchyma. In this context, dissimilarity is defined between CT regions of interest (ROIs). ROIs are represented by their CT attenuation histogram, and ROI dissimilarity is defined as a histogram dissimilarity measure between the attenuation histograms. In this setting, the full histograms are utilized according to the chosen histogram dissimilarity measure. We apply this idea to classification of different emphysema patterns as well as normal...

  11. Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.

    Science.gov (United States)

    Sarikaya, Duygu; Corso, Jason J; Guru, Khurshid A

    2017-07-01

    Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the open problem of tool detection and localization in RAS video understanding, using a strictly computer vision approach and recent advances in deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach is the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two-stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results, with an average precision of 91% and a mean computation time of 0.1 s per test frame, indicate that our approach is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using an RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS), with annotations of robotic tools per frame.
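
    A minimal sketch of region-proposal-based detection on a single frame, using torchvision's off-the-shelf Faster R-CNN (an RPN plus a detection head) as a stand-in for the paper's multimodal two-stream architecture; the temporal-motion stream is omitted, the frame is random, and a recent torchvision with the weights= API is assumed.

```python
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = torch.rand(3, 480, 640)              # one RGB video frame (placeholder)
with torch.no_grad():
    out = model([frame])[0]                  # dict with boxes, labels, scores

keep = out["scores"] > 0.8                   # keep confident detections only
print(out["boxes"][keep].shape, out["scores"][keep])
```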

  12. Proposed classification scale for radiological incidents and accidents

    International Nuclear Information System (INIS)

    2003-04-01

    The scale proposed in this report is intended to facilitate communication concerning the severity of incidents and accidents involving the exposure of human beings to ionising radiations. Like the INES, it comprises eight levels of severity and uses the same terms (accident, incident, anomaly, serious and major) for keeping the public and the media informed. In a radiological protection context, the severity of an event is considered as being directly proportional to the risk run by an individual (the probability of developing fatal or non-fatal health effects) following exposure to ionising radiation in an incident or accident situation. However for society, other factors have to be taken into account to determine severity. The severity scale proposed is therefore based on assessment of the individual radiological risk. A severity level corresponding to exposure of a member of the public in an incident or accident situation is determined on the basis of risk assessment concepts and methods derived from international consensus on dose/effect relationships for both stochastic and deterministic effects. The severity of all the possible exposure situations - worker exposure, collective exposure, potential exposure - is determined using a system of weighting in relation to situations involving members of the public. In the case of this scale, to indicate the severity of an event, it is proposed to make use of the most penalizing level of severity, comparing: - the severity associated with the probability of occurrence of deterministic effects and the severity associated with the probability of occurrence of stochastic effects, when the event gives rise to both types of risk; - the severity for members of the public and the severity for exposed workers, when both categories of individuals are involved; - the severity on the proposed radiological protection scale and that obtained using the INES, when radiological protection and nuclear safety aspects are associated with

  13. Structure of diagnostics horizons and humus classification

    Directory of Open Access Journals (Sweden)

    Zanella A

    2008-03-01

    Full Text Available The classification of the main humus forms is generally based on the morpho-genetic characters of the A and OH diagnostic horizons. This is the case in the new European key of classification presented in Freiburg in September 2004 (Eurosoil Congress). Among the morpho-genetic characters, the soil structure plays a very important role. In this work, the structure of the diagnostic A and OH horizons has been analysed in terms of aggregation force, diameter, and composition of the soil lumps (peds). In order to study the aggregation force, two disaggregating tools have been conceived and used. The diameter of the lumps has been measured by sieving the soil samples with standardised meshes. By observing the samples with a binocular magnifier at 10X and 50X, the organic and/or mineral composition of the soil aggregates has been determined, and the data were investigated with ANOVA and Factorial Analysis. The article examines the subject from two points of view: crushing tools for estimating the soil structure (part 1) and the dimensions of the peds given in the European key of humus forms classification (part 2). The categories of soil ped diameter and composition seem to be linked to the main humus forms. For instance, aggregates having a diameter larger than 1 mm and a well-amalgamated organo-mineral composition are more present in the A horizons of the Mull forms than in those of the other forms; contrary to the OH horizon of the Moder or Mor forms, the OH horizon of the Amphi forms shows an important percentage of small organic lumps. Some proposals are given in order to improve the European key of humus forms classification.

  14. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.

  15. Chinese wine classification system based on micrograph using combination of shape and structure features

    Science.gov (United States)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded by their micrographs. Micrographs of Chinese wines show floccules, sticks, and granules of varying shape and size. Different wines have different microstructures and micrographs, so we study the classification of Chinese wines based on their micrographs. The shape and structure of the wine particles in the microstructure are the most important features for recognition and classification of wines. We therefore introduce a feature extraction method which can describe the structure and region shape of a micrograph efficiently. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then, features are extracted using the method proposed in this paper based on area, perimeter, and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using the combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features or the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.

  16. Information-theoretical feature selection using data obtained by Scanning Electron Microscopy coupled with and Energy Dispersive X-ray spectrometer for the classification of glass traces

    International Nuclear Information System (INIS)

    Ramos, Daniel; Zadora, Grzegorz

    2011-01-01

    , obtaining high (almost perfect) discriminating power and good calibration. This allows the proposed models to be used in casework. We also present an in-depth analysis which reveals the benefits of the proposed ECE metric as an assessment tool for classification models based on likelihood ratios.

  17. Thermographic image analysis for classification of ACL rupture disease, bone cancer, and feline hyperthyroid, with Gabor filters

    Science.gov (United States)

    Alvandipour, Mehrdad; Umbaugh, Scott E.; Mishra, Deependra K.; Dahal, Rohini; Lama, Norsang; Marino, Dominic J.; Sackman, Joseph

    2017-05-01

    Thermography and pattern classification techniques are used to classify three different pathologies in veterinary images. Thermographic images of both normal and diseased animals were provided by the Long Island Veterinary Specialists (LIVS). The three pathologies are ACL rupture disease, bone cancer, and feline hyperthyroid. The diagnosis of these diseases usually involves radiology and laboratory tests, while the method that we propose uses thermographic images and image analysis techniques and is intended for use as a prescreening tool. Images in each category of pathologies are first filtered by Gabor filters and then various features are extracted and used for classification into normal and abnormal classes. Gabor filters are linear filters that can be characterized by the two parameters wavelength λ and orientation θ. With two different wavelengths and five different orientations, a total of ten different filters were studied. Different combinations of camera views, filters, feature vectors, normalization methods, and classification methods produce different tests, which were examined, and the sensitivity, specificity, and success rate for each test were computed. Using the Gabor features alone, sensitivity, specificity, and overall success rates of 85% were achieved for each of the pathologies.
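
    A minimal sketch of the Gabor feature extraction described above, assuming OpenCV: a bank of ten filters (two wavelengths by five orientations) is applied to a grayscale thermogram and the mean and standard deviation of each response form the feature vector. The wavelengths, kernel size, and other filter constants are placeholders.

```python
import cv2
import numpy as np

def gabor_features(gray, wavelengths=(4.0, 8.0), n_orient=5, ksize=21):
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lam, 0.5, 0.0)
            resp = cv2.filter2D(gray, cv2.CV_32F, kern)
            feats += [resp.mean(), resp.std()]         # two statistics per filter
    return np.array(feats)                             # 10 filters -> 20 features

img = np.random.rand(128, 128).astype(np.float32)      # placeholder thermographic image
print(gabor_features(img).shape)                       # (20,)
```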

  18. DECISION LEVEL FUSION OF ORTHOPHOTO AND LIDAR DATA USING CONFUSION MATRIX INFORMATION FOR LAND COVER CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Daneshtalab

    2017-09-01

    Full Text Available Automatic extraction of urban objects from airborne remote sensing data is essential to process and efficiently interpret the vast amount of airborne imagery and Lidar data available today. The aim of this study is to propose a new approach for the integration of high-resolution aerial imagery and Lidar data to improve the accuracy of classification in complex urban areas. In the proposed method, the classification of each data source is first performed separately using the Support Vector Machine algorithm. In this case, the extracted Normalized Digital Surface Model (nDSM) and pulse intensity are used in the classification of the LiDAR data, and the three visible spectral bands (Red, Green, Blue) are considered as the feature vector for the orthoimage classification. Moreover, by combining the extracted features of the image and the Lidar data, another classification is also performed using all the features. The outputs of these classifications are integrated in a decision-level fusion system according to their confusion matrices to find the final classification result. The proposed method was evaluated using an urban area of Zeebruges, Belgium. The obtained results showed several advantages of the fusion with respect to using a single dataset. With the capabilities of the proposed decision-level fusion method, most of the object extraction difficulties and uncertainties were reduced, and the overall accuracy and the kappa value were improved by 7% and 10%, respectively.
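
    A minimal sketch of confusion-matrix-weighted decision fusion, with two SVMs standing in for the image-only and LiDAR-only classifiers: each classifier's per-class precision, estimated from its confusion matrix, weights its vote, and the fused label is the class with the largest summed weight. The features, labels, and use of a single held-out split are simplifying placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X_img = rng.normal(size=(600, 3))            # RGB features (placeholder)
X_lid = rng.normal(size=(600, 2))            # nDSM + intensity features (placeholder)
y = rng.integers(0, 4, size=600)             # 4 hypothetical land-cover classes

idx_tr, idx_va = train_test_split(np.arange(600), test_size=0.3, random_state=0)
fused_scores = np.zeros((len(idx_va), 4))
for X in (X_img, X_lid):
    clf = SVC().fit(X[idx_tr], y[idx_tr])
    pred = clf.predict(X[idx_va])
    cm = confusion_matrix(y[idx_va], pred, labels=range(4))
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)   # per-class precision from the confusion matrix
    for i, p in enumerate(pred):
        fused_scores[i, p] += precision[p]                    # weight each vote by its class precision

fused = fused_scores.argmax(axis=1)
print("fused accuracy:", (fused == y[idx_va]).mean())
```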

  19. Generative Adversarial Networks-Based Semi-Supervised Learning for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Zhi He

    2017-10-01

    Full Text Available Classification of hyperspectral images (HSI) is an important research topic in the remote sensing community. Significant efforts (e.g., deep learning) have been concentrated on this task. However, it is still an open issue to classify the high-dimensional HSI with a limited number of training samples. In this paper, we propose a semi-supervised HSI classification method inspired by generative adversarial networks (GANs). Unlike the supervised methods, the proposed HSI classification method is semi-supervised, which can make full use of the limited labeled samples as well as the sufficient unlabeled samples. Core ideas of the proposed method are twofold. First, the three-dimensional bilateral filter (3DBF) is adopted to extract the spectral-spatial features by naturally treating the HSI as a volumetric dataset. The spatial information is integrated into the extracted features by the 3DBF, which is propitious to the subsequent classification step. Second, GANs are trained on the spectral-spatial features for semi-supervised learning. A GAN contains two neural networks (i.e., a generator and a discriminator) trained in opposition to one another. The semi-supervised learning is achieved by adding samples from the generator to the features and increasing the dimension of the classifier output. Experimental results obtained on three benchmark HSI datasets have confirmed the effectiveness of the proposed method, especially with a limited number of labeled samples.

  20. Subsurface Event Detection and Classification Using Wireless Signal Networks

    Directory of Open Access Journals (Sweden)

    Muhannad T. Suleiman

    2012-11-01

    Full Text Available Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how radio propagation is affected by them in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events over the regions where the event occurs, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in the laboratory. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
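
    A minimal sketch of a window-based minimum-distance classifier: class centroids of received-signal-strength (RSS) window features are learned from labelled data, a detection step flags windows that deviate from the "no event" centroid, and flagged windows are assigned to the nearest event centroid. The RSS traces, the feature set, and the detection threshold are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

def window_features(rss):                     # summarize one classification window
    return np.array([rss.mean(), rss.std(), rss.min(), rss.max()])

# Labelled training windows per geo-event (hypothetical RSS traces, 50 readings each)
train = {"none": rng.normal(-60, 1, (20, 50)),
         "water_intrusion": rng.normal(-70, 3, (20, 50)),
         "density_change": rng.normal(-65, 2, (20, 50))}
centroids = {k: np.array([window_features(w) for w in v]).mean(axis=0) for k, v in train.items()}

def detect_and_classify(rss_window, detect_thresh=5.0):
    f = window_features(rss_window)
    if np.linalg.norm(f - centroids["none"]) < detect_thresh:
        return "none"                                       # step 1: no event detected
    events = {k: c for k, c in centroids.items() if k != "none"}
    return min(events, key=lambda k: np.linalg.norm(f - events[k]))   # step 2: nearest centroid

print(detect_and_classify(rng.normal(-70, 3, 50)))
```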

  1. Predictive Manufacturing: Classification of categorical data

    DEFF Research Database (Denmark)

    Khan, Abdul Rauf; Schiøler, Henrik; Kulahci, Murat

    2018-01-01

    and classification capabilities of our methodology (in different experimental settings) is done through a specially designed simulation experiment. Secondly, in order to demonstrate the applicability to a real-life problem, a data set from electronics component manufacturing is analysed through our proposed...

  2. Proposed classification scheme for high-level and other radioactive wastes

    International Nuclear Information System (INIS)

    Kocher, D.C.; Croff, A.G.

    1986-01-01

    The Nuclear Waste Policy Act (NWPA) of 1982 defines high-level (radioactive) waste (HLW) as (A) the highly radioactive material resulting from the reprocessing of spent nuclear fuel...that contains fission products in sufficient concentrations; and (B) other highly radioactive material that the Commission...determines...requires permanent isolation. This paper presents a generally applicable quantitative definition of HLW that addresses the description in paragraph B. The approach also results in definitions of other waste classes, i.e., transuranic (TRU) and low-level waste (LLW). The basic waste classification scheme that results from the quantitative definitions of "highly radioactive" and "requires permanent isolation" is depicted. The concentrations of radionuclides that correspond to these two boundaries, and that may be used to classify radioactive wastes, are given.

  3. Nonlinear Inertia Classification Model and Application

    Directory of Open Access Journals (Sweden)

    Mei Wang

    2014-01-01

    Full Text Available The support vector machine (SVM) classification model overcomes the problem of a large number of samples. However, the kernel parameter and the punishment factor have a great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed in this paper after the nonlinear inertia convergence PSO (NICPSO) is developed. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and the punishment factor of the SVM. Then, the NICCM classifier is trained using the optimal punishment factor and the optimal kernel parameter that come from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of an online power cable. It is experimentally shown that the number of iterations for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
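
    A minimal sketch of PSO-based SVM tuning with a nonlinearly decaying inertia weight, in the spirit of the approach above: particles encode (log10 C, log10 gamma), fitness is cross-validated accuracy, and the inertia factor shrinks along an exponential curve. The decay law, swarm size, and all other constants are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(p):                                   # p = (log10 C, log10 gamma)
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()

n_particles, n_iter, c1, c2 = 10, 15, 1.5, 1.5
pos = rng.uniform(-3, 3, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for t in range(n_iter):
    w = 0.4 + 0.5 * np.exp(-3.0 * t / n_iter)     # nonlinear inertia: decays from ~0.9 towards 0.4
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma):", 10 ** gbest, "CV accuracy:", round(pbest_val.max(), 3))
```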

  4. Halitosis: a new definition and classification.

    Science.gov (United States)

    Aydin, M; Harvey-Woodworth, C N

    2014-07-11

    There is no universally accepted, precise definition, nor standardisation in terminology and classification, of halitosis. The aim is to propose a new definition that is free from subjective descriptions (faecal, fish odour, etc.), one-time sulphide detector readings, and organoleptic estimation of odour levels, and that excludes temporary exogenous odours (for example, from dietary sources). Some terms previously used in the literature are revised. A new aetiologic classification is proposed, dividing pathologic halitosis into Type 1 (oral), Type 2 (airway), Type 3 (gastroesophageal), Type 4 (blood-borne), and Type 5 (subjective). In reality, any halitosis complaint is potentially the sum of these types in any combination, superimposed on the Type 0 (physiologic odour) present in health. This system allows for multiple diagnoses in the same patient, reflecting the multifactorial nature of the complaint. It represents the most accurate model for understanding halitosis and forms an efficient and logical basis for clinical management of the complaint.

  5. Classification of coronary artery bifurcation lesions and treatments: Time for a consensus!

    DEFF Research Database (Denmark)

    Louvard, Yves; Thomas, Martyn; Dzavik, Vladimir

    2007-01-01

    ..., heterogeneity, and inadequate description of techniques implemented. Methods: The aim is to propose a consensus established by the European Bifurcation Club (EBC), on the definition and classification of bifurcation lesions and treatments implemented, with the purpose of allowing comparisons between techniques in various anatomical and clinical settings. Results: A bifurcation lesion is a coronary artery narrowing occurring adjacent to, and/or involving, the origin of a significant side branch. The simple lesion classification proposed by Medina has been adopted. To analyze the outcomes of different techniques by intention to treat, it is necessary to clearly define which vessel is the distal main branch and which is (are) the side branch(es) and give each branch a distinct name. Each segment of the bifurcation has been named following the same pattern as the Medina classification. The classification...

  6. ILAE Classification of the Epilepsies Position Paper of the ILAE Commission for Classification and Terminology

    Science.gov (United States)

    Scheffer, Ingrid E; Berkovic, Samuel; Capovilla, Giuseppe; Connolly, Mary B; French, Jacqueline; Guilhoto, Laura; Hirsch, Edouard; Jain, Satish; Mathern, Gary W.; Moshé, Solomon L; Nordli, Douglas R; Perucca, Emilio; Tomson, Torbjörn; Wiebe, Samuel; Zhang, Yue-Hua; Zuberi, Sameer M

    2017-01-01

    Summary The ILAE Classification of the Epilepsies has been updated to reflect our gain in understanding of the epilepsies and their underlying mechanisms following the major scientific advances which have taken place since the last ratified classification in 1989. As a critical tool for the practising clinician, epilepsy classification must be relevant and dynamic to changes in thinking, yet robust and translatable to all areas of the globe. Its primary purpose is for diagnosis of patients, but it is also critical for epilepsy research, development of antiepileptic therapies and communication around the world. The new classification originates from a draft document submitted for public comments in 2013 which was revised to incorporate extensive feedback from the international epilepsy community over several rounds of consultation. It presents three levels, starting with seizure type where it assumes that the patient is having epileptic seizures as defined by the new 2017 ILAE Seizure Classification. After diagnosis of the seizure type, the next step is diagnosis of epilepsy type, including focal epilepsy, generalized epilepsy, combined generalized and focal epilepsy, and also an unknown epilepsy group. The third level is that of epilepsy syndrome where a specific syndromic diagnosis can be made. The new classification incorporates etiology along each stage, emphasizing the need to consider etiology at each step of diagnosis as it often carries significant treatment implications. Etiology is broken into six subgroups, selected because of their potential therapeutic consequences. New terminology is introduced such as developmental and epileptic encephalopathy. The term benign is replaced by the terms self-limited and pharmacoresponsive, to be used where appropriate. It is hoped that this new framework will assist in improving epilepsy care and research in the 21st century. PMID:28276062

  7. Diagnostic criteria, classification, and nomenclature for painful bladder syndrome/interstitial cystitis: An ESSIC proposal

    DEFF Research Database (Denmark)

    Merwe, J.P.V. de; Nordling, J.; Bouchelouche, P.

    2008-01-01

    Objectives: Because the term "interstitial cystitis" (IC) has different meanings in different centers and different parts of the world, the European Society for the Study of Interstitial Cystitis (ESSIC) has worked to create a consensus on definitions, diagnosis, and classification in an attempt to overcome the lack of international agreement on various aspects of IC. Methods: ESSIC has discussed definitions, diagnostic criteria, and disease classification in four meetings and extended e-mail correspondence. Results: It was agreed to name the disease bladder pain syndrome (BPS). BPS would be diagnosed...... might be performed according to findings at cystoscopy with hydrodistention and morphologic findings in bladder biopsies. The presence of other organ symptoms, as well as cognitive, behavioral, emotional, and sexual symptoms, should be addressed. Conclusions: The name IC has become misleading......

  8. Reliability of a four-column classification for tibial plateau fractures.

    Science.gov (United States)

    Martínez-Rondanelli, Alfredo; Escobar-González, Sara Sofía; Henao-Alzate, Alejandro; Martínez-Cano, Juan Pablo

    2017-09-01

    A four-column classification system offers a different way of evaluating tibial plateau fractures. The aim of this study is to compare the intra-observer and inter-observer reliability between the four-column and classic classifications. This is a reliability study, which included patients presenting with tibial plateau fractures between January 2013 and September 2015 in a level-1 trauma centre. Four orthopaedic surgeons blindly classified each fracture according to four different classifications: AO, Schatzker, Duparc, and four-column. Kappa, intra-observer, and inter-observer concordance were calculated for the reliability analysis. Forty-nine patients were included. The mean age was 39 ± 14.2 years, with no gender predominance (men: 51%; women: 49%), and 67% of the fractures included at least one of the posterior columns. The intra-observer and inter-observer concordance were calculated for each classification: four-column (84%/79%), Schatzker (60%/71%), AO (50%/59%), and Duparc (48%/58%), with a statistically significant difference among them (p = 0.001/p = 0.003). Kappa coefficients for the intra-observer and inter-observer evaluations were: Schatzker 0.48/0.39, four-column 0.61/0.34, Duparc 0.37/0.23, and AO 0.34/0.11. The proposed four-column classification showed the highest intra- and inter-observer agreement. When taking into account the agreement that occurs by chance, the Schatzker classification showed the highest inter-observer kappa, but again the four-column had the highest intra-observer kappa value. The proposed classification is a more inclusive classification for the posteromedial and posterolateral fractures. We suggest, therefore, that it be used in addition to one of the classic classifications in order to better understand the fracture pattern, as it allows more attention to be paid to the posterior columns, improves the surgical planning, and allows the surgical approach to be chosen more accurately.
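
    A small sketch of the agreement statistics reported above, assuming scikit-learn: Cohen's kappa between two raters can be computed directly, while raw concordance is simply the fraction of identical ratings. The ratings below are made-up labels, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score

rater_a = ["I", "II", "IV", "III", "IV", "I", "II", "IV", "III", "I"]   # hypothetical gradings
rater_b = ["I", "II", "IV", "II",  "IV", "I", "III", "IV", "III", "I"]

raw_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)       # chance-corrected agreement
print(f"raw agreement = {raw_agreement:.2f}, kappa = {kappa:.2f}")
```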

  9. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    Science.gov (United States)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to a number of limitations, such as the redundancy of features and the high dimensionality of the data, different classification methods have been proposed for remote sensing image classification, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method is used to classify various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features based on the combination of EMAP and SAE and linking them to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston" dataset and the multispectral "Washington DC" dataset show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers, and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  10. An interobserver reliability comparison between the Orthopaedic Trauma Association's open fracture classification and the Gustilo and Anderson classification.

    Science.gov (United States)

    Ghoshal, A; Enninghorst, N; Sisak, K; Balogh, Z J

    2018-02-01

    To evaluate interobserver reliability of the Orthopaedic Trauma Association's open fracture classification system (OTA-OFC). Patients of any age with a first presentation of an open long bone fracture were included. Standard radiographs, wound photographs, and a short clinical description were given to eight orthopaedic surgeons, who independently evaluated the injury using both the Gustilo and Anderson (GA) and OTA-OFC classifications. The responses were compared for variability using Cohen's kappa. The overall interobserver agreement was ĸ = 0.44 for the GA classification and ĸ = 0.49 for OTA-OFC, which reflects moderate agreement (0.41 to 0.60) for both classifications. The agreement in the five categories of OTA-OFC was: for skin, ĸ = 0.55 (moderate); for muscle, ĸ = 0.44 (moderate); for arterial injury, ĸ = 0.74 (substantial); for contamination, ĸ = 0.35 (fair); and for bone loss, ĸ = 0.41 (moderate). Although the OTA-OFC, with similar interobserver agreement to GA, offers a more detailed description of open fractures, further development may be needed to make it a reliable and robust tool. Cite this article: Bone Joint J 2018;100-B:242-6. ©2018 The British Editorial Society of Bone & Joint Surgery.

  11. A proposed new classification for diabetic retinopathy: The concept of primary and secondary vitreopathy

    Directory of Open Access Journals (Sweden)

    Dubey Arvind

    2008-01-01

    Full Text Available Background: Many eyes with proliferative diabetic retinopathy (PDR) require vitreous surgery despite complete regression of new vessels with pan retinal laser photocoagulation (PRP). Changes in the vitreous caused by diabetes mellitus and diabetic retinopathy may continue to progress independently of the laser-regressed status of the retinopathy. Diabetic vitreopathy can be an independent manifestation of the disease process. Aim: To examine this concept by studying the long-term behavior of the vitreous in cases of PDR regressed with PRP. Materials and Methods: Seventy-four eyes with pure PDR (without clinically evident vitreous traction) showing fundus fluorescein angiography (FFA) proven regression of new vessels following PRP were retrospectively studied, out of a total of 1380 eyes photocoagulated between March 2001 and September 2006 for PDR of varying severity. Follow-up was available from one to four years. Results: Twenty-three percent of eyes showing FFA-proven regression of new vessels with laser required surgery for indications produced by vitreous traction, such as recurrent vitreous hemorrhage, tractional retinal detachment, secondary rhegmatogenous retinal detachment, and tractional macular edema, within one to four years. Conclusion: Vitreous changes continued to progress despite regression of PDR in many diabetics. We identify this as "clinical diabetic vitreopathy" and propose an expanded classification for diabetic retinopathy to signify these changes and to redefine the indications for surgery.

  12. Light-water reactor accident classification

    International Nuclear Information System (INIS)

    Washburn, B.W.

    1980-02-01

    The evolution of existing classifications and definitions of light-water reactor accidents is considered. Licensing practice and licensing trends are examined with respect to terms of art such as Class 8 and Class 9 accidents. Interim definitions, consistent with current licensing practice and the regulations, are proposed for these terms of art

  13. Basic Hand Gestures Classification Based on Surface Electromyography

    Directory of Open Access Journals (Sweden)

    Aleksander Palkowski

    2016-01-01

    Full Text Available This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method.

  14. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.

  15. A texton-based approach for the classification of lung parenchyma in CT images

    DEFF Research Database (Denmark)

    Gangeh, Mehrdad J.; Sørensen, Lauge; Shaker, Saher B.

    2010-01-01

    In this paper, a texton-based classification system based on raw pixel representation along with a support vector machine with radial basis function kernel is proposed for the classification of emphysema in computed tomography images of the lung. The proposed approach is tested on 168 annotated regions of interest consisting of normal tissue, centrilobular emphysema, and paraseptal emphysema. The results show the superiority of the proposed approach to common techniques in the literature, including moments of the histogram of filter responses based on Gaussian derivatives. The performance...

  16. Classification and Segmentation of Satellite Orthoimagery Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Martin Längkvist

    2016-04-01

    Full Text Available The availability of high-resolution remote sensing (HRRS) data has opened up the possibility for new interesting applications, such as per-pixel classification of individual objects in greater detail. This paper shows how a convolutional neural network (CNN) can be applied to multispectral orthoimagery and a digital surface model (DSM) of a small city for a full, fast, and accurate per-pixel classification. The predicted low-level pixel classes are then used to improve the high-level segmentation. Various design choices of the CNN architecture are evaluated and analyzed. The investigated land area is fully manually labeled into five categories (vegetation, ground, roads, buildings, and water), and the classification accuracy is compared to other per-pixel classification works on other land areas that have a similar choice of categories. The results of the full classification and segmentation on selected segments of the map show that CNNs are a viable tool for solving both the segmentation and object recognition task for remote sensing data.

  18. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, a concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound images is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  19. A systematic literature review of the situation of the International Classification of Functioning, Disability, and Health and the International Classification of Functioning, Disability, and Health-Children and Youth version in education: a useful tool or a flight of fancy?

    Science.gov (United States)

    Moretti, Marta; Alves, Ines; Maxwell, Gregor

    2012-02-01

    This article presents the outcome of a systematic literature review exploring the applicability of the International Classification of Functioning, Disability, and Health (ICF) and its Children and Youth version (ICF-CY) at various levels and in processes within the education systems of different countries. A systematic database search using selected search terms was used. The selection of studies was then refined using four protocols: inclusion and exclusion protocols at the abstract, full-text, and extraction levels, along with a quality protocol. Studies exploring the direct relationship between education and the ICF/ICF-CY were sought. As expected, the results show a strong presence of studies from English-speaking countries, namely from Europe and North America. The articles were mainly published in noneducational journals. The most used ICF/ICF-CY components are activity and participation, participation, and environmental factors. From the analysis of the papers included, the results show that the ICF/ICF-CY is currently used as a research tool, a theoretical framework, and a tool for implementing educational processes. The ICF/ICF-CY can provide a useful common language for the education field, where there is currently considerable disparity in theoretical, praxis, and research issues. Although the systematic literature review does not report a high incidence of the use of the ICF/ICF-CY in education, the results show that the ICF/ICF-CY model and classification have the potential to be applied in education systems.

  20. An Improved Brain-Inspired Emotional Learning Algorithm for Fast Classification

    Directory of Open Access Journals (Sweden)

    Ying Mei

    2017-06-01

    Full Text Available Classification is an important task of machine intelligence in the field of information. The artificial neural network (ANN) is widely used for classification. However, the traditional ANN shows slow training speed, and it is hard to meet the real-time requirements of large-scale applications. In this paper, an improved brain-inspired emotional learning (BEL) algorithm is proposed for fast classification. The BEL algorithm was put forward to mimic the high speed of the emotional learning mechanism in the mammalian brain, which has the superior features of fast learning and low computational complexity. To improve the accuracy of BEL in classification, the genetic algorithm (GA) is adopted for optimally tuning the weights and biases of the amygdala and orbitofrontal cortex in the BEL neural network. The combined algorithm, named GA-BEL, has been tested on eight University of California at Irvine (UCI) datasets and two well-known databases (Japanese Female Facial Expression, Cohn–Kanade). The comparisons of experiments indicate that the proposed GA-BEL is more accurate than the original BEL algorithm, and it is much faster than the traditional algorithm.