WorldWideScience

Sample records for changing test classification

  1. MULTI-TEMPORAL CLASSIFICATION AND CHANGE DETECTION USING UAV IMAGES

    Directory of Open Access Journals (Sweden)

    S. Makuti

    2018-05-01

    In this paper different methodologies for the classification and change detection of UAV image blocks are explored. The UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area such as a building construction site. Two change detection techniques were evaluated in this study: the pre-classification and the post-classification algorithms. Both methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features was used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification a Conditional Random Field (CRF) was used: the unary potential was determined with the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings were considered to assess the performance of these methods on such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, post-classification reached up to 62.6 % while pre-classification change detection reached 46.5 %. These results represent a first useful indication for future works and developments.
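    The post-classification comparison described above can be sketched in a few lines: classify each epoch independently, then diff the label maps pixel by pixel. This is a minimal illustration, not the authors' pipeline; the toy label maps and class meanings are invented.

```python
import numpy as np

def post_classification_change(labels_t1, labels_t2):
    """Compare two independently classified label maps and return a
    per-pixel change mask plus a 'from-to' transition matrix."""
    labels_t1 = np.asarray(labels_t1)
    labels_t2 = np.asarray(labels_t2)
    change_mask = labels_t1 != labels_t2
    n_classes = int(max(labels_t1.max(), labels_t2.max())) + 1
    # rows: class at time 1, columns: class at time 2
    transitions = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(transitions, (labels_t1.ravel(), labels_t2.ravel()), 1)
    return change_mask, transitions

# toy 2x3 label maps (0 = ground, 1 = wall, 2 = roof)
t1 = np.array([[0, 0, 1], [1, 2, 2]])
t2 = np.array([[0, 1, 1], [1, 2, 0]])
mask, transitions = post_classification_change(t1, t2)
```

    The transition matrix carries the "from-to" information that makes post-classification comparison attractive: its off-diagonal entries count class changes, its diagonal counts stable pixels.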

  2. 75 FR 10529 - Mail Classification Change

    Science.gov (United States)

    2010-03-08

    ... POSTAL REGULATORY COMMISSION [Docket Nos. MC2010-19; Order No. 415] Mail Classification Change...-filed Postal Service request to make a minor modification to the Mail Classification Schedule. The.... concerning a change in classification which reflects a change in terminology from Bulk Mailing Center (BMC...

  3. 76 FR 47614 - Mail Classification Change

    Science.gov (United States)

    2011-08-05

    ... POSTAL REGULATORY COMMISSION [Docket No. MC2011-27; Order No. 785] Mail Classification Change...-filed Postal Service request for a change in classification to the ``Reply Rides Free'' program. The... Service filed a notice of classification change pursuant to 39 CFR 3020.90 and 3020.91 concerning the...

  4. Computerized Classification Testing with the Rasch Model

    Science.gov (United States)

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…

  5. Stochastic change detection in uncertain nonlinear systems using reduced-order models: classification

    International Nuclear Information System (INIS)

    Yun, Hae-Bum; Masri, Sami F

    2009-01-01

    A reliable structural health monitoring (SHM) methodology is proposed to detect relatively small changes in uncertain nonlinear systems. A total of 4000 physical tests were performed using a complex nonlinear magneto-rheological (MR) damper. With the effective (or 'genuine') changes and uncertainties in the system characteristics of the semi-active MR damper, which were precisely controlled with known means and standard deviations of the input current, the tested MR damper was identified with the restoring force method (RFM), a non-parametric system identification method involving two-dimensional orthogonal polynomials. Using the identified RFM coefficients, both supervised and unsupervised pattern recognition techniques (including support vector classification and k-means clustering) were employed to detect system changes in the MR damper. The classification results showed that the coefficients identified with orthogonal basis functions can be used as reliable indicators for detecting (small) changes, interpreting the physical meaning of the detected changes without a priori knowledge of the monitored system, and quantifying the uncertainty bounds of the detected changes. The classification errors were analyzed using standard detection theory to evaluate the performance of the developed SHM methodology. An optimal classifier design procedure was also proposed and evaluated to minimize type II (or 'missed') errors.
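    The unsupervised branch of this idea — cluster identified-coefficient vectors so that "unchanged" and "changed" system states separate — can be sketched with a minimal k-means. The synthetic 4-dimensional coefficient vectors below are invented stand-ins for RFM coefficients, not data from the paper.

```python
import numpy as np

def kmeans_two(X, iters=25):
    """Minimal Lloyd's algorithm with k=2, initialized from the first and
    last samples; returns a 0/1 cluster label per row of X."""
    centroids = np.stack([X[0], X[-1]])
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.stack([X[labels == k].mean(axis=0) for k in (0, 1)])
    return labels

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 0.1, size=(50, 4))  # nominal system state
shifted = rng.normal(1.0, 0.1, size=(50, 4))   # state after a small change
labels = kmeans_two(np.vstack([baseline, shifted]))
```

    With well-separated coefficient clouds the two clusters recover the two system states exactly; in practice the separation shrinks as the change gets smaller, which is why the paper pairs clustering with detection-theoretic error analysis.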

  6. 42 CFR 412.10 - Changes in the DRG classification system.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Changes in the DRG classification system. 412.10... § 412.10 Changes in the DRG classification system. (a) General rule. CMS issues changes in the DRG... after the same date the payment rates are effective. (b) Basis for changes in the DRG classification...

  7. Mixing geometric and radiometric features for change classification

    Science.gov (United States)

    Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane

    2008-02-01

    Most basic change detection algorithms use a pixel-based approach. While such an approach is well suited to monitoring large-area changes (such as urban growth) in low-resolution images, an object-based approach seems more relevant when the change detection is specifically aimed at targets such as small buildings and vehicles. In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution, ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiometry) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the rough primary classification by integrating the polygon orientations into the state space. Tests are currently conducted on Quickbird data.

  8. 26 CFR 1.410(b)-4 - Nondiscriminatory classification test.

    Science.gov (United States)

    2010-04-01

    ..., nature of compensation (i.e., salaried or hourly), geographic location, and similar bona fide business... (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.410(b)-4 Nondiscriminatory classification test. (a) In general. A plan satisfies the nondiscriminatory classification test of...

  9. Change classification in SAR time series: a functional approach

    Science.gov (United States)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2017-10-01

    Change detection represents a broad field of research in SAR remote sensing, consisting of many different approaches. Besides the simple recognition of change areas, analysis of the type, category or class of the change areas is at least as important for creating a comprehensive result. Conventional strategies for change classification are based on supervised or unsupervised land-use / land-cover classifications. The main drawback of such approaches is that the quality of the classification result depends directly on the selection of training and reference data. Additionally, supervised processing methods require an experienced operator who capably selects the training samples. This training step is not necessary with unsupervised strategies, but meaningful reference data must nevertheless be available for identifying the resulting classes; consequently, an experienced operator is indispensable. In this study, an innovative concept for the classification of changes in SAR time series data is proposed. Regarding the drawbacks of traditional strategies given above, it requires no training data. Moreover, the method can be applied by an operator who does not yet have detailed knowledge of the available scenery; this knowledge is provided by the algorithm. The final step of the procedure, whose main aspect is the iterative optimization of an initial class scheme with respect to the categorized change objects, is the classification of these objects into the finally resulting classes. This assignment step is the subject of this paper.

  10. Classification of the eye changes of Graves' disease

    NARCIS (Netherlands)

    Wiersinga, W. M.; Prummel, M. F.; Mourits, M. P.; Koornneef, L.; Buller, H. R.

    1991-01-01

    Classification of the eye changes of Graves' disease may have clinical use in the description of the present eye state, in the assessment of treatment results, and in the choice of therapy. Requirements for any classification system should include simplicity, clinical nature (i.e., easily carried

  11. An ensemble classification approach for improved Land use/cover change detection

    Science.gov (United States)

    Chellasamy, M.; Ferré, T. P. A.; Humlekrog Greve, M.; Larsen, R.; Chinnasamy, U.

    2014-11-01

    Change Detection (CD) methods based on post-classification comparison are claimed to provide potentially reliable results. They are considered the most obvious quantitative method in the analysis of Land Use / Land Cover (LULC) changes, providing 'from-to' change information. However, the performance of post-classification comparison depends strongly on the accuracy of the classification of the individual images being compared. Hence, we present a classification approach that produces accurate classified results and thereby improves change detection results. Machine learning is part of a broader framework in change detection, in which neural networks have drawn much attention. Neural network algorithms adaptively estimate continuous functions from input data without a mathematical representation of the dependence of the output on the input. A common practice for classification is to use a Multi-Layer Perceptron (MLP) neural network with the backpropagation learning algorithm for prediction. To increase the ability to learn and predict, multiple inputs (spectral, texture, topography, and multi-temporal information) are generally stacked to incorporate diverse information. On the other hand, the literature reports that the backpropagation algorithm exhibits weak and unstable learning when multiple inputs are used on complex datasets characterized by mixed uncertainty levels. To address the problem of learning complex information, we propose an ensemble classification technique that incorporates multiple inputs for classification, unlike the traditional stacking of multiple input data. In this paper, we present an Endorsement Theory based ensemble classification that integrates multiple sources of information, in terms of prediction probabilities, to produce the final classification results. Three different input datasets are used in this study: spectral, texture and indices, from SPOT-4 multispectral imagery captured in 1998 and 2003. Each SPOT image is classified
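    The fusion idea — combine per-source class-probability maps instead of stacking raw inputs — can be illustrated with simple probability averaging, a crude stand-in for the Endorsement Theory combination the paper actually uses. The three probability tables below (two pixels, three classes) are invented.

```python
import numpy as np

def ensemble_combine(prob_maps):
    """Fuse per-source class probabilities by averaging, then take the
    most probable class per pixel."""
    stacked = np.stack(prob_maps)   # (n_sources, n_pixels, n_classes)
    fused = stacked.mean(axis=0)    # average evidence across sources
    return fused.argmax(axis=1)     # final class label per pixel

# hypothetical probability outputs from three classifiers trained on
# spectral, texture, and index inputs respectively
spectral = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
texture  = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
indices  = np.array([[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]])
labels = ensemble_combine([spectral, texture, indices])
```

    Note how the second pixel is decided: the texture source alone would vote for class 1, but the averaged evidence from all three sources favours class 2 — each source contributes without any one dominating.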

  12. Study on Classification Accuracy Inspection of Land Cover Data Aided by Automatic Image Change Detection Technology

    Science.gov (United States)

    Xie, W.-J.; Zhang, L.; Chen, H.-P.; Zhou, J.; Mao, W.-J.

    2018-04-01

    The purpose of carrying out national geographic conditions monitoring is to obtain information on surface changes caused by human social and economic activities, so that the geographic information can be used to offer better services to government, enterprise and the public. Land cover data contains detailed geographic conditions information and has thus been listed as one of the important achievements of the national geographic conditions monitoring project. At present, the main issue in the production of land cover data is how to improve classification accuracy. For land cover data quality inspection and acceptance, classification accuracy is also an important check point. So far, classification accuracy inspection in the project has mainly relied on human-computer interaction or manual inspection, which are time-consuming and laborious. By harnessing automatic high-resolution remote sensing image change detection technology based on the ERDAS IMAGINE platform, this paper carried out a classification accuracy inspection test of land cover data in the project and presents a corresponding technical route comprising data pre-processing, change detection, result output and information extraction. The result of the quality inspection test shows the effectiveness of the technical route: it can meet the inspection needs for the two typical errors, namely missing updates and incorrect updates, effectively reduces the intensity of human-computer interaction inspection for quality inspectors, and provides a technical reference for the data production and quality control of land cover data.

  13. Changing patient classification system for hospital reimbursement in Romania.

    Science.gov (United States)

    Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian

    2010-06-01

    To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). The lack of a Romanian system for the calculation of cost-weights imposed the necessity to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians' knowledge on diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case-mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system led them sometimes to change patients' diagnoses in order to receive more funding. Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case-mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care.

  14. The DSM-5: Classification and criteria changes.

    Science.gov (United States)

    Regier, Darrel A; Kuhl, Emily A; Kupfer, David J

    2013-06-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) marks the first significant revision of the publication since the DSM-IV in 1994. Changes to the DSM were largely informed by advancements in neuroscience, clinical and public health need, and identified problems with the classification system and criteria put forth in the DSM-IV. Much of the decision-making was also driven by a desire to ensure better alignment with the International Classification of Diseases and its upcoming 11th edition (ICD-11). In this paper, we describe select revisions in the DSM-5, with an emphasis on changes projected to have the greatest clinical impact and those that demonstrate efforts to enhance international compatibility, including integration of cultural context with diagnostic criteria and changes that facilitate DSM-ICD harmonization. It is anticipated that this collaborative spirit between the American Psychiatric Association (APA) and the World Health Organization (WHO) will continue as the DSM-5 is updated further, bringing the field of psychiatry even closer to a singular, cohesive nosology. Copyright © 2013 World Psychiatric Association.

  15. Termination Criteria for Computerized Classification Testing

    Directory of Open Access Journals (Sweden)

    Nathan A. Thompson

    2011-02-01

    Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." As in adaptive testing for point estimation of ability, the key component is the termination criterion, namely the algorithm that decides whether to classify the examinee and end the test or to continue and administer another item. This paper applies a newly suggested termination criterion, the generalized likelihood ratio (GLR), to CCT. It also explores the role of the indifference region in the specification of likelihood-ratio based termination criteria, comparing the GLR to the sequential probability ratio test. Results from simulation studies suggest that the GLR is always at least as efficient as existing methods.
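    The SPRT termination rule that the GLR is compared against is easy to state concretely for a Rasch-model pass/fail test: accumulate the log likelihood ratio between an examinee just above and just below the cutoff, and stop once Wald's bounds are crossed. A sketch under illustrative settings (the cutoff, indifference half-width delta, and error rates are assumptions, not values from the paper):

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) model probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def sprt_decision(responses, difficulties, cutoff=0.0, delta=0.5,
                  alpha=0.05, beta=0.05):
    """Wald's SPRT termination rule for a pass/fail CCT.
    Returns 'pass', 'fail', or 'continue'."""
    upper = math.log((1 - beta) / alpha)   # pass when LLR >= upper
    lower = math.log(beta / (1 - alpha))   # fail when LLR <= lower
    llr = 0.0
    for x, b in zip(responses, difficulties):
        p1 = rasch_p(cutoff + delta, b)    # H1: ability just above cutoff
        p0 = rasch_p(cutoff - delta, b)    # H0: ability just below cutoff
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    if llr >= upper:
        return "pass"
    if llr <= lower:
        return "fail"
    return "continue"
```

    The indifference region (width 2·delta around the cutoff) is exactly the lever the paper studies: widening it makes each item more informative about the classification, shortening the test at the cost of weaker guarantees for examinees near the cutoff.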

  16. Changes in classification of genetic variants in BRCA1 and BRCA2.

    Science.gov (United States)

    Kast, Karin; Wimberger, Pauline; Arnold, Norbert

    2018-02-01

    Classification of variants of unknown significance (VUS) in the breast cancer genes BRCA1 and BRCA2 changes with accumulating evidence for clinical relevance. In most cases down-staging towards neutral variants without clinical significance is possible. We searched the database of the German Consortium for Hereditary Breast and Ovarian Cancer (GC-HBOC) for changes in classification of genetic variants as an update to our earlier publication on genetic variants in the Centre of Dresden. Changes between 2015 and 2017 were recorded. In the group of variants of unclassified significance (VUS, Class 3, uncertain), only changes of classification towards neutral genetic variants were noted. In BRCA1, 25% of the Class 3 variants (n = 2/8) changed to Class 2 (likely benign) and Class 1 (benign). In BRCA2, in 50% of the Class 3 variants (n = 16/32), a change to Class 2 (n = 10/16) or Class 1 (n = 6/16) was observed. No change in classification was noted in Class 4 (likely pathogenic) and Class 5 (pathogenic) genetic variants in both genes. No up-staging from Class 1, Class 2 or Class 3 to more clinical significance was observed. All variants with a change in classification in our cohort were down-staged towards no clinical significance by a panel of experts of the German Consortium for Hereditary Breast and Ovarian Cancer (GC-HBOC). Prevention in families with Class 3 variants should be based on pedigree based risks and should not be guided by the presence of a VUS.

  17. A Comparison of Computer-Based Classification Testing Approaches Using Mixed-Format Tests with the Generalized Partial Credit Model

    Science.gov (United States)

    Kim, Jiseon

    2010-01-01

    Classification testing has been widely used to make categorical decisions by determining whether an examinee has a certain degree of ability required by established standards. As computer technologies have developed, classification testing has become more computerized. Several approaches have been proposed and investigated in the context of…

  18. A risk-based classification scheme for genetically modified foods. II: Graded testing.

    Science.gov (United States)

    Chao, Eunice; Krewski, Daniel

    2008-12-01

    This paper presents a graded approach to the testing of crop-derived genetically modified (GM) foods based on concern levels in a proposed risk-based classification scheme (RBCS) and currently available testing methods. A graded approach offers the potential for more efficient use of testing resources by focusing less on lower concern GM foods, and more on higher concern foods. In this proposed approach to graded testing, products that are classified as Level I would have met baseline testing requirements that are comparable to what is widely applied to premarket assessment of GM foods at present. In most cases, Level I products would require no further testing, or very limited confirmatory analyses. For products classified as Level II or higher, additional testing would be required, depending on the type of the substance, prior dietary history, estimated exposure level, prior knowledge of toxicity of the substance, and the nature of the concern related to unintended changes in the modified food. Level III testing applies only to the assessment of toxic and antinutritional effects from intended changes and is tailored to the nature of the substance in question. Since appropriate test methods are not currently available for all effects of concern, future research to strengthen the testing of GM foods is discussed.

  19. 76 FR 16460 - Parcel Select Price and Classification Changes

    Science.gov (United States)

    2011-03-23

    ... a recently-filed Postal Service notice of rate and classification changes affecting Parcel Select. The Postal Service seeks to implement new prices for Parcel Select for forwarding and return to sender... the United States Postal Service of Changes in Rates of General Applicability for a Competitive...

  20. A Challenge to Change: Necessary Changes in the Library Classification System for the Chicago Public Schools.

    Science.gov (United States)

    Williams, Florence M.

    This report addresses the feasibility of changing the classification of library materials in the Chicago Public School libraries from the Dewey Decimal classification system (DDC) to the Library of Congress system (LC), thus patterning the city school libraries after the Chicago Public Library and strengthening the existing close relationship…

  1. Adaptive testing for making unidimensional and multidimensional classification decisions

    NARCIS (Netherlands)

    van Groen, M.M.

    2014-01-01

    Computerized adaptive tests (CATs) were originally developed to obtain an efficient estimate of the examinee’s ability, but they can also be used to classify the examinee into one of two or more levels (e.g. master/non-master). These computerized classification tests have the advantage that they can

  2. Automated classification of Permanent Scatterers time-series based on statistical characterization tests

    Science.gov (United States)

    Berti, Matteo; Corsini, Alessandro; Franceschini, Silvia; Iannacone, Jean Pascal

    2013-04-01

    The application of spaceborne synthetic aperture radar interferometry has progressed, over the last two decades, from the pioneering use of single interferograms for analyzing changes on the earth's surface to the development of advanced multi-interferogram techniques for analyzing any sort of natural phenomenon that involves movement of the ground. The success of multi-interferogram techniques in the analysis of natural hazards such as landslides and subsidence is widely documented in the scientific literature and demonstrated by the consensus among end-users. Despite the great potential of the technique, radar interpretation of slope movements is generally based solely on the analysis of average displacement velocities, while the information contained in multi-interferogram time series is often overlooked if not completely neglected. The underuse of PS time series is probably due to the detrimental effect of residual atmospheric errors, which leave the PS time series with erratic, irregular fluctuations that are often difficult to interpret, and to the difficulty of performing a visual, supervised analysis of the time series for a large dataset. In this work we present a procedure for automatic classification of PS time series based on a series of statistical characterization tests. The procedure classifies the time series into six distinct target trends (0 = uncorrelated; 1 = linear; 2 = quadratic; 3 = bilinear; 4 = discontinuous without constant velocity; 5 = discontinuous with change in velocity) and retrieves for each trend a series of descriptive parameters which can be efficiently used to characterize the temporal changes of ground motion. The classification algorithms were developed and tested using an ENVISAT dataset available in the frame of the EPRS-E project (Extraordinary Plan of Environmental Remote Sensing) of the Italian Ministry of Environment (track "Modena", Northern Apennines).
This dataset was generated using standard processing, then the
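    The six-trend scheme can be illustrated in miniature: fit polynomials of increasing degree and keep the simplest adequate model. This toy version covers only the first three trends (uncorrelated, linear, quadratic) and uses arbitrary R² thresholds, unlike the paper's full battery of statistical tests; the sample series are synthetic.

```python
import numpy as np

def classify_trend(t, y, r2_min=0.8, gain_min=0.1):
    """Toy trend classifier: 0 = uncorrelated, 1 = linear, 2 = quadratic.
    Compares least-squares polynomial fits via the coefficient of
    determination R^2."""
    def r2(deg):
        coef = np.polyfit(t, y, deg)
        resid = y - np.polyval(coef, t)
        return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    if r2(1) >= r2_min:
        # prefer the line unless the parabola clearly improves on it
        return 2 if r2(2) - r2(1) > gain_min else 1
    return 2 if r2(2) >= r2_min else 0

t = np.linspace(0.0, 1.0, 40)  # normalized acquisition times
```

    Extending the same pattern to the bilinear and discontinuous trends means adding piecewise models with an estimated breakpoint, which is where the descriptive parameters (breakpoint date, velocity change) come from.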

  3. Screening tests for hazard classification of complex waste materials – Selection of methods

    International Nuclear Information System (INIS)

    Weltens, R.; Vanermen, G.; Tirez, K.; Robbens, J.; Deprez, K.; Michiels, L.

    2012-01-01

    In this study we describe the development of an alternative methodology for hazard characterization of waste materials. Such an alternative methodology for hazard assessment of complex waste materials is urgently needed, because the lack of a validated instrument leads to arbitrary hazard classification of such materials. False classification can lead to human and environmental health risks and also has important financial consequences for the waste owner. The Hazardous Waste Directive (HWD) describes the methodology for hazard classification of waste materials. For mirror entries, HWD classification is based upon the hazardous properties (H1–15) of the waste, which can be assessed from the hazardous properties of individual identified waste compounds or, if not all compounds are identified, from the results of hazard assessment tests performed on the waste material itself. For the latter, the HWD recommends toxicity tests that were initially designed for risk assessment of chemicals in consumer products (pharmaceuticals, cosmetics, biocides, food, etc.). These tests (often using mammals) are neither designed for nor suitable for the hazard characterization of waste materials. With the present study we want to contribute to the development of an alternative and transparent test strategy for hazard assessment of complex wastes that is in line with the HWD principles for waste classification. It is necessary to address this important shortcoming in hazardous waste classification and to demonstrate that alternative methods are available for hazard assessment of waste materials. Next, by describing the pros and cons of the available methods, and by identifying the needs for additional or further development of test methods, we hope to stimulate research efforts and development in this direction. In this paper we describe promising techniques and present the arguments behind the test selection for the pilot study that we have performed on different

  4. 75 FR 69142 - Postal Rate and Classification Changes

    Science.gov (United States)

    2010-11-10

    .... Overall, Priority Mail International (PMI) prices increase on average by 3.8 percent. Classification... and for insurance with EMI and PMI increase. The unique price tier for Canada when optional insurance is purchased for PMI parcels is eliminated. Details of these changes may be found in the Attachment...

  5. TESTING OF LAND COVER CLASSIFICATION FROM MULTISPECTRAL AIRBORNE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    K. Bakuła

    2016-06-01

    Multispectral Airborne Laser Scanning provides a new opportunity for airborne data collection. It provides high-density topographic surveying and is also a useful tool for land cover mapping. The use of a minimum of three intensity images from a multiwavelength laser scanner, together with the 3D information included in the digital surface model, has potential for land cover/use classification, and a discussion about the application of this type of data in land cover/use mapping has recently begun. In the test study, three laser reflectance intensity images (orthogonalized point clouds) acquired in green, near-infrared and short-wave infrared bands, together with a digital surface model, were used in a land cover/use classification in which six classes were distinguished: water, sand and gravel, concrete and asphalt, low vegetation, trees and buildings. Different approaches to classification were tested: spectral (based only on the laser reflectance intensity images), spectral with elevation data as additional input, and spectro-textural, using morphological granulometry as a method of texture analysis of both types of data: the spectral images and the digital surface model. The method of generating the intensity raster was also tested in the experiment. Reference data were created based on visual interpretation of the ALS data and traditional optical aerial and satellite images. The results have shown that multispectral ALS data are unlike typical multispectral optical images and have major potential for land cover/use classification. An overall accuracy of classification over 90 % was achieved. The fusion of multi-wavelength laser intensity images and elevation data, with the additional use of textural information derived from granulometric analysis of the images, helped to improve the accuracy of classification significantly.
The method of interpolation for the intensity raster was not very helpful, and using intensity rasters with both first and

  6. 40 CFR 164.25 - Filing copies of notification of intent to cancel registration or change classification or...

    Science.gov (United States)

    2010-07-01

    ... intent to cancel registration or change classification or refusal to register, and statement of issues... copies of notification of intent to cancel registration or change classification or refusal to register... appropriate notice of intention to cancel, the notice of intention to change the classification or the...

  7. [Changes introduced into the recent International Classification of Headache Disorders: ICHD-III beta classification].

    Science.gov (United States)

    Belvis, Robert; Mas, Natàlia; Roig, Carles

    2015-01-16

    The International Headache Society (IHS) has published the third edition of the International Classification of Headache Disorders (ICHD-III beta), the most commonly used guide to diagnosing headaches in the world. To review the recent additions to the guide, to explain the new entities that appear in it and to compare the conditions that have had their criteria further clarified against the criteria in the previous edition. We have recorded a large number of clarifications in the criteria in practically all the headaches and neuralgias in the classification, but the conditions that have undergone the most significant clarifications are chronic migraine, primary headache associated with sexual activity, short-lasting unilateral neuralgiform headache attacks, new daily persistent headache, medication-overuse headache, syndrome of transient headache and neurological deficits with cerebrospinal fluid lymphocytosis. The most notable new entities that have been incorporated are external-compression headache, cold-stimulus headache, nummular headache, headache attributed to aeroplane travel and headache attributed to autonomic dysreflexia. Another point to be highlighted is the case of the new headaches (still not considered entities in their own right) included in the appendix, some of the most noteworthy being epicrania fugax, vestibular migraine and infantile colic. The IHS recommends no longer using the previous classification and changing over to the new classification (ICHD-III beta) in healthcare, teaching and research, in addition to making this new guide as widely known as possible.

  8. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    Directory of Open Access Journals (Sweden)

    Michael Kloth

    2014-05-01

Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearance. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than by underlying molecular or genomic aberrations. However, the increase in information on molecular changes, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as by the manifold advances in molecular biology and high-throughput sequencing techniques, has inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaptation. Such adaptations are often required when the pathogenesis of a disease is understood from a specific molecular alteration, and this molecular driver is used for targeted and highly effective therapies. Altogether, reclassifications should lead to a higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease-specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring.

  9. Organizational change in quality management aspects: a quantitative proposal for classification

    Directory of Open Access Journals (Sweden)

    André Tavares de Aquino

Abstract Periodically, organizations need to change the quality management aspects of their processes and products in order to suit the demands of their internal and external (consumer and competitor) market environments. In the context of the present study, quality management changes involve the tools, programs, methods, standards and procedures that can be applied. The purpose of this study is to help senior management identify types of change and, consequently, determine how change should be correctly conducted within an organization. The methodology involves a classification model with multicriteria support, and three organizational change ratings were adopted: the extremes, type I and type II, as confirmed in the literature, and the intermediary, proposed herein. The multicriteria method used was ELECTRE TRI, and the model was applied to two companies of the Textile Local Productive Arrangement in Pernambuco, Brazil. The results are interesting and show the consistency and coherence of the proposed classification model.

  10. Risk factors for changing test classification in the Danish surveillance program for Salmonella in dairy herds

    DEFF Research Database (Denmark)

    Nielsen, Lennarth Ravn; Warnick, L. D.; Greiner, M.

    2007-01-01

The objective of this study was to evaluate risk factors for changing from test negative to positive, which was indicative of herds becoming infected from one quarter of the year to the next, and risk factors for changing from test positive to negative, which was indicative of herds recovering from infection... ...test positive to negative, whereas the breed and neighbor factors were not found to be important for small herds. Organic production was associated with remaining test positive, but not with becoming test positive. The results emphasize the importance of external and internal biosecurity measures...

  11. Comparison of accuracy of fibrosis degree classifications by liver biopsy and non-invasive tests in chronic hepatitis C.

    Science.gov (United States)

    Boursier, Jérôme; Bertrais, Sandrine; Oberti, Frédéric; Gallois, Yves; Fouchard-Hubert, Isabelle; Rousselet, Marie-Christine; Zarski, Jean-Pierre; Calès, Paul

    2011-11-30

Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill-levels and non-invasive tests. Results were expressed as percentages of correctly classified patients. In population #1, including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% with local pathologists vs. 82.2% (p < 10^-3) with a single expert pathologist. In population #2, including 1,056 patients and comparing blood tests, the discrepancy scores, taking into account the error magnitude, of detailed fibrosis classification were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37, p < 10^-3). In population #3 (#4), including 458 (359) patients and comparing blood tests and Fibroscan, accuracies of detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%), p < 10^-3. The detailed fibrosis classification of the best-performing blood test outperforms liver biopsy read by a local pathologist, i.e., in clinical practice; however, the classification precision is apparently lesser. This detailed classification accuracy is much lower than that of significant fibrosis with Fibroscan and even Fibrotest but higher with FibroMeter3G. FibroMeter classification accuracy was significantly higher than those of other non-invasive tests. Finally, for hepatitis C evaluation in clinical practice, fibrosis degree can be evaluated using an accurate blood test.

  12. A quantitative index for classification of plantar thermal changes in the diabetic foot

    Science.gov (United States)

    Hernandez-Contreras, D.; Peregrina-Barreto, H.; Rangel-Magdaleno, J.; Gonzalez-Bernal, J. A.; Altamirano-Robles, L.

    2017-03-01

One of the main complications caused by diabetes mellitus is the development of diabetic foot, which in turn can lead to ulceration. Because ulceration risk is linked to an increase in plantar temperature, recent approaches analyze thermal changes, trying to identify spatial patterns of temperature that could be characteristic of a diabetic group. However, this is a difficult task, since thermal patterns show wide variations that make classification complex. Moreover, measuring contralateral plantar temperatures is important for determining whether there is an abnormal difference, but this only provides information when thermal changes are asymmetric and in the absence of ulceration or amputation. Therefore, this work proposes a quantitative index for measuring thermal change in the plantar region of participants diagnosed with diabetes mellitus, with respect to a reliable reference (control) or with respect to the contralateral foot (as usual). A classification of the thermal changes based on this quantitative index is also proposed. Such a classification demonstrates the wide diversity of spatial distributions in the diabetic foot, but also that it is possible to identify common characteristics. An automatic process, based on the analysis of plantar angiosomes and image processing, is presented to quantify these thermal changes and provide valuable information to the medical expert.
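The abstract does not reproduce the index itself; as a rough, hedged sketch (the function name and the whole-sole averaging are assumptions, since the paper computes its index per angiosome), a thermal-change measure against a reference thermogram could look like:

```python
import numpy as np

def thermal_change_index(plantar_temps, reference_temps):
    """Illustrative index of plantar thermal change: mean absolute
    temperature difference (deg C) against a reference thermogram
    (control group or contralateral foot)."""
    diff = np.abs(np.asarray(plantar_temps) - np.asarray(reference_temps))
    return float(np.mean(diff))
```

Values near 0 deg C indicate thermal agreement with the reference; larger values flag plantar regions worth inspecting.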

  13. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, a Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software provides the ease of commercial land use classification tools without the expensive license.
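As an illustration of the K-means step described above, here is a minimal NumPy sketch with a synthetic two-class "scene" standing in for a Landsat raster (the real workflow would read bands via GDAL; the deterministic initialization and names here are assumptions, not the authors' code):

```python
import numpy as np

def kmeans_classify(pixels, k=2, iters=20):
    """Minimal K-means over pixel spectra: rows are pixels, columns are
    spectral bands; returns one cluster label per pixel."""
    # deterministic init: spread the k seed centers across the pixel array
    centers = pixels[:: max(1, len(pixels) // k)][:k].astype(float).copy()
    for _ in range(iters):
        # distance of every pixel to every center, then nearest-center labels
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # recompute centers, skipping emptied clusters
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# synthetic 3-band scene: 100 dark pixels followed by 100 bright pixels
rng = np.random.default_rng(1)
scene = np.vstack([rng.normal(0.1, 0.02, (100, 3)),
                   rng.normal(0.8, 0.02, (100, 3))])
labels = kmeans_classify(scene, k=2)
```

Each spectrally homogeneous region ends up in a single cluster, which is the behaviour an unsupervised land-cover classification relies on.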

  14. Applying Topographic Classification, Based on the Hydrological Process, to Design Habitat Linkages for Climate Change

    Directory of Open Access Journals (Sweden)

    Yongwon Mo

    2017-11-01

The use of biodiversity surrogates has been discussed in the context of designing habitat linkages to support the migration of species affected by climate change. Topography has been proposed as a useful surrogate in the coarse-filter approach, as the hydrological processes caused by topography, such as erosion and accumulation, are the basis of ecological processes. However, studies that have designed topographic linkages as habitat linkages have so far focused mainly on the shape of the topography (morphometric topographic classification), with little emphasis on the hydrological processes (generic topographic classification), to find such topographic linkages. We aimed to understand whether the generic classification was valid for designing these linkages. First, we evaluated which topographic classification is more appropriate for describing actual (coniferous and deciduous) and potential (mammals and amphibians) habitat distributions. Second, we analyzed the difference in the linkages between the morphometric and generic topographic classifications. The results showed that the generic classification represented the actual distribution of the trees, but neither the morphometric nor the generic classification could represent the potential animal distributions adequately. Our study demonstrated that the topographic classes according to the generic classification are arranged successively along the flow of water, nutrients, and sediment; therefore, it would be advantageous to secure linkages with a width of 1 km or more. In addition, the edge effect would be smaller than with the morphometric classification. Accordingly, we suggest that topographic characteristics based on the hydrological process are required to design topographic linkages for climate change.

  15. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    Science.gov (United States)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and on specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in the complex 3D cases.
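The octree-based region growing can be approximated, for illustration only, by growing connected components over a uniform voxel grid with 26-connectivity (a simplification of the paper's method; the function name and voxel size are assumptions):

```python
import numpy as np
from collections import deque

def segment_points(points, voxel=0.5):
    """Group 3D points into segments by region-growing over occupied
    voxels: points whose voxels touch (26-connectivity) share a segment."""
    keys = np.floor(points / voxel).astype(int)
    occ = {}  # voxel key -> indices of points inside it
    for i, k in enumerate(map(tuple, keys)):
        occ.setdefault(k, []).append(i)
    labels = np.full(len(points), -1)
    seg = 0
    for start in occ:
        if labels[occ[start][0]] != -1:
            continue  # voxel already absorbed by an earlier segment
        queue, seen = deque([start]), {start}
        while queue:  # breadth-first growth through neighbouring voxels
            k = queue.popleft()
            labels[occ[k]] = seg
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        n = (k[0] + dx, k[1] + dy, k[2] + dz)
                        if n in occ and n not in seen:
                            seen.add(n)
                            queue.append(n)
        seg += 1
    return labels

# two point clusters 10 m apart -> two segments
rng = np.random.default_rng(0)
a = rng.random((50, 3))
b = rng.random((50, 3)) + 10.0
labels = segment_points(np.vstack([a, b]))
```

A per-segment descriptor (dimensions, point density, height histogram, etc.) would then feed a Random Forest classifier in a pipeline like the paper's.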

  16. Test-Enhanced Learning of Natural Concepts: Effects on Recognition Memory, Classification, and Metacognition

    Science.gov (United States)

    Jacoby, Larry L.; Wahlheim, Christopher N.; Coane, Jennifer H.

    2010-01-01

    Three experiments examined testing effects on learning of natural concepts and metacognitive assessments of such learning. Results revealed that testing enhanced recognition memory and classification accuracy for studied and novel exemplars of bird families on immediate and delayed tests. These effects depended on the balance of study and test…

  17. Generalized classification of welds according to defect type based on radiation testing results

    International Nuclear Information System (INIS)

    Adamenko, A.A.; Demidko, V.G.

    1980-01-01

A generalized classification of welds according to defect type is constructed, with respect to the real danger of a defect, which to a first approximation is proportional to the relative decrease in thickness, and with respect to the potential danger of a defect, which can be determined by its pointing. According to this classification, welded joints are divided into five classes in accordance with COMECON guides. The division into classes is carried out according to a two-fold numerical criterion, which is applicable when experimental data on the three linear dimensions of a defect are available. The above classification is of major importance for the automatic processing of radiation testing data

  18. Can the Ni classification of vessels predict neoplasia?

    DEFF Research Database (Denmark)

    Mehlum, Camilla Slot; Rosenberg, Tine; Dyrvig, Anne-Kirstine

    2018-01-01

OBJECTIVES: The Ni classification of vascular change from 2011 is well documented for evaluating pharyngeal and laryngeal lesions, primarily focusing on cancer. In the planning of surgery it may be more relevant to differentiate neoplasia from non-neoplasia. We aimed to evaluate the ability of the Ni classification to predict laryngeal or hypopharyngeal neoplasia and to investigate whether a changed cutoff value would support the recent European Laryngological Society (ELS) proposal of perpendicular vascular changes as indicative of neoplasia. DATA SOURCES: PubMed, Embase, Cochrane, and Scopus. The pooled sensitivity and specificity of the Ni classification with two different cutoffs were calculated, and bubble and summary receiver operating characteristics plots were created. RESULTS: The combined sensitivity of five studies (n = 687) with Ni type IV-V defined as test-positive was 0.89 (95...
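As a simplified illustration of how a sensitivity is pooled across studies (the review itself would use a formal meta-analytic model with confidence intervals, not this naive cell-summing):

```python
def pooled_sensitivity(studies):
    """Naive pooled sensitivity across studies: each entry is a
    (true_positives, false_negatives) pair; pooling sums the 2x2 cells
    before taking TP / (TP + FN)."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    return tp / (tp + fn)

# two hypothetical studies: 80/100 and 90/100 diseased cases detected
sens = pooled_sensitivity([(80, 20), (90, 10)])
```

With the two hypothetical studies above, the pooled estimate is 170/200 = 0.85.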

  19. FAST AND ROBUST SEGMENTATION AND CLASSIFICATION FOR CHANGE DETECTION IN URBAN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    X. Roynard

    2016-06-01

Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region-growing using an octree for the segmentation, and on specific descriptors with Random Forest for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and that it gives more robust results in the complex 3D cases.

  20. SAR-based change detection using hypothesis testing and Markov random field modelling

    Science.gov (United States)

    Cao, W.; Martinis, S.

    2015-04-01

The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result from the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF is the one with the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms a MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study, this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
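The incomplete-beta formulation can be sketched with SciPy: the ratio of two L-look mean intensities follows an F distribution under the no-change hypothesis, and the F CDF is a regularized incomplete beta function. The neighbourhood handling, look number, and two-sided thresholding below are assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.special import betainc

def change_pvalue(pre, post, looks=4):
    """CFAR-style test on the mean-intensity ratio of a pixel
    neighbourhood in two co-registered SAR images. Under no change,
    the ratio of two L-look intensity means follows an F distribution,
    whose CDF is the regularized incomplete beta function."""
    n, m = len(pre), len(post)
    r = float(np.mean(post) / np.mean(pre))
    d1, d2 = 2 * looks * m, 2 * looks * n  # F degrees of freedom
    cdf = betainc(d1 / 2.0, d2 / 2.0, d1 * r / (d1 * r + d2))
    return 2.0 * min(cdf, 1.0 - cdf)  # two-sided p-value

same = change_pvalue(np.ones(25), np.ones(25))         # ratio 1: no change
changed = change_pvalue(np.ones(25), 100 * np.ones(25))  # ratio 100: change
```

Thresholding such a p-value per pixel gives the kind of noisy coarse classification that the graph-cut MRF step then regularizes.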

  1. Detecting Arctic Climate Change Using Koeppen Climate Classification

    Energy Technology Data Exchange (ETDEWEB)

    Wang, M. [Joint Institute for the Study of Atmosphere and Oceans, University of Washington, Seattle, Washington (United States); Overland, J.E. [NOAA/Pacific Marine Environmental Laboratory, Sand Point Way NE, Seattle, Washington (United States)

    2004-11-01

Ecological impacts of the recent warming trend in the Arctic are already noted as changes in tree line and a decrease in tundra area, with the replacement of ground cover by shrubs in northern Alaska and several locations in northern Eurasia. The potential impact of vegetation changes on feedbacks to the atmospheric climate system is substantial because of the large land area affected and the multi-year persistence of the vegetation cover. Satellite NDVI estimates beginning in 1981 and the Koeppen climate classification, which relates surface types to monthly mean air temperatures from 1901 onward, track these changes on an Arctic-wide basis. Temperature fields from the NCEP/NCAR reanalysis and the CRU analysis serve as proxies for vegetation cover over the century. A downward trend in the coverage of the tundra group for the first 40 yr of the twentieth century was followed by two increases during the 1940s and early 1960s, and then a rapid decrease in the last 20 yr. The decrease of the tundra group in the 1920-40 period was localized, mostly over Scandinavia, whereas the decrease since 1990 is primarily pan-Arctic, but largest in NW Canada and eastern and coastal Siberia. The decrease in inferred tundra coverage from 1980 to 2000 was 1.4 x 10^6 km^2, or about a 20% reduction in tundra area based on the CRU analyses. This rate of decrease is confirmed by the NDVI data. These tundra-group changes in the last 20 yr are accompanied by an increase in the area of both the boreal and temperate groups. During the tundra-group decrease in the first half of the century, the boreal group area also decreased while the temperate group area increased. The calculated minimum coverage of the tundra group from both the Koeppen classification and NDVI indicates that the impact of warming on the spatial coverage of the tundra group in the 1990s is the strongest in the century and will have multi-decadal consequences for the Arctic.
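The Koeppen rule used to track the tundra group from monthly mean temperatures reduces to a warmest-month threshold test. A deliberately coarse sketch (function name illustrative) using the standard 0 deg C and 10 deg C warmest-month limits of the Koeppen polar group:

```python
def koeppen_polar_group(monthly_temps_c):
    """Coarse Koeppen polar-group check from 12 monthly mean
    temperatures (deg C): 'EF' ice cap if the warmest month stays below
    0, 'ET' tundra if it stays below 10, otherwise not a polar climate."""
    warmest = max(monthly_temps_c)
    if warmest < 0:
        return "EF"
    if warmest < 10:
        return "ET"
    return "not polar"
```

Applying such a rule to gridded monthly temperature fields year by year is what allows tundra coverage to be inferred back to 1901, long before satellite NDVI.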

  2. Hydrological Climate Classification: Can We Improve on Köppen-Geiger?

    Science.gov (United States)

    Knoben, W.; Woods, R. A.; Freer, J. E.

    2017-12-01

    Classification is essential in the study of complex natural systems, yet hydrology so far has no formal way to structure the climate forcing which underlies hydrologic response. Various climate classification systems can be borrowed from other disciplines but these are based on different organizing principles than a hydrological classification might use. From gridded global data we calculate a gridded aridity index, an aridity seasonality index and a rain-vs-snow index, which we use to cluster global locations into climate groups. We then define the membership degree of nearly 1100 catchments to each of our climate groups based on each catchment's climate and investigate the extent to which streamflow responses within each climate group are similar. We compare this climate classification approach with the often-used Köppen-Geiger classification, using statistical tests based on streamflow signature values. We find that three climate indices are sufficient to distinguish 18 different climate types world-wide. Climates tend to change gradually in space and catchments can thus belong to multiple climate groups, albeit with different degrees of membership. Streamflow responses within a climate group tend to be similar, regardless of the catchments' geographical proximity. A Wilcoxon two-sample test based on streamflow signature values for each climate group shows that the new classification can distinguish different flow regimes using this classification scheme. The Köppen-Geiger approach uses 29 climate classes but is less able to differentiate streamflow regimes. Climate forcing exerts a strong control on typical hydrologic response and both change gradually in space. This makes arbitrary hard boundaries in any classification scheme difficult to defend. Any hydrological classification should thus acknowledge these gradual changes in forcing. Catchment characteristics (soil or vegetation type, land use, etc) can vary more quickly in space than climate does, which
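The Wilcoxon two-sample comparison of streamflow signatures between climate groups can be sketched with SciPy; the signature values below are invented purely for illustration:

```python
import numpy as np
from scipy.stats import ranksums

# hypothetical streamflow-signature values (e.g. a runoff ratio) for
# catchments belonging to two different climate groups
group_a = np.array([0.12, 0.15, 0.11, 0.14, 0.13, 0.16])
group_b = np.array([0.55, 0.61, 0.58, 0.52, 0.64, 0.57])

# Wilcoxon rank-sum test: do the two groups share a flow regime?
stat, p = ranksums(group_a, group_b)
```

A small p-value indicates that catchments in the two climate groups have distinguishable streamflow regimes, which is the criterion used to compare the new scheme against Koeppen-Geiger.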

  3. Classification of solid industrial waste based on ecotoxicology tests using Daphnia magna: an alternative

    Directory of Open Access Journals (Sweden)

    William Gerson Matias

    2005-11-01

The adequate treatment and final disposal of solid industrial wastes depend on their classification into class I or II. This classification is prescribed by NBR 10.004; however, it is complex and time-consuming. With a view to facilitating this classification, the use of assays with Daphnia magna is proposed. These assays make possible the identification of toxic chemicals in the leachate, which denotes the presence of one of the characteristics described by NBR 10.004, toxicity, which is a sufficient argument to put the waste into class I. Ecotoxicological tests were carried out with ten samples of frequently produced solid wastes and, on the basis of the EC(I)50/48h results of those samples in comparison with the official classification of NBR 10.004, limits were established for the classification of wastes into class I or II. A coincidence in the classification of 50% of the analyzed samples was observed. In cases where the methods do not agree, the method proposed in this work classifies the waste into class I. These data are preliminary, but they reveal that the classification system proposed here is promising because of its quickness and economic viability.
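The decision rule amounts to a toxicity threshold on the leachate assay; a minimal sketch (the cutoff below is illustrative only, not the limit derived in the paper):

```python
def classify_waste(ec50_percent, threshold=75.0):
    """Toxicity-based screening: leachates whose Daphnia magna EC50/48h
    falls below the cutoff are flagged class I (hazardous), otherwise
    class II. The cutoff here is an illustrative placeholder; the paper
    derives its limits empirically against NBR 10.004."""
    return "class I" if ec50_percent < threshold else "class II"
```

A lower EC50 means the leachate is toxic at lower concentrations, hence the conservative class I assignment below the cutoff.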

  4. Comparison of accuracy of fibrosis degree classifications by liver biopsy and non-invasive tests in chronic hepatitis C

    Directory of Open Access Journals (Sweden)

    Boursier Jérôme

    2011-11-01

Abstract Background Non-invasive tests have been constructed and evaluated mainly for binary diagnoses such as significant fibrosis. Recently, detailed fibrosis classifications for several non-invasive tests have been developed, but their accuracy has not been thoroughly evaluated in comparison to liver biopsy, especially in clinical practice and for Fibroscan. Therefore, the main aim of the present study was to evaluate the accuracy of detailed fibrosis classifications available for non-invasive tests and liver biopsy. The secondary aim was to validate these accuracies in independent populations. Methods Four HCV populations provided 2,068 patients with liver biopsy, four different pathologist skill-levels and non-invasive tests. Results were expressed as percentages of correctly classified patients. Results In population #1, including 205 patients and comparing liver biopsy (reference: consensus reading by two experts) and blood tests, Metavir fibrosis (FM) stage accuracy was 64.4% with local pathologists vs. 82.2% (p < 10^-3) with a single expert pathologist. Significant discrepancy rates (≥ 2 FM vs. the reference histological result) were: Fibrotest: 17.2%, FibroMeter2G: 5.6%, local pathologists: 4.9%, FibroMeter3G: 0.5%, expert pathologist: 0% (p < 10^-3). In population #2, including 1,056 patients and comparing blood tests, the discrepancy scores, taking into account the error magnitude, of detailed fibrosis classification were significantly different between FibroMeter2G (0.30 ± 0.55) and FibroMeter3G (0.14 ± 0.37, p < 10^-3) or Fibrotest (0.84 ± 0.80, p < 10^-3). In population #3 (#4), including 458 (359) patients and comparing blood tests and Fibroscan, accuracies of detailed fibrosis classification were, respectively: Fibrotest: 42.5% (33.5%), Fibroscan: 64.9% (50.7%), FibroMeter2G: 68.7% (68.2%), FibroMeter3G: 77.1% (83.4%), p < 10^-3 (p < 10^-3). Significant discrepancy (≥ 2 FM) rates were, respectively: Fibrotest: 21.3% (22.2%), Fibroscan: 12.9% (12.3%), FibroMeter2G: 5.7% (6...

  5. A proposal for a pharmacokinetic interaction significance classification system (PISCS) based on predicted drug exposure changes and its potential application to alert classifications in product labelling.

    Science.gov (United States)

    Hisaka, Akihiro; Kusama, Makiko; Ohno, Yoshiyuki; Sugiyama, Yuichi; Suzuki, Hiroshi

    2009-01-01

    Pharmacokinetic drug-drug interactions (DDIs) are one of the major causes of adverse events in pharmacotherapy, and systematic prediction of the clinical relevance of DDIs is an issue of significant clinical importance. In a previous study, total exposure changes of many substrate drugs of cytochrome P450 (CYP) 3A4 caused by coadministration of inhibitor drugs were successfully predicted by using in vivo information. In order to exploit these predictions in daily pharmacotherapy, the clinical significance of the pharmacokinetic changes needs to be carefully evaluated. The aim of the present study was to construct a pharmacokinetic interaction significance classification system (PISCS) in which the clinical significance of DDIs was considered with pharmacokinetic changes in a systematic manner. Furthermore, the classifications proposed by PISCS were compared in a detailed manner with current alert classifications in the product labelling or the summary of product characteristics used in Japan, the US and the UK. A matrix table was composed by stratifying two basic parameters of the prediction: the contribution ratio of CYP3A4 to the oral clearance of substrates (CR), and the inhibition ratio of inhibitors (IR). The total exposure increase was estimated for each cell in the table by associating CR and IR values, and the cells were categorized into nine zones according to the magnitude of the exposure increase. Then, correspondences between the DDI significance and the zones were determined for each drug group considering the observed exposure changes and the current classification in the product labelling. Substrate drugs of CYP3A4 selected from three therapeutic groups, i.e. HMG-CoA reductase inhibitors (statins), calcium-channel antagonists/blockers (CCBs) and benzodiazepines (BZPs), were analysed as representative examples. The product labelling descriptions of drugs in Japan, US and UK were obtained from the websites of each regulatory body. Among 220
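The exposure prediction underlying the CR/IR matrix can be written compactly; assuming the 1/(1 - CR x IR) relationship reported in the authors' earlier CYP3A4 prediction work, a sketch is:

```python
def predicted_auc_ratio(cr, ir):
    """Predicted fold-increase in substrate exposure when an inhibitor is
    co-administered: 1 / (1 - CR * IR), where CR is the CYP3A4
    contribution ratio to the substrate's oral clearance and IR the
    in vivo inhibition ratio of the inhibitor (both between 0 and 1).
    This is the relationship the PISCS matrix stratifies into zones."""
    return 1.0 / (1.0 - cr * ir)

# e.g. a highly CYP3A4-dependent substrate (CR = 0.9) with a strong
# inhibitor (IR = 0.9) is predicted to show a roughly 5-fold increase
ratio = predicted_auc_ratio(0.9, 0.9)
```

Stratifying CR and IR into bins and evaluating this ratio for each cell reproduces the kind of matrix table the abstract describes, with zones assigned by the magnitude of the predicted increase.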

  6. The Functional Classification and Field Test Performance in Wheelchair Basketball Players.

    Science.gov (United States)

    Gil, Susana María; Yanci, Javier; Otero, Montserrat; Olasagasti, Jurgi; Badiola, Aduna; Bidaurrazaga-Letona, Iraia; Iturricastillo, Aitor; Granados, Cristina

    2015-06-27

Wheelchair basketball players are classified in four classes based on the International Wheelchair Basketball Federation (IWBF) system of competition. Thus, the aim of the study was to ascertain whether the IWBF classification, the type of injury and wheelchair experience were related to different performance field-based tests. Thirteen basketball players undertook anthropometric measurements and performance tests (hand dynamometry, 5 m and 20 m sprints, 5 m and 20 m sprints with a ball, a T-test, a Pick-up test, a modified 10 m Yo-Yo intermittent recovery test, a maximal pass and a medicine ball throw). The IWBF class was correlated (p ... staff and coaches of the teams when assessing performance of wheelchair basketball players.

  7. Neuropsychological Test Selection for Cognitive Impairment Classification: A Machine Learning Approach

    Science.gov (United States)

    Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.

    2016-01-01

Introduction Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the fewest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods Two variable selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis, clinical dementia rating; CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results No significant difference was observed between naive Bayes, decision tree, and logistic regression models for classification of both clinical diagnosis and CDR datasets. Participant classification (70.0-99.1%), geometric mean (60.9-98.1%), sensitivity (44.2-100%), and specificity (52.7-100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection, only 2-9 variables were required for classification and varied between datasets in a clinically meaningful way. Conclusions The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171

  8. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    International Nuclear Information System (INIS)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A.; Caloba, L.P.; Mery, D.

    2004-01-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, trained with as many radiographic patterns as possible, and statistical inference techniques of random selection of samples with and without repositioning (bootstrap) were used in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)
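    The bootstrap step described above, resampling test outcomes with replacement to estimate classification accuracy and its uncertainty, can be sketched as follows. The 80/20 outcome vector is a made-up stand-in for real classifier results, chosen only to match the ~80% figure reported:

```python
import random

random.seed(1)

def bootstrap_accuracy(correct_flags, n_boot=2000):
    """Point estimate and 95% percentile interval for accuracy,
    obtained by resampling the per-sample outcomes with replacement."""
    n = len(correct_flags)
    accs = []
    for _ in range(n_boot):
        sample = [random.choice(correct_flags) for _ in range(n)]
        accs.append(sum(sample) / n)
    accs.sort()
    point = sum(correct_flags) / n
    lo = accs[int(0.025 * n_boot)]
    hi = accs[int(0.975 * n_boot)]
    return point, (lo, hi)

# hypothetical outcomes: 1 = defect classified correctly, 0 = miss
outcomes = [1] * 80 + [0] * 20   # 80% observed accuracy, as in the study
point, (lo, hi) = bootstrap_accuracy(outcomes)
print(point, lo, hi)
```

The interval width shrinks as the number of radiographic patterns grows, which is why the authors emphasize using as many patterns as possible.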

  9. Estimated accuracy of classification of defects detected in welded joints by radiographic tests

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, M.H.S.; De Silva, R.R.; De Souza, M.P.V.; Rebello, J.M.A. [Federal Univ. of Rio de Janeiro, Dept., of Metallurgical and Materials Engineering, Rio de Janeiro (Brazil); Caloba, L.P. [Federal Univ. of Rio de Janeiro, Dept., of Electrical Engineering, Rio de Janeiro (Brazil); Mery, D. [Pontificia Unversidad Catolica de Chile, Escuela de Ingenieria - DCC, Dept. de Ciencia de la Computacion, Casilla, Santiago (Chile)

    2004-07-01

    This work is a study to estimate the accuracy of classification of the main classes of weld defects detected by radiographic testing, such as: undercut, lack of penetration, porosity, slag inclusion, crack and lack of fusion. To carry out this work, non-linear pattern classifiers were developed using neural networks, trained with as many radiographic patterns as possible, and statistical inference techniques of random selection of samples with and without repositioning (bootstrap) were used in order to estimate the accuracy of the classification. The results pointed to an estimated accuracy of around 80% for the classes of defects analyzed. (author)

  10. Nuclear Power Plant Thermocouple Sensor-Fault Detection and Classification Using Deep Learning and Generalized Likelihood Ratio Test

    Science.gov (United States)

    Mandal, Shyamapada; Santhi, B.; Sridhar, S.; Vinolia, K.; Swaminathan, P.

    2017-06-01

    In this paper, an online fault detection and classification method is proposed for thermocouples used in nuclear power plants. In the proposed method, the fault data are detected by the classification method, which separates the fault data from the normal data. Deep belief network (DBN), a technique for deep learning, is applied to classify the fault data. The DBN has a multilayer feature extraction scheme, which is highly sensitive to small variations in the data. Since the classification method alone cannot identify which sensor is faulty, a technique is proposed to identify the faulty sensor from the fault data. Finally, a composite statistical hypothesis test, namely the generalized likelihood ratio test, is applied to compute the fault pattern of the faulty sensor signal based on the magnitude of the fault. The performance of the proposed method is validated by field data obtained from thermocouple sensors of the fast breeder test reactor.
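    A minimal sketch of the generalized likelihood ratio test for this setting: for a mean shift in Gaussian noise with known standard deviation, the MLE of the shift is the sample mean and the GLR statistic reduces to n·mean²/σ². The signals, noise level and threshold below are textbook-style assumptions, not the paper's exact formulation:

```python
def glrt_mean_shift(window, sigma, threshold):
    """GLR statistic for H1 (nonzero mean shift) vs H0 (zero mean),
    Gaussian noise with known sigma. Returns (statistic, fault?, shift MLE)."""
    n = len(window)
    mu_hat = sum(window) / n                 # maximum-likelihood fault magnitude
    stat = n * mu_hat ** 2 / sigma ** 2      # twice the log likelihood ratio
    return stat, stat > threshold, mu_hat

# deterministic toy residuals of a healthy and a drifting thermocouple
healthy = [0.05 * (-1) ** i for i in range(50)]       # zero-mean ripple
faulty = [1.2 + 0.05 * (-1) ** i for i in range(50)]  # +1.2 degree bias

s0, is_fault0, m0 = glrt_mean_shift(healthy, sigma=0.5, threshold=10.0)
s1, is_fault1, m1 = glrt_mean_shift(faulty, sigma=0.5, threshold=10.0)
print(s0, is_fault0, s1, is_fault1, m1)
```

The recovered `mu_hat` plays the role of the fault magnitude the paper computes for the faulty sensor signal.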

  11. Comparison of pixel-based and artificial neural networks classification methods for detecting forest cover changes in Malaysia

    International Nuclear Information System (INIS)

    Deilmai, B R; Rasib, A W; Ariffin, A; Kanniah, K D

    2014-01-01

    According to the FAO (Food and Agriculture Organization), Malaysia lost 8.6% of its forest cover between 1990 and 2005. Remote sensing plays an important role in forest cover change detection. Many change detection methods have been developed, and most of them are semi-automated; these methods are time consuming and difficult to apply. One of the newer and more robust methods for change detection is the artificial neural network (ANN). In this study, an ANN classification scheme is used to detect forest cover changes in the Johor state of Malaysia. Landsat Thematic Mapper images covering a period of 9 years (2000 and 2009) are used. Results obtained with the ANN technique were compared with maximum likelihood classification (MLC) to investigate whether ANN can perform better in the tropical environment. Overall accuracies of the ANN and MLC techniques are 75% and 68% (2000) and 80% and 75% (2009), respectively. Using the ANN method, it was found that the forest area in Johor decreased by as much as 1298 km2 between 2000 and 2009. The results also showed the potential and advantages of neural networks in classification and change detection analysis.
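    The bookkeeping behind a forest-loss figure like the 1298 km2 above is a per-pixel comparison of two classified maps. A toy sketch, with made-up 1 = forest / 0 = non-forest label grids and the nominal 30 m Landsat pixel area:

```python
# post-classification comparison of two classified maps (illustrative grids)
FOREST, NONFOREST = 1, 0

map_2000 = [[1, 1, 1, 0],
            [1, 1, 0, 0],
            [1, 1, 1, 1]]
map_2009 = [[1, 0, 0, 0],
            [1, 1, 0, 0],
            [1, 0, 1, 1]]

# flatten the grids into (before, after) label pairs
pairs = [(a, b) for ra, rb in zip(map_2000, map_2009) for a, b in zip(ra, rb)]
loss = sum(1 for a, b in pairs if a == FOREST and b == NONFOREST)
gain = sum(1 for a, b in pairs if a == NONFOREST and b == FOREST)

pixel_km2 = 0.0009  # one 30 m x 30 m Landsat pixel
print(loss, gain, loss * pixel_km2)
```

Multiplying the changed-pixel counts by the pixel area turns the label comparison into an area estimate of forest loss and gain.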

  12. The Groningen Laryngomalacia Classification System-Based on Systematic Review and Dynamic Airway Changes

    NARCIS (Netherlands)

    van der Heijden, Martijn; Dikkers, Frederik G.; Halmos, Gyorgy B.

    2015-01-01

    Objective: Laryngomalacia is the most common cause of dyspnea and stridor in newborn infants. Laryngomalacia is a dynamic change of the upper airway based on abnormally pliable supraglottic structures, which causes upper airway obstruction. In the past, different classification systems have been

  13. A simple subcritical chromatographic test for an extended ODS high performance liquid chromatography column classification.

    Science.gov (United States)

    Lesellier, Eric; Tchapla, Alain

    2005-12-23

    This paper describes a new test designed in subcritical fluid chromatography (SFC) to compare commercial C18 stationary phase properties. From a single analysis of carotenoid pigments, this test provides the absolute hydrophobicity, the silanol activity and the steric separation factor of ODS stationary phases. Both the choice of the analytical conditions and the validation of the information obtained from the chromatographic measurements are detailed. Correlations of the carotenoid test results with results obtained from other tests (Tanaka, Engelhardt, Sander and Wise), performed both in SFC and HPLC, are discussed. Two separation factors, calculated from the retention of carotenoid pigments used as probes, allowed a first classification diagram to be drawn. Columns that present identical chromatographic behaviors are located in the same area of this diagram. This location can be related to the stationary phase properties: endcapping treatments, bonding density, linkage functionality, specific area or silica pore diameter. From this first classification, eight groups of columns are distinguished: one group of polymer-coated silicas; three groups of polymeric octadecyl phases, depending on the pore size and the endcapping treatment; and four groups of monomeric stationary phases. An additional classification of the four monomeric groups allows the stationary phases within each group to be compared by their total hydrophobicity. One hundred and twenty-nine columns were analysed by this simple and rapid test, which allows a comparison of columns with the aim of guiding their choice in HPLC.

  14. Changing beliefs for changing movement and pain: Classification-based cognitive functional therapy (CB-CFT) for chronic non-specific low back pain.

    Science.gov (United States)

    Meziat Filho, N

    2016-02-01

    This case report presents the effect of classification-based cognitive functional therapy in a patient with chronic disabling low back pain. The patient was assessed using a multidimensional biopsychosocial classification system and was classified as having flexion pattern of movement impairment disorder. Management of this patient was to change her belief that bending over and sitting would cause damage to her disc, combined with active exercises for graded exposure to lumbar flexion to restore normal movement. Three months after the first appointment, the treatment resulted in reduced pain, the mitigation of fear avoidance beliefs and the remediation of functional disability. The patient returned to work and was walking for one hour a day on a treadmill. The cognitive intervention to change the patient's negative beliefs related to the biomedical model was important to make the graded exercises and the lifestyle changes possible. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Classification of transient processes with a jumplike change in the reactivity

    International Nuclear Information System (INIS)

    Sabaeva, T.A.

    1989-01-01

    The problem of the change in the neutron flux density accompanying a jumplike (instantaneous) change in the reactivity is classical and is studied in most textbooks and monographs devoted to the regulation of nuclear reactors, where in constructing the response only the feedback from delayed neutrons is taken into account. The use of a linear feedback of general form permits describing reactors of different types. A classification of reactivity feedbacks was presented by Sabaeva, in which a parabolic region of phase space is separated out; a peak in the neutron flux corresponds to the image point falling into this region. In this paper the conditions making it possible to find the change in the neutron flux immediately after an instantaneous change in the reactivity are derived, and the feedbacks are classified on this basis.
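    The flux change immediately after an instantaneous reactivity step is conventionally approximated by the prompt-jump formula n(0+)/n(0-) = β/(β − ρ), valid below prompt criticality. A minimal numeric sketch, assuming a typical delayed-neutron fraction β = 0.0065:

```python
def prompt_jump_ratio(rho, beta=0.0065):
    """Prompt-jump approximation: flux ratio immediately after an
    instantaneous reactivity step rho (valid only for rho < beta)."""
    if rho >= beta:
        raise ValueError("step reaches prompt criticality")
    return beta / (beta - rho)

up = prompt_jump_ratio(0.1 * 0.0065)    # +0.1 dollar step: ~11% jump
down = prompt_jump_ratio(-0.0065)       # -1 dollar step: flux halves
print(round(up, 4), round(down, 4))
```

Expressing the step in dollars (ρ/β) makes the ratio independent of β: a +0.1 $ step always gives 1/(1 − 0.1).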

  16. A classification of the mechanisms producing pathological tissue changes.

    Science.gov (United States)

    Grippo, John O; Oh, Daniel S

    2013-05-01

    The objectives are to present a classification of mechanisms which can produce pathological changes in body tissues and fluids, as well as to clarify and define the term biocorrosion, which has had a singular use in engineering. Considering the emerging field of biomedical engineering, it is essential to use precise definitions in the lexicons of engineering, bioengineering and related sciences such as medicine, dentistry and veterinary medicine. The mechanisms of stress, friction and biocorrosion and their pathological effects on tissues are described. Biocorrosion refers to the chemical, biochemical and electrochemical changes by degradation or induced growth of living body tissues and fluids. Various agents which can affect living tissues causing biocorrosion are enumerated which support the necessity and justify the use of this encompassing and more precise definition of biocorrosion. A distinction is made between the mechanisms of corrosion and biocorrosion.

  17. Object-based land cover classification and change analysis in the Baltimore metropolitan area using multitemporal high resolution remote sensing data

    Science.gov (United States)

    Weiqi Zhou; Austin Troy; Morgan Grove

    2008-01-01

    Accurate and timely information about land cover pattern and change in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial resolution Emerge aerial imagery in the...

  18. The Dysexecutive Questionnaire advanced: item and test score characteristics, 4-factor solution, and severity classification.

    Science.gov (United States)

    Bodenburg, Sebastian; Dopslaff, Nina

    2008-01-01

    The Dysexecutive Questionnaire (DEX; Behavioural Assessment of the Dysexecutive Syndrome, 1996) is a standardized instrument to measure possible behavioral changes resulting from the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has also been used increasingly to address quantitative problems. Until now there have been no more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports data on the quality of the items, the reliability and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the statistical values of the tests, a clinical severity classification of the overall scores of the 4 found factors and of the questionnaire as a whole is carried out on the basis of quartile standards.

  19. Changes in computed tomography features following preoperative chemotherapy for nephroblastoma: relation to histopathological classification

    International Nuclear Information System (INIS)

    Olsen, Oeystein E.; Jeanes, Annmarie C.; Roebuck, Derek J.; Owens, Catherine M.; Sebire, Neil J.; Risdon, Rupert A.; Michalski, Anthony J.

    2004-01-01

    The objective of this study is to assess computed tomography (CT) changes, both volume estimates and subjective features, following preoperative chemotherapy for nephroblastoma (Wilms' tumour) in patients treated on the United Kingdom Children's Cancer Study Group Wilms' Tumour Study-3 (UKW-3) protocol and to compare CT changes and histopathological classification. Twenty-one nephroblastomas in 15 patients treated on UKW-3 were included. All patients were examined by CT before and after preoperative chemotherapy treatment. CT images were reviewed (estimated volume change and subjectively assessed features). CT changes were compared to histopathological classification. Of the 21 tumours, all five high-risk tumours decreased in volume following chemotherapy (median -79%; range -37 to -91%). The sole low-risk tumour decreased in volume by 98%. Ten intermediate-risk tumours decreased in volume (median -72%; range -6 to -98%) and five intermediate-risk tumours increased (median +110%; range +11 to +164%). None of the five high-risk tumours, compared to 15/16 intermediate or low-risk tumours, became less dense and/or more homogeneous, or virtually disappeared, following chemotherapy. Volume change following chemotherapy did not relate to histopathological risk group. Changes in subjectively assessed qualitative CT features were more strongly related to histopathological risk group. (orig.)

  20. The 2015 World Health Organization Classification of Lung Tumors: Impact of Genetic, Clinical and Radiologic Advances Since the 2004 Classification.

    Science.gov (United States)

    Travis, William D; Brambilla, Elisabeth; Nicholson, Andrew G; Yatabe, Yasushi; Austin, John H M; Beasley, Mary Beth; Chirieac, Lucian R; Dacic, Sanja; Duhig, Edwina; Flieder, Douglas B; Geisinger, Kim; Hirsch, Fred R; Ishikawa, Yuichi; Kerr, Keith M; Noguchi, Masayuki; Pelosi, Giuseppe; Powell, Charles A; Tsao, Ming Sound; Wistuba, Ignacio

    2015-09-01

    The 2015 World Health Organization (WHO) Classification of Tumors of the Lung, Pleura, Thymus and Heart has just been published with numerous important changes from the 2004 WHO classification. The most significant changes in this edition involve (1) use of immunohistochemistry throughout the classification, (2) a new emphasis on genetic studies, in particular, integration of molecular testing to help personalize treatment strategies for advanced lung cancer patients, (3) a new classification for small biopsies and cytology similar to that proposed in the 2011 Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society classification, (4) a completely different approach to lung adenocarcinoma as proposed by the 2011 Association for the Study of Lung Cancer/American Thoracic Society/European Respiratory Society classification, (5) restricting the diagnosis of large cell carcinoma only to resected tumors that lack any clear morphologic or immunohistochemical differentiation with reclassification of the remaining former large cell carcinoma subtypes into different categories, (6) reclassifying squamous cell carcinomas into keratinizing, nonkeratinizing, and basaloid subtypes with the nonkeratinizing tumors requiring immunohistochemistry proof of squamous differentiation, (7) grouping of neuroendocrine tumors together in one category, (8) adding NUT carcinoma, (9) changing the term sclerosing hemangioma to sclerosing pneumocytoma, (10) changing the name hamartoma to "pulmonary hamartoma," (11) creating a group of PEComatous tumors that include (a) lymphangioleiomyomatosis, (b) PEComa, benign (with clear cell tumor as a variant) and (c) PEComa, malignant, (12) introducing the entity pulmonary myxoid sarcoma with an EWSR1-CREB1 translocation, (13) adding the entities myoepithelioma and myoepithelial carcinomas, which can show EWSR1 gene rearrangements, (14) recognition of usefulness of WWTR1-CAMTA1 fusions in diagnosis of epithelioid …

  1. Using classification and NDVI differencing methods for monitoring sparse vegetation coverage: a case study of saltcedar in Nevada, USA.

    Science.gov (United States)

    A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested, In the classification strategy, a p...

  2. Is overall similarity classification less effortful than single-dimension classification?

    Science.gov (United States)

    Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo

    2013-01-01

    It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition: overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.

  3. Breathing (and Coding?) a Bit Easier: Changes to International Classification of Disease Coding for Pulmonary Hypertension.

    Science.gov (United States)

    Mathai, Stephen C; Mathew, Sherin

    2018-04-20

    The International Classification of Diseases (ICD) coding system is broadly utilized by healthcare providers, hospitals, healthcare payers, and governments to track health trends and statistics at the global, national, and local levels and to provide a reimbursement framework for medical care based upon diagnosis and severity of illness. The current iteration of the ICD system, ICD-10, was implemented in 2015. While many changes to the prior ICD-9 system were included in the ICD-10 system, the newer revision failed to adequately reflect advances in the clinical classification of certain diseases such as pulmonary hypertension (PH). Recently, a proposal to modify the ICD-10 codes for PH was considered and ultimately adopted for inclusion as updates to the ICD-10 coding system. While these revisions better reflect the current clinical classification of PH, further changes should be considered in the future to improve the accuracy and ease of coding for all forms of PH. Copyright © 2018. Published by Elsevier Inc.

  4. Classification with support hyperplanes

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); J.C. Bioch (Cor); P.J.F. Groenen (Patrick)

    2006-01-01

    A new classification method is proposed, called Support Hyperplanes (SHs). To solve the binary classification task, SHs consider the set of all hyperplanes that do not make classification mistakes, referred to as semi-consistent hyperplanes. A test object is classified using …

  5. Street-side vehicle detection, classification and change detection using mobile laser scanning data

    Science.gov (United States)

    Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas

    2016-04-01

    Statistics on street-side car parks, e.g. occupancy rates, parked vehicle types and parking durations, are of great importance for urban planning and policy making. Related studies, e.g. vehicle detection and classification, mostly focus on static images or video, whereas mobile laser scanning (MLS) systems are increasingly utilized for urban street environment perception due to their direct 3D information acquisition, high accuracy and movability. In this paper, we design a complete system for car park monitoring, including vehicle recognition, localization, classification and change detection, from laser scanning point clouds. The experimental data are acquired by an MLS system using a high-frequency laser scanner which scans the streets vertically along the system's moving trajectory. The point clouds are first classified as ground, building façade and street objects, which are then segmented using state-of-the-art methods. Each segment is treated as an object hypothesis, and its geometric features are extracted. Moreover, a deformable vehicle model is fitted to each object; by fitting an explicit model to the vehicle points, detailed information, such as precise position and orientation, can be obtained. The model parameters are also treated as vehicle features. Together with the geometric features, they are applied in a supervised learning procedure for vehicle or non-vehicle recognition. The classes of detected vehicles are also investigated. Whether vehicles have changed across two datasets acquired at different times is detected to estimate parking durations. Here, vehicles are trained pairwise: two same or different vehicles are paired up as training samples. As a result, the vehicle recognition, classification and change detection accuracies are 95.9%, 86.0% and 98.7%, respectively. Vehicle modelling improves not only the recognition rate, but also the localization precision, compared to bounding boxes.

  6. Classification of user performance in the Ruff Figural Fluency Test based on eye-tracking features

    Directory of Open Access Journals (Sweden)

    Borys Magdalena

    2017-01-01

    Cognitive assessment in neurological diseases is a relevant topic due to its diagnostic significance both in detecting disease and in assessing progress of treatment. Computer-based tests provide objective and accurate measures of cognitive skills and capacity. The Ruff Figural Fluency Test (RFFT) provides information about non-verbal capacity for initiation, planning, and divergent reasoning. The traditional paper form of the test was transformed into a computer application and examined. The RFFT was applied in an experiment performed among 70 male students to assess their cognitive performance in a laboratory environment. Each student was examined in three sequential series. Besides the students' performance, measured using in-app keylogging, eye-tracking data obtained by non-invasive video-based oculography were gathered, from which several features were extracted. Eye-tracking features combined with performance measures (total number of designs and/or error ratio) were applied in machine learning classification. Various classification algorithms were applied, and their accuracy, specificity, sensitivity and performance were compared.

  7. Utility of Intelligence Tests for Treatment Planning, Classification, and Placement Decisions: Recent Empirical Findings and Future Directions.

    Science.gov (United States)

    Gresham, Frank M.; Witt, Joseph C.

    1997-01-01

    Maintains that intelligence tests contribute little to the planning, implementation, and evaluation of instructional interventions for children. Suggests that intelligence tests are not useful in making differential diagnostic and classification determinations for children with mild learning problems and that such testing is not a cost-beneficial…

  8. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods

    Science.gov (United States)

    Xian, George; Homer, Collin G.; Fry, Joyce

    2009-01-01

    The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline.
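    The change-vector step described above, differencing normalized image pairs and applying a conservative threshold to separate change from no-change, can be sketched per pixel. The band values and threshold below are made-up illustrations, not NLCD parameters:

```python
import math

def change_mask(img_t1, img_t2, threshold):
    """img_* : lists of per-pixel band tuples from two dates; returns
    True where the change-vector magnitude exceeds the threshold."""
    mask = []
    for p1, p2 in zip(img_t1, img_t2):
        mag = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
        mask.append(mag > threshold)
    return mask

# three toy pixels with two spectral bands each, from 2001 and 2006
t2001 = [(0.12, 0.30), (0.10, 0.28), (0.40, 0.15)]
t2006 = [(0.13, 0.29), (0.35, 0.10), (0.41, 0.16)]
mask = change_mask(t2001, t2006, threshold=0.1)
print(mask)
```

Pixels below the threshold keep their NLCD 2001 label; only the flagged pixels are reclassified for the 2006 update.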

  9. Nonparametric Bayes Classification and Hypothesis Testing on Manifolds

    Science.gov (United States)

    Bhattacharya, Abhishek; Dunson, David

    2012-01-01

    Our first focus is prediction of a categorical response variable using features that lie on a general manifold. For example, the manifold may correspond to the surface of a hypersphere. We propose a general kernel mixture model for the joint distribution of the response and predictors, with the kernel expressed in product form and dependence induced through the unknown mixing measure. We provide simple sufficient conditions for large support and weak and strong posterior consistency in estimating both the joint distribution of the response and predictors and the conditional distribution of the response. Focusing on a Dirichlet process prior for the mixing measure, these conditions hold using von Mises-Fisher kernels when the manifold is the unit hypersphere. In this case, Bayesian methods are developed for efficient posterior computation using slice sampling. Next we develop Bayesian nonparametric methods for testing whether there is a difference in distributions between groups of observations on the manifold having unknown densities. We prove consistency of the Bayes factor and develop efficient computational methods for its calculation. The proposed classification and testing methods are evaluated using simulation examples and applied to spherical data applications. PMID:22754028

  10. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm.

    Science.gov (United States)

    Al Ajmi, Eiman; Forghani, Behzad; Reinhold, Caroline; Bayat, Maryam; Forghani, Reza

    2018-06-01

    There is a rich amount of quantitative information in spectral datasets generated from dual-energy CT (DECT). In this study, we compare the performance of texture analysis performed on multi-energy datasets to that of virtual monochromatic images (VMIs) at 65 keV only, using classification of the two most common benign parotid neoplasms as a testing paradigm. Forty-two patients with pathologically proven Warthin tumour (n = 25) or pleomorphic adenoma (n = 17) were evaluated. Texture analysis was performed on VMIs ranging from 40 to 140 keV in 5-keV increments (multi-energy analysis) or 65-keV VMIs only, which is typically considered equivalent to single-energy CT. Random forest (RF) models were constructed for outcome prediction using separate randomly selected training and testing sets or the entire patient set. Using multi-energy texture analysis, tumour classification in the independent testing set had accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 92%, 86%, 100%, 100%, and 83%, compared to 75%, 57%, 100%, 100%, and 63%, respectively, for single-energy analysis. Multi-energy texture analysis demonstrates superior performance compared to single-energy texture analysis of VMIs at 65 keV for classification of benign parotid tumours. • We present and validate a paradigm for texture analysis of DECT scans. • Multi-energy dataset texture analysis is superior to single-energy dataset texture analysis. • DECT texture analysis has high accuracy for diagnosis of benign parotid tumours. • DECT texture analysis with machine learning can enhance non-invasive diagnostic tumour evaluation.
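    Texture analysis of this kind typically feeds gray-level co-occurrence matrix (GLCM) statistics into a classifier such as a random forest. A minimal sketch of two standard GLCM features on toy 2-level patches (the patches and the horizontal-neighbour offset are illustrative assumptions, not the study's protocol):

```python
def glcm(patch, levels):
    # co-occurrence probabilities for horizontal neighbours, offset (0, 1)
    counts = [[0] * levels for _ in range(levels)]
    for row in patch:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
    total = sum(map(sum, counts))
    return [[c / total for c in row] for row in counts]

def contrast(p):
    # weights co-occurrences by squared gray-level difference
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    # rewards co-occurrences of similar gray levels
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))

uniform = [[1, 1, 1, 1]] * 4                 # homogeneous tissue patch
checker = [[0, 1, 0, 1], [1, 0, 1, 0]] * 2   # highly textured patch

p_u, p_c = glcm(uniform, 2), glcm(checker, 2)
print(contrast(p_u), homogeneity(p_u), contrast(p_c), homogeneity(p_c))
```

Computing such features on every VMI from 40 to 140 keV, instead of on the 65-keV image alone, yields the richer multi-energy feature vector the study exploits.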

  11. TEXT CLASSIFICATION USING NAIVE BAYES UPDATEABLE ALGORITHM IN SBMPTN TEST QUESTIONS

    Directory of Open Access Journals (Sweden)

    Ristu Saptono

    2017-01-01

    Document classification is a growing interest in text mining research. Classification can be done based on topics, languages, and so on. This study was conducted to determine how Naive Bayes Updateable performs in classifying SBMPTN exam questions based on their theme. An incremental model of the Naive Bayes classifier, an algorithm often used in text classification, has the ability to learn from new data introduced to the system even after the classifier has been built from existing data. The Naive Bayes classifier classifies the exam questions by field of study by analyzing keywords that appear in the exam questions. The feature selection method DF-thresholding is implemented to improve the classification performance. Evaluation of the classification with the Naive Bayes classifier algorithm produces 84.61% accuracy.
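    The "updateable" property, folding new labelled questions into an already trained model without retraining from scratch, follows from naive Bayes being a counting model. A minimal incremental multinomial naive Bayes sketch; the two subject classes and Indonesian keywords are made up for illustration and are not the study's data:

```python
import math
from collections import defaultdict

class UpdateableNaiveBayes:
    """Minimal incremental multinomial naive Bayes: counts can be
    updated with new labelled documents at any time."""
    def __init__(self):
        self.class_docs = defaultdict(int)
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.vocab = set()

    def update(self, tokens, label):
        # fold one labelled document into the running counts
        self.class_docs[label] += 1
        for t in tokens:
            self.word_counts[label][t] += 1
            self.vocab.add(t)

    def predict(self, tokens):
        total_docs = sum(self.class_docs.values())
        best, best_lp = None, float("-inf")
        for c, n_docs in self.class_docs.items():
            lp = math.log(n_docs / total_docs)          # class prior
            n_words = sum(self.word_counts[c].values())
            for t in tokens:
                # Laplace-smoothed word likelihood
                lp += math.log((self.word_counts[c][t] + 1)
                               / (n_words + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = UpdateableNaiveBayes()
nb.update("integral turunan limit fungsi".split(), "math")
nb.update("sel mitokondria enzim".split(), "biology")
# a later question updates the same model, no retraining needed
nb.update("matriks vektor determinan".split(), "math")
print(nb.predict("limit fungsi matriks".split()))
```

A DF-thresholding step, as in the study, would simply drop tokens whose document frequency falls below a cutoff before calling `update`.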

  12. Application of bivariate mapping for hydrological classification and analysis of temporal change and scale effects in Switzerland

    NARCIS (Netherlands)

    Speich, Matthias J.R.; Bernhard, Luzi; Teuling, Ryan; Zappa, Massimiliano

    2015-01-01

    Hydrological classification schemes are important tools for assessing the impacts of a changing climate on the hydrology of a region. In this paper, we present bivariate mapping as a simple means of classifying hydrological data for a quantitative and qualitative assessment of temporal change.

  13. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
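    The harmonic product spectrum named above multiplies the magnitude spectrum with downsampled copies of itself so that the fundamental's bin, where all harmonics line up, dominates. A small self-contained sketch on a synthetic harmonic tone (the naive DFT, frame length and harmonic count are demo assumptions, not the paper's settings):

```python
import math

def hps_pitch(signal, rate, n_harmonics=3):
    """Pitch via harmonic product spectrum: multiply the magnitude
    spectrum downsampled by 1..n_harmonics and pick the peak bin."""
    n = len(signal)
    # naive DFT magnitude (fine for a short demo frame)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * t / n)
                 for t, s in enumerate(signal))
        im = sum(-s * math.sin(2 * math.pi * k * t / n)
                 for t, s in enumerate(signal))
        mags.append(math.hypot(re, im))
    limit = len(mags) // n_harmonics
    hps = [math.prod(mags[k * h] for h in range(1, n_harmonics + 1))
           for k in range(limit)]
    k_best = max(range(1, limit), key=lambda k: hps[k])
    return k_best * rate / n

rate = 4000
# 200 Hz fundamental plus harmonics at 400 and 600 Hz
sig = [math.sin(2 * math.pi * 200 * t / rate)
       + 0.6 * math.sin(2 * math.pi * 400 * t / rate)
       + 0.3 * math.sin(2 * math.pi * 600 * t / rate)
       for t in range(400)]
pitch = hps_pitch(sig, rate)
print(pitch)
```

The peak of the product lands on the fundamental bin even when individual harmonics are stronger, which is what makes HPS a robust pitch feature for music/speech/noise discrimination.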

  14. Development and application of test apparatus for classification of sealed source

    International Nuclear Information System (INIS)

    Kim, Dong Hak; Seo, Ki Seog; Bang, Kyoung Sik; Lee, Ju Chan; Son, Kwang Je

    2007-01-01

    Sealed sources have to undergo tests conducted according to the classification requirements for their typical usages, in accordance with the relevant domestic notice standard and ISO 2919. After each test, the source shall be examined visually for loss of integrity and shall pass an appropriate leakage test. The tests used to classify a sealed source are the temperature, external pressure, impact, vibration and puncture tests. The environmental test conditions for each class number are arranged in increasing order of severity. In this study, apparatus for these tests, except the vibration test, was developed and applied to three kinds of sealed source. The conditions of the tests used to classify a sealed source were stated, the differences between the domestic notice standard and ISO 2919 were considered, and the test apparatus was built. Using the developed apparatus, we tested a 192 Ir brachytherapy sealed source and two kinds of sealed source for industrial radiography. The 192 Ir brachytherapy sealed source is classified as temperature class 5, external pressure class 3, impact class 2, and vibration and puncture class 1. The two kinds of sealed source for industrial radiography are classified as temperature class 4, external pressure class 2, impact and puncture class 5, and vibration class 1. After the tests, liquid nitrogen bubble tests and vacuum bubble tests were performed to evaluate the safety of the sealed sources.

  15. Classification of morphologic changes in photoplethysmographic waveforms

    Directory of Open Access Journals (Sweden)

    Tigges Timo

    2016-09-01

    Full Text Available An ever-increasing body of research is examining to what extent physiological information beyond blood oxygen saturation can be drawn from the photoplethysmogram. One important approach to eliciting that information is the analysis of the waveform. One prominent example of the value of photoplethysmographic waveform analysis in cardiovascular monitoring is hemodynamic compensation assessment in the peri-operative setting or in trauma situations, as the digital pulse waveform changes dynamically with alterations in vascular tone or pulse wave velocity. In this work, we present an algorithm based on modern machine learning techniques that automatically finds individual digital volume pulses in photoplethysmographic signals and sorts them into one of the pulse classes defined by Dawber et al. We evaluate our approach on two major datasets: a measurement study that we conducted ourselves as well as data from the PhysioNet MIMIC II database. The satisfying results demonstrate the capabilities of classification algorithms in the automated assessment of the digital volume pulse waveform measured by photoplethysmographic devices.

  16. Hazard classification for the supercritical water oxidation test bed. Revision 1

    International Nuclear Information System (INIS)

    Ramos, A.G.

    1994-10-01

    A hazard classification of "routinely accepted by the public" has been determined for the operation of the supercritical water oxidation test bed at the Idaho National Engineering Laboratory. This determination is based on the fact that the design and proposed operation meet or exceed appropriate national standards, so that the risks are equivalent to those present in similar activities conducted in private industry. Each of the 17 criteria for hazards "routinely accepted by the public," identified in the EG and G Idaho, Inc., Safety Manual, was analyzed. The supercritical water oxidation (SCWO) test bed will treat simulated mixed waste without the radioactive component. It will be designed to operate with eight test wastes, chosen to represent a broad cross-section of candidate mixed wastes anticipated for storage or generation by DOE. In particular, the test bed will generate data to evaluate the ability of the technology to treat chlorinated wastes and other wastes that have in the past caused severe corrosion and deposition in SCWO reactors.

  17. Classification of groundwater at the Nevada Test Site

    International Nuclear Information System (INIS)

    Chapman, J.B.

    1994-08-01

    Groundwater occurring at the Nevada Test Site (NTS) has been classified according to the "Guidelines for Ground-Water Classification Under the US Environmental Protection Agency (EPA) Ground-Water Protection Strategy" (June 1988). All of the groundwater units at the NTS are Class II: groundwater that is currently (IIA) or potentially (IIB) a source of drinking water. The Classification Review Area (CRA) for the NTS is defined as the standard two-mile distance from the facility boundary recommended by the EPA. The possibility of expanding the CRA was evaluated, but the two-mile distance encompasses the area expected to be impacted by contaminant transport during a 10-year period (EPA's suggested limit), should a release occur. The CRA is very large as a consequence of the large size of the NTS and the decision to classify the entire site rather than individual areas of activity. Because most activities are located many miles hydraulically upgradient of the NTS boundary, the CRA generally provides much more than the usual two-mile buffer required by the EPA. The CRA is considered sufficiently large to allow confident determination of the use and value of groundwater and identification of potentially affected users. The size and complex hydrogeology of the NTS are inconsistent with the EPA guideline's assumption of a high degree of hydrologic interconnection throughout the review area. To depict the site hydrogeology more realistically, the CRA is subdivided into eight groundwater units. Two main aquifer systems are recognized: the lower carbonate aquifer system and the Cenozoic aquifer system (consisting of aquifers in Quaternary valley fill and Tertiary volcanics). These aquifer systems are further divided geographically based on the location of low-permeability boundaries.

  18. Machine learning algorithms for mode-of-action classification in toxicity assessment.

    Science.gov (United States)

    Zhang, Yile; Wong, Yau Shu; Deng, Jian; Anton, Cristina; Gabos, Stephan; Zhang, Weiping; Huang, Dorothy Yu; Jin, Can

    2016-01-01

    Real Time Cell Analysis (RTCA) technology is used to monitor cellular changes continuously over the entire exposure period. Combined with different testing concentrations, the resulting profiles have potential for probing the mode of action (MOA) of the tested substances. In this paper, we present machine learning approaches for MOA assessment. Computational tools based on the artificial neural network (ANN) and the support vector machine (SVM) are developed to analyze the time-concentration response curves (TCRCs) of human cell lines responding to tested chemicals. The techniques are capable of learning from TCRCs with known MOA information and then classifying the MOA of unknown toxicants. A novel data processing step based on the wavelet transform is introduced to extract important features from the original TCRC data. From the dose-response curves, the time interval leading to a higher classification success rate can be selected as input to enhance the performance of the machine learning algorithm. This is particularly helpful when handling cases with limited and imbalanced data. The proposed method is validated by applying the supervised learning algorithms to exposure data of the HepG2 cell line to 63 chemicals, with 11 concentrations in each test case. Classification success rates in the range of 85 to 95% are obtained using SVM for MOA classification with two to four clusters. The wavelet transform is capable of capturing important features of TCRCs for MOA classification. The proposed SVM scheme incorporating the wavelet transform has great potential for large-scale MOA classification and high-throughput chemical screening.
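As a sketch of the wavelet feature-extraction step, a Haar transform (the simplest discrete wavelet) can compress a response curve into approximation and detail coefficients before classification. The paper does not specify the wavelet family, so Haar is an assumption here, and the function names and feature layout are illustrative.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; the approximation is a
    smoothed, half-length summary of the response curve, the detail
    captures local fluctuations.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad to even length
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_features(curve, levels=2):
    """Compress a time-concentration response curve into wavelet features."""
    approx = np.asarray(curve, dtype=float)
    features = []
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(detail)
    features.append(approx)              # keep the coarsest approximation
    return np.concatenate(features)
```

The resulting feature vectors would then be fed to the SVM or ANN in place of the raw TCRC samples.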

  19. Testing the Potential of Vegetation Indices for Land Use/cover Classification Using High Resolution Data

    Science.gov (United States)

    Karakacan Kuzucu, A.; Bektas Balcik, F.

    2017-11-01

    Accurate and reliable land use/land cover (LULC) information obtained by remote sensing technology is necessary in many applications, such as environmental monitoring, agricultural management, urban planning, hydrological applications, soil management, vegetation condition studies and suitability analysis. But obtaining this information remains a challenge, especially in heterogeneous landscapes covering urban and rural areas, due to spectrally similar LULC features. In parallel with technological developments, supplementary data such as satellite-derived spectral indices have begun to be used as additional bands in classification to produce data with high accuracy. The aim of this research is to test the potential of combining spectral vegetation indices with supervised classification methods to extract reliable LULC information from SPOT 7 multispectral imagery. The Normalized Difference Vegetation Index (NDVI), the Ratio Vegetation Index (RATIO) and the Soil Adjusted Vegetation Index (SAVI) were the three vegetation indices used in this study. The classical maximum likelihood classifier (MLC) and the support vector machine (SVM) algorithm were applied to classify the SPOT 7 image. The selected region, Catalca, is located northwest of Istanbul, Turkey, and has a complex landscape covering artificial surfaces, forest and natural areas, agricultural fields, quarry/mining areas, pasture/scrubland and water bodies. Accuracy assessment of all classified images was performed through overall accuracy and the kappa coefficient. The results indicated that incorporating these three vegetation indices decreased the classification accuracy for both the MLC and SVM classifications. In addition, the maximum likelihood classification slightly outperformed the support vector machine approach in both overall accuracy and kappa statistics.
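The three indices are standard band-ratio formulas, so their computation can be sketched directly; the reflectance values in the test and the SAVI soil-brightness factor L = 0.5 are conventional defaults for illustration, not values taken from the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ratio_vi(nir, red):
    """Ratio Vegetation Index: the simple NIR/red band ratio."""
    return np.asarray(nir, float) / np.asarray(red, float)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil-brightness factor L."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) * (1 + L) / (nir + red + L)
```

Each index image would then be stacked onto the original multispectral bands as an additional input layer for the classifier.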

  20. What lies beneath: detecting sub-canopy changes in savanna woodlands using a three-dimensional classification method

    CSIR Research Space (South Africa)

    Fisher, JT

    2015-07-01

    Full Text Available structural diversity. A 3D classification approach was successful in detecting fine-scale, short-term changes between land uses, and can thus be used as a monitoring tool for savanna woody vegetation structure.

  1. Geometric classification of scalp hair for valid drug testing, 6 more reliable than 8 hair curl groups.

    Directory of Open Access Journals (Sweden)

    K Mkentane

    Full Text Available Curly hair is reported to contain higher lipid content than straight hair, which may influence the incorporation of lipid-soluble drugs. The use of race to describe hair curl variation (Asian, Caucasian and African) is unscientific yet common in the medical literature (including reports of drug levels in hair). This study investigated the reliability of a geometric classification of hair based on three measurements: the curve diameter, the curl index and the number of waves. After ethical approval and informed consent, proximal virgin (6 cm) hair sampled from the vertex of the scalp in 48 healthy volunteers was evaluated. Three raters each scored hairs from the 48 volunteers on two occasions for both the 8-group and 6-group classifications. One rater applied the 6-group classification to 80 additional volunteers in order to further confirm the reliability of this system. The kappa statistic was used to assess intra- and inter-rater agreement. Each rater classified 480 hairs on each occasion. No rater classified any volunteer's 10 hairs into the same group; the most frequently occurring group was used for analysis. The inter-rater agreement was poor for the 8-group classification (k = 0.418) but improved for the 6-group classification (k = 0.671). The intra-rater agreement also improved (from k = 0.444-0.648 for 8 groups to k = 0.599-0.836 for 6 groups); that for the one evaluator who rated all volunteers was good (k = 0.754). Although small, this is the first study to test the reliability of a geometric classification. The 6-group method is more reliable. However, a digital classification system is likely to reduce operator error. A reliable objective classification of human hair curl is long overdue, particularly with the increasing use of hair as a testing substrate for treatment compliance in medicine.
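The agreement statistic used above, Cohen's kappa, corrects observed agreement for the agreement expected by chance alone. A minimal dependency-free sketch (not the authors' statistical software; the example labels are hypothetical) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters beyond chance.

    kappa = (p_observed - p_expected) / (1 - p_expected),
    where p_expected comes from each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(counts_a) | set(counts_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)
```

A kappa of 1 means perfect agreement; 0 means no better than chance, which makes the reported jump from k = 0.418 to k = 0.671 interpretable as a real improvement in reliability.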

  2. Architecturally Significant Requirements Identification, Classification and Change Management for Multi-tenant Cloud-Based Systems

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Probst, Christian W.

    2017-01-01

    We have presented a framework for requirements classification and change management focusing on distributed Platform as a Service (PaaS) and Software as a Service (SaaS) systems, as well as complex software ecosystems that are built using PaaS and SaaS, such as Tools as a Service (TaaS). We have demonstrated...

  3. 7 CFR 28.911 - Review classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Review classification. 28.911 Section 28.911... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification and Market News Service for Producers Classification § 28.911 Review classification. (a) A producer may request one review...

  4. Applying post classification change detection technique to monitor an Egyptian coastal zone (Abu Qir Bay

    Directory of Open Access Journals (Sweden)

    Mamdouh M. El-Hattab

    2016-06-01

    Full Text Available Land cover change is considered one of the most important global phenomena, exerting perhaps a more significant effect on the environment than any other factor. It is therefore vital that accurate data on land cover changes are made available to facilitate understanding of the link between land cover change and environmental change, allowing planners to make effective decisions. In this paper, the post-classification approach was used to detect and assess land cover changes in one of the important coastal zones in Egypt, the Abu Qir Bay zone, based on comparative analysis of independently produced classification images of the same area at different dates. In addition to satellite images, socioeconomic data were used with the aid of the land use model EGSLR to indicate the relation between land cover and land use changes. Results indicated that changes in different land covers reflected changes in occupation status in specific zones. For example, in the zone south of Idku Lake, it was observed that the occupation of settlers changed from unskilled work to fishing, following the expansion of the area of fish farms. Change rates increased dramatically in the period from 2004 to 2013, with remarkable negative changes found especially in fruit and palm trees (a loss of about 66 km2 of land planted with fruit and palm trees due to industrialization in the coastal area). A rapid urbanization was also monitored along the coastline of the Abu Qir Bay zone due to the political conditions in Egypt (the 25th of January Revolution) within this period, which resulted in the temporary absence of monitoring systems to regulate urbanization.
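The post-classification comparison step reduces to cross-tabulating two independently classified rasters of the same area. A minimal sketch (function names and the tiny example labels are illustrative, not from the paper):

```python
import numpy as np

def change_matrix(classified_t1, classified_t2, n_classes):
    """Post-classification change detection.

    Cross-tabulates two independently classified images of the same area:
    entry [i, j] counts pixels that were class i at time 1 and class j at
    time 2, so off-diagonal entries are the changed pixels.
    """
    t1 = np.asarray(classified_t1).ravel()
    t2 = np.asarray(classified_t2).ravel()
    matrix = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(matrix, (t1, t2), 1)   # unbuffered accumulation per pixel pair
    return matrix

def changed_fraction(matrix):
    """Share of pixels whose class label changed between the two dates."""
    total = matrix.sum()
    return (total - np.trace(matrix)) / total
```

From the change matrix, per-class loss and gain (e.g. fruit and palm trees converted to industrial land) can be read off directly from the rows and columns.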

  5. Ototoxicity (cochleotoxicity) classifications: A review.

    Science.gov (United States)

    Crundwell, Gemma; Gomersall, Phil; Baguley, David M

    2016-01-01

    Drug-mediated ototoxicity, specifically cochleotoxicity, is a concern for patients receiving medications for the treatment of serious illness. A number of classification schemes exist, most of which are based on pure-tone audiometry, to assist non-audiological/non-otological specialists in the identification and monitoring of iatrogenic hearing loss. This review identifies the primary classification systems used in cochleotoxicity monitoring. By bringing together classifications published in discipline-specific literature, the paper aims to increase awareness of their relative strengths and limitations in the assessment and monitoring of ototoxic hearing loss, and to indicate how future classification systems may improve upon the status quo. Literature review. A PubMed search identified 4878 articles containing the search term ototox*; a systematic search identified 13 key classification systems. Cochleotoxicity classification systems can be divided into those that focus on hearing change from a baseline audiogram and those that focus on the functional impact of the hearing loss. Common weaknesses of these grading scales include a lack of sensitivity to small adverse changes in hearing thresholds, a lack of high-frequency audiometry (>8 kHz), and a lack of indication of which changes are likely to be clinically significant for communication and quality of life.

  6. Development and content validity testing of a comprehensive classification of diagnoses for pediatric nurse practitioners.

    Science.gov (United States)

    Burns, C

    1991-01-01

    Pediatric nurse practitioners (PNPs) need an integrated, comprehensive classification that includes nursing, disease, and developmental diagnoses to effectively describe their practice. No such classification exists. Further, methodologic studies to help evaluate the content validity of any nursing taxonomy are unavailable. A conceptual framework was derived, and 178 diagnoses were selected from the North American Nursing Diagnosis Association (NANDA) 1986 list, the International Classification of Diseases, the Diagnostic and Statistical Manual, Third Revision, and other sources. This framework identified and listed, with definitions, three domains of diagnoses: Developmental Problems, Diseases, and Daily Living Problems. The diagnoses were rated on a 4-point scale (4 = highly related to 1 = not related) and placed into the three domains by a panel of eight expert pediatric nurses. Diagnoses assigned to the Daily Living Problems domain were then sorted into the 11 Functional Health Patterns described by Gordon (1987). Reliability was measured using proportions of agreement and kappas. Content validity of the groups created was measured using indices of content validity and average congruency percentages. The experts used a new method to sort the diagnoses in a way that decreased overlaps among the domains. The Developmental and Disease domains were judged reliable and valid. The Daily Living domain of nursing diagnoses showed marginally acceptable validity with acceptable reliability. Six Functional Health Patterns were judged reliable and valid, mixed results were obtained for four categories, and the Coping/Stress Tolerance category was judged reliable but not valid using either test. There were considerable differences between the panel's, Gordon's (1987), and NANDA's clustering of NANDA diagnoses. This study defines the diagnostic practice of nurses from a holistic, patient

  7. Classification of huminite-ICCP System 1994

    Energy Technology Data Exchange (ETDEWEB)

    Sykorova, I. [Institute of Rock Structure and Mechanics, Academy of Science of the Czech Republic, V Holesovicka 41, 182 09 Prague 8 (Czech Republic); Pickel, W. [Coal and Organic Petrology Services Pty Ltd, 23/80 Box Road, Taren Point, NSW 2229 (Australia); Christanis, K. [Department of Geology, University of Patras, 26500 Rio-Patras (Greece); Wolf, M. [Mergelskull 29, 47802 Krefeld (Germany); Taylor, G.H. [15 Hawkesbury Cres, Farrer Act 2607 (Australia); Flores, D. [Departamento de Geologia, Faculdade de Ciencias do Porto, Praca de Gomes Teixeira, 4099-002 Porto (Portugal)

    2005-04-12

    In the new classification (ICCP System 1994), the maceral group huminite has been revised from the previous classification (ICCP, 1971. Int. Handbook Coal Petr., suppl. to 2nd ed.) to accommodate the nomenclature to changes in the other maceral groups, especially the changes in the vitrinite classification (ICCP, 1998. The new vitrinite classification (ICCP System 1994). Fuel 77, 349-358.). The vitrinite and huminite systems have been correlated so that down to the level of sub-maceral groups, the two systems can be used in parallel. At the level of macerals and for finer classifications, the analyst now has, according to the nature of the coal and the purpose of the analysis, a choice of using either of the two classification systems for huminite and vitrinite. This is in accordance with the new ISO Coal Classification that covers low rank coals as well and allows for the simultaneous use of the huminite and vitrinite nomenclature for low rank coals.

  8. On the classification of structures, systems and components of nuclear research and test reactors

    International Nuclear Information System (INIS)

    Mattar Neto, Miguel

    2009-01-01

    The classification of structures, systems and components of nuclear reactors is a relevant design issue because it is directly associated with their safety functions. There is an important statement regarding quality standards and records that says: "Structures, systems, and components important to safety shall be designed, fabricated, erected, and tested to quality standards commensurate with the importance of the safety functions to be performed." The definition of the codes, standards and technical requirements applied to nuclear reactor design, fabrication, inspection and testing may be seen as the main result of this statement. There are well-established guides for classifying structures, systems and components of nuclear power reactors, such as Pressurized Water Reactors, but one cannot say the same for nuclear research and test reactors. A nuclear reactor's safety functions are those required for safe reactor operation, safe reactor shutdown and continued safe conditions, response to anticipated transients, response to potential accidents, and control of radioactive material. Thus, this paper proposes an approach to classifying the structures, systems and components of these reactors based on their intended safety functions, in order to define the applicable set of codes, standards and technical requirements. (author)

  9. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, with details on aggregate and test methods employed, along with agency and co...

  10. Asthma in pregnancy: association between the Asthma Control Test and the Global Initiative for Asthma classification and comparisons with spirometry.

    Science.gov (United States)

    de Araujo, Georgia Véras; Leite, Débora F B; Rizzo, José A; Sarinho, Emanuel S C

    2016-08-01

    The aim of this study was to identify a possible association between the assessment of clinical asthma control using the Asthma Control Test (ACT) and the Global Initiative for Asthma (GINA) classification, and to perform comparisons with values of spirometry. In this cross-sectional study, 103 pregnant women with asthma were assessed in the period from October 2010 to October 2013 in the asthma pregnancy clinic at the Clinical Hospital of the Federal University of Pernambuco. Questionnaires concerning the level of asthma control were administered using the Global Initiative for Asthma classification and the Asthma Control Test validated for asthmatic expectant mothers, together with spirometry; all three methods of assessing asthma control were performed during the same visit, between the twenty-first and twenty-seventh weeks of pregnancy. There was a significant association between clinical asthma control assessment using the Asthma Control Test and the Global Initiative for Asthma classification, but not with spirometry. This study shows that both the Global Initiative for Asthma classification and the Asthma Control Test can be used with asthmatic expectant mothers to assess the clinical control of asthma, especially at the end of the second trimester, which is assumed to be the period of worsening asthma exacerbations during pregnancy. We highlight the importance of the Asthma Control Test as a subjective instrument with easy application, easy interpretation and good reproducibility that does not require spirometry to assess the level of asthma control and can be used in the primary care of asthmatic expectant mothers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Dynamic species classification of microorganisms across time, abiotic and biotic environments-A sliding window approach.

    Directory of Open Access Journals (Sweden)

    Frank Pennekamp

    Full Text Available The development of video-based monitoring methods allows for rapid, dynamic and accurate monitoring of individuals or communities compared to slower traditional methods, with far-reaching ecological and evolutionary applications. Large amounts of data are generated using video-based methods, which can be effectively processed into meaningful ecological information using machine learning (ML) algorithms. ML uses user-defined classes (e.g. species), derived from a subset (i.e. training data) of video-observed quantitative features (e.g. phenotypic variation), to infer classes in subsequent observations. However, phenotypic variation often changes with environmental conditions, which may lead to poor classification if environmentally induced variation in phenotypes is not accounted for. Here we describe a framework for classifying species under changing environmental conditions based on random forest classification. A sliding window approach was developed that restricts the temporal and environmental conditions used for training, to improve the classification. We tested our approach by applying the classification framework to experimental data. The experiment used a set of six ciliate species to monitor changes in community structure and behavior over hundreds of generations, in dozens of species combinations and across a temperature gradient. Differences in biotic and abiotic conditions caused simplistic classification approaches to be unsuccessful. In contrast, the sliding window approach allowed classification to be highly successful, as phenotypic differences driven by environmental change could be captured by the classifier. Importantly, classification using the random forest algorithm showed comparable success when validated against traditional, slower, manual identification. Our framework allows for reliable classification in dynamic environments, and may help to improve strategies for long-term monitoring of species in changing environments.
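The core idea of the sliding window approach, restricting training data to conditions near the query observation, can be sketched as follows. The paper trains a random forest inside each window; a nearest-centroid classifier stands in here so the sketch stays dependency-free beyond NumPy, and all names are illustrative.

```python
import numpy as np

def sliding_window_classify(train_time, train_X, train_y,
                            query_time, query_x, half_width):
    """Classify one observation using only training samples whose time
    (or environmental covariate) lies within a window around the query.

    Restricting the window keeps environmentally induced phenotypic drift
    from confusing the classifier.
    """
    mask = np.abs(train_time - query_time) <= half_width
    if not mask.any():
        raise ValueError("no training data inside the window")
    X, y = train_X[mask], train_y[mask]
    labels = np.unique(y)
    centroids = np.array([X[y == lab].mean(axis=0) for lab in labels])
    distances = np.linalg.norm(centroids - query_x, axis=1)
    return labels[int(np.argmin(distances))]
```

In the toy test below, both species' features drift over time, so a global (wide-window) classifier mislabels a late observation that a narrow window classifies correctly.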

  12. Classification of hydrocephalus: critical analysis of classification categories and advantages of "Multi-categorical Hydrocephalus Classification" (Mc HC).

    Science.gov (United States)

    Oi, Shizuo

    2011-10-01

    Hydrocephalus is a complex pathophysiology with disturbed cerebrospinal fluid (CSF) circulation. Numerous classification schemes have been published, focusing on various criteria such as associated anomalies/underlying lesions, CSF circulation/intracranial pressure patterns, clinical features, and other categories. However, no definitive classification exists that comprehensively covers all of these aspects. The new classification of hydrocephalus, "Multi-categorical Hydrocephalus Classification" (Mc HC), was invented and developed to cover the entire spectrum of hydrocephalus with all considerable classification items and categories. The ten "Mc HC" categories are I: onset (age, phase), II: cause, III: underlying lesion, IV: symptomatology, V: pathophysiology 1-CSF circulation, VI: pathophysiology 2-ICP dynamics, VII: chronology, VIII: post-shunt, IX: post-endoscopic third ventriculostomy, and X: others. From a 100-year search of publications related to the classification of hydrocephalus, 14 representative publications were reviewed and divided into the 10 categories. The Baumkuchen classification graph made from the round-the-clock classification demonstrated the historical tendency of deviation toward the categories in pathophysiology, either CSF or ICP dynamics. In the preliminary clinical application, it was concluded that "Mc HC" is extremely effective in expressing the individual state with various categories in the past and present condition, or among compatible cases of hydrocephalus, along with possible chronological change in the future.

  13. Testing the McSad depression specific classification system in patients with somatic conditions: validity and performance.

    Science.gov (United States)

    Papageorgiou, Katerina; Vermeulen, Karin M; Schroevers, Maya J; Buskens, Erik; Ranchor, Adelita V

    2013-07-26

    Valuations of depression are useful for evaluating depression interventions offered to patients with chronic somatic conditions. The only classification system developed specifically for valuing depression states is the McSad, but it has not been used among somatic patients. The aim of this study was to test the construct validity of the McSad among diabetes and cancer patients, and then to compare the McSad to the commonly used EuroQol-5 Dimensions (EQ-5D) classification system. The comparison was expected to shed light on their capacity to reflect the range of depression states experienced by somatic patients. Cross-sectional data were collected online from 114 diabetes and 195 cancer patients; additionally, 241 cancer patients completed part of the survey on paper. Correlational analyses were performed to test the construct validity. Specifically, we hypothesized high correlations of the McSad domains with depression (the Center for Epidemiological Studies Depression Scale (CES-D) and the Patient Health Questionnaire (PHQ-9)). We also expected low-to-moderate correlations with self-esteem (Rosenberg Self-Esteem scale, RSE) and extraversion (Eysenck Personality Questionnaire Extraversion scale, EPQ-e). Multiple linear regression analyses were run so that the proportion of variance in depression scores (CES-D, PHQ-9) explained by the McSad could be compared to the proportion explained by the EQ-5D classification system. As expected, among all patient groups we found moderate to high correlations of the McSad domains with the CES-D (.41 to .70) and the PHQ-9 (.52 to .76); we also found low to moderate correlations with the RSE (-.21 to -.48) and the EPQ-e (.18 to .31). Linear regression analyses showed that the McSad explained a greater proportion of variance in depression (CES-D, PHQ-9) (diabetes: 73%, 82%; cancer: 72%, 72%) than the EQ-5D classification system (diabetes: 47%, 59%; cancer: 51%, 47%).
Findings support the construct validity of the McSad.
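The variance-explained comparison above reduces to computing R² for a linear model fitted on each classification system's domain scores. A minimal ordinary-least-squares sketch (illustrative, not the authors' statistical package; the random data in the test are synthetic):

```python
import numpy as np

def r_squared(X, y):
    """Proportion of variance in y explained by a linear model on X.

    R^2 = 1 - SS_residual / SS_total, with an intercept included.
    """
    X = np.column_stack([np.ones(len(X)), X])   # add intercept column
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    ss_res = np.sum(residuals**2)
    ss_tot = np.sum((y - y.mean())**2)
    return 1 - ss_res / ss_tot
```

Fitting the same depression score once on McSad domain scores and once on EQ-5D domain scores, then comparing the two R² values, mirrors the study's comparison (e.g. 73% vs. 47% for diabetes patients on the CES-D).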

  14. Deep learning application: rubbish classification with aid of an android device

    Science.gov (United States)

    Liu, Sijiang; Jiang, Bo; Zhan, Jie

    2017-06-01

    Deep learning is currently a very hot topic in pattern recognition and artificial intelligence research. Aiming at the practical problem that people often do not know which category their rubbish belongs to, and building on the powerful image classification ability of deep learning methods, we have designed a prototype system to help users classify kinds of rubbish. First, the CaffeNet model was adopted for training our classification network on the ImageNet dataset, and the trained network was deployed on a web server. Second, an Android app was developed for users to capture images of unclassified rubbish, upload the images to the web server for back-end analysis, and retrieve the feedback, so that users can conveniently obtain classification guidance on an Android device. Tests on our prototype system for rubbish classification show that an image of a single type of rubbish in its original shape can be used to judge its classification well, while an image containing several kinds of rubbish, or rubbish with a changed shape, may fail to help users decide the rubbish's classification. However, the system still shows a promising auxiliary function for rubbish classification, provided the network training strategy is optimized further.

  15. Classification of childhood epilepsies in a tertiary pediatric neurology clinic using a customized classification scheme from the international league against epilepsy 2010 report.

    Science.gov (United States)

    Khoo, Teik-Beng

    2013-01-01

    In its 2010 report, the International League Against Epilepsy Commission on Classification and Terminology made a number of changes to the organization, terminology, and classification of seizures and epilepsies. This study aims to test the usefulness of this revised classification scheme on children with epilepsies aged 0 to 18 years. Of 527 patients, 75.1% had only 1 type of seizure, and the commonest was focal seizure (61.9%). A specific electroclinical syndrome diagnosis could be made in 27.5%. Only 2.1% had a distinctive constellation. In this cohort, 46.9% had an underlying structural, metabolic, or genetic etiology. Among the important causes were pre-/perinatal insults, malformation of cortical development, intracranial infections, and neurocutaneous syndromes. However, 23.5% of the patients in our cohort were classified as having "epilepsies of unknown cause." The revised classification scheme is generally useful for pediatric patients. To make it more inclusive and clinically meaningful, some local customizations are required.

  16. Phase characteristics of rheograms. Original classification of phase-related changes of rheos

    Directory of Open Access Journals (Sweden)

    Mikhail Y. Rudenko

    2014-05-01

    Full Text Available The phase characteristics of a rheogram are described in the literature only in general terms. The existing theory of impedance rheography is based on an analysis of the shape of rheogram envelopes, not on the phase-related processes and their interpretation according to the applicable laws of physics. The aim of the present paper is to describe the phase-related characteristics of a rheogram of the ascending aorta. The method of heart cycle phase analysis has been used for this purpose. By synchronizing an ECG with a rheogram of the aorta, an analysis of the specific changes in aortic blood filling in each phase is provided. As a result, the phase changes of a rheogram associated with the ECG phase structure are described and tabulated for the first time. The author offers his own original classification of the phase-related changes of rheograms.

  17. Dermal and inhalation acute toxic class methods: test procedures and biometric evaluations for the Globally Harmonized Classification System.

    Science.gov (United States)

    Holzhütter, H G; Genschow, E; Diener, W; Schlede, E

    2003-05-01

    The acute toxic class (ATC) methods were developed for determining LD(50)/LC(50) estimates of chemical substances with significantly fewer animals than needed when applying conventional LD(50)/LC(50) tests. The ATC methods are sequential stepwise procedures with fixed starting doses/concentrations and a maximum of six animals used per dose/concentration. The numbers of dead/moribund animals determine whether further testing is necessary or whether the test is terminated. In recent years we have developed classification procedures for the oral, dermal and inhalation routes of administration by using biometric methods. The biometric approach assumes a probit model for the mortality probability of a single animal and assigns the chemical to that toxicity class for which the best concordance is achieved between the statistically expected and the observed numbers of dead/moribund animals at the various steps of the test procedure. In previous publications we have demonstrated the validity of the biometric ATC methods on the basis of data obtained for the oral ATC method in two-animal ring studies with 15 participants from six countries. Although the test procedures and biometric evaluations for the dermal and inhalation ATC methods have already been published, there was a need for an adaptation of the classification schemes to the starting doses/concentrations of the Globally Harmonized Classification System (GHS) recently adopted by the Organization for Economic Co-operation and Development (OECD). Here we present the biometric evaluation of the dermal and inhalation ATC methods for the starting doses/concentrations of the GHS and of some other international classification systems still in use. 
We have developed new test procedures and decision rules for the dermal and inhalation ATC methods, which require significantly fewer animals to provide predictions of toxicity classes that are equally good or even better than those achieved by using the conventional LD(50)/LC(50) tests.
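A heavily simplified sketch of the biometric idea (a probit mortality model plus a concordance-based class assignment) might look as follows. The probit slope, the one-step decision rule, and the per-class dose values are illustrative assumptions; the actual ATC schemes are sequential multi-step procedures with fixed starting doses.

```python
import numpy as np
from scipy.stats import norm

def mortality_prob(dose, ld50, sigma=0.5):
    """Probit model: probability that a single animal dies at `dose`.
    `sigma` is an assumed probit slope parameter, not a published value."""
    return norm.cdf((np.log10(dose) - np.log10(ld50)) / sigma)

# Representative LD50 values per class (illustrative, oral-route GHS cuts).
CLASS_LD50 = {"GHS 1": 2.5, "GHS 2": 12.5, "GHS 3": 150.0, "GHS 4": 1000.0}

def assign_class(dose, n_animals, n_dead):
    """Pick the class whose statistically expected number of deaths best
    matches the observed count (a one-step concordance rule)."""
    expected = {c: n_animals * mortality_prob(dose, ld50)
                for c, ld50 in CLASS_LD50.items()}
    return min(expected, key=lambda c: abs(expected[c] - n_dead))

# Example: 3 of 6 animals die at 150 mg/kg.
print(assign_class(150.0, 6, 3))
```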

  18. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
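The cross-validated comparison can be reproduced in outline with scikit-learn. The snippet below uses synthetic data in place of the MSI reflectance database, and the feature count, sample size, and subset of classifiers (the four non-ensemble ones) are assumptions made for the illustration.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the MSI data: 6 tissue classes, 8 spectral
# features (illustrative dimensions, not the study's dataset).
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_redundant=0, n_classes=6,
                           n_clusters_per_class=1, random_state=0)

models = {"KNN": KNeighborsClassifier(),
          "DT": DecisionTreeClassifier(random_state=0),
          "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis()}

mean_acc = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV, as in the study
    mean_acc[name] = scores.mean()
    print(f"{name}: mean accuracy {mean_acc[name]:.3f}")
```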

  19. Swimming level classification of young school age children and their success in a long distance swimming test

    OpenAIRE

    Nováková, Martina

    2010-01-01

    Title: Swimming level classification of young school age children and their success in a long distance swimming test. Work objectives: The outcome of our work is a comparison and evaluation of the initial and final swimming length in a long distance swimming test taken during one swimming course. Methodology: Data obtained by testing the selected group of children were statistically processed and show the swimming level and performance of the young school age children. ...

  20. Land use classification and change analysis using ERTS-1 imagery in CARETS

    Science.gov (United States)

    Alexander, R. H.

    1973-01-01

    Land use detail in the CARETS area obtainable from ERTS exceeds the expectations of the Interagency Steering Committee and the USGS proposed standardized classification, which presents Level 1 categories for ERTS and Level 2 for high altitude aircraft data. Some Level 2 and Level 3 categories, in addition to Level 1 categories, were identified on ERTS data. Significant land use changes totaling 39.2 sq km in the Norfolk-Portsmouth SMSA were identified and mapped at Level 2 detail using a combination of procedures employing ERTS and high altitude aircraft data.

  1. 34 CFR 222.8 - What action must an applicant take upon a change in its boundary, classification, control...

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false What action must an applicant take upon a change in its boundary, classification, control, governing authority, or identity? 222.8 Section 222.8 Education..., DEPARTMENT OF EDUCATION IMPACT AID PROGRAMS General § 222.8 What action must an applicant take upon a change...

  2. Can the Ni classification of vessels predict neoplasia? A systematic review and meta-analysis.

    Science.gov (United States)

    Mehlum, Camilla S; Rosenberg, Tine; Dyrvig, Anne-Kirstine; Groentved, Aagot Moeller; Kjaergaard, Thomas; Godballe, Christian

    2018-01-01

    The Ni classification of vascular change from 2011 is well documented for evaluating pharyngeal and laryngeal lesions, primarily focusing on cancer. In the planning of surgery it may be more relevant to differentiate neoplasia from non-neoplasia. We aimed to evaluate the ability of the Ni classification to predict laryngeal or hypopharyngeal neoplasia and to investigate if a changed cutoff value would support the recent European Laryngological Society (ELS) proposal of perpendicular vascular changes as indicative of neoplasia. PubMed, Embase, Cochrane, and Scopus databases. A systematic review and meta-analysis was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement. We systematically searched for publications from 2011 until 2016. All retrieved studies were reviewed and qualitatively assessed. The pooled sensitivity and specificity of the Ni classification with two different cutoffs were calculated, and bubble and summary receiver operating characteristics plots were created. The combined sensitivity of five studies (n = 687) with Ni type IV-V defined as test-positive was 0.89 (95% confidence interval [CI]: 0.76-0.95), and specificity was 0.82 (95% CI: 0.72-0.89). The equivalent combined sensitivity of four studies (n = 624) with Ni type V defined as test-positive was 0.82 (95% CI: 0.75-0.87), and specificity was 0.93 (95% CI: 0.82-0.97). The diagnostic accuracy of the Ni classification in predicting neoplasia was high, without significant difference between the two analyzed cutoff values. Implementation of the proposed ELS classification of vascular changes seems reasonable from a clinical perspective, with comparable accuracy. Attention must be drawn to the accompanying risk of exposing patients to unnecessary surgery. Laryngoscope, 128:168-176, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Bio-geographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-03-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the Hierarchical Agglomerative Clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow near-shore waters from those off-shore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from river, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (Kruskal-Wallis rank test, P < 0.05), reflecting differences in phytoplankton phenology between ecoregions. A first qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrates also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
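The two-step procedure (data reduction, then agglomerative clustering into ecoregions) can be sketched with scikit-learn. In this sketch KMeans stands in for the SOM stage, and the input cells are random placeholders, so the variable names are the only link to the study.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical grid cells x 6 physical variables (SST, bathymetry, ice,
# DSSS, TSM, DTSM); synthetic values, for illustration only.
cells = StandardScaler().fit_transform(rng.normal(size=(500, 6)))

# Step (i): data reduction. KMeans stands in for the SOM used in the
# paper, compressing the cells to a small set of prototype nodes.
nodes = KMeans(n_clusters=50, n_init=10, random_state=1).fit(cells)

# Step (ii): hierarchical agglomerative clustering of the prototypes
# into a reduced number of ecoregions.
hac = AgglomerativeClustering(n_clusters=4).fit(nodes.cluster_centers_)

# Map each cell to its ecoregion via its nearest prototype node.
ecoregion = hac.labels_[nodes.labels_]
print(np.bincount(ecoregion))  # cells per ecoregion
```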

  4. Improving Hyperspectral Image Classification Method for Fine Land Use Assessment Application Using Semisupervised Machine Learning

    Directory of Open Access Journals (Sweden)

    Chunyang Wang

    2015-01-01

    Full Text Available Studies of land use/cover can reflect changing patterns in population, economy, agricultural structure adjustment, policy, and traffic, and can better serve regional economic development and urban evolution. Fine land use/cover assessment using hyperspectral image classification is a growing focus in many fields. Semisupervised learning, which exploits a large number of unlabeled samples together with a minority of labeled samples to improve classification and prediction accuracy effectively, has become a new research direction. In this paper, we propose improving fine land use/cover assessment with a semisupervised hyperspectral classification method. Test analysis of the study area showed that the semisupervised classification method can improve overall classification precision and the objective assessment of land use/cover results.
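Semisupervised classification of this kind can be sketched with scikit-learn's self-training wrapper. The data below are synthetic stand-ins for hyperspectral pixels, and the 90% unlabeled fraction and SVC base learner are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Synthetic stand-in for hyperspectral pixels: many unlabeled samples,
# few labeled ones (the situation the paper targets).
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)
y_semi = y.copy()
mask_unlabeled = np.random.default_rng(0).random(len(y)) < 0.9
y_semi[mask_unlabeled] = -1  # -1 marks an unlabeled sample

# Self-training: the base classifier iteratively labels the unlabeled
# samples it is most confident about and retrains on them.
base = SVC(probability=True, random_state=0)
model = SelfTrainingClassifier(base).fit(X, y_semi)
print(f"labeled: {(~mask_unlabeled).sum()}, "
      f"accuracy on all samples: {model.score(X, y):.3f}")
```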

  5. Classification of parotidectomy: a proposed modification to the European Salivary Gland Society classification system.

    Science.gov (United States)

    Wong, Wai Keat; Shetty, Subhaschandra

    2017-08-01

    Parotidectomy remains the mainstay of treatment for both benign and malignant lesions of the parotid gland. There exists a wide range of possible surgical options in parotidectomy in terms of the extent of parotid tissue removed. Growing interest in modifications of the conventional parotidectomy has created an increasing need for uniformity of terminology. A standardized classification system for describing the extent of parotidectomy is therefore of paramount importance. Recently, the European Salivary Gland Society (ESGS) proposed a novel classification system for parotidectomy. The aim of this study is to evaluate this system. The classification system proposed by the ESGS was critically re-evaluated and modified to increase its accuracy and acceptability. Modifications mainly focused on subdividing Levels I and II into IA, IB, IIA, and IIB. From June 2006 to June 2016, 126 patients underwent 130 parotidectomies at our hospital. The classification system was tested in that cohort of patients. While the ESGS classification system is comprehensive, it does not cover all possibilities. The addition of Sublevels IA, IB, IIA, and IIB may help to address some of the clinical situations seen and is clinically relevant. We aim to test the modified classification system for partial parotidectomy to address some of the challenges mentioned.

  6. Safety quality classification test of the sealed neutron sources used in start-up neutron source rods for Qinshan Nuclear Power Plant

    International Nuclear Information System (INIS)

    Yao Chunbing; Guo Gang; Chao Jinglan; Duan Liming

    1992-01-01

    According to the regulations listed in GB4075, the safety quality classification tests have been carried out for the neutron sources. The test items include temperature, external pressure, impact, vibration and puncture. Two dummy sealed sources were used for each test item. The testing equipment used has been examined and verified as qualified by the measuring department, which is accredited by the National Standard Bureau. The leak rate of each tested sample is measured by a UL-100 Helium Leak Detector (minimum detectable leak rate 1 × 10⁻¹⁰ Pa·m³·s⁻¹). Samples with a leak rate less than 1.33 × 10⁻⁸ Pa·m³·s⁻¹ are considered up to the standard. The test results show that the safety quality classification of the neutron sources has reached the class of GB/E66545, which exceeds the preset class

  7. Classification by a neural network approach applied to non destructive testing

    International Nuclear Information System (INIS)

    Lefevre, M.; Preteux, F.; Lavayssiere, B.

    1995-01-01

    Radiography is used by EDF for pipe inspection in nuclear power plants in order to detect defects. The radiographs obtained are then digitized in a well-defined protocol. The aim of EDF is to develop a non-destructive testing system for recognizing defects. In this paper, we describe the recognition procedure for areas with defects. We first present the digitization protocol, discuss the poor quality of the images under study, and propose a procedure to enhance defects. We then examine the problem raised by the choice of good features for classification. After having shown that statistical or standard textural features such as homogeneity, entropy or contrast are not relevant, we develop a geometrical-statistical approach based on the cooperation between a study of signal correlations and an analysis of regional extrema. The principle consists of analysing and comparing, for areas with and without defects, the evolution of conditional probability matrices for increasing neighborhood sizes, the shape of variograms and the location of regional minima. We demonstrate that the anisotropy and surface of the series of 'comet tails' associated with probability matrices, variogram slopes and statistical indices, and the location of regional extrema are features able to discriminate areas with defects from areas without any. The classification is then realized by a neural network, whose structure, properties and learning mechanisms are detailed. Finally we discuss the results. (authors). 21 refs., 5 figs
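The final classification stage (feature vector in, defect/no-defect out) can be illustrated with a small neural network. The feature names follow the abstract, but the values, class means, and network shape are invented for the sketch and do not reproduce the authors' system.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Hypothetical feature vectors per image region: anisotropy, "comet tail"
# surface, variogram slope, regional-extrema count (synthetic values).
n = 400
defect = rng.normal(loc=[2.0, 5.0, 1.5, 8.0], scale=1.0, size=(n // 2, 4))
sound = rng.normal(loc=[0.5, 1.0, 0.3, 2.0], scale=1.0, size=(n // 2, 4))
X = np.vstack([defect, sound])
y = np.array([1] * (n // 2) + [0] * (n // 2))  # 1 = area with a defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {net.score(X_te, y_te):.2f}")
```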

  8. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
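The cluster-then-classify idea can be sketched as follows. The six-document corpus and the suspicious/legitimate labels are toy stand-ins invented for the example, not the Reuters-21578 or 20 Newsgroups data used in the paper.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus standing in for the email data (1 = suspicious, 0 = legitimate).
docs = ["urgent wire transfer money to this account",
        "transfer the funds now urgent wire",
        "monthly bank statement for the transfer account",
        "meeting agenda for the monday project",
        "project deadline notes for the monday meeting",
        "send your password before the monday meeting"]
labels = [1, 1, 0, 0, 0, 1]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# Step 1: cluster the training examples without using their labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Step 2: train one small classifier per cluster on that cluster's examples.
per_cluster = {}
for c in set(km.labels_):
    rows = [i for i, lab in enumerate(km.labels_) if lab == c]
    per_cluster[c] = MultinomialNB().fit(X[rows], [labels[i] for i in rows])

# A test example is routed to its nearest cluster's (smaller) classifier.
def classify(text):
    v = vec.transform([text])
    return int(per_cluster[int(km.predict(v)[0])].predict(v)[0])

print(classify("wire the money urgent now"))
```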

  9. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    Science.gov (United States)

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  10. A comparison of change detection measurements using object-based and pixel-based classification methods on western juniper dominated woodlands in eastern Oregon

    Directory of Open Access Journals (Sweden)

    Ryan G. Howell

    2017-03-01

    Full Text Available Encroachment of pinyon (Pinus spp.) and juniper (Juniperus spp.) woodlands in western North America is considered detrimental due to its effects on ecohydrology, plant community structure, and soil stability. Management plans at the federal, state, and private level often include juniper removal to improve habitat of sensitive species and maintain sustainable ecosystem processes. Remote sensing has become a useful tool for determining changes in juniper woodland structure because it allows archived historic imagery to be compared with newly available multispectral images, providing information on changes that are no longer detectable by field measurements. Change in western juniper (J. occidentalis) cover following juniper removal treatments was detected between 1995 and 2011 using panchromatic 1-meter NAIP and 4-band 1-meter NAIP imagery, respectively. Image classification was conducted using remotely sensed images taken at the Roaring Springs Ranch in southeastern Oregon. Feature Analyst for ArcGIS (object-based extraction) and a supervised classification with ENVI 5.2 (pixel-based extraction) were used to delineate juniper canopy cover. Image classification accuracy was calculated using an accuracy assessment and the Kappa statistic. Both methods showed approximately a 76% decrease in western juniper cover, although they differed in total canopy cover area, with object-based classification being more accurate. Classification results for the 2011 imagery were much more accurate (0.99 Kappa statistic) because of its low juniper density and the presence of an infrared band. The development of methods for detecting change in juniper cover can lead to more accurate and efficient data acquisition and subsequently improved land management and monitoring practices. These data can subsequently be used to assess and quantify juniper invasion and succession, potential ecological impacts, and plant community resilience.
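The accuracy assessment step can be sketched with scikit-learn. The reference (ground-truth) and classified label lists below are hypothetical pixel samples, not the study's data.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical labels for sampled pixels: 1 = juniper canopy, 0 = other.
reference  = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
classified = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1]

# Confusion matrix, overall accuracy, and chance-corrected agreement.
print(confusion_matrix(reference, classified))
overall = sum(r == c for r, c in zip(reference, classified)) / len(reference)
kappa = cohen_kappa_score(reference, classified)
print(f"overall accuracy: {overall:.2f}, kappa: {kappa:.2f}")
```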

  11. Algorithms and data structures for automated change detection and classification of sidescan sonar imagery

    Science.gov (United States)

    Gendron, Marlin Lee

    During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office, to reduce the amount of time to perform change detection, are currently using ACDC. The dissertation introductory chapter gives background information on change detection, ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3--48.4x faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features. A comparison between the CAS and a great circle distance algorithm shows that the
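The core idea of storing detections geographically in a bitmap can be illustrated in a few lines. This sketch (a fixed-resolution grid over a lat/lon extent, bits packed in a byte array) is a simplification for exposition, not the dissertation's actual GB implementation.

```python
class GeoBitmap:
    """Illustrative geospatial bitmap: one bit per grid cell."""

    def __init__(self, lat0, lon0, lat1, lon1, rows, cols):
        self.lat0, self.lon0 = lat0, lon0
        self.dlat = (lat1 - lat0) / rows
        self.dlon = (lon1 - lon0) / cols
        self.rows, self.cols = rows, cols
        self.bits = bytearray((rows * cols + 7) // 8)

    def _index(self, lat, lon):
        # Map a coordinate to its cell's flat bit index.
        r = int((lat - self.lat0) / self.dlat)
        c = int((lon - self.lon0) / self.dlon)
        return r * self.cols + c

    def set(self, lat, lon):
        i = self._index(lat, lon)
        self.bits[i // 8] |= 1 << (i % 8)

    def test(self, lat, lon):
        i = self._index(lat, lon)
        return bool(self.bits[i // 8] & (1 << (i % 8)))

# Mark a detected contact and query two locations.
gb = GeoBitmap(30.0, -89.0, 31.0, -88.0, rows=1000, cols=1000)
gb.set(30.5004, -88.5004)
print(gb.test(30.5004, -88.5004), gb.test(30.9, -88.1))
```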

  12. The development of rock suitability classification strategies in the Finnish spent nuclear fuel disposal program

    International Nuclear Information System (INIS)

    Hellae, Pirjo; Hagros, Annika; Aaltonen, Ismo; Kosunen, Paula; Mattila, Jussi

    2015-01-01

    This paper describes the development of the rock suitability classification strategies applied to locate the spent fuel repository in crystalline rock in Finland. Development of the classification procedure is motivated not only by the regulatory requirements, but also by the need to more closely integrate site characterization, repository design and long-term safety assessment. The classification procedure has been developed along with the increasing level of detail of the available site data and knowledge on the performance of the engineered barrier system (EBS). The classification system has also been adapted to the changes in the regulations. The present form of the classification system and experiences from testing the system at the site are described. Demonstration activities have shown that the criteria and the stepwise research, construction and decision making protocol can be applied successfully.

  13. The development of rock suitability classification strategies in the Finnish spent nuclear fuel disposal program

    Energy Technology Data Exchange (ETDEWEB)

    Hellae, Pirjo; Hagros, Annika [Saanio and Riekkola Oy (Finland); Aaltonen, Ismo; Kosunen, Paula; Mattila, Jussi [Posiva Oy (Finland)

    2015-07-01

    This paper describes the development of the rock suitability classification strategies applied to locate the spent fuel repository in crystalline rock in Finland. Development of the classification procedure is motivated not only by the regulatory requirements, but also by the need to more closely integrate site characterization, repository design and long-term safety assessment. The classification procedure has been developed along with the increasing level of detail of the available site data and knowledge on the performance of the engineered barrier system (EBS). The classification system has also been adapted to the changes in the regulations. The present form of the classification system and experiences from testing the system at the site are described. Demonstration activities have shown that the criteria and the stepwise research, construction and decision making protocol can be applied successfully.

  14. The Functional Classification and Field Test Performance in Wheelchair Basketball Players

    Directory of Open Access Journals (Sweden)

    Gil Susana María

    2015-06-01

    Full Text Available Wheelchair basketball players are classified into four classes based on the International Wheelchair Basketball Federation (IWBF) system of competition. Thus, the aim of the study was to ascertain whether the IWBF classification, the type of injury and wheelchair experience were related to different performance field-based tests. Thirteen basketball players undertook anthropometric measurements and performance tests (hand dynamometry, 5 m and 20 m sprints, 5 m and 20 m sprints with a ball, a T-test, a Pick-up test, a modified 10 m Yo-Yo intermittent recovery test, a maximal pass and a medicine ball throw). The IWBF class was correlated (p<0.05) with hand dynamometry (r=0.84), the maximal pass (r=0.67) and the medicine ball throw (r=0.67), whereas the years of dependence on the wheelchair were correlated (p<0.01) with the velocity tests, 5 m (r=−0.80) and 20 m (r=−0.77), and the agility test (r=−0.77). Also, the 20 m sprint with a ball (r=0.68) and the T-test (r=−0.57) correlated (p<0.05) with experience in playing wheelchair basketball. Therefore, in this team the correlations of the performance variables differed depending on whether they were related to the disability class, the years of dependence on the wheelchair or the experience in playing wheelchair basketball. These results should be taken into account by the technical staff and coaches of teams when assessing the performance of wheelchair basketball players.

  15. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
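The expected-classification idea can be sketched with a simple normal measurement-error model. The paper's procedure is based on item response theory, so the cut score, the standard error of measurement, and the normality assumption here are simplifications for illustration.

```python
from scipy.stats import norm

cut = 60   # hypothetical pass/fail cut score
sem = 4.0  # hypothetical standard error of measurement

def p_observed_pass(true_score):
    """P(observed score >= cut | true score), observed ~ N(true, sem)."""
    return 1 - norm.cdf(cut, loc=true_score, scale=sem)

# Expected classification for examinees at several true scores: those
# near the cut have a substantial chance of being misclassified.
for t in (50, 55, 58, 60, 62, 65, 70):
    side = "fail" if t < cut else "pass"
    print(f"true={t} ({side}): P(classified as pass) = {p_observed_pass(t):.3f}")
```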

  16. Differences in physical-fitness test scores between actively and passively recruited older adults : Consequences for norm-based classification

    NARCIS (Netherlands)

    van Heuvelen, M.J.G.; Stevens, M.; Kempen, G.I.J.M.

    This study investigated differences in physical-fitness test scores between actively and passively recruited older adults and the consequences thereof for norm-based classification of individuals. Walking endurance, grip strength, hip flexibility, balance, manual dexterity, and reaction time were

  17. Mapping of the Universe of Knowledge in Different Classification Schemes

    Directory of Open Access Journals (Sweden)

    M. P. Satija

    2017-06-01

    Full Text Available Given the variety of approaches to mapping the universe of knowledge that have been presented and discussed in the literature, the purpose of this paper is to systematize their main principles and their applications in the major general modern library classification schemes. We conducted an analysis of the literature on classification and the main classification systems, namely Dewey/Universal Decimal Classification, Cutter’s Expansive Classification, Subject Classification of J.D. Brown, Colon Classification, Library of Congress Classification, Bibliographic Classification, Rider’s International Classification, Bibliothecal Bibliographic Klassification (BBK, and Broad System of Ordering (BSO. We conclude that the arrangement of the main classes can be done following four principles that are not mutually exclusive: ideological principle, social purpose principle, scientific order, and division by discipline. The paper provides examples and analysis of each system. We also conclude that as knowledge is ever-changing, classifications also change and present a different structure of knowledge depending upon the society and time of their design.

  18. GED Test Changes and Attainment: Overview of 2014 GED Test Changes and Attainment in Washington State

    Science.gov (United States)

    Larson, Kara; Gaeta, Cristina; Sager, Lou

    2016-01-01

    In January 2014, the GED Testing Service significantly redesigned the GED test to incorporate the Common Core State Standards and the College and Career Readiness Standards for Adult Education. The purpose of this study was to examine the significant changes made to the test in 2014, examine the impact of the changes on Washingtonians, and make…

  19. 7 CFR 28.910 - Classification of samples and issuance of classification data.

    Science.gov (United States)

    2010-01-01

    ... MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Cotton Classification...

  20. Bridging interest, classification and technology gaps in the climate change regime

    International Nuclear Information System (INIS)

    Gupta, J.; Van der Werff, P.; Gagnon-Lebrun, F.; Van Dijk, I.; Verspeek, F.; Arkesteijn, E.; Van der Meer, J.

    2002-01-01

    The climate change regime is affected by a major credibility gap; there is a gap between what countries have been stating that they are willing to do and what they actually do. This is visible not just in the inability of the developed countries to stabilise their emissions at 1990 levels by the year 2000 as provided for in the United Nations Framework Convention on Climate Change (FCCC), but by the general reluctance of all countries to ratify the Kyoto Protocol to the Convention (KPFCCC). This research postulates that this credibility gap is affected further by three other types of gaps: 1) the interest gap; 2) the classification gap; and 3) the technology gap. The purpose of this research is thus to identify ways and means to promote industrial transformation in developing countries as a method to address the climate change problem. The title of this project is: Bridging Gaps - Enhancing Domestic and International Technological Collaboration to Enable the Adoption of Climate Relevant Technologies and Practices (CT and Ps) and thereby Foster Participation and Implementation of the Climate Convention (FCCC) by Developing Countries (DCs). In order to enhance technology co-operation, we believe that graduation profiles are needed at the international level and stakeholder involvement at both the national and international levels. refs

  1. Biogeographic classification of the Caspian Sea

    DEFF Research Database (Denmark)

    Fendereski, F.; Vogt, M.; Payne, Mark

    2014-01-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions...... in phytoplankton phenology, with differences in the date of bloom initiation, its duration and amplitude between ecoregions. A first qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrate also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.

  2. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  3. Trends and concepts in fern classification.

    Science.gov (United States)

    Christenhusz, Maarten J M; Chase, Mark W

    2014-03-01

    Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called 'fern allies' (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is sister to all other vascular plants, whereas

  4. Robust tissue classification for reproducible wound assessment in telemedicine environments

    Science.gov (United States)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple free-handled digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting, viewpoint, and camera changes, achieving accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3%, against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.
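The segmentation-driven SVM classification the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the tissue classes, the mean-HSV region descriptors, and the RBF-kernel settings are all assumptions made for the example.

```python
# Hypothetical sketch of segmentation-driven tissue classification with an SVM.
# Region descriptors (mean HSV color per segmented region) and the tissue
# labels are illustrative; the paper's actual features are not reproduced.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy training data: mean HSV color of segmented regions, one row per region.
# Classes: 0 = granulation (reddish), 1 = slough (yellowish), 2 = necrosis (dark).
granulation = rng.normal([0.98, 0.70, 0.60], 0.05, size=(40, 3))
slough      = rng.normal([0.15, 0.60, 0.70], 0.05, size=(40, 3))
necrosis    = rng.normal([0.05, 0.30, 0.15], 0.05, size=(40, 3))

X = np.vstack([granulation, slough, necrosis])
y = np.repeat([0, 1, 2], 40)

# Scaling + RBF-kernel SVM: each segmented region gets one tissue label.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y)

# Classify two unseen region descriptors.
print(clf.predict([[0.97, 0.68, 0.62],   # reddish -> granulation
                   [0.06, 0.28, 0.14]])) # dark    -> necrosis
```

In the paper's pipeline each descriptor would come from a color-corrected, segmented wound image rather than synthetic data.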

  5. Bosniak Classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens

    2014-01-01

    Background: The Bosniak classification is a diagnostic tool for the differentiation of cystic changes in the kidney. The process of categorizing renal cysts may be challenging, involving a series of decisions that may affect the final diagnosis and clinical outcome such as surgical management....... Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as “Gold Standard” (GS), established...... to the calculated weighted κ all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts categorized as Bosniak II, IIF, and III. These results show that radiologists who evaluate complex renal cysts routinely may apply the Bosniak classification...

  6. SPITZER IRS SPECTRA OF LUMINOUS 8 μm SOURCES IN THE LARGE MAGELLANIC CLOUD: TESTING COLOR-BASED CLASSIFICATIONS

    International Nuclear Information System (INIS)

    Buchanan, Catherine L.; Kastner, Joel H.; Hrivnak, Bruce J.; Sahai, Raghvendra

    2009-01-01

    We present archival Spitzer Infrared Spectrograph (IRS) spectra of 19 luminous 8 μm selected sources in the Large Magellanic Cloud (LMC). The object classes derived from these spectra and from an additional 24 spectra in the literature are compared with classifications based on Two Micron All Sky Survey (2MASS)/MSX (J, H, K, and 8 μm) colors in order to test the 'JHK8' (Kastner et al.) classification scheme. The IRS spectra confirm the classifications of 22 of the 31 sources that can be classified under the JHK8 system. The spectroscopic classification of 12 objects that were unclassifiable in the JHK8 scheme allows us to characterize regions of the color-color diagrams that previously lacked spectroscopic verification, enabling refinements to the JHK8 classification system. The results of these new classifications are consistent with previous results concerning the identification of the most infrared-luminous objects in the LMC. In particular, while the IRS spectra reveal several new examples of asymptotic giant branch (AGB) stars with O-rich envelopes, such objects are still far outnumbered by carbon stars (C-rich AGB stars). We show that Spitzer IRAC/MIPS color-color diagrams provide improved discrimination between red supergiants and oxygen-rich and carbon-rich AGB stars relative to those based on 2MASS/MSX colors. These diagrams will enable the most luminous IR sources in Local Group galaxies to be classified with high confidence based on their Spitzer colors. Such characterizations of stellar populations will continue to be possible during Spitzer's warm mission through the use of IRAC [3.6]-[4.5] and 2MASS colors.

  7. Seizure classification in EEG signals utilizing Hilbert-Huang transform

    Directory of Open Access Journals (Sweden)

    Abdulhay Enas W

    2011-05-01

    Full Text Available Abstract Background Classification methods capable of recognizing abnormal activity of brain functionality are based on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Method Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and the frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. Methods of comparison used are the t-test and Euclidean clustering. Results The t-test results in a P-value Conclusion An original tool for EEG signal processing, giving physicians the possibility to diagnose brain functionality abnormalities, is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving, and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of this indicates the usefulness of the tool and its use as an efficient diagnostic tool.

  8. Seizure classification in EEG signals utilizing Hilbert-Huang transform.

    Science.gov (United States)

    Oweis, Rami J; Abdulhay, Enas W

    2011-05-24

    Classification methods capable of recognizing abnormal activity of brain functionality are based on either brain imaging or brain signal analysis. The abnormal activity of interest in this study is characterized by a disturbance caused by changes in neuronal electrochemical activity that results in abnormal synchronous discharges. The method aims at helping physicians discriminate between healthy and seizure electroencephalographic (EEG) signals. Discrimination in this work is achieved by analyzing EEG signals obtained from freely accessible databases. MATLAB has been used to implement and test the proposed classification algorithm. The analysis in question presents a classification of normal and ictal activities using a feature based on the Hilbert-Huang Transform. Through this method, information related to the intrinsic functions contained in the EEG signal has been extracted to track the local amplitude and the frequency of the signal. Based on this local information, weighted frequencies are calculated and a comparison between ictal and seizure-free determinant intrinsic functions is then performed. Methods of comparison used are the t-test and Euclidean clustering. The t-test results in a P-value with respect to its fast response and ease to use. An original tool for EEG signal processing, giving physicians the possibility to diagnose brain functionality abnormalities, is presented in this paper. The proposed system bears the potential of providing several credible benefits such as fast diagnosis, high accuracy, good sensitivity and specificity, time saving, and user friendliness. Furthermore, the classification of mode mixing can be achieved using the extracted instantaneous information of every IMF, but it would most likely be a hard task if only the average value is used. Extra benefits of this proposed system include low cost and ease of interface. All of this indicates the usefulness of the tool and its use as an efficient diagnostic tool.
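The amplitude- and frequency-tracking step that both versions of this abstract describe can be sketched with the analytic signal. The EMD stage is omitted here for brevity (a pure narrow-band tone stands in for one IMF), and the squared-amplitude weighting is an illustrative choice, not necessarily the authors' exact formula.

```python
# Sketch of the instantaneous-frequency feature described above. The empirical
# mode decomposition (EMD) step is omitted: a single narrow-band component
# stands in for one intrinsic mode function (IMF). Names and the exact
# weighting are illustrative, not the authors' implementation.
import numpy as np
from scipy.signal import hilbert

fs = 256.0                           # sampling rate (Hz), typical for EEG
t = np.arange(0, 4.0, 1.0 / fs)
imf = np.sin(2 * np.pi * 10.0 * t)   # stand-in IMF: a 10 Hz oscillation

# Analytic signal -> local amplitude and local (instantaneous) frequency.
analytic = hilbert(imf)
amplitude = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, one sample shorter

# Amplitude-weighted mean frequency: one scalar feature per IMF.
weights = amplitude[:-1] ** 2
weighted_freq = np.sum(weights * inst_freq) / np.sum(weights)
print(round(weighted_freq, 1))   # close to the 10 Hz carrier
```

In practice each EEG segment would first be decomposed into IMFs, and a weighted frequency computed per IMF would serve as the discriminating feature between ictal and seizure-free recordings.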

  9. Correlation of the New York Heart Association classification and the cardiopulmonary exercise test: A systematic review.

    Science.gov (United States)

    Lim, Fang Yi; Yap, Jonathan; Gao, Fei; Teo, Ling Li; Lam, Carolyn S P; Yeo, Khung Keong

    2018-07-15

    The New York Heart Association (NYHA) classification is frequently used in the management of heart failure but may be limited by patient and physician subjectivity. Cardiopulmonary exercise testing (CPET) provides a potentially more objective measurement of functional status. We aim to study the correlation between NYHA classification and peak oxygen consumption (pVO2) on Cardiopulmonary Exercise Testing (CPET) within and across published studies. A systematic literature review on all studies reporting both NYHA class and CPET data was performed, and pVO2 from CPET was correlated to reported NYHA class within and across eligible studies. 38 studies involving 2645 patients were eligible. Heterogeneity was assessed using the Q statistic, a χ2 test that marks systematic differences between studies. Within each NYHA class, significant heterogeneity in pVO2 was seen across studies: NYHA I (n = 17, Q = 486.7, p < 0.0001), II (n = 24, Q = 381.0, p < 0.0001), III (n = 32, Q = 761.3, p < 0.0001) and IV (n = 5, Q = 12.8, p = 0.012). Significant differences in mean pVO2 were observed between NYHA I and II (23.8 vs 17.6 mL/(kg·min), p < 0.0001) and II and III (17.6 vs 13.3 mL/(kg·min), p < 0.0001); but not between NYHA III and IV (13.3 vs 12.5 mL/(kg·min), p = 0.45). These differences remained significant after adjusting for age, gender, ejection fraction and region of study. There was a general inverse correlation between NYHA class and pVO2. However, significant heterogeneity in pVO2 exists across studies within each NYHA class. While the NYHA classification holds clinical value in heart failure management, direct comparison across studies may have its limitations. Copyright © 2018 Elsevier B.V. All rights reserved.
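The Q statistic used above to quantify between-study heterogeneity is straightforward to compute. A minimal sketch, with invented study means and standard errors standing in for the review's extracted data:

```python
# Minimal sketch of Cochran's Q test for between-study heterogeneity, the
# statistic the review above uses to compare pVO2 within each NYHA class.
# Study means and standard errors here are invented for illustration.
import numpy as np
from scipy.stats import chi2

# Hypothetical per-study mean pVO2 (mL/(kg*min)) and standard errors.
means = np.array([18.2, 16.9, 21.4, 15.8, 17.1])
ses   = np.array([0.6, 0.8, 0.5, 0.9, 0.7])

w = 1.0 / ses**2                        # inverse-variance weights
pooled = np.sum(w * means) / np.sum(w)  # fixed-effect pooled mean
Q = np.sum(w * (means - pooled)**2)     # Cochran's Q
df = len(means) - 1
p = chi2.sf(Q, df)                      # Q ~ chi-square(df) under homogeneity

print(f"Q = {Q:.1f}, df = {df}, p = {p:.2g}")
```

Under homogeneity Q follows a χ2 distribution with k − 1 degrees of freedom, so a large Q (small p) indicates systematic differences between studies, as reported within each NYHA class.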

  10. Falls classification using tri-axial accelerometers during the five-times-sit-to-stand test.

    Science.gov (United States)

    Doheny, Emer P; Walsh, Cathal; Foran, Timothy; Greene, Barry R; Fan, Chie Wei; Cunningham, Clodagh; Kenny, Rose Anne

    2013-09-01

    The five-times-sit-to-stand test (FTSS) is an established assessment of lower limb strength, balance dysfunction and falls risk. Clinically, the time taken to complete the task is recorded with longer times indicating increased falls risk. Quantifying the movement using tri-axial accelerometers may provide a more objective and potentially more accurate falls risk estimate. 39 older adults, 19 with a history of falls, performed four repetitions of the FTSS in their homes. A tri-axial accelerometer was attached to the lateral thigh and used to identify each sit-stand-sit phase and sit-stand and stand-sit transitions. A second tri-axial accelerometer, attached to the sternum, captured torso acceleration. The mean and variation of the root-mean-squared amplitude, jerk and spectral edge frequency of the acceleration during each section of the assessment were examined. The test-retest reliability of each feature was examined using intra-class correlation analysis, ICC(2,k). A model was developed to classify participants according to falls status. Only features with ICC>0.7 were considered during feature selection. Sequential forward feature selection within leave-one-out cross-validation resulted in a model including four reliable accelerometer-derived features, providing 74.4% classification accuracy, 80.0% specificity and 68.7% sensitivity. An alternative model using FTSS time alone resulted in significantly reduced classification performance. Results suggest that the described methodology could provide a robust and accurate falls risk assessment. Copyright © 2013 Elsevier B.V. All rights reserved.
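The three signal features named in the abstract (root-mean-squared amplitude, jerk, and spectral edge frequency) can be sketched for a single accelerometer axis as follows. The sampling rate, the 95% edge threshold, and the synthetic signal are illustrative assumptions, not the study's parameters.

```python
# Sketch of the three accelerometer features named above -- RMS amplitude,
# jerk, and spectral edge frequency -- for one axis of a sit-to-stand phase.
import numpy as np

fs = 100.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
acc = 0.5 * np.sin(2 * np.pi * 1.5 * t) + 0.02 * rng.standard_normal(t.size)

# Root-mean-squared amplitude of the (mean-removed) acceleration.
acc0 = acc - acc.mean()
rms = np.sqrt(np.mean(acc0**2))

# Mean absolute jerk: first derivative of acceleration.
jerk = np.mean(np.abs(np.diff(acc) * fs))

# Spectral edge frequency: frequency below which 95% of the power lies.
freqs = np.fft.rfftfreq(acc0.size, d=1.0 / fs)
power = np.abs(np.fft.rfft(acc0))**2
cum = np.cumsum(power) / np.sum(power)
sef95 = freqs[np.searchsorted(cum, 0.95)]

print(f"RMS = {rms:.3f} g, jerk = {jerk:.2f} g/s, SEF95 = {sef95:.1f} Hz")
```

In the study, the mean and variation of such features over the repeated sit-stand-sit phases, filtered for test-retest reliability (ICC > 0.7), fed the falls classifier.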

  11. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    International Nuclear Information System (INIS)

    Pichara, Karim; Protopapas, Pavlos

    2013-01-01

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same

  12. AUTOMATIC CLASSIFICATION OF VARIABLE STARS IN CATALOGS WITH MISSING DATA

    Energy Technology Data Exchange (ETDEWEB)

    Pichara, Karim [Computer Science Department, Pontificia Universidad Católica de Chile, Santiago (Chile); Protopapas, Pavlos [Institute for Applied Computational Science, Harvard University, Cambridge, MA (United States)

    2013-11-10

    We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks and a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilizes sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, Two Micron All Sky Survey, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent and by 15% for quasar detection while keeping the computational cost the same.
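The full Bayesian-network-plus-EM machinery of the paper is involved, but the central idea (inference over observed features when others are missing) can be illustrated with a much simpler stand-in: a Gaussian naive Bayes classifier that marginalizes out missing features, which for conditionally independent features amounts to dropping their likelihood terms.

```python
# Simplified stand-in for the paper's Bayesian-network approach: a Gaussian
# naive Bayes classifier where NaN (missing) features are marginalized out,
# i.e. their likelihood terms are simply skipped.
import numpy as np

def fit_gnb(X, y):
    """Per-class feature means/variances and priors (complete training data)."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_with_missing(params, x):
    """Classify one sample; NaN entries are marginalized out (skipped)."""
    best, best_lp = None, -np.inf
    for c, (mu, var, prior) in params.items():
        obs = ~np.isnan(x)                      # observed features only
        lp = np.log(prior) - 0.5 * np.sum(
            np.log(2 * np.pi * var[obs]) + (x[obs] - mu[obs])**2 / var[obs])
        if lp > best_lp:
            best, best_lp = c, lp
    return best

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)), rng.normal([4, 4], 1, (50, 2))])
y = np.repeat([0, 1], 50)
params = fit_gnb(X, y)

print(predict_with_missing(params, np.array([3.8, np.nan])))  # -> 1
```

The paper's Bayesian networks additionally model dependencies between features and learn them from incomplete data with EM, which this sketch does not attempt.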

  13. Tree Classification with Fused Mobile Laser Scanning and Hyperspectral Data

    Science.gov (United States)

    Puttonen, Eetu; Jaakkola, Anttoni; Litkey, Paula; Hyyppä, Juha

    2011-01-01

    Mobile Laser Scanning data were collected simultaneously with hyperspectral data using the Finnish Geodetic Institute Sensei system. The data were tested for tree species classification. The test area was an urban garden in the City of Espoo, Finland. Point clouds representing 168 individual tree specimens of 23 tree species were determined manually. The classification of the trees was done using first only the spatial data from point clouds, then with only the spectral data obtained with a spectrometer, and finally with the combined spatial and hyperspectral data from both sensors. Two classification tests were performed: the separation of coniferous and deciduous trees, and the identification of individual tree species. All determined tree specimens were used in distinguishing coniferous and deciduous trees. A subset of 133 trees and 10 tree species was used in the tree species classification. The best classification results for the fused data were 95.8% for the separation of the coniferous and deciduous classes. The best overall tree species classification was achieved with 83.5% accuracy for the best tested fused data feature combination. The respective results for paired structural features derived from the laser point cloud were 90.5% for the separation of the coniferous and deciduous classes and 65.4% for the species classification. Classification accuracies with paired hyperspectral reflectance value data were 90.5% for the separation of coniferous and deciduous classes and 62.4% for different species. The results are among the first of their kind and they show that mobile-collected fused data outperformed single-sensor data in both classification tests, and by a significant margin. PMID:22163894

  14. Natural vs human-induced changes at the Tauranga Harbour area (New Zealand): a time -series acoustic seabed classification comparison

    Science.gov (United States)

    Capperucci, Ruggero Maria; Bartholomä, Alexander; Renken, Sabrina; De Lange, Willem

    2013-04-01

    The Tauranga Harbour (New Zealand) is a mesotidal estuary system, enclosed by the Matakana barrier island. It hosts the leading export port in New Zealand and the second largest import port by value. Coastal changes are well documented over the last decades, mainly at the southern entrance of the area, between Matakana Island and Mt. Maunganui. It is an extremely dynamic environment, where natural processes are strongly influenced by human activities. In particular, understanding the recent evolution of the system is crucial for policymakers: the cumulative impact of maintaining the port (mainly dredging activities, shipping, and facilities construction, but also increasing tourism) and of its already approved expansion clashes with the claims of the local Maori communities, which recently led to a court action. A hydroacoustic multiple-device survey (Side-scan Sonar SSS, Multibeam Echo-sounder MBES and Single Beam Echo-sounder) coupled with sediment sampling was carried out in March 2011 over an area of 0.8 km2 off southern Matakana Island, along the Western Channel. The area is not directly impacted by dredging activities, making it an optimal testing site for assessing indirect effects of human disturbance on coastal dynamics. The main goals were: 1. To test the response of different acoustic systems in such a highly dynamic environment; 2. To study the influence of dredging activities on sediment dynamics and habitat changes by comparing the current data with existing data, in order to distinguish between natural and human-induced changes. Results demonstrate good agreement between acoustic classifications from different systems. They appear to be driven mainly by the sediment distribution, with a distinctive fingerprint given by shells and shell fragments. Nevertheless, the presence of relevant topographic features (i.e. large bedform fields) influences swath-looking systems (SSS and MBES). SSS and MBES classifications tend

  15. Biogeographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-11-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the hierarchical agglomerative clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow nearshore waters from those offshore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from river, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (one-way ANOVA, P qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 25 different species of plankton, fish and benthic invertebrate also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
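The two-step procedure (data reduction followed by hierarchical agglomeration) can be sketched as below. Since scikit-learn ships no SOM, KMeans stands in for the self-organizing map here, and the six-variable data are invented; only the overall reduce-then-merge structure mirrors the study.

```python
# Sketch of the two-step classification: (i) reduce many grid cells to a small
# set of prototype "nodes" (KMeans as a stand-in for the SOM), (ii) merge the
# prototypes into a few ecoregions with hierarchical agglomerative clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Fake per-pixel data: columns ~ SST, depth, ice, DSSS, TSM, DTSM (invented).
north = rng.normal([10, 20, 0.5, 3, 8, 4], 1.0, (300, 6))
south = rng.normal([18, 600, 0.0, 1, 2, 1], 1.0, (300, 6))
X = StandardScaler().fit_transform(np.vstack([north, south]))

# Step (i): data reduction -- 25 prototypes summarize 600 pixels.
km = KMeans(n_clusters=25, n_init=10, random_state=0).fit(X)

# Step (ii): HAC on the prototypes, cut into 2 ecoregions.
Z = linkage(km.cluster_centers_, method="ward")
proto_region = fcluster(Z, t=2, criterion="maxclust")   # region per prototype
pixel_region = proto_region[km.labels_]                 # region per pixel

# The two synthetic basins should land in different ecoregions.
print(len(set(pixel_region[:300])), len(set(pixel_region[300:])))
```

The study's actual SOM preserves the topology of the input space before the agglomeration step, which a plain KMeans reduction does not.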

  16. Medical Devices; Clinical Chemistry and Clinical Toxicology Devices; Classification of the Organophosphate Test System. Final order.

    Science.gov (United States)

    2017-10-18

    The Food and Drug Administration (FDA or we) is classifying the organophosphate test system into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the organophosphate test system's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.

  17. Medical Devices; Hematology and Pathology Devices; Classification of a Cervical Intraepithelial Neoplasia Test System. Final order.

    Science.gov (United States)

    2018-01-03

    The Food and Drug Administration (FDA or we) is classifying the cervical intraepithelial neoplasia (CIN) test system into class II (special controls). The special controls that apply to the device type are identified in this order and will be part of the codified language for the CIN test system's classification. We are taking this action because we have determined that classifying the device into class II (special controls) will provide a reasonable assurance of safety and effectiveness of the device. We believe this action will also enhance patients' access to beneficial innovative devices, in part by reducing regulatory burdens.

  18. Creating high-resolution time series land-cover classifications in rapidly changing forested areas with BULC-U in Google Earth Engine

    Science.gov (United States)

    Cardille, J. A.; Lee, J.

    2017-12-01

    With the opening of the Landsat archive, there is a dramatically increased potential for creating high-quality time series of land use/land-cover (LULC) classifications derived from remote sensing. Although LULC time series are appealing, their creation is typically challenging in two fundamental ways. First, there is a need to create maximally correct LULC maps for consideration at each time step; and second, there is a need to have the elements of the time series be consistent with each other, without pixels that flip improbably between covers due only to unavoidable, stray classification errors. We have developed the Bayesian Updating of Land Cover - Unsupervised (BULC-U) algorithm to address these challenges simultaneously, and introduce and apply it here for two related but distinct purposes. First, with minimal human intervention, we produced an internally consistent, high-accuracy LULC time series in rapidly changing Mato Grosso, Brazil for a time interval (1986-2000) in which cropland area more than doubled. The spatial and temporal resolution of the 59 LULC snapshots allows users to witness the establishment of towns and farms at the expense of forest. The new time series could be used by policy-makers and analysts to unravel important considerations for conservation and management, including the timing and location of past development, the rate and nature of changes in forest connectivity, the connection with road infrastructure, and more. The second application of BULC-U is to sharpen the well-known GlobCover 2009 classification from 300m to 30m, while improving accuracy measures for every class. The greatly improved resolution and accuracy permits a better representation of the true LULC proportions, the use of this map in models, and quantification of the potential impacts of changes. Given that there may easily be thousands and potentially millions of images available to harvest for an LULC time series, it is imperative to build useful algorithms
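The Bayesian updating at the heart of BULC can be illustrated in a few lines: each pixel keeps a probability vector over classes, and every new classified image updates it through Bayes' rule using a likelihood derived from that event's assumed confusion matrix. The two classes, the 80% accuracy, and the observation sequence below are invented for illustration.

```python
# Simplified sketch of the Bayesian-updating idea behind BULC. Each pixel
# carries a probability vector over classes; each new (noisy) classification
# event updates it via Bayes' rule. Numbers are illustrative, not from BULC-U.
import numpy as np

classes = ["forest", "cropland"]
prior = np.array([0.5, 0.5])            # initial per-pixel belief

# P(observed label | true class): rows = true class, cols = observed label.
# An 80%-accurate classification event.
confusion = np.array([[0.8, 0.2],
                      [0.2, 0.8]])

def update(belief, observed_label):
    """One Bayesian update from a single classification event."""
    likelihood = confusion[:, observed_label]
    posterior = belief * likelihood
    return posterior / posterior.sum()

# A pixel observed as forest, forest, cropland, forest across four dates:
belief = prior
for obs in [0, 0, 1, 0]:
    belief = update(belief, obs)

print(classes[int(np.argmax(belief))])
```

Because evidence accumulates multiplicatively, a single stray misclassification cannot flip a pixel that most events agree on, which is what keeps the resulting time series internally consistent.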

  19. Exploratory analysis of methods for automated classification of laboratory test orders into syndromic groups in veterinary medicine.

    Directory of Open Access Journals (Sweden)

    Fernanda C Dórea

Full Text Available BACKGROUND: Recent focus on earlier detection of pathogen introduction in human and animal populations has led to the development of surveillance systems based on automated monitoring of health data. Real- or near real-time monitoring of pre-diagnostic data requires automated classification of records into syndromes--syndromic surveillance--using algorithms that incorporate medical knowledge in a reliable and efficient way, while remaining comprehensible to end users. METHODS: This paper describes the application of two machine learning methods (Naïve Bayes and Decision Trees) and rule-based methods to extract syndromic information from laboratory test requests submitted to a veterinary diagnostic laboratory. RESULTS: High performance (F1-macro = 0.9995) was achieved through the use of a rule-based syndrome classifier, based on rule induction followed by manual modification during the construction phase, which also resulted in clear interpretability of the resulting classification process. An unmodified rule induction algorithm achieved an F1-micro score of 0.979, though this fell to 0.677 when performance for individual classes was averaged in an unweighted manner (F1-macro), because the algorithm failed to learn 3 of the 16 classes from the training set. Decision Trees showed interpretability equal to that of the rule-based approaches, but achieved an F1-micro score of 0.923 (falling to 0.311 when classes are given equal weight). A Naïve Bayes classifier learned all classes and achieved high performance (F1-micro = 0.994 and F1-macro = 0.955); however, the classification process is not transparent to the domain experts. CONCLUSION: The use of a manually customised rule set allowed for the development of a system for classification of laboratory tests into syndromic groups with very high performance, and high interpretability by the domain experts. Further research is required to develop internal validation rules in order to establish
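The gap between the F1-micro and F1-macro scores reported above arises because macro-averaging weights every class equally, so classes the algorithm never learned pull the average down. A minimal sketch with invented per-class counts (not the study's data):

```python
# Why micro- and macro-averaged F1 diverge when rare classes are not
# learned. The (tp, fp, fn) counts below are invented for illustration.

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# One frequent class learned well, two rare classes never predicted.
counts = {"frequent": (95, 3, 5), "rare1": (0, 0, 4), "rare2": (0, 0, 3)}

macro = sum(f1(*c) for c in counts.values()) / len(counts)

tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro = f1(tp, fp, fn)
# micro stays high (dominated by the frequent class); macro collapses
```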

  20. Classification of Building Object Types

    DEFF Research Database (Denmark)

    Jørgensen, Kaj Asbjørn

    2011-01-01

made. This is certainly the case in the Danish development. Based on the theories about these abstraction mechanisms, the basic principles for classification systems are presented and the observed misconceptions are analysed and explained. Furthermore, it is argued that the purpose of classification...... systems has changed and that new opportunities should be explored. Some proposals for new applications are presented and carefully aligned with IT opportunities. Especially, the use of building modelling will give new benefits and many of the traditional uses of classification systems will instead...... be managed by software applications and on the basis of building models. Classification systems with taxonomies of building object types have many application opportunities but can still be beneficial in data exchange between building construction partners. However, this will be performed by new methods...

  1. Evaluation of classification systems for nonspecific idiopathic orbital inflammation

    NARCIS (Netherlands)

    Bijlsma, Ward R.; van 't Hullenaar, Fleur C.; Mourits, Maarten P.; Kalmann, Rachel

    2012-01-01

    To systematically analyze existing classification systems for idiopathic orbital inflammation (IOI) and propose and test a new best practice classification system. A systematic literature search was conducted to find all studies that described and applied a classification system for IOI.

  2. Ancillary testing, diagnostic/classification criteria and severity grading in Behçet disease.

    Science.gov (United States)

    Okada, Annabelle A; Stanford, Miles; Tabbara, Khalid

    2012-12-01

    Since there is no pathognomonic clinical sign or laboratory test to distinguish Behçet disease from other uveitic entities, the diagnosis must be made based on characteristic ocular and systemic findings in the absence of evidence of other disease that can explain the findings. Ancillary tests, including ocular and brain imaging studies, are used to assess the severity of intraocular inflammation and systemic manifestations of Behçet disease, to identify latent infections and other medical conditions that might worsen with systemic treatment, and to monitor for adverse effects of drugs used. There are two diagnostic or classification criteria in general use by the uveitis community, one from Japan and one from an international group; both rely on a minimum number and/or combination of clinical findings to identify Behçet disease. Finally, several grading schemes have been proposed to assess severity of ocular disease and response to treatment.

  3. Toward the establishment of standardized in vitro tests for lipid-based formulations, part 4: proposing a new lipid formulation performance classification system.

    Science.gov (United States)

    Williams, Hywel D; Sassene, Philip; Kleberg, Karen; Calderone, Marilyn; Igonin, Annabel; Jule, Eduardo; Vertommen, Jan; Blundell, Ross; Benameur, Hassan; Müllertz, Anette; Porter, Christopher J H; Pouton, Colin W

    2014-08-01

The Lipid Formulation Classification System Consortium looks to develop standardized in vitro tests and to generate much-needed performance criteria for lipid-based formulations (LBFs). This article highlights the value of performing a second, more stressful digestion test to identify LBFs near a performance threshold and to facilitate lead formulation selection in instances where several LBF prototypes perform adequately under standard digestion conditions (but where further discrimination is necessary). Stressed digestion tests can be designed based on an understanding of the factors that affect LBF performance, including the degree of supersaturation generated on dispersion/digestion. The stresses evaluated included decreasing LBF concentration (↓LBF), increasing bile salt, and decreasing pH. Their capacity to stress LBFs depended on LBF composition and drug type: ↓LBF was a stressor to medium-chain glyceride-rich LBFs but not to more hydrophilic surfactant-rich LBFs, whereas decreasing pH stressed tolfenamic acid LBFs but not fenofibrate LBFs. Lastly, a new Performance Classification System, which is independent of LBF composition, is proposed to promote standardized LBF comparisons, encourage robust LBF development, and facilitate dialogue with the regulatory authorities. This classification system is based on the concept that performance evaluations across three in vitro tests, designed to subject an LBF to progressively more challenging conditions, will enable effective LBF discrimination and performance grading. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  4. Asynchronous data-driven classification of weapon systems

    International Nuclear Information System (INIS)

    Jin, Xin; Mukherjee, Kushal; Gupta, Shalabh; Ray, Asok; Phoha, Shashi; Damarla, Thyagaraju

    2009-01-01

    This communication addresses real-time weapon classification by analysis of asynchronous acoustic data, collected from microphones on a sensor network. The weapon classification algorithm consists of two parts: (i) feature extraction from time-series data using symbolic dynamic filtering (SDF), and (ii) pattern classification based on the extracted features using the language measure (LM) and support vector machine (SVM). The proposed algorithm has been tested on field data, generated by firing of two types of rifles. The results of analysis demonstrate high accuracy and fast execution of the pattern classification algorithm with low memory requirements. Potential applications include simultaneous shooter localization and weapon classification with soldier-wearable networked sensors. (rapid communication)

  5. Soil classification based on cone penetration test (CPT) data in Western Central Java

    Science.gov (United States)

    Apriyono, Arwan; Yanto, Santoso, Purwanto Bekti; Sumiyanto

    2018-03-01

This study presents a modified friction ratio range for soil classification (gravel, sand, silt & clay, and peat) using CPT data in Western Central Java. The CPT data were obtained solely from the Soil Mechanics Laboratory of Jenderal Soedirman University and cover more than 300 sites within the study area. About 197 records remained after the data filtering process. The IDW method was employed to interpolate friction ratio values onto a regular grid of points for soil classification map generation. The soil classification map was generated and presented using QGIS software. In addition, the soil classification map with respect to the modified friction ratio range was validated against 10% of the total measurements. The results show that silt and clay dominate the soil types in the study area, which is in agreement with two popular methods, namely Begemann and Vos. However, the modified friction ratio range produces 85% similarity with laboratory measurements, whereas the Begemann and Vos methods yield 70% similarity. In addition, the modified friction ratio range can effectively distinguish fine and coarse grains, and is thus useful for soil classification and subsequently for landslide analysis. Therefore, the modified friction ratio range proposed in this study can be used to identify soil type in mountainous tropical regions.
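A friction-ratio lookup of this kind can be sketched as below; the threshold values are placeholders for illustration only, not the modified ranges derived in the study:

```python
# Illustrative CPT soil-type lookup from friction ratio Rf (%).
# The thresholds are invented placeholders, NOT the study's ranges.

def classify_soil(rf_percent):
    if rf_percent < 1.0:
        return "gravel"
    elif rf_percent < 2.0:
        return "sand"
    elif rf_percent < 5.0:
        return "silt & clay"
    else:
        return "peat"

# A toy sounding profile of friction ratios at increasing depth.
profile = [0.6, 1.4, 3.2, 6.1]
types = [classify_soil(rf) for rf in profile]
```

Mapping each grid point's interpolated friction ratio through such a lookup is what produces the classification map described above.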

  6. Vietnamese Document Representation and Classification

    Science.gov (United States)

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.
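A syllable-pair representation can be sketched by counting adjacent whitespace-separated tokens, since in written Vietnamese each such token is a syllable. The toy sentence is invented and the sketch ignores the feature selection steps the paper describes:

```python
# Sketch of a syllable-pair ("bigram of syllables") document
# representation: adjacent-syllable pairs approximate words, and their
# counts form the feature vector.

from collections import Counter

def syllable_pairs(text):
    syllables = text.lower().split()
    return Counter(zip(syllables, syllables[1:]))

features = syllable_pairs("Việt Nam đất nước Việt Nam")
```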

  7. [New International Classification of Chronic Pancreatitis (M-ANNHEIM multifactor classification system, 2007): principles, merits, and demerits].

    Science.gov (United States)

    Tsimmerman, Ia S

    2008-01-01

    The new International Classification of Chronic Pancreatitis (designated as M-ANNHEIM) proposed by a group of German specialists in late 2007 is reviewed. All its sections are subjected to analysis (risk group categories, clinical stages and phases, variants of clinical course, diagnostic criteria for "established" and "suspected" pancreatitis, instrumental methods and functional tests used in the diagnosis, evaluation of the severity of the disease using a scoring system, stages of elimination of pain syndrome). The new classification is compared with the earlier classification proposed by the author. Its merits and demerits are discussed.

  8. Seismic texture classification. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Vinther, R.

    1997-12-31

The seismic texture classification method is a seismic attribute that can both recognize general reflectivity styles and locate variations from them. The seismic texture classification performs a statistical analysis of the seismic section (or volume) aiming at describing the reflectivity. Based on a set of reference reflectivities, the seismic textures are classified. The result of the seismic texture classification is a display of seismic texture categories showing both the styles of reflectivity from the reference set and interpolations and extrapolations from these. The display is interpreted as statistical variations in the seismic data. The seismic texture classification is applied to seismic sections and volumes from the Danish North Sea representing both horizontal stratifications and salt diapirs. The attribute succeeded in recognizing both the general structure of successions and variations from these. Also, the seismic texture classification is not only able to display variations in prospective areas (1-7 sec. TWT) but can also be applied to deep seismic sections. The seismic texture classification was tested on a deep reflection seismic section (13-18 sec. TWT) from the Baltic Sea. Applied to this section, the seismic texture classification succeeded in locating the Moho, which could not be located using conventional interpretation tools. The seismic texture classification is a seismic attribute which can display general reflectivity styles and deviations from these and enhance variations not found by conventional interpretation tools. (LN)

  9. New guidelines for dam safety classification

    International Nuclear Information System (INIS)

    Dascal, O.

    1999-01-01

Elements of recommended new guidelines for the safety classification of dams are outlined. Arguments are provided for the view that dam classification should comprise more than one system, as follows: (a) classification for selection of design criteria, operation procedures and emergency measures plans, based on the potential consequences of a dam failure - the hazard classification of water retaining structures; (b) classification for establishment of surveillance activities and for safety evaluation of dams, based on the probability and consequences of failure - the risk classification of water retaining structures; and (c) classification for establishment of water management plans, for safety evaluation of the entire project, for preparation of emergency measures plans, for definition of the frequency and extent of maintenance operations, and for evaluation of changes and modifications required - the hazard classification of the project. The hazard classification of the dam considers as consequences mainly the loss of lives or persons in jeopardy and the property damage to third parties. The difficulty in determining the risk classification of the dam lies in the fact that no tool exists to evaluate the probability of the dam's failure. To overcome this, the probability of failure can be replaced by a set of dam characteristics that express the failure potential of the dam and its foundation. The hazard classification of the entire project is based on the probable consequences of dam failure influencing loss of life, persons in jeopardy, and property and environmental damage. The classification scheme is illustrated for dam-threatening events such as earthquakes and floods. 17 refs., 5 tabs

  10. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a "locality representation" method because it uses only the training samples of each class to represent the testing sample, and it cannot embody the effectiveness of the "globality representation." On the contrary, the CRC method cannot claim the locality benefit of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
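The LRC step described above (represent the test sample with each class's training samples, then pick the class with the smallest residual) can be sketched as follows; the toy three-dimensional vectors stand in for face images and are invented:

```python
import numpy as np

# Minimal sketch of linear regression classification (LRC): fit the
# test sample as a least-squares combination of each class's training
# samples (columns of X) and pick the class with the smallest residual.

def lrc_predict(test, class_samples):
    best, best_res = None, float("inf")
    for label, X in class_samples.items():          # X: (dim, n_samples)
        coef, *_ = np.linalg.lstsq(X, test, rcond=None)
        res = np.linalg.norm(test - X @ coef)       # representation residual
        if res < best_res:
            best, best_res = label, res
    return best

# Toy data: class "A" lives near the x axis, class "B" near the z axis.
classes = {
    "A": np.array([[1.0, 0.9], [0.0, 0.1], [0.0, 0.0]]),
    "B": np.array([[0.0, 0.1], [0.0, 0.0], [1.0, 0.9]]),
}
label = lrc_predict(np.array([0.95, 0.05, 0.0]), classes)
```

CRC differs only in that a single least-squares fit is done against all training samples at once, with per-class residuals computed from each class's share of the coefficients.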

  11. Classification of Radioactive Waste. General Safety Guide

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-11-15

    This publication is a revision of an earlier Safety Guide of the same title issued in 1994. It recommends revised waste management strategies that reflect changes in practices and approaches since then. It sets out a classification system for the management of waste prior to disposal and for disposal, driven by long term safety considerations. It includes a number of schemes for classifying radioactive waste that can be used to assist with planning overall national approaches to radioactive waste management and to assist with operational management at facilities. Contents: 1. Introduction; 2. The radioactive waste classification scheme; Appendix: The classification of radioactive waste; Annex I: Evolution of IAEA standards on radioactive waste classification; Annex II: Methods of classification; Annex III: Origin and types of radioactive waste.

  12. Classification of Radioactive Waste. General Safety Guide

    International Nuclear Information System (INIS)

    2009-01-01

    This publication is a revision of an earlier Safety Guide of the same title issued in 1994. It recommends revised waste management strategies that reflect changes in practices and approaches since then. It sets out a classification system for the management of waste prior to disposal and for disposal, driven by long term safety considerations. It includes a number of schemes for classifying radioactive waste that can be used to assist with planning overall national approaches to radioactive waste management and to assist with operational management at facilities. Contents: 1. Introduction; 2. The radioactive waste classification scheme; Appendix: The classification of radioactive waste; Annex I: Evolution of IAEA standards on radioactive waste classification; Annex II: Methods of classification; Annex III: Origin and types of radioactive waste

  13. A Classification System to Guide Physical Therapy Management in Huntington Disease: A Case Series.

    Science.gov (United States)

    Fritz, Nora E; Busse, Monica; Jones, Karen; Khalil, Hanan; Quinn, Lori

    2017-07-01

Individuals with Huntington disease (HD), a rare neurological disease, experience impairments in mobility and cognition throughout their disease course. The Medical Research Council framework provides a schema that can be applied to the development and evaluation of complex interventions, such as those provided by physical therapists. Treatment-based classifications, based on expert consensus and available literature, are helpful in guiding physical therapy management across the stages of HD. Such classifications also contribute to the development and further evaluation of well-defined complex interventions in this highly variable and complex neurodegenerative disease. The purpose of this case series was to illustrate the use of these classifications in the management of 2 individuals with late-stage HD. Two females, 40 and 55 years of age, with late-stage HD participated in this case series. Both experienced progressive declines in ambulatory function and balance as well as falls or fear of falling. Both individuals received daily care in the home for activities of daily living. Physical therapy Treatment-Based Classifications for HD guided the interventions and outcomes. Eight weeks of in-home balance training, strength training, task-specific practice of functional activities including transfers and walking tasks, and family/carer education were provided. Both individuals demonstrated improvements that met or exceeded the established minimal detectable change values for gait speed and Timed Up and Go performance. Both also demonstrated improvements on Berg Balance Scale and Physical Performance Test performance, with 1 of the 2 individuals exceeding the established minimal detectable changes for both tests. Reductions in fall risk were evident in both cases. These cases provide proof-of-principle to support use of treatment-based classifications for physical therapy management in individuals with HD. Traditional classification of early-, mid-, and late

  14. An automated Pearson's correlation change classification (APC3) approach for GC/MS metabonomic data using total ion chromatograms (TICs).

    Science.gov (United States)

    Prakash, Bhaskaran David; Esuvaranathan, Kesavan; Ho, Paul C; Pasikanti, Kishore Kumar; Chan, Eric Chun Yong; Yap, Chun Wei

    2013-05-21

A fully automated and computationally efficient Pearson's correlation change classification (APC3) approach is proposed and shown to have overall comparable performance, with an average accuracy and an average AUC of 0.89 ± 0.08, while being 3.9 to 7 times faster, easier to use, and less susceptible to outliers than other dimension-reduction and classification combinations, using only the total ion chromatogram (TIC) intensities of GC/MS data. The use of only the TIC permits the possible application of APC3 to other metabonomic data such as LC/MS TICs or NMR spectra. A RapidMiner implementation is available for download at http://padel.nus.edu.sg/software/padelapc3.
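The core quantity in an approach like APC3 is Pearson's correlation between TIC intensity vectors, whose changes across samples drive the classification. A minimal sketch with invented intensities:

```python
import math

# Sketch: Pearson's correlation between two total ion chromatograms
# (TICs). The intensity values below are invented for illustration.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

tic_control = [10.0, 40.0, 25.0, 5.0, 30.0]
tic_case = [12.0, 38.0, 27.0, 4.0, 29.0]   # a similar profile
r = pearson(tic_control, tic_case)          # close to 1 for similar TICs
```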

  15. Non-Hodgkin lymphoma response evaluation with MRI texture classification

    Directory of Open Access Journals (Sweden)

    Heinonen Tomi T

    2009-06-01

Full Text Available Abstract Background To show magnetic resonance imaging (MRI) texture appearance change in non-Hodgkin lymphoma (NHL) during treatment, with response controlled by quantitative volume analysis. Methods A total of 19 patients having NHL with an evaluable lymphoma lesion were scanned at three imaging timepoints with a 1.5T device during clinical treatment evaluation. Texture characteristics of the images were analyzed and classified with the MaZda application and statistical tests. Results NHL tissue MRI texture imaged before treatment and under chemotherapy was classified within several subgroups, showing best discrimination with 96% correct classification in non-linear discriminant analysis of T2-weighted images. Texture parameters of the MRI data were successfully tested with statistical tests to assess the separability of the parameters in evaluating chemotherapy response in lymphoma tissue. Conclusion Texture characteristics of MRI data were classified successfully; this showed texture analysis to be a potential quantitative means of representing lymphoma tissue changes during chemotherapy response monitoring.

  16. Magnetic resonance imaging texture analysis classification of primary breast cancer

    International Nuclear Information System (INIS)

    Waugh, S.A.; Lerski, R.A.; Purdie, C.A.; Jordan, L.B.; Vinnicombe, S.; Martin, P.; Thompson, A.M.

    2016-01-01

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)
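One of the co-occurrence matrix features highlighted above, entropy, can be sketched on a toy matrix; the counts are invented, and a real GLCM would be built from image grey-level pairs at a given offset:

```python
import math

# Sketch: entropy of a (toy) grey-level co-occurrence matrix, one of
# the COM texture features used in the study. Counts are invented.

def com_entropy(counts):
    total = sum(sum(row) for row in counts)
    probs = [v / total for row in counts for v in row if v > 0]
    return -sum(q * math.log2(q) for q in probs)   # Shannon entropy (bits)

com = [[4, 1],
       [1, 2]]          # 2-grey-level co-occurrence counts
h = com_entropy(com)    # higher values = more heterogeneous texture
```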

  17. Magnetic resonance imaging texture analysis classification of primary breast cancer

    Energy Technology Data Exchange (ETDEWEB)

    Waugh, S.A.; Lerski, R.A. [Ninewells Hospital and Medical School, Department of Medical Physics, Dundee (United Kingdom); Purdie, C.A.; Jordan, L.B. [Ninewells Hospital and Medical School, Department of Pathology, Dundee (United Kingdom); Vinnicombe, S. [University of Dundee, Division of Imaging and Technology, Ninewells Hospital and Medical School, Dundee (United Kingdom); Martin, P. [Ninewells Hospital and Medical School, Department of Clinical Radiology, Dundee (United Kingdom); Thompson, A.M. [University of Texas MD Anderson Cancer Center, Department of Surgical Oncology, Houston, TX (United States)

    2016-02-15

    Patient-tailored treatments for breast cancer are based on histological and immunohistochemical (IHC) subtypes. Magnetic Resonance Imaging (MRI) texture analysis (TA) may be useful in non-invasive lesion subtype classification. Women with newly diagnosed primary breast cancer underwent pre-treatment dynamic contrast-enhanced breast MRI. TA was performed using co-occurrence matrix (COM) features, by creating a model on retrospective training data, then prospectively applying to a test set. Analyses were blinded to breast pathology. Subtype classifications were performed using a cross-validated k-nearest-neighbour (k = 3) technique, with accuracy relative to pathology assessed and receiver operator curve (AUROC) calculated. Mann-Whitney U and Kruskal-Wallis tests were used to assess raw entropy feature values. Histological subtype classifications were similar across training (n = 148 cancers) and test sets (n = 73 lesions) using all COM features (training: 75 %, AUROC = 0.816; test: 72.5 %, AUROC = 0.823). Entropy features were significantly different between lobular and ductal cancers (p < 0.001; Mann-Whitney U). IHC classifications using COM features were also similar for training and test data (training: 57.2 %, AUROC = 0.754; test: 57.0 %, AUROC = 0.750). Hormone receptor positive and negative cancers demonstrated significantly different entropy features. Entropy features alone were unable to create a robust classification model. Textural differences on contrast-enhanced MR images may reflect underlying lesion subtypes, which merits testing against treatment response. (orig.)

  18. Influence of test conditions and exposure duration on the result of ecotoxicological tests

    DEFF Research Database (Denmark)

    Rosenkrantz, Rikke Tjørnhøj

be calculated from results of ecotoxicological tests performed according to internationally approved guidelines, such as those from the Organisation for Economic Co-operation and Development (OECD) or the International Organization for Standardization (ISO). Such guidelines were originally developed to enable classification...... the influence of pH and exposure duration on the toxicity recorded in tests using four sulfonylurea herbicides (SUs) and the aquatic macrophyte Lemna gibba as study objects. The study showed that changing the physical and chemical test conditions influenced the toxicity of the sulfonylurea herbicides towards L. gibba. Lowering

  19. Stream classification of the Apalachicola-Chattahoochee-Flint River System to support modeling of aquatic habitat response to climate change

    Science.gov (United States)

    Elliott, Caroline M.; Jacobson, Robert B.; Freeman, Mary C.

    2014-01-01

    A stream classification and associated datasets were developed for the Apalachicola-Chattahoochee-Flint River Basin to support biological modeling of species response to climate change in the southeastern United States. The U.S. Geological Survey and the Department of the Interior’s National Climate Change and Wildlife Science Center established the Southeast Regional Assessment Project (SERAP) which used downscaled general circulation models to develop landscape-scale assessments of climate change and subsequent effects on land cover, ecosystems, and priority species in the southeastern United States. The SERAP aquatic and hydrologic dynamics modeling efforts involve multiscale watershed hydrology, stream-temperature, and fish-occupancy models, which all are based on the same stream network. Models were developed for the Apalachicola-Chattahoochee-Flint River Basin and subbasins in Alabama, Florida, and Georgia, and for the Upper Roanoke River Basin in Virginia. The stream network was used as the spatial scheme through which information was shared across the various models within SERAP. Because these models operate at different scales, coordinated pair versions of the network were delineated, characterized, and parameterized for coarse- and fine-scale hydrologic and biologic modeling. The stream network used for the SERAP aquatic models was extracted from a 30-meter (m) scale digital elevation model (DEM) using standard topographic analysis of flow accumulation. At the finer scale, reaches were delineated to represent lengths of stream channel with fairly homogenous physical characteristics (mean reach length = 350 m). Every reach in the network is designated with geomorphic attributes including upstream drainage basin area, channel gradient, channel width, valley width, Strahler and Shreve stream order, stream power, and measures of stream confinement. The reach network was aggregated from tributary junction to tributary junction to define segments for the
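The Strahler stream order listed among the reach attributes above can be computed recursively from the tributary structure of the network; the tiny network below is invented:

```python
# Sketch of Strahler stream-order computation on a small reach network.
# `upstream` maps each reach to its tributaries; headwaters have none.

def strahler(reach, upstream):
    tribs = upstream.get(reach, [])
    if not tribs:
        return 1                       # headwater reach
    orders = sorted((strahler(t, upstream) for t in tribs), reverse=True)
    # order increases only where two streams of equal order meet
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# outlet <- {a, b}; a <- {c, d}; b, c, d are headwaters
net = {"outlet": ["a", "b"], "a": ["c", "d"]}
order = strahler("outlet", net)
```

Shreve order differs only in summing tributary orders at each junction instead of incrementing on ties.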

  20. Impact of job classification on employment of seasonal workers

    Directory of Open Access Journals (Sweden)

    Zoran Pandža

    2011-07-01

Full Text Available The paper aims to improve the existing work organization, thus improving the success of the business process and ultimately reducing company costs. A change in organizational structure is proposed with the objective of achieving better and more efficient use of resources available within the company. Since the existing organization and classification of jobs does not meet the requirements of the age we live in, there is a need for a new classification which would address the many changes that have taken place over the years, including changes that are yet to be made for the purpose of further development of the company. Organization and management of the company, as well as reorganization and implementation of a new classification, are necessary to allow the company to perform regular adjustment of business activities, because the conditions in which the company operates are changing fast. The new classification would not actually change the number of sectors. Rather, existing personnel would be allocated in a better way, which would result in reduced need for a seasonal work force. In the process of defining the new organizational structure, one should consider the type and way of doing business and the structural variables (division of labour, unity of command, authority and responsibility, span of control, division into business units, etc.). Expected results include improved organization and classification of jobs, and improved quality, speed and efficiency of work. It should result in a company organized according to standards that are adjusted to modern times.

  1. Radar transmitter classification using non-stationary signal classifier

    CSIR Research Space (South Africa)

    Du Plessis, MC

    2009-07-01

    Full Text Available support vector machine which is applied to the radar pulse's time-frequency representation. The time-frequency representation is refined using particle swarm optimization to increase the classification accuracy. The classification accuracy is tested...

  2. Evaluation of physicochemical properties of radioactive cesium in municipal solid waste incineration fly ash by particle size classification and leaching tests.

    Science.gov (United States)

    Fujii, Kengo; Ochi, Kotaro; Ohbuchi, Atsushi; Koike, Yuya

    2018-07-01

After the Fukushima Daiichi Nuclear Power Plant accident, environmental recovery was a major issue because a considerable amount of municipal solid waste incineration (MSWI) fly ash was highly contaminated with radioactive cesium. To the best of our knowledge, only a few studies have evaluated the detailed physicochemical properties of radioactive cesium in MSWI fly ash to propose an effective method for the solidification and reuse of MSWI fly ash. In this study, MSWI fly ash was sampled in Fukushima Prefecture. The physicochemical properties of radioactive cesium in MSWI fly ash were evaluated by particle size classification (less than 25, 25-45, 45-100, 100-300, 300-500, and greater than 500 μm) and the Japanese leaching test No. 13, "JLT-13". The results obtained from the classification of fly ash indicated that the activity concentration of radioactive cesium and the content of coexisting matter (i.e., chloride and potassium) vary with the particle size of the fly ash. X-ray diffraction results indicated that water-soluble radioactive cesium exists as CsCl because of the cooling process and that insoluble cesium is bound to the inner sphere of amorphous matter. These results indicated that the distribution of radioactive cesium depends on the characteristics of MSWI fly ash. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Development and test of a classification scheme for human factors in incident reports

    International Nuclear Information System (INIS)

    Miller, R.; Freitag, M.; Wilpert, B.

    1997-01-01

The Research Center System Safety of the Berlin University of Technology conducted a research project on the analysis of Human Factors (HF) aspects in incidents reported by German nuclear power plants. Based on psychological theories and empirical studies, a classification scheme was developed which permits the identification of human involvement in incidents. The classification scheme was applied in an epidemiological study to a selection of more than 600 HF-relevant incidents. The results allow insights into HF-related problem areas. An additional study proved that the application of the classification scheme produces results which are reliable and independent of raters. (author). 13 refs, 1 fig
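The rater-independence claim above rests on an inter-rater agreement measure. A minimal sketch of one such measure, Cohen's kappa for two raters applying a classification scheme to incidents, is shown below; the toy category labels and ratings are assumptions for illustration, not data from the study.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: chance-corrected agreement."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical ratings of eight incidents into HF / technical / organisational causes.
rater1 = ["HF", "HF", "tech", "HF", "org", "tech", "HF", "org"]
rater2 = ["HF", "HF", "tech", "org", "org", "tech", "HF", "org"]
kappa = cohens_kappa(rater1, rater2)
```

A kappa near 1 indicates agreement well beyond chance, which is the kind of evidence the additional study reports.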

  4. Proposed changes in the classification of carcinogenic chemicals in the work area.

    Science.gov (United States)

    Neumann, H G; Thielmann, H W; Filser, J G; Gelbke, H P; Greim, H; Kappus, H; Norpoth, K H; Reuter, U; Vamvakas, S; Wardenbach, P; Wichmann, H E

    1997-12-01

    Carcinogenic chemicals in the work area are currently classified into three categories in Section III of the German List of MAK and BAT Values. This classification is based on qualitative criteria and reflects essentially the weight of evidence available for judging the carcinogenic potential of the chemicals. It is proposed that these Categories--IIIA1, IIIA2, and IIIB--be retained as Categories 1, 2, and 3, to conform with EU regulations. On the basis of our advancing knowledge of reaction mechanisms and the potency of carcinogens, it is now proposed that these three categories be supplemented with two additional categories. The essential feature of substances classified in the new categories is that exposure to these chemicals does not convey a significant risk of cancer to man, provided that an appropriate exposure limit (MAK value) is observed. It is proposed that chemicals known to act typically by nongenotoxic mechanisms and for which information is available that allows evaluation of the effects of low-dose exposures be classified in Category 4. Genotoxic chemicals for which low carcinogenic potency can be expected on the basis of dose-response relationships and toxicokinetics and for which risk at low doses can be assessed will be classified in Category 5. The basis for a better differentiation of carcinogens is discussed, the new categories are defined, and possible criteria for classification are described. Examples for Category 4 (1,4-dioxane) and Category 5 (styrene) are presented. The proposed changes in classifying carcinogenic chemicals in the work area are presented for further discussion.

  5. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classifications (e.g. Murray 1952), through game studies classifications (e.g. Elverdam & Aarseth 2007), to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. Especially the abstraction of classification criteria can be used in future endeavors into the topic of game classifications.

  6. Issues surrounding the classification of accounting information

    Directory of Open Access Journals (Sweden)

    Huibrecht Van der Poll

    2011-06-01

    Full Text Available The act of classifying information created by accounting practices is ubiquitous in the accounting process; from recording to reporting, it has almost become second nature. The classification has to correspond to the requirements and demands of the changing environment in which it is practised. Evidence suggests that the current classification of items in financial statements is not keeping pace with the needs of users and the new financial constructs generated by the industry. This study addresses the issue of classification in two ways: by means of a critical analysis of classification theory and practices and by means of a questionnaire that was developed and sent to compilers and users of financial statements. A new classification framework for accounting information in the balance sheet and income statement is proposed.

  7. Automatic classification of transient ischaemic and transient non-ischaemic heart-rate related ST segment deviation episodes in ambulatory ECG records

    International Nuclear Information System (INIS)

    Faganeli, J; Jager, F

    2010-01-01

In ambulatory ECG records, besides transient ischaemic ST segment deviation episodes, there are also transient non-ischaemic heart-rate related ST segment deviation episodes present, which appear only due to a change in heart rate and thus complicate automatic detection of true ischaemic episodes. The goal of this work was to automatically classify these two types of episodes. The features tested for classifying the ST segment deviation episodes were changes of heart rate, changes of the Mahalanobis distance of the first five Karhunen–Loève transform (KLT) coefficients of the QRS complex, changes of time-domain morphologic parameters of the ST segment and changes of the Legendre orthonormal polynomial coefficients of the ST segment. We chose Legendre basis functions because they best fit typical shapes of the ST segment morphology, thus allowing direct insight into the ST segment morphology changes through the feature space. The classification was performed with the help of decision trees. We tested the classification method using all records of the Long-Term ST Database on all ischaemic and all non-ischaemic heart-rate related deviation episodes according to annotation protocol B. In order to predict the real-world performance of the classification we used second-order aggregate statistics, gross and average statistics, and the bootstrap method. We obtained the best performance when we combined the heart-rate features, the Mahalanobis distance and the Legendre orthonormal polynomial coefficient features, with an average sensitivity of 98.1% and an average specificity of 85.2%.
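The Legendre-coefficient idea above can be sketched in a few lines (this is not the authors' code; the segment length and polynomial order are assumptions for illustration): fit low-order Legendre polynomials to the ST segment samples and use the coefficients as morphology features.

```python
import numpy as np

def legendre_features(st_segment, order=4):
    """Least-squares Legendre coefficients of an ST segment mapped onto [-1, 1]."""
    x = np.linspace(-1.0, 1.0, len(st_segment))
    return np.polynomial.legendre.legfit(x, st_segment, order)

# A flat (horizontal) ST segment at 0.1 mV is captured almost entirely by the
# 0th-order coefficient; sloped or scooped segments load the higher orders.
flat = np.full(80, 0.1)
coeffs = legendre_features(flat)
```

Because each basis function has a distinct shape (level, slope, curvature, ...), changes in individual coefficients map directly onto changes in ST morphology, which is the "direct insight" the abstract mentions.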

  8. Classification of Polarimetric SAR Data Using Dictionary Learning

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg; Dahl, Anders Lindbjerg

    2012-01-01

This contribution deals with classification of multilook fully polarimetric synthetic aperture radar (SAR) data by learning a dictionary of crop types present in the Foulum test site. The Foulum test site contains a large number of agricultural fields, as well as lakes, forests, natural vegetation, grasslands and urban areas, which make it ideally suited for evaluation of classification algorithms. Dictionary learning centers around building a collection of image patches typical for the classification problem at hand. This requires initial manual labeling of the classes present in the data and is thus a method for supervised classification. Sparse coding of these image patches aims to maintain a proficient number of typical patches and associated labels. Data is consecutively classified by a nearest neighbor search of the dictionary elements and labeled with probabilities of each class. Each dictionary...

  9. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    Science.gov (United States)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias, human error and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. The automated approach...
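The combination step, turning the five indicator scores (A-E) into an ARD potential class, can be sketched as a simple additive index. The scoring ranges and class thresholds below are illustrative assumptions, not the published ARDI values from Parbhakar-Fox et al. (2011).

```python
def ard_index(a_sulphide, b_alteration, c_morphology, d_neutraliser, e_association):
    """Sum the five indicator scores and map the total to an ARD potential class.

    Thresholds here are hypothetical; the published ARDI defines its own ranges.
    """
    score = a_sulphide + b_alteration + c_morphology + d_neutraliser + e_association
    if score <= 4:
        return score, "non-acid forming"
    elif score <= 8:
        return score, "potentially acid forming"
    return score, "acid forming"

# A sample with abundant, altered, vein-hosted sulphides and little neutraliser:
score, label = ard_index(3, 2, 2, 1, 2)
```

In the automated pipeline each argument would come from the image-derived measures (sulphide percentage, texture indices, spatially associated minerals) rather than manual logging.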

  10. Classification of solid industrial waste based on ecotoxicology tests using Daphnia magna: an alternative

    OpenAIRE

    William Gerson Matias; Vanessa Guimarães Machado; Cátia Regina Silva de Carvalho-Pinto; Débora Monteiro Brentano; Letícia Flohr

    2005-01-01

    The adequate treatment and final disposal of solid industrial wastes depends on their classification into class I or II. This classification is proposed by NBR 10.004; however, it is complex and time-consuming. With a view to facilitating this classification, the use of assays with Daphnia magna is proposed. These assays make possible the identification of toxic chemicals in the leach, which denotes the presence of one of the characteristics described by NBR 10.004, the toxicity, which is a s...

  11. Machine-learning methods in the classification of water bodies

    Directory of Open Access Journals (Sweden)

    Sołtysiak Marek

    2016-06-01

Full Text Available Amphibian species have been considered useful ecological indicators. They are used as indicators of environmental contamination, ecosystem health and habitat quality. Amphibian species are sensitive to changes in the aquatic environment and therefore may form the basis for the classification of water bodies. Water bodies in which there are a large number of amphibian species are especially valuable, even if they are located in urban areas. The automation of the classification process allows for a faster evaluation of the presence of amphibian species in the water bodies. Three machine-learning methods (artificial neural networks, decision trees and the k-nearest neighbours algorithm) have been used to classify water bodies in Chorzów – one of 19 cities in the Upper Silesia Agglomeration. In this case, classification is a supervised data mining method consisting of several stages, such as building the model, the testing phase and the prediction. Seven natural and anthropogenic features of water bodies (e.g. the type of water body, aquatic plants, the purpose of the water body (destination), the position of the water body in relation to any possible buildings, the condition of the water body, the degree of littering, the shore type and fishing activities) have been taken into account in the classification. The data set used in this study involved information about 71 different water bodies and 9 amphibian species living in them. The results showed that the best average classification accuracy was obtained with the multilayer perceptron neural network.
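Of the three methods compared, the k-nearest neighbours algorithm is the simplest to sketch. The feature vectors and labels below are made up for illustration and do not come from the study's 71-pond data set.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.

    Returns the majority label among the k training points nearest to query.
    """
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical two-feature encoding (e.g. littering degree, shore type code).
train = [([1, 0], "amphibian-rich"), ([1, 1], "amphibian-rich"),
         ([0, 1], "amphibian-rich"),
         ([5, 4], "amphibian-poor"), ([6, 5], "amphibian-poor")]
label = knn_predict(train, [1, 1], k=3)
```

With seven categorical/ordinal features, as in the study, the features would first be encoded numerically before a distance can be computed.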

  12. A new flood type classification method for use in climate change impact studies

    Directory of Open Access Journals (Sweden)

    Thea Turkington

    2016-12-01

Full Text Available Flood type classification is an optimal tool to cluster floods with similar meteorological triggering conditions. Under climate change, these flood types may change in different ways, and new flood types may develop. This paper presents a new methodology to classify flood types, particularly for use in climate change impact studies. A weather generator is coupled with a conceptual rainfall-runoff model to create long synthetic records of discharge, to efficiently build an inventory with a high number of flood events. Significant discharge days are classified into causal types using k-means clustering of temperature and precipitation indicators capturing differences in rainfall amount, antecedent rainfall, snow cover and day of year. From climate projections of bias-corrected temperature and precipitation, future discharge and the associated change in flood types are assessed. The approach is applied to two different Alpine catchments: the Ubaye region, a small catchment in France dominated by rain-on-snow flood events during spring, and the larger Salzach catchment in Austria, affected more by summer/autumn rainfall flood events. The results show that the approach is able to reproduce the observed flood types in both catchments. Under future climate scenarios, the methodology identifies changes in the distribution of flood types and in the characteristics of the flood types in both study areas. The developed methodology has potential to be used in flood impact assessment and disaster risk management, as future changes in flood types will have implications for both the local social and ecological systems in the future.
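The clustering step can be illustrated with a toy k-means run over two of the indicators named above (rainfall amount and snow cover fraction). The data, k=2, and the plain Lloyd iteration are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to nearest centre, recompute means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

# Hypothetical flood-day indicators: [rainfall amount (mm), snow cover fraction].
X = np.array([[40.0, 0.9], [35.0, 0.8], [45.0, 0.95],   # rain-on-snow events
              [80.0, 0.0], [90.0, 0.05], [85.0, 0.0]])  # heavy rain, no snow
labels, centers = kmeans(X, k=2)
```

On real data the indicators would be standardised first, since rainfall in mm and snow cover as a fraction sit on very different scales.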

  13. Inter-examiner reliability of a standardized Ultra-sonographic method for classification of changes related to supraspinatus tendinopathy – a pilot study

    DEFF Research Database (Denmark)

    Larsen, Camilla Marie; Ingwersen, Kim Gordon; Hjarnbæk, John

    2015-01-01

Inter-examiner reliability of a standardized ultra-sonographic method for classification of changes related to supraspinatus tendinopathy – a pilot study. Ingwersen KG1, 2, Hjarbaek J3, Eshøj H1, Larsen CM1, 4, Vobbe J5, Juul-Kristensen B1, 6. 1Institute of Sports Science and Clinical Biomechanics, University of Southern Denmark, Odense, Denmark. 2Physiotherapy Department, Hospital Lillebaelt, Vejle Hospital, Vejle, Denmark. 3Department of Radiology, Musculoskeletal section, Odense University Hospital, Odense, Denmark. 4Health Sciences Research Centre, University College Lillebaelt, Odense, Denmark. 5... athletes. For optimizing rehabilitation to the different stages of tendinopathy (1), ultra-sonography (US) may be used. Reliability of such a method for RT is lacking. Aims. To develop and test inter-examiner reliability of US for classifying RT. Materials and Methods. A three-phased standardized protocol...

  14. Signal processing for non-destructive testing of railway tracks

    Science.gov (United States)

    Heckel, Thomas; Casperson, Ralf; Rühe, Sven; Mook, Gerhard

    2018-04-01

Increased speed, heavier loads, altered material and modern drive systems result in an increasing number of rail flaws. The appearance of these flaws also changes continually due to the rapid change in damage mechanisms of modern rolling stock. Hence, interpretation has become difficult when evaluating non-destructive rail testing results. Due to the changed interplay between detection methods and flaws, the recorded signals may result in unclassified types of rail flaws. Methods for automatic rail inspection (according to defect detection and classification) undergo continual development. Signal processing is a key technology to master the challenge of classification and maintain resolution and detection quality, independent of operation speed. The basic ideas of signal processing, based on the Glassy-Rail-Diagram for classification purposes, are presented herein. Examples for the detection of damage caused by rolling contact fatigue are also given, and synergetic effects of combined evaluation of diverse inspection methods are shown.

  15. Classification using diffraction patterns for single-particle analysis

    International Nuclear Information System (INIS)

    Hu, Hongli; Zhang, Kaiming; Meng, Xing

    2016-01-01

An alternative method has been assessed: diffraction patterns derived from the single-particle data set were used to perform the first round of classification in creating the initial averages for protein data with symmetrical morphology. The test protein set was a collection of Caenorhabditis elegans small heat shock protein 17 obtained by cryo-EM, which has a tetrahedral (12-fold) symmetry. It is demonstrated that the initial classification on diffraction patterns is as workable as the real-space classification that is based on the phase contrast. The test results show that the information from diffraction patterns has enough detail to make the initial model faithful. The potential advantage of using the alternative method is twofold: the ability to handle data sets with poor signal-to-noise ratios and/or data sets that break the symmetry properties. - Highlights: • New classification method. • Create the accurate initial model. • Better in handling noisy data.
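A key property behind classifying on diffraction patterns is that a particle image's power spectrum is invariant to in-plane translation, so identical particles can be grouped regardless of where they sit in the box. The synthetic "particle" below is an assumption for illustration, not cryo-EM data.

```python
import numpy as np

def power_spectrum(img):
    """Squared magnitude of the 2-D Fourier transform (a diffraction-like pattern)."""
    return np.abs(np.fft.fft2(img)) ** 2

base = np.zeros((32, 32))
base[10:20, 12:22] = 1.0                             # a toy particle
shifted = np.roll(base, shift=(5, 3), axis=(0, 1))   # same particle, translated

# By the Fourier shift theorem, translation only changes the phase, not |FFT|^2.
same = np.allclose(power_spectrum(base), power_spectrum(shifted))
```

This translation invariance is why a first classification round on diffraction patterns can tolerate imperfect particle centring that would confound real-space averaging.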

  16. Classification using diffraction patterns for single-particle analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Hongli; Zhang, Kaiming [Department of Biophysics, the Health Science Centre, Peking University, Beijing 100191 (China); Meng, Xing, E-mail: xmeng101@gmail.com [Wadsworth Centre, New York State Department of Health, Albany, New York 12201 (United States)

    2016-05-15

An alternative method has been assessed: diffraction patterns derived from the single-particle data set were used to perform the first round of classification in creating the initial averages for protein data with symmetrical morphology. The test protein set was a collection of Caenorhabditis elegans small heat shock protein 17 obtained by cryo-EM, which has a tetrahedral (12-fold) symmetry. It is demonstrated that the initial classification on diffraction patterns is as workable as the real-space classification that is based on the phase contrast. The test results show that the information from diffraction patterns has enough detail to make the initial model faithful. The potential advantage of using the alternative method is twofold: the ability to handle data sets with poor signal-to-noise ratios and/or data sets that break the symmetry properties. - Highlights: • New classification method. • Create the accurate initial model. • Better in handling noisy data.

  17. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E

    2015-01-01

BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were recruited from nine different tertiary referral centres. All patients underwent examination by specialists in dermatology and were checked using relevant allergy testing. Patients were classified into one of the six diagnostic subgroups of HE: allergic contact dermatitis, irritant contact dermatitis, atopic... The classification system investigated in the present study was useful, being able to give an appropriate main diagnosis for 89% of HE patients, and for another 7% when using two main diagnoses. The fact that more than half of the patients had one or more additional diagnoses illustrates that HE is a multifactorial disease.

  18. The effects of clinical, epidemiological and economic aspects of changes in classification criteria of selected rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Aleksander J. Owczarek

    2014-06-01

Full Text Available The paper presents the epidemiology and socio-economic aspects of the three most common rheumatic diseases: rheumatoid arthritis (RA), systemic lupus erythematosus (SLE) and scleroderma. The incidence of rheumatic diseases in a population is estimated at 4–5%. The prevalence rate for RA in Poland is 0.45% of the adult population and is similar to the rate reported in the EU (0.49%). It is estimated that the average incidence of SLE is 40–55 per 100 thousand and that the annual incidence of systemic sclerosis is 19–35 cases per million (depending on the country). Nearly 18% of all hospital admissions in Poland are associated with rheumatic diseases. The introduction of new classification criteria for rheumatoid arthritis, allowing classification of the early forms of the disease, and their use in clinical practice will probably change the assessment of the incidence of this disease in the population.

  19. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  20. Optimizing Neuropsychological Assessments for Cognitive, Behavioral, and Functional Impairment Classification: A Machine Learning Study

    Directory of Open Access Journals (Sweden)

    Petronilla Battista

    2017-01-01

Full Text Available Subjects with Alzheimer’s disease (AD) show loss of cognitive functions and changes in behavioral and functional state affecting the quality of their daily life and that of their families and caregivers. A neuropsychological assessment plays a crucial role in detecting such changes from normal conditions. However, despite the existence of clinical measures that are used to classify and diagnose AD, a large amount of subjectivity continues to exist. Our aim was to assess the potential of machine learning in quantifying this process and optimizing or even reducing the amount of neuropsychological tests used to classify AD patients, also at an early stage of impairment. We investigated the role of twelve state-of-the-art neuropsychological tests in the automatic classification of subjects with no, mild, or severe impairment as measured by the clinical dementia rating (CDR). Data were obtained from the ADNI database. In the groups of measures used as features, we included measures of both cognitive domains and subdomains. Our findings show that some tests are more frequently the best predictors for the automatic classification, namely, LM, ADAS-Cog, AVLT, and FAQ, with a major role of the ADAS-Cog measures of delayed and immediate memory and the FAQ measure of financial competency.

  1. Catchment Classification: Connecting Climate, Structure and Function

    Science.gov (United States)

    Sawicz, K. A.; Wagener, T.; Sivapalan, M.; Troch, P. A.; Carrillo, G. A.

    2010-12-01

Hydrology does not yet possess a generally accepted catchment classification framework. Such a classification framework needs to: [1] give names to things, i.e. the main classification step, [2] permit transfer of information, i.e. regionalization of information, [3] permit development of generalizations, i.e. to develop new theory, and [4] provide a first order environmental change impact assessment, i.e., the hydrologic implications of climate, land use and land cover change. One strategy is to create a catchment classification framework based on the notion of catchment functions (partitioning, storage, and release). Results of an empirical study presented here connect climate and structure to catchment function (in the form of select hydrologic signatures), based on analyzing over 300 US catchments. Initial results indicate a wide assortment of signature relationships with properties of climate, geology, and vegetation. The uncertainty in the different regionalized signatures varies widely, and therefore there is variability in the robustness of classifying ungauged basins. This research provides insight into the controls of hydrologic behavior of a catchment, and enables a classification framework applicable to gauged and ungauged catchments across the study domain. This study sheds light on what we can expect to achieve in mapping climate, structure and function in a top-down manner. Results of this study complement work done using a bottom-up physically-based modeling framework to generalize this approach (Carrillo et al., this session).

  2. Classification of perovskites with supervised self-organizing maps

    International Nuclear Information System (INIS)

    Kuzmanovski, Igor; Dimitrovska-Lazova, Sandra; Aleksovska, Slobotka

    2007-01-01

In this work, supervised self-organizing maps were used for structural classification of perovskites. For this purpose, structural data for a total of 286 perovskites, belonging to the ABO3 and/or A2BB'O6 types, were collected from the literature: 130 of these are cubic, 85 orthorhombic and 71 monoclinic. For classification purposes, the effective ionic radii of the cations, the electronegativities of the cations in the B position, as well as the oxidation states of these cations, were used as input variables. The parameters of the developed models, as well as the most suitable variables for classification purposes, were selected using genetic algorithms. Two-thirds of all the compounds were used in the training phase. During the optimization process the performances of the models were checked using leave-1/10-out cross-validation. The performances of the obtained solutions were checked using a test set composed of the remaining one-third of the compounds. The obtained models for classification of these three classes of perovskite compounds show very good results. Namely, the classification of the compounds in the test set resulted in a small number of discrepancies (4.2-6.4%) between the actual crystallographic class and the one predicted by the models. All these results are strong arguments for the validity of supervised self-organizing maps for performing such types of classification. Therefore, the proposed procedure could be successfully used for crystallographic classification of perovskites in one of these three classes.

  3. Classification and pharmacological treatment of preschool wheezing : changes since 2008

    NARCIS (Netherlands)

    Brand, Paul L. P.; Caudri, Daan; Eber, Ernst; Gaillard, Erol A.; Garcia-Marcos, Luis; Hedlin, Gunilla; Henderson, John; Kuehni, Claudia E.; Merkus, Peter J. F. M.; Pedersen, Soren; Valiuis, Arunas; Wennergren, Goeran; Bush, Andrew

    Since the publication of the European Respiratory Society Task Force report in 2008, significant new evidence has become available on the classification and management of preschool wheezing disorders. In this report, an international consensus group reviews this new evidence and proposes some

  4. Classification of sudden and arrhythmic death

    DEFF Research Database (Denmark)

    Torp-Pedersen, C; Køber, L; Elming, H

    1997-01-01

Since all death is (eventually) sudden and associated with cardiac arrhythmias, the concept of sudden death is only meaningful if it is unexpected, while arrhythmic death is only meaningful if life could have continued had the arrhythmia been prevented or treated. Current classifications of death... or autopsy) are available in only a few percent of cases. A main problem in using classifications is the lack of validation data. This situation has, with the MADIT trial, changed in the case of the Thaler and Hinkle classification of arrhythmic death. The MADIT trial demonstrated that arrhythmic death... was nearly abolished by the implantable defibrillator, indicating that arrhythmic death by this classification is meaningful, at least in the population studied. For future investigations, a call is made for committees to present data in a way that allows the reader to examine the quality of the data used...

  5. Column Selection for Biomedical Analysis Supported by Column Classification Based on Four Test Parameters.

    Science.gov (United States)

    Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz

    2016-01-21

This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL) with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising the similarity of those parameters to those of a selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column and then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL value could be considered supportive in the choice of an appropriate column for biomedical analysis.
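The ranking idea, scoring each column by how close its four test parameters lie to those of a reference phase, can be sketched as below. The Euclidean combination and all parameter values are assumptions for illustration; the published KUL F-value uses its own weighting of the four parameters.

```python
import math

def f_value(column, reference):
    """Smaller value = four test parameters closer to the reference phase."""
    return math.sqrt(sum((c - r) ** 2 for c, r in zip(column, reference)))

# Hypothetical (hydrophobicity, silanol activity, ...) parameter quadruples.
columns = {
    "Phase A": (0.95, 0.10, 1.02, 0.30),
    "Phase B": (0.70, 0.55, 0.80, 0.90),
    "Phase C": (0.93, 0.12, 1.00, 0.33),
}
reference = (0.94, 0.11, 1.01, 0.31)
ranking = sorted(columns, key=lambda name: f_value(columns[name], reference))
```

The study's correlation question is then whether columns near the top of such a ranking actually deliver comparable resolution for moclobemide and its metabolites.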

  6. A Color-Texture-Structure Descriptor for High-Resolution Satellite Image Classification

    Directory of Open Access Journals (Sweden)

    Huai Yu

    2016-03-01

    Full Text Available Scene classification plays an important role in understanding high-resolution satellite (HRS) remotely sensed imagery. For remotely sensed scenes, both color information and texture information provide discriminative ability in classification tasks. In recent years, substantial performance gains in HRS image classification have been reported in the literature. One branch of research combines multiple complementary features based on various aspects such as texture, color and structure. Two methods are commonly used to combine these features: early fusion and late fusion. In this paper, we propose combining the two methods under a tree of regions and present a new descriptor to encode color, texture and structure features using a hierarchical structure, the Color Binary Partition Tree (CBPT), which we call the CTS descriptor. Specifically, we first build the hierarchical representation of HRS imagery using the CBPT. Then we quantize the texture and color features of dense regions. Next, we analyze and extract the co-occurrence patterns of regions based on the hierarchical structure. Finally, we encode local descriptors to obtain the final CTS descriptor and test its discriminative capability using object categorization and scene classification with HRS images. The proposed descriptor contains the spectral, textural and structural information of the HRS imagery and is also robust to changes in illuminant color, scale, orientation and contrast. The experimental results demonstrate that the proposed CTS descriptor achieves competitive classification results compared with state-of-the-art algorithms.

  7. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    Directory of Open Access Journals (Sweden)

    Alexei Kassian

    Full Text Available A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: the traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, the high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies.

  8. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    Science.gov (United States)

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). Due to certain reasons (first of all, high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing area for appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced the trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies.
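    The consonant-class (Levenshtein) cognate-marking step can be sketched roughly as follows. The class inventory and the word forms are toy stand-ins (Dolgopolsky-style sound classes use a larger inventory); only the mechanics of the normalised class-skeleton distance are shown.

```python
# Toy consonant-class mapping in the spirit of Dolgopolsky's sound
# classes (illustrative subset, not the full inventory).
CLASSES = {**{c: "P" for c in "pbf"}, **{c: "T" for c in "td"},
           **{c: "K" for c in "kgx"}, **{c: "S" for c in "sz"},
           **{c: "R" for c in "rl"}, **{c: "N" for c in "mn"}}

def class_string(word):
    """Reduce a word to its consonant-class skeleton, dropping vowels."""
    return "".join(CLASSES[ch] for ch in word if ch in CLASSES)

def levenshtein(a, b):
    """Plain dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def class_distance(w1, w2):
    """Normalised Levenshtein distance between consonant-class skeletons."""
    s1, s2 = class_string(w1), class_string(w2)
    return levenshtein(s1, s2) / max(len(s1), len(s2), 1)

# Made-up cognate-looking pair vs. an unrelated pair:
print(class_distance("pater", "fadar"))  # 0.0: identical skeleton "PTR"
print(class_distance("pater", "kuni"))   # 1.0: no shared structure
```

    Pairwise distances of this kind form the input matrix consumed by the distance-based methods (StarlingNJ, NJ, UPGMA).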

  9. Improved classification of Alzheimer's disease data via removal of nuisance variability.

    Directory of Open Access Journals (Sweden)

    Juha Koikkalainen

    Full Text Available Diagnosis of Alzheimer's disease is based on the results of neuropsychological tests and available supporting biomarkers such as the results of imaging studies. The results of the tests and the values of biomarkers are dependent on nuisance features, such as age and gender. In order to improve diagnostic power, the effects of the nuisance features have to be removed from the data. In this paper, four types of interactions between classification features and nuisance features were identified. Three methods were tested to remove these interactions from the classification data. In stratified analysis, a homogeneous subgroup was generated from the training set. The data correction method utilized a linear regression model to remove the effects of nuisance features from the data. The third method was a combination of these two. The methods were tested using all the baseline data from the Alzheimer's Disease Neuroimaging Initiative database in two classification studies: discriminating control subjects from Alzheimer's disease patients, and discriminating stable from progressive mild cognitive impairment subjects. The results show that both stratified analysis and data correction are able to statistically significantly improve the classification accuracy of several neuropsychological tests and imaging biomarkers. The improvements were especially large for the classification of stable and progressive mild cognitive impairment subjects, where the best improvements observed were 6 percentage points. The data correction method gave better results for imaging biomarkers, whereas stratified analysis worked well with the neuropsychological tests. In conclusion, the study shows that the excess variability caused by nuisance features should be removed from the data to improve classification accuracy and, therefore, the reliability of diagnosis making.

  10. Cognitive-motivational deficits in ADHD: development of a classification system.

    Science.gov (United States)

    Gupta, Rashmi; Kar, Bhoomika R; Srinivasan, Narayanan

    2011-01-01

    The classification systems developed so far to detect attention deficit/hyperactivity disorder (ADHD) do not have high sensitivity and specificity. We have developed a classification system based on several neuropsychological tests that measure cognitive-motivational functions specifically impaired in ADHD children. A total of 240 children (120 ADHD children and 120 healthy controls) in the age range of 6-9 years and 32 Oppositional Defiant Disorder (ODD) children (aged 9 years) participated in the study. Stop-Signal, Task-Switching, Attentional Network, and Choice Delay tests were administered to all participants. Receiver operating characteristic (ROC) analysis indicated that percentage choice of long-delay reward best discriminated ADHD children from healthy controls. Single parameters were not helpful in making a differential classification of ADHD from ODD. Multinomial logistic regression (MLR) performed with multiple parameters (data fusion) produced improved overall classification accuracy. A combination of stop-signal reaction time, post-error slowing, mean delay, switch cost, and percentage choice of long-delay reward produced an overall classification accuracy of 97.8%; with internal validation, the overall accuracy was 92.2%. Combining parameters from different tests of control functions not only enabled us to accurately classify ADHD children from healthy controls but also aided in making a differential classification with ODD. These results have implications for theories of ADHD.
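    For a single parameter, the ROC evaluation described above reduces to an AUC, computable via the Mann-Whitney statistic. The sketch below uses invented scores for illustration; it is not the study's data.

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count half)."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical "% choice of long-delay reward" values (ADHD children
# are reported to choose the delayed reward less often):
controls = [80, 75, 90, 85, 70]
adhd     = [40, 55, 35, 60, 50]
print(auc(controls, adhd))  # 1.0: perfect separation in this toy data
```

    Combining several such parameters in a multinomial logistic regression is what the study reports as raising overall accuracy.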

  11. Extension classification method for low-carbon product cases

    Directory of Open Access Journals (Sweden)

    Yanwei Zhao

    2016-05-01

    Full Text Available In product low-carbon design, intelligent decision systems integrated with certain classification algorithms recommend existing design cases to designers. However, these systems mostly depend on prior experience, and product designers not only expect to get a satisfactory case from an intelligent system but also hope to receive assistance in modifying unsatisfactory cases. In this article, we propose a new categorization method composed of static and dynamic classification based on extension theory. This classification method can be integrated into a case-based reasoning system to obtain accurate classification results and to inform designers of detailed information about unsatisfactory cases. First, we establish the static classification model for cases by means of a dependent function in a hierarchical structure. Then, for dynamic classification, we transform cases based on the case model, attributes, attribute values, and the dependent function, so that cases can undergo qualitative changes. Finally, the applicability of the proposed method is demonstrated through a case study of screw air compressor cases.

  12. Fuzzy set classifier for waste classification tracking

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1992-01-01

    We have developed an expert system based on fuzzy logic theory to fuse the data from multiple sensors and make classification decisions for objects in a waste reprocessing stream. Fuzzy set theory has been applied in decision and control applications with some success, particularly by the Japanese. We have found that the fuzzy logic system is rather easy to design and train, a feature that can cut development costs considerably. With proper training, the classification accuracy is quite high. We performed several tests sorting radioactive test samples using a gamma spectrometer to compare fuzzy logic to more conventional sorting schemes.
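    A minimal sketch of this kind of fuzzy-logic sensor fusion, assuming triangular membership functions, a min (AND) rule to fuse sensors per class, and a max rule to pick the class; the sensor ranges, class names, and thresholds are invented for illustration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical memberships for gamma counts per second:
def mu_low(counts):  return tri(counts, -1, 0, 50)
def mu_high(counts): return tri(counts, 30, 100, 1000)

def classify(gamma_counts, mass_grams):
    """Fuse two sensor readings per class with fuzzy AND (min),
    then decide with the max rule over fused memberships."""
    mu_mass_small = tri(mass_grams, -1, 0, 200)
    mu_mass_large = tri(mass_grams, 100, 500, 2000)
    scores = {
        "low-level":  min(mu_low(gamma_counts), mu_mass_small),
        "high-level": min(mu_high(gamma_counts), mu_mass_large),
    }
    return max(scores, key=scores.get)

print(classify(gamma_counts=10, mass_grams=50))    # low-level
print(classify(gamma_counts=120, mass_grams=600))  # high-level
```

    Training such a system amounts to tuning the membership-function breakpoints against labelled samples, which is part of why development is reported to be cheap.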

  13. Interventions to Educate Family Physicians to Change Test Ordering

    Directory of Open Access Journals (Sweden)

    Roger Edmund Thomas MD, PhD, CCFP, MRCGP

    2016-03-01

    Full Text Available The purpose is to systematically review randomised controlled trials (RCTs) aiming to change family physicians' laboratory test-ordering. We searched 15 electronic databases (no language/date limitations). We identified 29 RCTs (4,111 physicians, 175,563 patients). Six studies specifically focused on reducing unnecessary tests, 23 on increasing screening tests. Using Cochrane methodology, 48.5% of studies were at low risk of bias for randomisation, 7% for concealment of randomisation, 17% for blinding of participants/personnel, 21% for blinding of outcome assessors, 27.5% for attrition, and 93% for selective reporting. Only six studies were at low risk for both randomisation and attrition. Twelve studies performed a power computation, three an intention-to-treat analysis, and 13 statistically controlled for clustering. Unweighted averages were computed to compare intervention/control groups for tests assessed by >5 studies. The results were that fourteen studies assessed lipids (average 10% more tests than control), 14 diabetes (average 8% > control), 5 cervical smears, 2 INR, and one each thyroid, fecal occult blood, cotinine, throat swabs, testing after prescribing, and urine cultures. Six studies aimed to decrease test groups (average decrease 18%) and two to increase test groups. Intervention strategies: one study used education (no change); two feedback (one 5% increase, one 27% desired decrease); eight education + feedback (average increase in desired direction >control 4.9%); ten system change (average increase 14.9%); one system change + feedback (increases 5-44%); three education + system change (average increase 6%); three education + system change + feedback (average 7.7% increase); one delayed testing. The conclusions are that only six RCTs were assessed at low risk of bias for both randomisation and attrition. Nevertheless, despite methodological shortcomings, studies that found large changes (e.g. >20%) probably obtained real change.

  14. New risk markers may change the HeartScore risk classification significantly in one-fifth of the population

    DEFF Research Database (Denmark)

    Olsen, M H; Hansen, T W; Christensen, M K

    2008-01-01

    ...subjects with estimated risk below 5%. During the following 9.5 years the composite end point of cardiovascular death, non-fatal myocardial infarction or stroke (CEP) occurred in 204 subjects. CEP was predicted in all three groups by UACR (HRs: 2.1, 2.1 and 2.3 per 10-fold increase, all P...). ...CRP in subjects with low-moderate risk, and UACR and Nt-proBNP in subjects with known diabetes or cardiovascular disease, changed HeartScore risk classification significantly in 19% of the population.

  15. 75 FR 78213 - Proposed Information Collection; Comment Request; 2012 Economic Census Classification Report for...

    Science.gov (United States)

    2010-12-15

    ... 8-digit North American Industry Classification System (NAICS) based code for use in the 2012... classification due to changes in NAICS for 2012. Collecting this classification information will ensure the... the reporting burden on sampled sectors. Proper NAICS classification data ensures high quality...

  16. Scaling theory and the classification of phase transitions

    International Nuclear Information System (INIS)

    Hilfer, R.

    1992-01-01

    In this paper, the recent classification theory for phase transitions and its relation to the foundations of statistical physics is reviewed. First it is outlined how Ehrenfest's classification scheme can be generalized into a general thermodynamic classification theory for phase transitions. The classification theory implies scaling and multiscaling, thereby eliminating the need to postulate the scaling hypothesis as a fourth law of thermodynamics. The new classification has also led to the discovery and distinction of nonequilibrium transitions within equilibrium statistical physics. Nonequilibrium phase transitions are distinguished from equilibrium transitions by orders less than unity and by the fact that equilibrium thermodynamics and statistical mechanics become inapplicable at the critical point. The latter fact requires a change in the Gibbs assumption underlying the canonical and grand canonical ensembles in order to recover the thermodynamic description in the critical limit.

  17. Built-up Area Change Analysis in Hanoi Using Support Vector Machine Classification of Landsat Multi-Temporal Image Stacks and Population Data

    Directory of Open Access Journals (Sweden)

    Duong H. Nong

    2015-12-01

    Full Text Available In 1986, the Government of Vietnam implemented free market reforms known as Doi Moi (renovation that provided private ownership of farms and companies, and encouraged deregulation and foreign investment. Since then, the economy of Vietnam has achieved rapid growth in agricultural and industrial production, construction and housing, and exports and foreign investments, each of which have resulted in momentous landscape transformations. One of the most evident changes is urbanization and an accompanying loss of agricultural lands and open spaces. These rapid changes pose enormous challenges for local populations as well as planning authorities. Accurate and timely data on changes in built-up urban environments are essential for supporting sound urban development. In this study, we applied the Support Vector Machine classification (SVM to multi-temporal stacks of Landsat Thematic Mapper (TM and Enhanced Thematic Mapper Plus (ETM+ images from 1993 to 2010 to quantify changes in built-up areas. The SVM classification algorithm produced a highly accurate map of land cover change with an overall accuracy of 95%. The study showed that most urban expansion occurred in the periods 2001–2006 and 2006–2010. The analysis was strengthened by the incorporation of population and other socio-economic data. This study provides state authorities a means to examine correlations between urban growth, spatial expansion, and other socio-economic factors in order to not only assess patterns of urban growth but also become aware of potential environmental, social, and economic problems.

  18. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Full Text Available Feature space heterogeneity often exists in many real world data sets so that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH, to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, relevance of features in each cluster is calculated based on their variations in this cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and application to a database marketing problem show the efficiency and effectiveness of the proposed approach.

  19. Building and Solving Odd-One-Out Classification Problems: A Systematic Approach

    Science.gov (United States)

    Ruiz, Philippe E.

    2011-01-01

    Classification problems ("find the odd-one-out") are frequently used as tests of inductive reasoning to evaluate human or animal intelligence. This paper introduces a systematic method for building the set of all possible classification problems, followed by a simple algorithm for solving the problems of the R-ASCM, a psychometric test derived…

  20. Classification of right-hand grasp movement based on EMOTIV Epoc+

    Science.gov (United States)

    Tobing, T. A. M. L.; Prawito, Wijaya, S. K.

    2017-07-01

    Combinations of BCI elements for right-hand grasp movement have been obtained, providing the average values of their classification accuracy. The aim of this study is to find a suitable combination for the best classification accuracy of right-hand grasp movement based on the EEG headset EMOTIV Epoc+. There are three movement classifications: grasping hand, relax, and opening hand. These classifications take advantage of the Event-Related Desynchronization (ERD) phenomenon, which makes it possible to distinguish the relaxation, imagery, and movement states from each other. The combined elements are the use of Independent Component Analysis (ICA), spectrum analysis by Fast Fourier Transform (FFT), maximum mu and beta power with their frequencies as features, and the classifiers Probabilistic Neural Network (PNN) and Radial Basis Function (RBF). The average values of classification accuracy are ±83% for training and ±57% for testing. To give a better understanding of the signal quality recorded by the EMOTIV Epoc+, the classification accuracy for left- or right-hand grasping movement EEG signals (provided by Physionet) is also given, i.e., ±85% for training and ±70% for testing. A comparison of the accuracy values from each combination, experiment condition, and the external EEG data is provided for the purpose of analyzing classification accuracy.
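    The mu/beta band-power feature extraction mentioned above can be sketched as follows. A naive DFT keeps the snippet dependency-free (numpy.fft would normally be used), and the synthetic sinusoidal signal stands in for real EEG.

```python
import cmath, math

def band_peak(signal, fs, fmin, fmax):
    """Return (peak_power, peak_frequency) within [fmin, fmax] Hz,
    using a naive DFT over the positive-frequency bins."""
    n = len(signal)
    best = (0.0, 0.0)
    for k in range(1, n // 2):
        f = k * fs / n
        if fmin <= f <= fmax:
            coef = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                       for i, x in enumerate(signal))
            best = max(best, (abs(coef) ** 2 / n, f))
    return best

fs = 128  # the EMOTIV Epoc+ samples at 128 Hz
t = [i / fs for i in range(fs)]
# Synthetic signal: strong 10 Hz (mu band) plus weaker 20 Hz (beta band)
sig = [math.sin(2 * math.pi * 10 * ti) + 0.3 * math.sin(2 * math.pi * 20 * ti)
       for ti in t]
mu_power, mu_freq = band_peak(sig, fs, 8, 12)
beta_power, beta_freq = band_peak(sig, fs, 13, 30)
print(mu_freq, beta_freq)  # 10.0 20.0: the two feature frequencies
```

    The four numbers (maximum mu and beta power with their frequencies) then form the feature vector fed to the PNN or RBF classifier.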

  1. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are much useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  2. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-11-18

    ...-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing... regulations to allow for the addition of an optional cotton futures classification procedure--identified and... response to requests from the U.S. cotton industry and ICE, AMS will offer a futures classification option...

  3. Phylogenetic classification of the world’s tropical forests

    OpenAIRE

    Slik, J. W. Ferry; Franklin, Janet; Arroyo-Rodríguez, Víctor; Field, Richard; Aguilar, Salomon; Aguirre, Nikolay; Ahumada, Jorge; Aiba, Shin-Ichiro; Alves, Luciana F.; K, Anitha; Avella, Andres; Mora, Francisco; Aymard C., Gerardo A.; Báez, Selene; Balvanera, Patricia

    2018-01-01

    Identifying and explaining regional differences in tropical forest dynamics, structure, diversity, and composition are critical for anticipating region-specific responses to global environmental change. Floristic classifications are of fundamental importance for these efforts. Here we provide a global tropical forest classification that is explicitly based on community evolutionary similarity, resulting in identification of five major tropical forest regions and their relationships: (i) Indo-...

  4. On music genre classification via compressive sampling

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    Recent work \\cite{Chang2010} combines low-level acoustic features and random projection (referred to as ``compressed sensing'' in \\cite{Chang2010}) to create a music genre classification system showing an accuracy among the highest reported for a benchmark dataset. This not only contradicts previ...

  5. Automotive System for Remote Surface Classification.

    Science.gov (United States)

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and then principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The obtained results thereby demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.

  6. A comparative study of PCA, SIMCA and Cole model for classification of bioimpedance spectroscopy measurements.

    Science.gov (United States)

    Nejadgholi, Isar; Bolic, Miodrag

    2015-08-01

    Due to safety and low cost of bioimpedance spectroscopy (BIS), classification of BIS can be potentially a preferred way of detecting changes in living tissues. However, for longitudinal datasets linear classifiers fail to classify conventional Cole parameters extracted from BIS measurements because of their high variability. In some applications, linear classification based on Principal Component Analysis (PCA) has shown more accurate results. Yet, these methods have not been established for BIS classification, since PCA features have neither been investigated in combination with other classifiers nor have been compared to conventional Cole features in benchmark classification tasks. In this work, PCA and Cole features are compared in three synthesized benchmark classification tasks which are expected to be detected by BIS. These three tasks are classification of before and after geometry change, relative composition change and blood perfusion in a cylindrical organ. Our results show that in all tasks the features extracted by PCA are more discriminant than Cole parameters. Moreover, a pilot study was done on a longitudinal arm BIS dataset including eight subjects and three arm positions. The goal of the study was to compare different methods in arm position classification which includes all three synthesized changes mentioned above. Our comparative study on various classification methods shows that the best classification accuracy is obtained when PCA features are classified by a K-Nearest Neighbors (KNN) classifier. The results of this work suggest that PCA+KNN is a promising method to be considered for classification of BIS datasets that deal with subject and time variability. Copyright © 2015 Elsevier Ltd. All rights reserved.
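    A minimal sketch of the PCA+KNN combination the study favours, assuming a first-component-only PCA via power iteration and a one-dimensional score space; the BIS-like feature vectors and the two arm-position labels are invented for illustration.

```python
import math

def pca_first_component(X):
    """First principal component via power iteration on the covariance
    matrix (a stand-in for full PCA; numpy/sklearn would be typical)."""
    d = len(X[0])
    means = [sum(col) / len(X) for col in zip(*X)]
    Xc = [[x - m for x, m in zip(row, means)] for row in X]
    cov = [[sum(r[i] * r[j] for r in Xc) / len(Xc) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(100):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return means, v

def project(row, means, v):
    """Score of one sample on the first principal component."""
    return sum((x - m) * vi for x, m, vi in zip(row, means, v))

def knn_predict(train_scores, train_labels, score, k=3):
    """K-nearest neighbours by distance in the 1-D PCA score space."""
    order = sorted(range(len(train_scores)),
                   key=lambda i: abs(train_scores[i] - score))[:k]
    votes = [train_labels[i] for i in order]
    return max(set(votes), key=votes.count)

# Hypothetical BIS-derived feature vectors for two arm positions:
X = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8], [3.0, 5.0], [3.2, 5.3], [2.9, 4.8]]
y = ["down", "down", "down", "raised", "raised", "raised"]
means, v = pca_first_component(X)
scores = [project(r, means, v) for r in X]
print(knn_predict(scores, y, project([1.1, 2.0], means, v)))  # down
```

    In practice more components would be retained, but the pipeline shape (project, then vote among nearest neighbours) is the same.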

  7. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold, where the high-dimensional data lies. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples to classify the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
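    The classify-by-minimum-residual idea behind SR-based classification can be sketched in a drastically simplified form: instead of sparse coding over the whole training dictionary, each class residual below uses a single best-fitting training atom. The 4-band "spectra" and class names are invented.

```python
import math

def residual(y, x):
    """Least-squares residual of approximating y by a scalar multiple
    of the single atom x (one-atom stand-in for sparse coding)."""
    a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return math.sqrt(sum((yi - a * xi) ** 2 for xi, yi in zip(x, y)))

def src_lite(y, train):
    """Assign y to the class whose training samples approximate it with
    the smallest residual, as in SR-based classification's decision rule."""
    return min(train, key=lambda c: min(residual(y, x) for x in train[c]))

# Hypothetical 4-band pixel spectra for two land-cover classes:
train = {
    "water":      [[0.9, 0.7, 0.2, 0.1], [0.8, 0.6, 0.3, 0.1]],
    "vegetation": [[0.2, 0.4, 0.5, 0.9], [0.3, 0.4, 0.6, 0.8]],
}
print(src_lite([0.85, 0.65, 0.25, 0.12], train))  # water
```

    The full method codes each (LPP-projected) test pixel over all training samples at once with a sparsity constraint; only the minimum-approximation-error decision rule is reproduced here.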

  8. Sea ice classification using dual polarization SAR data

    International Nuclear Information System (INIS)

    Huiying, Liu; Huadong, Guo; Lu, Zhang

    2014-01-01

    Sea ice is an indicator of climate change and also a threat to the navigation safety of ships. Polarimetric SAR images are useful for sea ice detection and classification. In this paper, backscattering coefficients and texture features derived from dual polarization SAR images are used for sea ice classification. Firstly, the HH image is recalculated based on the angular dependences of sea ice types. Then the effective gray level co-occurrence matrix (GLCM) texture features are selected for support vector machine (SVM) classification. Finally, because sea ice concentration can provide a better separation of pancake ice from old ice, it is used to improve the SVM result. This method provides a good classification result compared with the sea ice chart from CIS.
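    The GLCM texture features feeding the SVM can be sketched as follows, assuming a single horizontal pixel offset and pre-quantised gray levels; the two toy patches are illustrative, not SAR data.

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """GLCM contrast feature: high for rough texture, low for smooth."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

smooth = [[1, 1, 1, 1]] * 4                 # uniform patch
rough  = [[0, 3, 0, 3], [3, 0, 3, 0]] * 2   # checker-like patch
print(contrast(glcm(smooth)), contrast(glcm(rough)))  # 0.0 9.0
```

    A handful of such statistics (contrast, homogeneity, entropy, ...) computed per window, together with the backscattering coefficients, would form the feature vector passed to the SVM.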

  9. Proposed International League Against Epilepsy Classification 2010: new insights.

    Science.gov (United States)

    Udani, Vrajesh; Desai, Neelu

    2014-09-01

    The International League Against Epilepsy (ILAE) Classification of Seizures of 1981 and the Classification of the Epilepsies of 1989 have been widely accepted the world over for the last three decades. Since then, there has been explosive growth in imaging, genetics and other fields in the epilepsies, which has changed many of our concepts. It was felt that a revision was in order, and hence the ILAE commissioned a group of experts who submitted the initial draft of this revised classification in 2010. This review focuses on the strengths and weaknesses of this newly proposed classification, especially in the context of a developing country.

  10. Mimicking human texture classification

    NARCIS (Netherlands)

    Rogowitz, B.E.; van Rikxoort, Eva M.; van den Broek, Egon; Pappas, T.N.; Schouten, Theo E.; Daly, S.J.

    2005-01-01

    In an attempt to mimic human (colorful) texture classification with a clustering algorithm, three lines of research were pursued, using as a test set 180 texture images (both their color and gray-scale equivalents) drawn from the OuTex and VisTex databases. First, a k-means algorithm was

  11. AUTOMATIC APPROACH TO VHR SATELLITE IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    P. Kupidura

    2016-06-01

    preliminary step of recalculation of pixel DNs to reflectance is required. Thanks to this, the proposed approach is in theory universal, and might be applied to images from different satellite systems and different acquisition dates. The test data consist of 3 Pleiades images captured on different dates. The research allowed optimal index values to be determined. Using the same parameters, we obtained very good accuracy in the extraction of 5 land cover/use classes: water, low vegetation, bare soil, wooded area and built-up area in all the test images (kappa from 87% to 96%). Importantly, even significant changes in parameter values did not cause a significant decline in classification accuracy, which demonstrates how robust the proposed method is.

  12. Mapping changes in the largest continuous Amazonian mangrove belt using object-based classification of multisensor satellite imagery

    Science.gov (United States)

    Nascimento, Wilson R.; Souza-Filho, Pedro Walfir M.; Proisy, Christophe; Lucas, Richard M.; Rosenqvist, Ake

    2013-01-01

    Mapping and monitoring mangrove ecosystems is a crucial objective for tropical countries, particularly where human disturbance occurs and because of uncertainties associated with sea level and climatic fluctuation. In many tropical regions, such efforts have focused largely on the use of optical data despite low capture rates because of persistent cloud cover. Recognizing the ability of Synthetic Aperture Radar (SAR) to provide cloud-free observations, this study investigated the use of JERS-1 SAR and ALOS PALSAR data, acquired in 1996 and 2008 respectively, for mapping the extent of mangroves along the Brazilian coastline, from east of the Amazon River mouth, Pará State, to the Bay of São José in Maranhão. For each year, an object-orientated classification of major land covers (mangrove, secondary vegetation, gallery and swamp forest, open water, intermittent lakes and bare areas) was performed, with the resulting maps then compared to quantify change. Comparison with available ground truth data indicated a general accuracy in the 2008 image classification of all land covers of 96% (kappa = 90.6%, tau = 92.6%). Over the 12-year period, the area of mangrove increased by 718.6 km², from 6705.0 km² to 7423.6 km², with 1931.0 km² of expansion and 1213.0 km² of erosion noted; 5493.0 km² remained unchanged in extent. The general accuracy relating to changes in mangroves was 83.3% (kappa = 66.1%, tau = 66.7%). The study confirmed that these mangroves constituted the largest continuous belt globally and were experiencing significant change because of the dynamic coastal environment and the influence of sedimentation from the Amazon River along the shoreline. The study recommends continued observations using combinations of SAR and optical data to establish trends in mangrove distributions and implications for the provision of ecosystem services (e.g., fish/invertebrate nurseries, carbon storage and coastal protection).

  13. Evaluation of soft segment modeling on a context independent phoneme classification system

    International Nuclear Information System (INIS)

    Razzazi, F.; Sayadiyan, A.

    2007-01-01

    The geometric distribution of state durations is one of the main performance-limiting assumptions of hidden Markov modeling of speech signals. Stochastic segment models in general, and segmental HMMs specifically, partly overcome this deficiency at the cost of more complexity in both the training and recognition phases. In addition to this assumption, the gradual temporal changes of speech statistics have not been modeled in HMMs. In this paper, a new duration modeling approach is presented. The main idea of the model is to consider the effect of adjacent segments on the probability density function estimation and evaluation of each acoustic segment. This idea not only makes the model robust against segmentation errors, but also models the gradual change from one segment to the next with a minimum set of parameters. The proposed idea is analytically formulated and tested on a TIMIT-based context-independent phoneme classification system. During the test procedure, phoneme classification of different phoneme classes was performed by applying the various proposed recognition algorithms. The system was optimized and the results were compared with a continuous density hidden Markov model (CDHMM) of similar computational complexity. The results show an 8-10% improvement in phoneme recognition rate in comparison with the standard continuous density hidden Markov model, indicating improved compatibility of the proposed model with the nature of speech. (author)

  14. Challenges to the Use of Artificial Neural Networks for Diagnostic Classifications with Student Test Data

    Science.gov (United States)

    Briggs, Derek C.; Circi, Ruhan

    2017-01-01

    Artificial Neural Networks (ANNs) have been proposed as a promising approach for the classification of students into different levels of a psychological attribute hierarchy. Unfortunately, because such classifications typically rely upon internally produced item response patterns that have not been externally validated, the instability of ANN…

  15. Multi-test decision tree and its application to microarray data classification.

    Science.gov (United States)

    Czajkowski, Marcin; Grześ, Marek; Kretowski, Marek

    2014-05-01

    A desirable property of tools used to investigate biological data is that they produce models and predictive decisions that are easy to understand. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. We propose a multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Experimental validation was performed on several real-life gene expression datasets. Comparison with eight classifiers shows that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it is highly competitive with ensemble learning algorithms. The proposed solution outperformed its baseline algorithm on 14 datasets by an average of 6%. A study performed on one of the datasets showed that the genes used in the MTDT classification model are supported by biological evidence in the literature. This paper introduces a new type of decision tree that is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts. Copyright © 2014 Elsevier B.V. All rights reserved.
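
    The core idea of a multi-test node (several univariate tests in one non-terminal node) can be illustrated with a small sketch. The tests, thresholds and the majority-vote routing rule below are our illustrative assumptions, not the exact MTDT algorithm from the paper.

```python
# Hypothetical multi-test split node: instead of a single univariate test,
# the node applies several thresholded feature tests and routes a sample
# by majority vote among them.

def multi_test_route(sample, tests):
    """Route a sample left (True) or right (False) by majority vote.

    tests -- list of (feature_index, threshold) univariate tests;
             a test "votes left" when sample[feature_index] <= threshold.
    """
    left_votes = sum(1 for idx, thr in tests if sample[idx] <= thr)
    return left_votes * 2 > len(tests)

# Three univariate tests standing in for co-ranked expression features.
tests = [(0, 0.5), (1, 1.2), (2, -0.3)]
print(multi_test_route([0.1, 0.9, -1.0], tests))  # all three vote left -> True
print(multi_test_route([0.9, 2.0, 0.0], tests))   # all vote right -> False
```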

  16. Application of FT-IR Classification Method in Silica-Plant Extracts Composites Quality Testing

    Science.gov (United States)

    Bicu, A.; Drumea, V.; Mihaiescu, D. E.; Purcareanu, B.; Florea, M. A.; Trică, B.; Vasilievici, G.; Draga, S.; Buse, E.; Olariu, L.

    2018-06-01

    Our present work concerns the validation and quality testing of mesoporous silica - plant extract composites, in order to support the standardization process of plant-based pharmaceutical products. The synthesis of the silica support was performed using a TEOS-based synthetic route with CTAB as a template, at room temperature and normal pressure. The silica support was analyzed by advanced characterization methods (SEM, TEM, BET, DLS and FT-IR) and loaded with standardized Calendula officinalis and Salvia officinalis extracts. Desorption studies were then performed in order to prove the sustained-release properties of the final materials. Intermediate and final product identification was performed by an FT-IR classification method, using the MID range of the IR spectra and statistically representative samples from repeated synthetic stages. The obtained results recommend this analytical method as a fast and cost-effective alternative to classic identification methods.

  17. High Dimensional Classification Using Features Annealed Independence Rules.

    Science.gov (United States)

    Fan, Jianqing; Fan, Yingying

    2008-01-01

    Classification using high-dimensional features arises frequently in many contemporary statistical studies, such as tumor classification using microarray or other high-throughput data. The impact of dimensionality on classification is, however, poorly understood. In a seminal paper, Bickel and Levina (2004) show that the Fisher discriminant performs poorly due to diverging spectra, and they propose using the independence rule to overcome the problem. We first demonstrate that even for the independence classification rule, classification using all the features can be as bad as random guessing, due to noise accumulation in estimating population centroids in high-dimensional feature space. In fact, we demonstrate further that almost all linear discriminants can perform as badly as random guessing. Thus, it is paramount to select a subset of important features for high-dimensional classification, resulting in Features Annealed Independence Rules (FAIR). The conditions under which all the important features can be selected by the two-sample t-statistic are established. The choice of the optimal number of features, or equivalently, the threshold value of the test statistics, is proposed based on an upper bound of the classification error. Simulation studies and real data analysis support our theoretical results and demonstrate convincingly the advantage of our new classification procedure.
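
    The feature screening step described above (ranking features by the two-sample t-statistic and keeping only the top few) can be sketched as follows. This is a simplified illustration of a FAIR-style screening rule, not the authors' full procedure; the data are invented.

```python
import math

def two_sample_t(x, y):
    """Welch-style two-sample t-statistic for a single feature."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

def select_features(class0, class1, m):
    """Keep the indices of the m features with the largest |t|.

    class0 / class1 -- lists of samples, each a list of feature values.
    """
    d = len(class0[0])
    scores = []
    for j in range(d):
        t = two_sample_t([s[j] for s in class0], [s[j] for s in class1])
        scores.append((abs(t), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:m]]

# Feature 0 separates the classes; feature 1 is noise.
class0 = [[0.0, 5.0], [0.2, 4.0], [0.1, 6.0]]
class1 = [[2.0, 5.1], [2.2, 4.2], [2.1, 5.9]]
print(select_features(class0, class1, 1))  # -> [0]
```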

  18. Subsurface event detection and classification using Wireless Signal Networks.

    Science.gov (United States)

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T

    2012-11-05

    Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which can significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations in radio signal strength among the distributed underground sensor nodes of a WSiN to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list and experimental data showing how radio propagation is affected by soil properties in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion, using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier has two steps: event detection and event classification. After event detection, the classifier classifies geo-events within the regions where they occur, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment using data measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
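
    A minimum distance classifier of the kind described assigns a window of measurements to the class whose mean vector is nearest. A minimal sketch (event names and RSS values are hypothetical, and the full method additionally includes Bayesian event detection):

```python
def minimum_distance_classify(window, class_means):
    """Assign a window of signal-strength readings to the nearest class mean.

    window      -- list of RSS measurements inside the classification window
    class_means -- {event_name: list of mean RSS values, same length}
    """
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(class_means, key=lambda c: sq_dist(window, class_means[c]))

# Hypothetical per-event mean RSS profiles (dBm) over three node pairs.
events = {
    "water_intrusion": [-62.0, -64.0, -63.0],
    "density_change":  [-70.0, -71.0, -69.0],
    "no_event":        [-55.0, -54.0, -56.0],
}
print(minimum_distance_classify([-63.0, -65.0, -62.0], events))  # water_intrusion
```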

  19. Classification of Strawberry Fruit Shape by Machine Learning

    Science.gov (United States)

    Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; Isobe, S.; Kochi, N.

    2018-05-01

    Shape is one of the most important traits of agricultural products due to its relationships with the quality, quantity, and value of the products. For strawberries, nine types of fruit shape were defined and classified by humans based on sample patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification and to introduce computerization into this field. Four types of descriptors were extracted from digital images of strawberries: (1) Measured Values (MVs), including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs); and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test along with the random forest approach, and eight of the nine shape types were successfully classified with combinations of MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to chain codes, and it showed higher robustness in classification than the other descriptors. Our results suggest that machine learning can classify fruit shapes accurately. We will attempt to increase the classification accuracy further and apply these machine learning methods to other plant species.

  20. NEW CLASSIFICATION OF ECOPOLICES

    Directory of Open Access Journals (Sweden)

    VOROBYOV V. V.

    2016-09-01

    Full Text Available Problem statement. Ecopolises are the newest stage of urban planning. They have to be considered as material-energy-informational structures, included in the dynamic-evolutionary matrix networks of exchange processes in ecosystems. However, no ecopolis classifications have been developed on the basis of such approaches, and this determines the topicality of the article. Analysis of publications on theoretical and applied aspects of ecopolis formation showed that work on them proceeds mainly in the context of the latest scientific and technological achievements in various fields of knowledge. These settlements are technocratic. They are connected with the morphology of space and the network structures of regional and local natural ecosystems; lacking independent stability, they cannot exist without continuous human support. In other words, they do not live up to the ecopolis idea. An objective, symbiotic search for an ecopolis concept, together with the development of classifications, is overdue. Purpose statement: to develop an objective rationale for ecopolises and to propose a new classification of them. Conclusion. The basis of the ecopolis classification should be the idea of correlating the elements of their general plans and the types of human activity with the natural mechanisms of receiving, reworking and transmitting matter, energy and information between geo-ecosystems, the planet, man, the material part of the ecopolis and the Cosmos. The new ecopolis classification should be based on the principles of multi-dimensional, time-spaced symbiotic coherence with ecosystem exchange networks. With this approach, the ecopolis function derives not from a subjective anthropocentric economy but from a holistic, objective paradigm of Genesis; or, otherwise, not from the Consequence, but from the Cause.

  1. A proposed United States resource classification system

    International Nuclear Information System (INIS)

    Masters, C.D.

    1980-01-01

    Energy is a world-wide problem calling for world-wide communication to resolve the many supply and distribution problems. Essential to such communication are definitions and the comparability of the elements being communicated. The US Geological Survey, with the co-operation of the US Bureau of Mines and the US Department of Energy, has devised a classification system for all mineral resources, the principles of which, it is felt, offer the possibility of world communication. At present several other systems, extant or under development (Potential Gas Committee of the USA, United Nations Resource Committee, and the American Society for Testing and Materials), are internally consistent and provide easy communication linkage. The system in use by the uranium community in the United States of America, however, ties resource quantities to forward-cost dollar values, rendering them inconsistent with other classifications and therefore not comparable. This paper develops the rationale for the new USGS resource classification and notes its benefits relative to a forward-cost classification and its relationship specifically to other current classifications. (author)

  2. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance: it is robust to gray-scale variations, rotation changes and noise, yet has a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
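
    The median-comparison idea behind MRELBP can be illustrated with a toy single-scale code: each neighbour's patch median is thresholded against the centre patch median, which makes the bit pattern insensitive to isolated noisy pixels. The patch layout and values below are illustrative only, not the full multiscale descriptor.

```python
def median(vals):
    """Median of a list (average of the two middle values for even length)."""
    s = sorted(vals)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

def median_lbp_code(patches, center_patch):
    """Toy MRELBP-style code: set bit i when the median of neighbour
    patch i is at least the median of the centre patch."""
    c = median(center_patch)
    code = 0
    for bit, patch in enumerate(patches):
        if median(patch) >= c:
            code |= 1 << bit
    return code

center = [10, 12, 11, 9, 200]  # one noisy pixel (200) barely moves the median
neighbours = [[8, 9, 7], [14, 15, 13], [10, 11, 12], [5, 6, 4],
              [20, 19, 21], [11, 10, 9], [3, 2, 4], [16, 17, 15]]
print(median_lbp_code(neighbours, center))  # -> 150
```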

  3. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    Energy Technology Data Exchange (ETDEWEB)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); McEwen, Jason D., E-mail: dr.michelle.lochner@gmail.com [Mullard Space Science Laboratory, University College London, Surrey RH5 6NT (United Kingdom)

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k -nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
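
    The AUC metric used above can be computed directly from the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive example scores above a randomly chosen negative one, counting ties as one half. A small self-contained sketch:

```python
def auc(labels, scores):
    """Area under the ROC curve via pairwise comparison of positive and
    negative scores (O(n^2); fine for an illustration)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

print(auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # perfect ranking -> 1.0
print(auc([1, 0, 1, 0], [0.9, 0.8, 0.3, 0.1]))  # one inversion -> 0.75
```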

  4. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    International Nuclear Information System (INIS)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.; McEwen, Jason D.

    2016-01-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k -nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  5. A New Method for Solving Supervised Data Classification Problems

    Directory of Open Access Journals (Sweden)

    Parvaneh Shabanzadeh

    2014-01-01

    Full Text Available Supervised data classification is one of the techniques used to extract nontrivial information from data. Classification is widely used in various fields, including data mining, industry, medicine, science, and law. This paper considers a new algorithm for supervised data classification problems associated with cluster analysis. The mathematical formulation of this algorithm is based on nonsmooth, nonconvex optimization. A new algorithm for solving this optimization problem is employed; it uses a derivative-free technique and is both robust and efficient. To improve classification performance and the efficiency of generating the classification model, a new feature selection algorithm based on convex programming techniques is suggested. The proposed methods are tested on real-world datasets. Results of numerical experiments are presented that demonstrate the effectiveness of the proposed algorithms.

  6. A Comparison of Machine Learning Methods in a High-Dimensional Classification Problem

    Directory of Open Access Journals (Sweden)

    Zekić-Sušac Marijana

    2014-09-01

    Full Text Available Background: Large-dimensional data modelling often relies on variable reduction methods in the pre-processing and post-processing stages. However, such reduction usually provides less information and yields lower model accuracy. Objectives: The aim of this paper is to assess the high-dimensional classification problem of recognizing the entrepreneurial intentions of students by machine learning methods. Methods/Approach: Four methods were tested on the same dataset: artificial neural networks, CART classification trees, support vector machines, and k-nearest neighbour, in order to compare their efficiency in terms of classification accuracy. The performance of each method was compared on ten subsamples in a 10-fold cross-validation procedure, computing the sensitivity and specificity of each model. Results: The artificial neural network model based on a multilayer perceptron yielded a higher classification rate than the models produced by the other methods. A pairwise t-test showed a statistically significant difference between the artificial neural network and the k-nearest neighbour model, while the differences among the other methods were not statistically significant. Conclusions: The tested machine learning methods learn fast and achieve high classification accuracy. However, further improvement can be achieved by testing a few additional methodological refinements of the machine learning methods.
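
    The 10-fold cross-validation procedure mentioned above splits the data so that every sample is tested exactly once. A minimal index-splitting sketch (contiguous folds, no shuffling or stratification):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds; each fold serves once
    as the test set and the remaining folds form the training set."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, folds[i]))
    return splits

for train, test in k_fold_indices(10, 5):
    print(len(train), test)  # 8 training indices, 2 test indices per fold
```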

  7. Visualization and classification in biomedical terahertz pulsed imaging

    International Nuclear Information System (INIS)

    Loeffler, Torsten; Siebert, Karsten; Czasch, Stephanie; Bauer, Tobias; Roskos, Hartmut G

    2002-01-01

    'Visualization' in imaging is the process of extracting useful information from raw data in such a way that meaningful physical contrasts are developed. 'Classification' is the subsequent process of defining parameter ranges which allow us to identify elements of images such as different tissues or different objects. In this paper, we explore techniques for visualization and classification in terahertz pulsed imaging (TPI) for biomedical applications. For archived (formalin-fixed, alcohol-dehydrated and paraffin-mounted) test samples, we investigate both time- and frequency-domain methods based on bright- and dark-field TPI. Successful tissue classification is demonstrated

  8. Exploring different approaches for music genre classification

    Directory of Open Access Journals (Sweden)

    Antonio Jose Homsi Goulart

    2012-07-01

    Full Text Available In this letter, we present different approaches for music genre classification. The proposed techniques, which are composed of a feature extraction stage followed by a classification procedure, explore both the variations of parameters used as input and the classifier architecture. Tests were carried out with three styles of music, namely blues, classical, and lounge, which are considered informally by some musicians as being “big dividers” among music genres, showing the efficacy of the proposed algorithms and establishing a relationship between the relevance of each set of parameters for each music style and each classifier. In contrast to other works, entropies and fractal dimensions are the features adopted for the classifications.
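
    The entropy features adopted for classification can be illustrated with a minimal amplitude-histogram sketch. The binning scheme and parameters here are our illustrative assumptions, not the authors' exact feature set.

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=8):
    """Shannon entropy (in bits) of a signal's amplitude histogram."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A uniform sweep fills all bins equally (maximal entropy for 8 bins: 3 bits);
# a constant signal has 0 bits of entropy.
print(round(shannon_entropy([i / 16 for i in range(16)]), 3))  # -> 3.0
print(abs(shannon_entropy([1.0] * 10)))                        # -> 0.0
```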

  9. A simple semi-automatic approach for land cover classification from multispectral remote sensing imagery.

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    Full Text Available Land cover data represent a fundamental data source for various types of scientific research. The classification of land cover based on satellite data is a challenging task, and an efficient classification method is needed. In this study, an automatic scheme is proposed for the classification of land use from multispectral remote sensing images based on change detection and a semi-supervised classifier. The satellite image can be automatically classified using only the prior land cover map and existing images; human involvement is therefore reduced to a minimum, ensuring the operability of the method. The method was tested in the Qingpu District of Shanghai, China. Using Environment Satellite 1 (HJ-1) images of 2009 with 30 m spatial resolution, the areas were classified into five main types of land cover based on previous land cover data and spectral features. The results agreed well with validation land cover maps, with a Kappa value of 0.79 and statistical area biases in proportion of less than 6%. This study proposed a simple semi-automatic approach for land cover classification using prior maps, with satisfactory accuracy, which integrates the accuracy of visual interpretation and the performance of automatic classification methods. The method can conveniently be used for land cover mapping in areas lacking ground reference information or for identifying regions of rapid land cover change (such as rapid urbanization).

  10. Classification Accuracy Is Not Enough

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A recent review of the research literature evaluating music genre recognition (MGR) systems over the past two decades shows that most works (81%) measure the capacity of a system to recognize genre by its classification accuracy. We show here, by implementing and testing three categorically…

  11. Conceptual Scoring and Classification Accuracy of Vocabulary Testing in Bilingual Children

    Science.gov (United States)

    Anaya, Jissel B.; Peña, Elizabeth D.; Bedore, Lisa M.

    2018-01-01

    Purpose: This study examined the effects of single-language and conceptual scoring on the vocabulary performance of bilingual children with and without specific language impairment. We assessed classification accuracy across 3 scoring methods. Method: Participants included Spanish-English bilingual children (N = 247) aged 5;1 (years;months) to…

  12. Automated Decision Tree Classification of Corneal Shape

    Science.gov (United States)

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification

  13. Differential Classification of Dementia

    Directory of Open Access Journals (Sweden)

    E. Mohr

    1995-01-01

    Full Text Available In the absence of biological markers, dementia classification remains complex both in terms of characterization as well as early detection of the presence or absence of dementing symptoms, particularly in diseases with possible secondary dementia. An empirical, statistical approach using neuropsychological measures was therefore developed to distinguish demented from non-demented patients and to identify differential patterns of cognitive dysfunction in neurodegenerative disease. Age-scaled neurobehavioral test results (Wechsler Adult Intelligence Scale-Revised and Wechsler Memory Scale) from Alzheimer's (AD) and Huntington's (HD) patients, matched for intellectual disability, as well as normal controls were used to derive a classification formula. Stepwise discriminant analysis accurately (99% correct) distinguished controls from demented patients, and separated the two patient groups (79% correct). Variables discriminating between HD and AD patient groups consisted of complex psychomotor tasks, visuospatial function, attention and memory. The reliability of the classification formula was demonstrated with a new, independent sample of AD and HD patients which yielded virtually identical results (classification accuracy for dementia: 96%; AD versus HD: 78%). To validate the formula, the discriminant function was applied to Parkinson's (PD) patients, 38% of whom were classified as demented. The validity of the classification was demonstrated by significant PD subgroup differences on measures of dementia not included in the discriminant function. Moreover, a majority of demented PD patients (65%) were classified as having an HD-like pattern of cognitive deficits, in line with previous reports of the subcortical nature of PD dementia. This approach may thus be useful in classifying presence or absence of dementia and in discriminating between dementia subtypes in cases of secondary or coincidental dementia.

  14. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data affects the performance of text categorization and raises requirements that existing classification methods cannot satisfy. Based on the study of existing text classification technology and semantics, this paper puts forward a SAW (Structural Auxiliary Word) algorithm oriented to Chinese text classification. The algorithm uses the special space effect of Chinese text where words...

  15. APPLICATION OF SENSOR FUSION TO IMPROVE UAV IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Jabari

    2017-08-01

    Full Text Available Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.
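A minimal sketch of pan/colour fusion of the kind this record describes, using a Brovey-style ratio transform on hypothetical arrays. The paper's actual fusion method is not specified here; this only illustrates how a pan band can inject spatial detail into upsampled colour bands.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4x4 low-resolution RGB patch upsampled to 8x8,
# plus an 8x8 high-resolution panchromatic band (synthetic values).
rgb = np.repeat(np.repeat(rng.uniform(0.2, 0.8, size=(4, 4, 3)), 2, axis=0), 2, axis=1)
pan = rng.uniform(0.2, 0.8, size=(8, 8))

# Brovey-style fusion: rescale each colour band by the ratio of the
# pan intensity to the mean colour intensity at every pixel.
intensity = rgb.mean(axis=2)
fused = rgb * (pan / intensity)[:, :, None]

# The fused image inherits the pan band's spatial detail: its per-pixel
# mean intensity now equals the panchromatic measurement exactly.
print(np.allclose(fused.mean(axis=2), pan))
```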

  16. Application of Sensor Fusion to Improve Uav Image Classification

    Science.gov (United States)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher quality images and accordingly higher accuracy classification results.

  17. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density

    DEFF Research Database (Denmark)

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My

    2014-01-01

    BACKGROUND: In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally, and has not been validated before. PURPOSE: To compare the Danish...... dichotomous mammographic density classification system from 1991 to 2001 with the density BI-RADS classifications, in an attempt to validate the Danish classification system. MATERIAL AND METHODS: The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001, which tested false positive......, and which were in 2012 re-assessed and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification as fatty or mixed/dense and the four-level BI-RADS classification by the linear weighted Kappa statistic. RESULTS...
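The linear weighted Kappa statistic used in this record can be computed directly from a rater-by-rater confusion matrix. The ratings below are hypothetical, and this sketch compares two raters on a single four-level scale rather than the paper's two-level versus four-level comparison.

```python
import numpy as np

def linear_weighted_kappa(a, b, n_cat):
    """Inter-rater agreement with linear disagreement weights."""
    # Observed joint-rating proportions
    O = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    # Linear weights: penalty grows with the distance between categories
    idx = np.arange(n_cat)
    W = np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)
    # Expected proportions under chance agreement (product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical re-ratings of 12 mammograms on a four-level density scale
rater1 = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3]
rater2 = [0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 2]
kappa = linear_weighted_kappa(rater1, rater2, 4)
print(f"linear weighted kappa: {kappa:.2f}")
```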

  18. Random forests for classification in ecology

    Science.gov (United States)

    Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.

    2007-01-01

    Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature. ?? 2007 by the Ecological Society of America.
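The record's comparison of random forests with other classifiers, including its use of variable importance, can be sketched with scikit-learn. The habitat covariates and presence/absence labels below are synthetic stand-ins for the field data described in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic species-presence data: two informative habitat variables
# plus three noise variables (hypothetical stand-ins for field covariates).
n = 300
X = rng.normal(size=(n, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X, y, cv=5)  # cross-validated accuracy
rf.fit(X, y)

print("cv accuracy:", scores.mean().round(3))
print("variable importance:", rf.feature_importances_.round(3))
```

As in the abstract, the importance ranking recovers the variables that actually drive presence: the two informative columns dominate the noise columns.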

  19. Round-Robin Test of Paraffin Phase-Change Material

    Science.gov (United States)

    Vidi, S.; Mehling, H.; Hemberger, F.; Haussmann, Th.; Laube, A.

    2015-11-01

    A round-robin test between three institutes was performed on a paraffin phase-change material (PCM) in the context of the German quality association for phase-change materials. The aim of the quality association is to define quality and test specifications for PCMs and to award certificates for successfully tested materials. To ensure the reproducibility and comparability of the measurements performed at different institutes using different measuring methods, a round-robin test was performed. The sample was unknown. The four methods used by the three participating institutes in the round-robin test were differential scanning calorimetry, Calvet calorimetry, three-layer calorimetry, and T-history measurements. The aim of the measurements was the determination of the enthalpy as a function of temperature. The results achieved following defined test specifications are in excellent agreement.

  20. New Classification of Focal Cortical Dysplasia: Application to Practical Diagnosis

    Science.gov (United States)

    Bae, Yoon-Sung; Kang, Hoon-Chul; Kim, Heung Dong; Kim, Se Hoon

    2012-01-01

    Background and Purpose: Malformation of cortical development (MCD) is a well-known cause of drug-resistant epilepsy and focal cortical dysplasia (FCD) is the most common neuropathological finding in surgical specimens from drug-resistant epilepsy patients. Palmini’s classification proposed in 2004 is now widely used to categorize FCD. Recently, however, Blumcke et al. recommended a new system for classifying FCD in 2011. Methods: We applied the new classification system in practical diagnosis of a sample of 117 patients who underwent neurosurgical operations due to drug-resistant epilepsy at Severance Hospital in Seoul, Korea. Results: Among 117 cases, a total of 16 cases were shifted to other FCD subtypes under the new classification system. Five cases were reclassified to type IIIa and five cases were categorized as dual pathology. The other six cases were changed within the type I category. Conclusions: The most remarkable changes in the new classification system are the advent of dual pathology and FCD type III. Thus, it will be very important for pathologists and clinicians to discriminate between these new categories. More large-scale research needs to be conducted to elucidate the clinical influence of the alterations within the classification of type I disease. Although the new FCD classification system has several advantages compared to the former, the correlation with clinical characteristics is not yet clear. PMID:24649461

  1. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  2. Optimization of Support Vector Machine (SVM) for Object Classification

    Science.gov (United States)

    Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    The Support Vector Machine (SVM) is a powerful algorithm, useful in classifying data into species. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single kernel SVM known as SVMlight, and a modified version known as an SVM with K-Means Clustering were used. These SVM algorithms were tested as classifiers under varying conditions. Image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential as classifiers. Results demonstrate the reliability of SVM as a method for classification. From trial to trial, SVM produces consistent results.
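The record's test of SVM consistency under varying noise can be sketched with scikit-learn's `SVC`. The feature vectors and noise levels below are synthetic; the abstract's SVMlight and K-Means variants are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Two hypothetical target classes in a 10-dimensional feature space
base0 = rng.normal(0.0, 1.0, size=(100, 10))
base1 = rng.normal(2.0, 1.0, size=(100, 10))
X = np.vstack([base0, base1])
y = np.array([0] * 100 + [1] * 100)

# Re-classify under increasing additive "image" noise
for noise in (0.0, 0.5, 1.0):
    Xn = X + rng.normal(0.0, noise, size=X.shape)
    clf = SVC(kernel="linear").fit(Xn[::2], y[::2])   # train on half
    acc = clf.score(Xn[1::2], y[1::2])                # test on the rest
    print(f"noise={noise}: accuracy={acc:.2f}")
```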

  3. Machine learning for radioxenon event classification for the Comprehensive Nuclear-Test-Ban Treaty

    Energy Technology Data Exchange (ETDEWEB)

    Stocki, Trevor J., E-mail: trevor_stocki@hc-sc.gc.c [Radiation Protection Bureau, 775 Brookfield Road, A.L. 6302D1, Ottawa, ON, K1A 1C1 (Canada); Li, Guichong; Japkowicz, Nathalie [School of Information Technology and Engineering, University of Ottawa, 800 King Edward Avenue, Ottawa, ON, K1N 6N5 (Canada); Ungar, R. Kurt [Radiation Protection Bureau, 775 Brookfield Road, A.L. 6302D1, Ottawa, ON, K1A 1C1 (Canada)

    2010-01-15

    A method of weapon detection for the Comprehensive nuclear-Test-Ban-Treaty (CTBT) consists of monitoring the amount of radioxenon in the atmosphere by measuring and sampling the activity concentration of {sup 131m}Xe, {sup 133}Xe, {sup 133m}Xe, and {sup 135}Xe by radionuclide monitoring. Several explosion samples were simulated based on real data since the measured data of this type is quite rare. These data sets consisted of different circumstances of a nuclear explosion, and are used as training data sets to establish an effective classification model employing state-of-the-art technologies in machine learning. A study was conducted involving classic induction algorithms in machine learning including Naive Bayes, Neural Networks, Decision Trees, k-Nearest Neighbors, and Support Vector Machines, that revealed that they can successfully be used in this practical application. In particular, our studies show that many induction algorithms in machine learning outperform a simple linear discriminator when a signal is found in a high radioxenon background environment.

  4. Machine learning for radioxenon event classification for the Comprehensive Nuclear-Test-Ban Treaty

    International Nuclear Information System (INIS)

    Stocki, Trevor J.; Li, Guichong; Japkowicz, Nathalie; Ungar, R. Kurt

    2010-01-01

    A method of weapon detection for the Comprehensive nuclear-Test-Ban-Treaty (CTBT) consists of monitoring the amount of radioxenon in the atmosphere by measuring and sampling the activity concentration of 131m Xe, 133 Xe, 133m Xe, and 135 Xe by radionuclide monitoring. Several explosion samples were simulated based on real data since the measured data of this type is quite rare. These data sets consisted of different circumstances of a nuclear explosion, and are used as training data sets to establish an effective classification model employing state-of-the-art technologies in machine learning. A study was conducted involving classic induction algorithms in machine learning including Naive Bayes, Neural Networks, Decision Trees, k-Nearest Neighbors, and Support Vector Machines, that revealed that they can successfully be used in this practical application. In particular, our studies show that many induction algorithms in machine learning outperform a simple linear discriminator when a signal is found in a high radioxenon background environment.

  5. Real-time, resource-constrained object classification on a micro-air vehicle

    Science.gov (United States)

    Buck, Louis; Ray, Laura

    2013-12-01

    A real-time embedded object classification algorithm is developed through the novel combination of binary feature descriptors, a bag-of-visual-words object model and the cortico-striatal loop (CSL) learning algorithm. The BRIEF, ORB and FREAK binary descriptors are tested and compared to SIFT descriptors with regard to their respective classification accuracies, execution times, and memory requirements when used with CSL on a 12.6 g ARM Cortex embedded processor running at 800 MHz. Additionally, the effect of χ² feature mapping and opponent-color representations used with these descriptors is examined. These tests are performed on four data sets of varying sizes and difficulty, and the BRIEF descriptor is found to yield the best combination of speed and classification accuracy. Its use with CSL achieves accuracies between 67% and 95% of those achieved with SIFT descriptors and allows for the embedded classification of a 128x192 pixel image in 0.15 seconds, 60 times faster than classification with SIFT. χ² mapping is found to provide substantial improvements in classification accuracy for all of the descriptors at little cost, while opponent-color descriptors offer accuracy improvements only on colorful datasets.
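The χ² feature mapping mentioned in this record is commonly approximated by the closely related Hellinger (square-root) map, which lets a cheap linear classifier behave like one with an additive χ²-type kernel on bag-of-visual-words histograms. The sketch below uses that square-root stand-in on hypothetical visual-word counts; it is not the paper's exact mapping.

```python
import numpy as np

def hellinger_map(hist):
    """L1-normalise a bag-of-visual-words histogram, then take the
    square root, so dot products on the result approximate an
    additive (chi-square/Hellinger-type) kernel on the raw counts."""
    h = np.asarray(hist, dtype=float)
    h = h / h.sum()
    return np.sqrt(h)

# Hypothetical visual-word count histograms for two images
a = hellinger_map([8, 1, 1, 0])
b = hellinger_map([7, 2, 1, 0])
print("similarity:", float(a @ b))
```

Each mapped histogram is a unit vector, so the dot product is a bounded similarity, which is part of why the mapping costs so little at classification time.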

  6. Functional classifications for cerebral palsy: correlations between the gross motor function classification system (GMFCS), the manual ability classification system (MACS) and the communication function classification system (CFCS).

    Science.gov (United States)

    Compagnone, Eliana; Maniglio, Jlenia; Camposeo, Serena; Vespino, Teresa; Losito, Luciana; De Rinaldis, Marta; Gennaro, Leonarda; Trabacca, Antonio

    2014-11-01

    This study aimed to investigate a possible correlation between the gross motor function classification system-expanded and revised (GMFCS-E&R), the manual abilities classification system (MACS) and the communication function classification system (CFCS) functional levels in children with cerebral palsy (CP) by CP subtype. It also aimed to verify whether there is a correlation between these classification systems and intellectual functioning (IF) and parental socio-economic status (SES). A total of 87 children (47 males and 40 females, age range 4-18 years, mean age 8.9±4.2) were included in the study. A strong correlation was found between the three classifications: Level V of the GMFCS-E&R corresponds to Level V of the MACS (rs=0.67, p=0.001); the same relationship was found for the CFCS and the MACS (rs=0.73, p<0.001) and for the GMFCS-E&R and the CFCS (rs=0.61, p=0.001). The correlations between the IQ and the global functional disability profile were strong or moderate (GMFCS and IQ: rs=0.66, p=0.001; MACS and IQ: rs=0.58, p=0.001; CFCS and IQ: rs=0.65, p=0.001). The Kruskal-Wallis test was used to determine if there were differences between the GMFCS-E&R, the CFCS and the MACS by CP type. CP types showed different scores for the IQ level (Chi-square=8.59, df=2, p=0.014), the GMFCS-E&R (Chi-square=36.46, df=2, p<0.001), the CFCS (Chi-square=12.87, df=2, p=0.002), and the MACS Level (Chi-square=13.96, df=2, p<0.001) but no significant differences emerged for the SES (Chi-square=1.19, df=2, p=0.554). This study shows how the three functional classifications (GMFCS-E&R, CFCS and MACS) complement each other to provide a better description of the functional profile of CP. The systematic evaluation of the IQ can provide useful information about a possible future outcome for every functional level. The SES does not appear to affect functional profiles. Copyright © 2014 Elsevier Ltd. All rights reserved.
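The two statistics this record relies on, Spearman's rank correlation and the Kruskal-Wallis test, are both available in SciPy. The ordinal functional levels and subtype labels below are simulated, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical ordinal functional levels (I-V coded 1-5) for 87 children
n = 87
gmfcs = rng.integers(1, 6, size=n)
# A second scale correlated with the first by construction
macs = np.clip(gmfcs + rng.integers(-1, 2, size=n), 1, 5)

rho, p = stats.spearmanr(gmfcs, macs)
print(f"Spearman rs={rho:.2f}, p={p:.3g}")

# Kruskal-Wallis: do GMFCS levels differ across three hypothetical CP subtypes?
subtype = rng.integers(0, 3, size=n)
H, p_kw = stats.kruskal(*(gmfcs[subtype == k] for k in range(3)))
print(f"Kruskal-Wallis H={H:.2f}, p={p_kw:.3g}")
```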

  7. IRIS COLOUR CLASSIFICATION SCALES--THEN AND NOW.

    Science.gov (United States)

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale, developed in 1843, there have been numerous attempts to classify iris colour. In past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability over time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software have all accomplished objective, accurate iris colour classification, but are quite expensive and limited in use to research environments. Iris colour classification systems have evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of existing scales, up until the present there has been no generally accepted iris colour classification scale.

  8. Empirical evaluation of data normalization methods for molecular classification.

    Science.gov (United States)

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated in independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.
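The record does not name the three normalization methods it compares, so as an illustration of the general idea, here is quantile normalization, one widely used method for removing location/scale handling effects from array data. The expression matrix and the handling effect are simulated.

```python
import numpy as np

def quantile_normalize(X):
    """Quantile normalization: force every sample (column) to share the
    same empirical distribution, namely the mean of the sorted columns."""
    order = np.argsort(X, axis=0)
    ranks = np.argsort(order, axis=0)          # rank of each value in its column
    mean_sorted = np.sort(X, axis=0).mean(axis=1)
    return mean_sorted[ranks]                  # replace values by rank-matched means

rng = np.random.default_rng(5)
# Hypothetical expression matrix: 100 markers x 6 arrays, with a
# location/scale handling effect injected into the last three arrays.
X = rng.normal(0, 1, size=(100, 6))
X[:, 3:] = X[:, 3:] * 2.0 + 1.0
Xn = quantile_normalize(X)
print("column means after:", Xn.mean(axis=0).round(3))
```

After normalization every array has exactly the same value distribution, which is the strongest form of the adjustment the abstract evaluates.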

  9. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied for recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NN). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (features number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: combination of feature generation techniques; application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and use of suitable NN design and learning method.
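The dimensionality-reduction step this record mentions (PCA on extracted texture features) can be sketched in plain NumPy via the SVD. The feature matrix below is synthetic, built so most variance lies in a low-dimensional subspace; it is not the cork-tile data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical texture-feature matrix: 60 cork-tile images x 12 features,
# constructed so that most variance lies in a 2-D latent subspace.
latent = rng.normal(size=(60, 2))
mixing = rng.normal(size=(2, 12))
X = latent @ mixing + 0.05 * rng.normal(size=(60, 12))

# PCA via SVD of the centred data; keep components explaining >95% variance
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = s**2 / (s**2).sum()                       # variance explained per component
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
reduced = Xc @ Vt[:k].T                          # scores fed to the NN classifier
print("components kept:", k, "of", X.shape[1])
```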

  10. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “Concept/conceptualization,”“categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle...... and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  11. Content Abstract Classification Using Naive Bayes

    Science.gov (United States)

    Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin

    2018-03-01

    This study aims to classify abstracts based on the most frequently used words in the abstracts of English-language journals. The research uses a text mining system that extracts text data to search for information in a set of documents. A total of 120 abstracts were downloaded from www.computer.org. The data are grouped into three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system is built using the naive Bayes algorithm to classify abstracts, with a feature selection process using term weighting to give weight to each word. Dimension reduction techniques remove words that rarely appear in each document, with reduction parameters tested from 10% to 90% of the 5,344 words. The performance of the classification system is tested using a confusion matrix on the training and test data. The results showed that the best classification was obtained with 75% of the total data used for training and 25% for testing. Accuracy rates for the DM, ITS and MM categories were 100%, 100% and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
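The multinomial naive Bayes classification this record describes fits in a few lines of plain Python. The three category keyword lists below are invented toy training data, not the study's 120 abstracts, and term weighting/dimension reduction are omitted for brevity.

```python
import math
from collections import Counter

# Tiny hypothetical training set: abstract keywords per category
train = [
    ("DM",  "data mining cluster pattern mining"),
    ("ITS", "traffic vehicle road sensor transport"),
    ("MM",  "video audio image multimedia stream"),
]

class_words = {c: Counter(text.split()) for c, text in train}
vocab = {w for counts in class_words.values() for w in counts}

def classify(text):
    # Multinomial naive Bayes with Laplace (add-one) smoothing and
    # uniform class priors; out-of-vocabulary words are skipped.
    scores = {}
    for c, counts in class_words.items():
        total = sum(counts.values())
        scores[c] = sum(
            math.log((counts[w] + 1) / (total + len(vocab)))
            for w in text.split() if w in vocab
        )
    return max(scores, key=scores.get)

print(classify("mining frequent pattern data"))   # → DM
```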

  12. Crop Classification by Polarimetric SAR

    DEFF Research Database (Denmark)

    Skriver, Henning; Svendsen, Morten Thougaard; Nielsen, Flemming

    1999-01-01

    Polarimetric SAR-data of agricultural fields have been acquired by the Danish polarimetric L- and C-band SAR (EMISAR) during a number of missions at the Danish agricultural test site Foulum during 1995. The data are used to study the classification potential of polarimetric SAR data using...

  13. Memristive Perceptron for Combinational Logic Classification

    Directory of Open Access Journals (Sweden)

    Lidan Wang

    2013-01-01

    Full Text Available The resistance of the memristor depends upon the past history of the input current or voltage, so it can function as a synapse in neural networks. In this paper, a novel perceptron combined with the memristor is proposed to implement combinational logic classification. The relationship between the memristive conductance change and the synapse weight update is deduced, and the memristive perceptron model and its synaptic weight update rule are explored. The feasibility of the novel memristive perceptron for implementing combinational logic classification (NAND, NOR, XOR, and NXOR) is confirmed by MATLAB simulation.
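The perceptron learning rule underlying this record can be sketched without any memristor model: the weight update below plays the role the paper assigns to memristive conductance change, but uses plain numbers. NAND is linearly separable, so a single perceptron learns it (XOR would need more than one layer).

```python
import numpy as np

def train_perceptron(samples, epochs=20, lr=0.1):
    # Hard-threshold perceptron; w holds two input weights plus a bias.
    w = np.zeros(3)
    for _ in range(epochs):
        for x1, x2, target in samples:
            x = np.array([x1, x2, 1.0])
            y = 1 if w @ x > 0 else 0
            w += lr * (target - y) * x  # "synaptic" weight update
    return w

nand = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
w = train_perceptron(nand)
preds = [1 if w @ np.array([a, b, 1.0]) > 0 else 0 for a, b, _ in nand]
print(preds)  # learned NAND truth table
```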

  14. TESTING THE GENERALIZATION EFFICIENCY OF OIL SLICK CLASSIFICATION ALGORITHM USING MULTIPLE SAR DATA FOR DEEPWATER HORIZON OIL SPILL

    Directory of Open Access Journals (Sweden)

    C. Ozkan

    2012-07-01

    Full Text Available Marine oil spills due to releases of crude oil from tankers, offshore platforms, drilling rigs and wells, etc. seriously affect the fragile marine and coastal ecosystem and cause political and environmental concern. A catastrophic explosion and subsequent fire in the Deepwater Horizon oil platform caused the platform to burn and sink, and oil leaked continuously between April 20th and July 15th of 2010, releasing about 780,000 m3 of crude oil into the Gulf of Mexico. Today, space-borne SAR sensors are extensively used for the detection of oil spills in the marine environment, as they are independent of sunlight, not affected by cloudiness, and more cost-effective than air patrolling due to covering large areas. In this study, the generalization extent of an object-based classification algorithm was tested for oil spill detection using multiple SAR imagery data. Among many geometrical, physical and textural features, some of the more distinctive ones were selected to distinguish oil slicks and look-alike objects from each other. The tested classifier was constructed from a Multilayer Perceptron Artificial Neural Network trained by the ABC, LM and BP optimization algorithms. The training data for the classifier were constituted from SAR data covering an oil spill that originated off Lebanon in 2007. The classifier was then applied to the Deepwater Horizon oil spill data in the Gulf of Mexico on RADARSAT-2 and ALOS PALSAR images to demonstrate the generalization efficiency of the oil slick classification algorithm.

  15. Failure diagnosis using deep belief learning based health state classification

    International Nuclear Information System (INIS)

    Tamilselvan, Prasanna; Wang, Pingfeng

    2013-01-01

    Effective health diagnosis provides multifarious benefits such as improved safety, improved reliability and reduced costs for operation and maintenance of complex engineered systems. This paper presents a novel multi-sensor health diagnosis method using deep belief network (DBN). DBN has recently become a popular approach in machine learning for its promised advantages such as fast inference and the ability to encode richer and higher order network structures. The DBN employs a hierarchical structure with multiple stacked restricted Boltzmann machines and works through a layer by layer successive learning process. The proposed multi-sensor health diagnosis methodology using DBN based state classification can be structured in three consecutive stages: first, defining health states and preprocessing sensory data for DBN training and testing; second, developing DBN based classification models for diagnosis of predefined health states; third, validating DBN classification models with a testing sensory dataset. Health diagnosis using DBN based health state classification technique is compared with four existing diagnosis techniques. Benchmark classification problems and two engineering health diagnosis applications: aircraft engine health diagnosis and electric power transformer health diagnosis are employed to demonstrate the efficacy of the proposed approach.
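The stacked-RBM structure this record describes can be approximated in scikit-learn by chaining `BernoulliRBM` layers (greedy layer-by-layer feature learning) in front of a simple classifier. The binarized sensor readings and two health states below are simulated, and this sketch omits the fine-tuning a full DBN would add.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(6)

# Hypothetical binarized sensor readings for two health states:
# state 1 turns the first 10 sensors on with high probability,
# the last 10 sensors are uninformative noise.
n = 400
y = rng.integers(0, 2, size=n)
p_on = np.where(y[:, None] == 1, 0.8, 0.2)
probs = np.hstack([p_on.repeat(10, axis=1), np.full((n, 10), 0.5)])
X = (rng.random((n, 20)) < probs).astype(float)

# Two stacked RBMs as an unsupervised feature learner, followed by a
# logistic-regression health-state classifier.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X[:300], y[:300])
acc = model.score(X[300:], y[300:])
print(f"held-out accuracy: {acc:.2f}")
```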

  16. Towards Automatic Classification of Wikipedia Content

    Science.gov (United States)

    Szymański, Julian

    Wikipedia - the Free Encyclopedia encounters the problem of proper classification of new articles every day. The process of assigning articles to categories is performed manually and is a time-consuming task. It requires knowledge of the Wikipedia structure that is beyond typical editor competence, which leads to human-caused mistakes: omitted or wrong assignments of articles to categories. The article presents an application of an SVM classifier for automatic classification of documents from The Free Encyclopedia. The classifier application has been tested with two text representations: inter-document connections (hyperlinks) and word content. The results of the experiments, evaluated on hand-crafted data, show that the Wikipedia classification process can be partially automated. The proposed approach can be used for building a decision support system which suggests to editors the best categories that fit new content entered into Wikipedia.

  17. Subsurface Event Detection and Classification Using Wireless Signal Networks

    Directory of Open Access Journals (Sweden)

    Muhannad T. Suleiman

    2012-11-01

    Full Text Available Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list and experimental data on how radio propagation is affected by soil properties in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The concept of WSiNs for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier for wireless signal networks has two steps: event detection and event classification. After event detection, the window-based classifier classifies geo-events within the event-occurrence regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
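The window-based minimum distance classification this record proposes can be sketched as follows: slide a window over the received-signal-strength trace and assign each window to the nearest class mean. The class means, dBm values, and event names below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Hypothetical per-class mean received signal strength (dBm) for three
# geo-events; illustrative values only.
class_means = {
    "baseline":        -60.0,
    "water intrusion": -72.0,
    "soil movement":   -66.0,
}

def classify_window(window):
    # Minimum-distance rule: nearest class mean to the window average
    m = np.mean(window)
    return min(class_means, key=lambda c: abs(m - class_means[c]))

# Sliding 3-sample windows over a hypothetical RSS trace that drifts
# downward as water intrusion develops
trace = np.array([-60, -61, -59, -65, -67, -71, -73, -72])
labels = [classify_window(trace[i:i + 3]) for i in range(len(trace) - 2)]
print(labels)
```

With equal-variance Gaussian class models and equal priors, this minimum-distance rule coincides with the Bayesian decision rule the abstract invokes.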

  18. Mineral resources of Slovakia, questions of classification and valuation

    Directory of Open Access Journals (Sweden)

    Baláž Peter

    1999-06-01

    Full Text Available According to the Constitution of the Slovak Republic, the mineral resources of Slovakia are the property of the Slovak Republic. In 1997, 721 exclusive mineral deposits of mineral fuels, metals and industrial minerals were registered in Slovakia. The classification of reserves/resources as economic or uneconomic requires annual updating to reflect changes in market mineral prices and mine production costs. For the economic valuation of mineral resources, the new United Nations international classification of reserves/resources appears to be a promising alternative. Changes to geological and mining legislation are necessary for a realistic valuation of Slovak mineral resources.

  19. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... Service 7 CFR Part 27 [AMS-CN-13-0043] RIN 0581-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed rule. SUMMARY: The... optional cotton futures classification procedure--identified and known as ``registration'' by the U.S...

  20. Testing a bedside personal computer Clinical Care Classification System for nursing students using Microsoft Access.

    Science.gov (United States)

    Feeg, Veronica D; Saba, Virginia K; Feeg, Alan N

    2008-01-01

    This study tested a personal computer-based version of the Sabacare Clinical Care Classification System on students' performance of charting patient care plans. The application was designed as an inexpensive alternative to teach electronic charting for use on any laptop or personal computer with Windows and Microsoft Access. The data-based system was tested in a randomized trial with the control group using a type-in text-based-only system also mounted on a laptop at the bedside in the laboratory. Student care plans were more complete using the data-based system over the type-in text version. Students were more positive but not necessarily more efficient with the data-based system. The results demonstrate that the application is effective for improving student nursing care charting using the nursing process and capturing patient care information with a language that is standardized and ready for integration with other patient electronic health record data. It can be implemented on a bedside stand in the clinical laboratory or used to aggregate care planning over a student's clinical experience.

  1. Computerized Adaptive Tests Detect Change Following Orthopaedic Surgery in Youth with Cerebral Palsy.

    Science.gov (United States)

    Mulcahey, M J; Slavin, Mary D; Ni, Pengsheng; Vogel, Lawrence C; Kozin, Scott H; Haley, Stephen M; Jette, Alan M

    2015-09-16

    The Cerebral Palsy Computerized Adaptive Test (CP-CAT) is a parent-reported outcomes instrument for measuring lower and upper-extremity function, activity, and global health across impairment levels and a broad age range of children with cerebral palsy (CP). This study was performed to examine whether the Lower Extremity/Mobility (LE) CP-CAT detects change in mobility following orthopaedic surgery in children with CP. This multicenter, longitudinal study involved administration of the LE CP-CAT, the Pediatric Outcomes Data Collection Instrument (PODCI) Transfer/Mobility and Sports/Physical Functioning domains, and the Timed "Up & Go" test (TUG) before and after elective orthopaedic surgery in a convenience sample of 255 children, four to twenty years of age, who had CP and a Gross Motor Function Classification System (GMFCS) level of I, II, or III. Standardized response means (SRMs) and 95% confidence intervals (CIs) were calculated for all measures at six, twelve, and twenty-four months following surgery. SRM estimates for the LE CP-CAT were significantly greater than the SRM estimates for the PODCI Transfer/Mobility domain at twelve months, the PODCI Sports/Physical Functioning domain at twelve months, and the TUG at twelve and twenty-four months. When the results for the children at GMFCS levels I, II, and III were grouped together, the improvements in function detected by the LE CP-CAT at twelve and twenty-four months were found to be greater than the changes detected by the PODCI Transfer/Mobility and Sports/Physical Functioning scales. The LE CP-CAT outperformed the PODCI scales for GMFCS levels I and III at both of these follow-up intervals; none of the scales performed well for patients with GMFCS level II. The results of this study showed that the LE CP-CAT displayed greater sensitivity to change than the PODCI and TUG scales after musculoskeletal surgery in children with CP. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
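    The standardized response mean used in this study is the mean of the pre-to-post change scores divided by their standard deviation. A minimal sketch with invented scores (not study data):

    ```python
    from statistics import mean, stdev

    def standardized_response_mean(pre, post):
        """SRM = mean(change) / SD(change); a larger magnitude indicates
        greater sensitivity to change between the two assessments."""
        change = [b - a for a, b in zip(pre, post)]
        return mean(change) / stdev(change)

    # Illustrative pre/post scores for six subjects (not the study's data).
    pre  = [42.0, 38.5, 45.2, 40.1, 39.7, 44.0]
    post = [47.5, 43.0, 49.8, 44.9, 45.1, 48.2]
    print(round(standardized_response_mean(pre, post), 2))
    ```

    In practice, confidence intervals around the SRM (as the study reports) would be obtained by bootstrapping the change scores.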

  2. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Full Text Available Abstract Background The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance.
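    The kTSP family of classifiers is attractive for cross-study classification because its decision rule only compares the relative order of two genes within the same sample, making it invariant to monotone per-sample normalization differences between platforms. A minimal single-pair (k = 1) sketch on toy data, not the authors' implementation:

    ```python
    def tsp_score(expr, labels, i, j):
        """Score of gene pair (i, j): |P(x_i < x_j | class 0) - P(x_i < x_j | class 1)|.
        Rank-based, so it is unaffected by monotone per-sample transformations."""
        def frac(cls):
            rows = [x for x, y in zip(expr, labels) if y == cls]
            return sum(x[i] < x[j] for x in rows) / len(rows)
        return abs(frac(0) - frac(1))

    def tsp_predict(sample, i, j, vote_if_less):
        """Predict a class by comparing the two genes within the sample itself."""
        return vote_if_less if sample[i] < sample[j] else 1 - vote_if_less

    # Toy expression matrix (rows = samples, columns = genes); labels 0/1.
    expr = [[1.0, 5.0, 2.0], [0.5, 4.0, 1.0], [6.0, 1.0, 3.0], [5.0, 0.5, 2.5]]
    labels = [0, 0, 1, 1]
    print(tsp_score(expr, labels, 0, 1))  # genes 0 and 1 perfectly separate the classes
    ```

    The full kTSP classifier selects the k highest-scoring disjoint pairs on training data and takes a majority vote of their per-pair predictions.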

  3. ILAE Classification of the Epilepsies Position Paper of the ILAE Commission for Classification and Terminology

    Science.gov (United States)

    Scheffer, Ingrid E; Berkovic, Samuel; Capovilla, Giuseppe; Connolly, Mary B; French, Jacqueline; Guilhoto, Laura; Hirsch, Edouard; Jain, Satish; Mathern, Gary W.; Moshé, Solomon L; Nordli, Douglas R; Perucca, Emilio; Tomson, Torbjörn; Wiebe, Samuel; Zhang, Yue-Hua; Zuberi, Sameer M

    2017-01-01

    Summary The ILAE Classification of the Epilepsies has been updated to reflect our gain in understanding of the epilepsies and their underlying mechanisms following the major scientific advances which have taken place since the last ratified classification in 1989. As a critical tool for the practising clinician, epilepsy classification must be relevant and dynamic to changes in thinking, yet robust and translatable to all areas of the globe. Its primary purpose is for diagnosis of patients, but it is also critical for epilepsy research, development of antiepileptic therapies and communication around the world. The new classification originates from a draft document submitted for public comments in 2013 which was revised to incorporate extensive feedback from the international epilepsy community over several rounds of consultation. It presents three levels, starting with seizure type where it assumes that the patient is having epileptic seizures as defined by the new 2017 ILAE Seizure Classification. After diagnosis of the seizure type, the next step is diagnosis of epilepsy type, including focal epilepsy, generalized epilepsy, combined generalized and focal epilepsy, and also an unknown epilepsy group. The third level is that of epilepsy syndrome where a specific syndromic diagnosis can be made. The new classification incorporates etiology along each stage, emphasizing the need to consider etiology at each step of diagnosis as it often carries significant treatment implications. Etiology is broken into six subgroups, selected because of their potential therapeutic consequences. New terminology is introduced such as developmental and epileptic encephalopathy. The term benign is replaced by the terms self-limited and pharmacoresponsive, to be used where appropriate. It is hoped that this new framework will assist in improving epilepsy care and research in the 21st century. PMID:28276062

  4. Age group classification and gender detection based on forced expiratory spirometry.

    Science.gov (United States)

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and support vector machine (SVM) classifiers. In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments have been evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rate of both the GMM and SVM methods based on the FES test is more than 99.3 % and 96.8 % for gender and age group classification, respectively.
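    As a rough illustration of the GMM-based stage, reduced here to a single Gaussian per class on one hypothetical feature (say, FEV1 in liters; the class parameters below are invented, not fitted to the paper's database), a test subject is assigned to the class with the highest class-conditional log-likelihood:

    ```python
    import math

    def gaussian_logpdf(x, mu, sigma):
        """Log density of a univariate Gaussian."""
        return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

    # Illustrative class-conditional parameters (mean, SD) for one feature.
    # A single Gaussian per class is the 1-component special case of a GMM.
    PARAMS = {"male": (4.2, 0.6), "female": (3.1, 0.5)}

    def classify(x):
        return max(PARAMS, key=lambda c: gaussian_logpdf(x, *PARAMS[c]))

    print(classify(4.5), classify(2.9))  # 4.5 L -> male, 2.9 L -> female
    ```

    A real GMM would sum several weighted components per class and be trained with expectation-maximization on the extracted feature vectors.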

  5. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document was derived from the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification organizes, in an internationally consistent way, all refrigerants used worldwide through a codification that corresponds to their chemical composition. This note explains the codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, the case of mixtures). (J.S.)
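    For the methane and ethane series, the codification can be unpacked mechanically: in an R-xyz designation, the hundreds digit is the number of carbon atoms minus one (omitted when zero), the tens digit is the number of hydrogen atoms plus one, and the units digit is the number of fluorine atoms; remaining bonds of the saturated molecule are filled by chlorine. A simplified sketch (ignoring isomer suffix letters, bromine, and the 400/500 mixture series):

    ```python
    def decode_refrigerant(number):
        """Decode a methane/ethane-series ASHRAE 34 designation (e.g. 134 -> C2H2F4).
        Simplified sketch: ignores isomer suffixes, bromine and blend series."""
        digits = f"{number:03d}"
        c = int(digits[0]) + 1   # carbons = hundreds digit + 1
        h = int(digits[1]) - 1   # hydrogens = tens digit - 1
        f = int(digits[2])       # fluorines = units digit
        cl = (2 * c + 2) - h - f  # saturated compound: remaining bonds are Cl
        parts = [("C", c), ("H", h), ("Cl", cl), ("F", f)]
        return "".join(f"{el}{n if n > 1 else ''}" for el, n in parts if n > 0)

    print(decode_refrigerant(134))  # C2H2F4 (tetrafluoroethane family)
    print(decode_refrigerant(22))   # CHClF2 (chlorodifluoromethane)
    ```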

  6. A new classification of post-sternotomy dehiscence

    Science.gov (United States)

    Anger, Jaime; Dantas, Daniel Chagas; Arnoni, Renato Tambellini; Farsky, Pedro Sílvio

    2015-01-01

    Dehiscence after median transsternal sternotomy, the surgical access for cardiac surgery, is one of its complications and increases patient morbidity and mortality. A variety of surgical techniques have recently been described, creating the need for a classification that brings a measure of objectivity to the management of these complex and dangerous wounds. The existing classifications are based on the primary causal infection, but recently the anatomical description of the wound, including its depth and vertical extension, has proved more useful. We propose a new classification based only on the anatomical changes following sternotomy dehiscence and chronic wound formation, separating it into four types according to depth and two sub-groups according to vertical extension, based on the inferior insertion of the pectoralis major muscle. PMID:25859875

  7. The effect of time on EMG classification of hand motions in able-bodied and transradial amputees

    DEFF Research Database (Denmark)

    Waris, Asim; Niazi, Imran Khan; Jamil, Mohsin

    2018-01-01

    While several studies have demonstrated the short-term performance of pattern recognition systems, long-term investigations are very limited. In this study, we investigated changes in classification performance over time. Ten able-bodied individuals and six amputees took part in this study. EMG ... was computed for all possible combinations between the days. For all subjects, surface sEMG (7.2 ± 7.6%), iEMG (11.9 ± 9.1%) and cEMG (4.6 ± 4.8%) were significantly different (P ... as the difference between training and testing day increases. Furthermore, for iEMG, performance in amputees was directly proportional to the size of the residual limb.

  8. Classification as a generic tool for characterising status and changes of regional scale groundwater systems

    Science.gov (United States)

    Barthel, Roland; Haaf, Ezra

    2016-04-01

    the behavior of groundwater systems. It is based on the hypothesis that similar groundwater systems respond similarly to similar impacts. At its core is the classification of (i) static hydrogeological characteristics (such as aquifer geometry and hydraulic properties), (ii) dynamic changes of the boundary conditions (such as recharge, water levels in surface waters), and (iii) dynamic groundwater system responses (groundwater head and chemical parameters). The dependencies of system responses on explanatory variables are used to map knowledge from observed locations to areas without measurements. Classification of static and dynamic system features combined with information about known system properties and their dependencies provide insight into system behavior that cannot be directly derived through the analysis of raw data. Classification and dependency analysis could finally lead to a new framework for groundwater system assessment on the regional scale as a replacement or supplement to numerical groundwater models and catchment scale hydrological models. This contribution focusses on the main hydrogeological concepts underlying the approach while another EGU contribution (Haaf and Barthel, 2016) explains the methodologies used to classify groundwater systems. References: Barthel, R., 2014. A call for more fundamental science in regional hydrogeology. Hydrogeol J, 22(3): 507-510. Barthel, R., Banzhaf, S., 2016. Groundwater and Surface Water Interaction at the Regional-scale - A Review with Focus on Regional Integrated Models. Water Resour Manag, 30(1): 1-32. Haaf, E., Barthel, R., 2016. An approach for classification of hydrogeological systems at the regional scale based on groundwater hydrographs. Abstract submitted to EGU General Assembly 2016, Vienna, Austria.

  9. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions.

    Science.gov (United States)

    Noirhomme, Quentin; Lesenfants, Damien; Gomez, Francisco; Soddu, Andrea; Schrouff, Jessica; Garraux, Gaëtan; Luxen, André; Phillips, Christophe; Laureys, Steven

    2014-01-01

    Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not appropriate. In contrast, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real data from a brain-computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
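    The recommended permutation test can be sketched as follows: shuffle the labels many times, re-run the full classification pipeline on each shuffled copy, and take the p-value as the fraction of permutations reaching at least the observed accuracy. The sketch below uses a trivial threshold classifier and invented data as a stand-in for a cross-validated pipeline:

    ```python
    import random

    def permutation_pvalue(features, labels, accuracy_fn, n_perm=1000, seed=0):
        """Compare observed accuracy against accuracies obtained after shuffling
        the labels; p-value = fraction of permuted runs doing at least as well."""
        rng = random.Random(seed)
        observed = accuracy_fn(features, labels)
        count = 0
        for _ in range(n_perm):
            shuffled = labels[:]
            rng.shuffle(shuffled)
            if accuracy_fn(features, shuffled) >= observed:
                count += 1
        return (count + 1) / (n_perm + 1)  # add-one correction avoids p = 0

    # Toy "classifier": threshold a single feature at 0. In a real analysis the
    # whole cross-validated pipeline would be re-run for every permutation.
    def accuracy(xs, ys):
        return sum((x > 0) == y for x, y in zip(xs, ys)) / len(ys)

    xs = [-2.0, -1.5, -0.3, 0.4, 1.1, 2.2]
    ys = [False, False, False, True, True, True]
    print(permutation_pvalue(xs, ys, accuracy))
    ```

    Because the permutation null distribution is built with the same cross-validation scheme as the observed result, it avoids the bias the abstract attributes to the binomial test.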

  10. Classification versus inference learning contrasted with real-world categories.

    Science.gov (United States)

    Jones, Erin L; Ross, Brian H

    2011-07-01

    Categories are learned and used in a variety of ways, but the research focus has been on classification learning. Recent work contrasting classification with inference learning of categories found important later differences in category performance. However, theoretical accounts differ on whether this is due to an inherent difference between the tasks or to the implementation decisions. The inherent-difference explanation argues that inference learners focus on the internal structure of the categories--what each category is like--while classification learners focus on diagnostic information to predict category membership. In two experiments, using real-world categories and controlling for earlier methodological differences, inference learners learned more about what each category was like than did classification learners, as evidenced by higher performance on a novel classification test. These results suggest that there is an inherent difference between learning new categories by classifying an item versus inferring a feature.

  11. Reliability and Validity of a New Test of Change-of-Direction Speed for Field-Based Sports: the Change-of-Direction and Acceleration Test (CODAT).

    Science.gov (United States)

    Lockie, Robert G; Schultz, Adrian B; Callaghan, Samuel J; Jeffriess, Matthew D; Berry, Simon P

    2013-01-01

    Field sport coaches must use reliable and valid tests to assess change-of-direction speed in their athletes. Few tests feature linear sprinting with acute change-of-direction maneuvers. The Change-of-Direction and Acceleration Test (CODAT) was designed to assess field sport change-of-direction speed, and includes a linear 5-meter (m) sprint, 45° and 90° cuts, 3-m sprints to the left and right, and a linear 10-m sprint. This study analyzed the reliability and validity of this test, through comparisons to 20-m sprint (0-5, 0-10, 0-20 m intervals) and Illinois agility run (IAR) performance. Eighteen Australian footballers (age = 23.83 ± 7.04 yrs; height = 1.79 ± 0.06 m; mass = 85.36 ± 13.21 kg) were recruited. Following familiarization, subjects completed the 20-m sprint, CODAT, and IAR in 2 sessions, 48 hours apart. Intra-class correlation coefficients (ICC) assessed relative reliability. Absolute reliability was analyzed through paired samples t-tests (p ≤ 0.05) determining between-session differences. Typical error (TE), coefficient of variation (CV), and differences between the TE and smallest worthwhile change (SWC), also assessed absolute reliability and test usefulness. For the validity analysis, Pearson's correlations (p ≤ 0.05) analyzed between-test relationships. Results showed no between-session differences for any test (p = 0.19-0.86). CODAT time averaged ~6 s, and the ICC and CV equaled 0.84 and 3.0%, respectively. The homogeneous sample of Australian footballers meant that the CODAT's TE (0.19 s) exceeded the usual 0.2 x standard deviation (SD) SWC (0.10 s). However, the CODAT is capable of detecting moderate performance changes (SWC calculated as 0.5 x SD = 0.25 s). There was a near perfect correlation between the CODAT and IAR (r = 0.92), and very large correlations with the 20-m sprint (r = 0.75-0.76), suggesting that the CODAT was a valid change-of-direction speed test. Due to movement specificity, the CODAT has value for field sport
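    The absolute-reliability quantities used here can be computed from two test sessions with a common formulation: typical error TE = SD of the between-session differences divided by √2, CV = 100 · TE / grand mean, and SWC = 0.2 × the between-subject SD. The times below are invented, not the study's data:

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def reliability(session1, session2, swc_factor=0.2):
        """Typical error, coefficient of variation (%) and smallest worthwhile
        change from two test sessions of the same subjects (times in seconds)."""
        diffs = [b - a for a, b in zip(session1, session2)]
        te = stdev(diffs) / sqrt(2)
        cv = 100 * te / mean(session1 + session2)
        subject_means = [(a + b) / 2 for a, b in zip(session1, session2)]
        swc = swc_factor * stdev(subject_means)
        return te, cv, swc

    # Illustrative CODAT-like times (s) for six athletes over two sessions.
    s1 = [5.92, 6.10, 6.35, 5.80, 6.21, 6.05]
    s2 = [6.01, 6.03, 6.28, 5.95, 6.10, 6.12]
    te, cv, swc = reliability(s1, s2)
    print(f"TE={te:.3f}s CV={cv:.1f}% SWC={swc:.3f}s")
    ```

    As the abstract notes, a test is considered useful for detecting a change of a given size when that change exceeds the typical error.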

  12. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    Science.gov (United States)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point clouds resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
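    The efficiency gain described comes from converting the whole point cloud into a single image before classification, rather than one image per point. A minimal sketch of such a conversion (binning points into a grid and keeping the lowest elevation per cell, one feature commonly used for ground filtering; not the authors' exact conversion):

    ```python
    def rasterize_min_z(points, cell_size):
        """Bin (x, y, z) points into a grid, keeping the minimum z per cell, so
        the whole cloud becomes one 2-D 'image' instead of one image per point."""
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        x0, y0 = min(xs), min(ys)
        ncols = int((max(xs) - x0) / cell_size) + 1
        nrows = int((max(ys) - y0) / cell_size) + 1
        grid = [[None] * ncols for _ in range(nrows)]  # None = empty cell
        for x, y, z in points:
            r = int((y - y0) / cell_size)
            c = int((x - x0) / cell_size)
            if grid[r][c] is None or z < grid[r][c]:
                grid[r][c] = z
        return grid

    # Tiny illustrative cloud: (x, y, elevation) in meters.
    cloud = [(0.2, 0.3, 10.5), (0.4, 0.1, 10.1), (1.6, 0.2, 12.9), (1.8, 1.7, 10.0)]
    print(rasterize_min_z(cloud, 1.0))
    ```

    The resulting raster (possibly stacked with other per-cell features) can then be fed to an FCN for pixel-wise ground/non-ground labeling.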

  13. Securing classification and regulatory approval for deepwater projects: management challenges in a global environment

    Energy Technology Data Exchange (ETDEWEB)

    Feijo, Luiz P.; Burton, Gareth C. [American Bureau of Shipping (ABS), Rio de Janeiro, RJ (Brazil)

    2008-07-01

    As the offshore industry continues to develop and move into increasingly deeper waters, technological boundaries are being pushed to new limits. Along with these advances, the design, fabrication and installation of deepwater oil and gas projects has become an increasingly global endeavor. After providing an overview of the history and role of Classification Societies, this paper reviews the challenges of securing classification and regulatory approval in a global environment. The operational, procedural and technological changes that one Classification Society, the American Bureau of Shipping (ABS), has implemented to address these challenges are presented. The result of these changes has been a more customized service and a faster, more streamlined classification approval process. (author)

  14. [A revolution postponed indefinitely. WHO classification of tumors of the breast 2012: the main changes compared to the 3rd edition (2003)].

    Science.gov (United States)

    Nenutil, Rudolf

    2015-01-01

    In 2012, the new classification of breast tumors in the fourth series of the WHO blue books was released. The current version represents a gradual evolution compared to the third edition. Limited changes were adopted regarding terminology, definitions and the inclusion of some diagnostic entities. The information about the molecular biology and genetic background of breast carcinoma has been substantially enriched.

  15. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network for pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters that initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization and proving slightly more beneficial than initializing the first-layer filters by unsupervised learning alone. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy for learning useful features for pedestrian gender classification.
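    The unsupervised first-layer initialization described can be sketched by running k-means on flattened image patches and using the centroids as filters. The patches below are tiny invented 2 × 2 examples, not pedestrian data:

    ```python
    import random

    def kmeans_filters(patches, k, iters=20, seed=0):
        """Learn k 'filters' as k-means centroids of flattened image patches,
        the unsupervised first-layer initialization described above."""
        rng = random.Random(seed)
        centroids = rng.sample(patches, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in patches:
                # Assign each patch to its nearest centroid (squared distance).
                i = min(range(k),
                        key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centroids[j])))
                clusters[i].append(p)
            for j, cl in enumerate(clusters):
                if cl:  # recompute centroid as the cluster mean
                    centroids[j] = [sum(v) / len(cl) for v in zip(*cl)]
        return centroids

    # Toy 2x2 patches drawn around two patterns: dark-left and dark-right edges.
    patches = [[0, 1, 0, 1], [0.1, 0.9, 0, 1], [1, 0, 1, 0], [0.9, 0.1, 1, 0]]
    filters = kmeans_filters(patches, 2)
    print(filters)
    ```

    In the paper's setting the patches would come from unlabeled pedestrian images, and each centroid would be reshaped into a convolutional kernel.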

  16. Guidance on classification for reproductive toxicity under the globally harmonized system of classification and labelling of chemicals (GHS).

    Science.gov (United States)

    Moore, Nigel P; Boogaard, Peter J; Bremer, Susanne; Buesen, Roland; Edwards, James; Fraysse, Benoit; Hallmark, Nina; Hemming, Helena; Langrand-Lerche, Carole; McKee, Richard H; Meisters, Marie-Louise; Parsons, Paul; Politano, Valerie; Reader, Stuart; Ridgway, Peter; Hennes, Christa

    2013-11-01

    The Globally Harmonised System of Classification (GHS) is a framework within which the intrinsic hazards of substances may be determined and communicated. It is not a legislative instrument per se, but is enacted into national legislation with the appropriate legislative instruments. GHS covers many aspects of effects upon health and the environment, including adverse effects upon sexual function and fertility or on development. Classification for these effects is based upon observations in humans or from properly designed experiments in animals, although only the latter is covered herein. The decision to classify a substance based upon experimental data, and the category of classification ascribed, is determined by the level of evidence that is available for an adverse effect on sexual function and fertility or on development that does not arise as a secondary non-specific consequence of other toxic effects. This document offers guidance on the determination of level of concern as a measure of adversity, and the level of evidence to ascribe classification based on data from tests in laboratory animals.

  17. Evaluation of Current Approaches to Stream Classification and a Heuristic Guide to Developing Classifications of Integrated Aquatic Networks

    Science.gov (United States)

    Melles, S. J.; Jones, N. E.; Schmidt, B. J.

    2014-03-01

    Conservation and management of fresh flowing waters involves evaluating and managing the effects of cumulative impacts on the aquatic environment from disturbances such as land use change, point and nonpoint source pollution, the creation of dams and reservoirs, mining, and fishing. To assess effects of these changes on associated biotic communities it is necessary to monitor and report on the status of lotic ecosystems. A variety of stream classification methods are available to assist with these tasks, and such methods attempt to provide a systematic approach to modeling and understanding complex aquatic systems at various spatial and temporal scales. Of the vast number of approaches that exist, it is useful to distinguish three main types. The first involves modeling longitudinal species turnover patterns within large drainage basins and relating these patterns to environmental predictors collected at reach and upstream catchment scales; the second uses regionalized hierarchical classification to create multi-scale, spatially homogenous aquatic ecoregions by grouping adjacent catchments together based on environmental similarities; and the third approach groups sites together on the basis of similarities in their environmental conditions both within and between catchments, independent of their geographic location. We review the literature with a focus on more recent classifications to examine the strengths and weaknesses of the different approaches. We identify gaps or problems with the current approaches, and we propose an eight-step heuristic process that may assist with development of more flexible and integrated aquatic classifications based on current understanding, network thinking, and theoretical underpinnings.

  18. The 2017 World Health Organization classification of tumors of the pituitary gland: a summary.

    Science.gov (United States)

    Lopes, M Beatriz S

    2017-10-01

    The 4th edition of the World Health Organization (WHO) classification of endocrine tumors has been recently released. In this new edition, major changes are recommended in several areas of the classification of tumors of the anterior pituitary gland (adenohypophysis). The scope of the present manuscript is to summarize these recommended changes, emphasizing a few significant topics. These changes include the following: (1) a novel approach for classifying pituitary neuroendocrine tumors according to pituitary adenohypophyseal cell lineages; (2) changes to the histological grading of pituitary neuroendocrine tumors with the elimination of the term "atypical adenoma"; and (3) introduction of new entities like the pituitary blastoma and re-definition of old entities like the null-cell adenoma. This new classification is very practical and mostly based on immunohistochemistry for pituitary hormones, pituitary-specific transcription factors, and other immunohistochemical markers commonly used in pathology practice, not requiring routine ultrastructural analysis of the tumors. Evaluation of tumor proliferation potential, by mitotic count and Ki-67 labeling index, and tumor invasion is strongly recommended on an individual case basis to identify clinically aggressive adenomas. In addition, the classification offers the treating clinical team information on tumor prognosis by identifying specific variants of adenomas associated with an elevated risk for recurrence. Changes in the classification of non-neuroendocrine tumors are also proposed, in particular those tumors arising in the posterior pituitary including pituicytoma, granular cell tumor of the posterior pituitary, and spindle cell oncocytoma. These changes endorse those previously published in the 2016 WHO classification of CNS tumors. Other tumors arising in the sellar region are also reviewed in detail including craniopharyngiomas, mesenchymal and stromal tumors, germ cell tumors, and hematopoietic tumors.

  19. Automated retinal vessel type classification in color fundus images

    Science.gov (United States)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alterations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method on a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of the AVR measurement and 91.5% in the ROI of the tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.

  20. Classification of Herbaceous Vegetation Using Airborne Hyperspectral Imagery

    Directory of Open Access Journals (Sweden)

    Péter Burai

    2015-02-01

    Full Text Available Alkali landscapes hold an extremely fine-scale mosaic of several vegetation types, thus it seems challenging to separate these classes by remote sensing. Our aim was to test the applicability of different image classification methods of hyperspectral data in this complex situation. To reach the highest classification accuracy, we tested a traditional image classifier (maximum likelihood classifier, MLC), machine learning algorithms (support vector machine, SVM; random forest, RF) and feature extraction (minimum noise fraction, MNF, transformation) on training datasets of different sizes. Digital images were acquired from an AISA EAGLE II hyperspectral sensor of 128 contiguous bands (400–1000 nm), with a spectral sampling of 5 nm bandwidth and a ground pixel size of 1 m. For the classification, we established twenty vegetation classes based on the dominant species, canopy height, and total vegetation cover. Image classification was applied to the original and the MNF-transformed dataset with various training sample sizes between 10 and 30 pixels. In order to select the optimal number of the transformed features, we applied SVM, RF and MLC classification to 2–15 MNF-transformed bands. In the case of the original bands, SVM and RF classifiers provided high accuracy irrespective of the number of the training pixels. We found that SVM and RF produced the best accuracy when using the first nine MNF-transformed bands; involving further features did not increase classification accuracy. SVM and RF provided high accuracies with the transformed bands, especially in the case of the aggregated groups. Even MLC provided high accuracy with 30 training pixels (80.78%), but the use of a smaller training dataset (10 training pixels) significantly reduced the accuracy of classification (52.56%). Our results suggest that in alkali landscapes, the application of SVM is a feasible solution, as it provided the highest accuracies compared to RF and MLC.

  1. Description of comprehensive pump test change to ASME OM code, subsection ISTB

    International Nuclear Information System (INIS)

    Hartley, R.S.

    1994-01-01

    The American Society of Mechanical Engineers (ASME) Operations and Maintenance (OM) Main Committee and Board on Nuclear Codes and Standards (BNCS) recently approved changes to ASME OM Code-1990, Subsection ISTB, Inservice Testing of Pumps in Light-Water Reactor Power Plants. The changes will be included in the 1994 addenda to ISTB. The changes, designated as the comprehensive pump test, incorporate a new, improved philosophy for testing safety-related pumps in nuclear power plants. An important philosophical difference between the "old code" inservice testing (IST) requirements and these changes is that the changes concentrate on less frequent, more meaningful testing while minimizing damaging and uninformative low-flow testing. The comprehensive pump test change establishes a more involved biannual test for all pumps and significantly reduces the rigor of the quarterly test for standby pumps. The increased rigor and cost of the biannual comprehensive tests are offset by the reduced cost of testing and potential damage to the standby pumps, which comprise a large portion of the safety-related pumps at most plants. This paper provides background on the pump testing requirements, discusses potential industry benefits of the change, describes the development of the comprehensive pump test, and gives examples and reasons for many of the specific changes. This paper also describes additional changes to ISTB that will be included in the 1994 addenda that are associated with, but not part of, the comprehensive pump test.

  2. Building the United States National Vegetation Classification

    Science.gov (United States)

    Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.

    2012-01-01

    The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. But, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association and Group level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process includes international scientists. Immediate future efforts include the review of remaining levels and the development of a proposal review process.

  3. Correspondence between EQ-5D health state classifications and EQ VAS scores

    Directory of Open Access Journals (Sweden)

    Whynes David K

    2008-11-01

    Full Text Available Abstract Background The EQ-5D health-related quality of life instrument comprises a health state classification followed by a health evaluation using a visual analogue scale (VAS. The EQ-5D has been employed frequently in economic evaluations, yet the relationship between the two parts of the instrument remains ill-understood. In this paper, we examine the correspondence between VAS scores and health state classifications for a large sample, and identify variables which contribute to determining the VAS scores independently of the health states as classified. Methods A UK trial of management of low-grade abnormalities detected on screening for cervical pre-cancer (TOMBOLA provided EQ-5D data for over 3,000 women. Information on distress and multi-dimensional health locus of control had been collected using other instruments. A linear regression model was fitted, with VAS score as the dependent variable. Independent variables comprised EQ-5D health state classifications, distress, locus of control, and socio-demographic characteristics. Equivalent EQ-5D and distress data, collected at twelve months, were available for over 2,000 of the women, enabling us to predict changes in VAS score over time from changes in EQ-5D classification and distress. Results In addition to EQ-5D health state classification, VAS score was influenced by the subject's perceived locus of control, and by her age, educational attainment, ethnic origin and smoking behaviour. Although the EQ-5D classification includes a distress dimension, the independent measure of distress was an additional determinant of VAS score. Changes in VAS score over time were explained by changes in both EQ-5D severities and distress. Women allocated to the experimental management arm of the trial reported an increase in VAS score, independently of any changes in health state and distress. 
Conclusion: In this sample, EQ VAS scores were predictable from the EQ-5D health state classification, although

  4. Reliability testing of two classification systems for osteoarthritis and post-traumatic arthritis of the elbow.

    Science.gov (United States)

    Amini, Michael H; Sykes, Joshua B; Olson, Stephen T; Smith, Richard A; Mauck, Benjamin M; Azar, Frederick M; Throckmorton, Thomas W

    2015-03-01

    The severity of elbow arthritis is one of many factors that surgeons must evaluate when considering treatment options for a given patient. Elbow surgeons have historically used the Broberg and Morrey (BM) and Hastings and Rettig (HR) classification systems to radiographically stage the severity of post-traumatic arthritis (PTA) and primary osteoarthritis (OA). We proposed to compare the intraobserver and interobserver reliability between systems for patients with either PTA or OA. The radiographs of 45 patients were evaluated at least 2 weeks apart by 6 evaluators of different levels of training. Intraobserver and interobserver reliability were calculated by Spearman correlation coefficients with 95% confidence intervals. Agreement was considered almost perfect for coefficients >0.80 and substantial for coefficients of 0.61 to 0.80. In patients with both PTA and OA, intraobserver reliability and interobserver reliability were substantial, with no difference between classification systems. There were no significant differences in intraobserver or interobserver reliability between attending physicians and trainees for either classification system (all P > .10). The presence of fracture implants did not affect reliability in the BM system but did substantially worsen reliability in the HR system (intraobserver P = .04 and interobserver P = .001). The BM and HR classifications both showed substantial intraobserver and interobserver reliability for PTA and OA. Training level differences did not affect reliability for either system. Both trainees and fellowship-trained surgeons may easily and reliably apply each classification system to the evaluation of primary elbow OA and PTA, although the HR system was less reliable in the presence of fracture implants. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
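
    The reliability statistics described above amount to Spearman correlations between repeated readings of the same radiographs. A minimal sketch with hypothetical ordinal grades (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical ordinal arthritis grades (1-4) for 45 radiographs: two
# readings by one evaluator and one reading by a second evaluator.
true_stage = rng.integers(1, 5, size=45)

def noisy_reading():
    # A reading deviates from the underlying stage by at most one grade.
    return np.clip(true_stage + rng.integers(-1, 2, size=45), 1, 4)

rater1_read1 = noisy_reading()
rater1_read2 = noisy_reading()
rater2_read1 = noisy_reading()

intra_rho, _ = spearmanr(rater1_read1, rater1_read2)  # intraobserver
inter_rho, _ = spearmanr(rater1_read1, rater2_read1)  # interobserver
```

    In the study's terms, a coefficient above 0.80 would be read as almost perfect agreement and 0.61 to 0.80 as substantial.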

  5. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainties in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annex I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurement at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  6. Application of In-Segment Multiple Sampling in Object-Based Classification

    Directory of Open Access Journals (Sweden)

    Nataša Đurić

    2014-12-01

    Full Text Available When object-based analysis is applied to very high-resolution imagery, pixels within the segments reveal large spectral inhomogeneity; their distribution can be considered complex rather than normal. When normality is violated, the classification methods that rely on the assumption of normally distributed data are not as successful or accurate. It is hard to detect normality violations in small samples. The segmentation process produces segments that vary highly in size; samples can be very big or very small. This paper investigates whether the complexity within the segment can be addressed using multiple random sampling of segment pixels and multiple calculations of similarity measures. In order to analyze the effect sampling has on classification results, the statistics and p-value equations of the non-parametric two-sample Kolmogorov-Smirnov test and the parametric Student's t-test are selected as similarity measures in the classification process. The performance of both classifiers was assessed on a WorldView-2 image for four land cover classes (roads, buildings, grass and trees) and compared to two commonly used object-based classifiers: k-Nearest Neighbor (k-NN) and Support Vector Machine (SVM). Both proposed classifiers showed a slight improvement in the overall classification accuracies and produced more accurate classification maps when compared to the ground truth image.
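
    The in-segment multiple sampling idea can be illustrated with the two-sample Kolmogorov-Smirnov statistic: draw several random pixel samples from a segment, compare each against per-class reference distributions, and label with the most similar class. All distributions below are synthetic assumptions, not the paper's WorldView-2 data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Hypothetical single-band pixel distributions for two classes, and one
# segment of unknown class.
grass_ref = rng.normal(0.40, 0.05, size=500)
road_ref = rng.normal(0.20, 0.03, size=500)
segment = rng.normal(0.40, 0.05, size=2000)  # truly "grass"

def mean_ks_statistic(segment_pixels, reference, n_draws=10, sample_size=50):
    """Average KS statistic over multiple random in-segment samples
    (lower = more similar to the class reference)."""
    stats = [
        ks_2samp(rng.choice(segment_pixels, size=sample_size, replace=False),
                 reference).statistic
        for _ in range(n_draws)
    ]
    return float(np.mean(stats))

d_grass = mean_ks_statistic(segment, grass_ref)
d_road = mean_ks_statistic(segment, road_ref)
label = "grass" if d_grass < d_road else "road"
```

    Averaging over several small samples sidesteps both the normality assumption and the segment-size imbalance, which is the point the paper makes.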

  7. Applications of Diagnostic Classification Models: A Literature Review and Critical Commentary

    Science.gov (United States)

    Sessoms, John; Henson, Robert A.

    2018-01-01

    Diagnostic classification models (DCMs) classify examinees based on the skills they have mastered given their test performance. This classification enables targeted feedback that can inform remedial instruction. Unfortunately, applications of DCMs have been criticized (e.g., no validity support). Generally, these evaluations have been brief and…

  8. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Directory of Open Access Journals (Sweden)

    Frederique Lang

    2016-09-01

    Full Text Available Purpose: The authors aim at testing the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets. Design/methodology/approach: The paper is centered on cleaning datasets gathered from publishers and online resources by the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested here in comparison with manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually, but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms. Findings: We found that the performance of the algorithms used varies with the size of the sample for training. However, for the classification exercise in this paper the best performing algorithms were SVM and Boosting. The combination of these two algorithms achieved a high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks. Research limitations: The dataset gathered has significantly more records related to the topic of interest compared to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers. Practical implications: Although the
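
    The voting/coverage idea, keeping an automatic label only where classifiers agree, can be sketched as follows. Scikit-learn's SVC and GradientBoostingClassifier stand in for the paper's SVM and Boosting implementations, on synthetic data rather than Web of Science records:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class "related/unrelated records" data; a small labelled
# subset (10%) plays the role of the manually coded training sample.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.1, random_state=0)

svm = SVC().fit(X_train, y_train)
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

p_svm = svm.predict(X_test)
p_boost = boost.predict(X_test)

# "Coverage": the fraction of records on which the two classifiers agree;
# only agreed records keep an automatic label, the rest go to manual coding.
agree = p_svm == p_boost
coverage = float(agree.mean())
accuracy_on_agreed = float((p_svm[agree] == y_test[agree]).mean())
```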

  9. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  10. A Robust Geometric Model for Argument Classification

    Science.gov (United States)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have been recently shown to benefit from rich sets of syntactic features. However, argument classification is also highly dependent on the semantics of the lexical items involved. Empirical studies have shown that domain dependence of lexical information causes large performance drops in out-of-domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  11. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider...... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models....

  12. Soil classification using CPTu in Fort McMurray

    Energy Technology Data Exchange (ETDEWEB)

    Elbanna, M. [AMEC Earth and Environmental, Nanaimo, BC (Canada); El Sabbagh, M. [AMEC Earth and Environmental, Burnaby, BC (Canada); Sharp, J. [ConeTec Investigations Ltd., Richmond, BC (Canada)

    2009-07-01

    This paper evaluated 4 piezocone penetration testing (CPTu) classification methods using data from 3 different sites near Fort McMurray in northern Alberta. For comparative purposes, other in-situ tests, field observations, and laboratory tests were performed at all sites in close proximity to the CPTu soundings. The study evaluated Pleistocene sand and sand till deposits with low fines content. Profiling these deposits is necessary because they are often used as fill material for earth retaining structures in many oilsands projects. The study also evaluated Pleistocene clay and clay tills that are often used as low permeability material for seepage control. In thick layers, Pleistocene clay is known to cause foundation problems. CPTu with dissipation data was shown to be a useful tool in geotechnical engineering practice to provide near continuous soil profiling and material properties. CPTu tip resistance and sleeve friction combined with pore pressure measurement provided useful evaluation of subsurface soil types. It was concluded that although all of the CPTu classification charts provided reasonable soil classification in typical soil conditions, local experience and understanding of soil behaviour is needed to make an appropriate selection of the most applicable charts in a given geological condition. 7 refs., 11 figs.

  13. IRIS COLOUR CLASSIFICATION SCALES – THEN AND NOW

    Science.gov (United States)

    Grigore, Mariana; Avram, Alina

    2015-01-01

    Eye colour is one of the most obvious phenotypic traits of an individual. Since the first documented classification scale developed in 1843, there have been numerous attempts to classify the iris colour. In the past centuries, iris colour classification scales have had various colour categories and mostly relied on comparison of an individual's eye with painted glass eyes. Once photography techniques were refined, standard iris photographs replaced painted eyes, but this did not solve the problem of painted/printed colour variability in time. Early clinical scales were easy to use, but lacked objectivity and were not standardised or statistically tested for reproducibility. The era of automated iris colour classification systems came with technological development. Spectrophotometry, digital analysis of high-resolution iris images, hyperspectral analysis of the real human iris and dedicated iris colour analysis software all accomplished an objective, accurate iris colour classification, but are quite expensive and limited in use to the research environment. Iris colour classification systems evolved continuously due to their use in a wide range of studies, especially in the fields of anthropology, epidemiology and genetics. Despite the wide range of the existing scales, up until the present there has been no generally accepted iris colour classification scale. PMID:27373112

  14. Organizational Data Classification Based on the Importance Concept of Complex Networks.

    Science.gov (United States)

    Carneiro, Murillo Guimaraes; Zhao, Liang

    2017-08-01

    Data classification is a common task, which can be performed by both computers and human beings. However, a fundamental difference between them can be observed: computer-based classification considers only physical features (e.g., similarity, distance, or distribution) of input data; by contrast, brain-based classification takes into account not only physical features, but also the organizational structure of data. In this paper, we figure out the data organizational structure for classification using complex networks constructed from training data. Specifically, an unlabeled instance is classified by the importance concept characterized by Google's PageRank measure of the underlying data networks. Before a test data instance is classified, a network is constructed from the vector-based data set and the test instance is inserted into the network in a proper manner. To this end, we also propose a measure, called spatio-structural differential efficiency, to combine the physical and topological features of the input data. Such a method allows for the classification technique to capture a variety of data patterns using the unique importance measure. Extensive experiments demonstrate that the proposed technique has promising predictive performance on the detection of heart abnormalities.
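
    A loose sketch of the importance-based idea (not the authors' exact algorithm, and without their spatio-structural differential efficiency measure): insert a test instance into a k-NN graph over the training data, run PageRank personalized on the test node, and classify by which class accumulates more importance:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian classes in 2-D and one unlabeled test point near class 0.
class_a = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
class_b = rng.normal([4.0, 4.0], 0.5, size=(20, 2))
X = np.vstack([class_a, class_b])
labels = np.array([0] * 20 + [1] * 20)
test_point = np.array([0.3, -0.2])

# Build a symmetric k-NN graph over training points plus the test instance.
nodes = np.vstack([X, test_point])
dist = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=2)
k = 5
A = np.zeros_like(dist)
for i in range(len(nodes)):
    for j in np.argsort(dist[i])[1:k + 1]:  # skip self at position 0
        A[i, j] = A[j, i] = 1.0

# PageRank by power iteration, personalized on the inserted test node.
M = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
n = len(nodes)
v = np.zeros(n)
v[-1] = 1.0  # teleport back to the test node
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = 0.85 * (M.T @ r) + 0.15 * v

# Classify by which class accumulates more importance.
train_r = r[:len(X)]
predicted = 0 if train_r[labels == 0].sum() > train_r[labels == 1].sum() else 1
```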

  15. Unspecific chronic low back pain - a simple functional classification tested in a case series of patients with spinal deformities.

    Science.gov (United States)

    Weiss, Hans-Rudolf; Werkmann, Mario

    2009-02-17

    Up to now, chronic low back pain without radicular symptoms is not classified and is attributed in the international literature as being "unspecific". For specific bracing of this patient group we use simple physical tests to predict the brace type the patient is most likely to benefit from. Based on these physical tests we have developed a simple functional classification of "unspecific" low back pain in patients with spinal deformities. Between January 2006 and July 2007 we tested 130 patients (116 females and 14 males) with spinal deformities (average age 45 years, ranging from 14 to 69 years) and chronic unspecific low back pain (pain for > 24 months) along with the indication for brace treatment for chronic unspecific low back pain. Some of the patients had symptoms of spinal claudication (n = 16). The "sagittal realignment test" (SRT) was applied, a lumbar hyperextension test, and the "sagittal delordosation test" (SDT). Additionally, 3 female patients with spondylolisthesis were tested, including one with symptoms of spinal claudication; 2 of these patients were 14 years of age and the other was 43 years old at the time of testing. 117 patients reported significant pain release in the SRT and 13 in the SDT (≥ 2 steps in the Roland & Morris VRS). 3 patients had no significant pain release in either of the tests. On manual investigation we found hypermobility at L5/S1 or a spondylolisthesis at level L5/S1. In the other patients, who responded well to the SRT, loss of lumbar lordosis was the main issue, a finding which, according to scientific literature, correlates well with low back pain. The 3 patients who did not respond to either test had a fair pain reduction in a generally delordosing brace with an isolated small foam pad inserted at the level of L2/3, leading to lordosation in this region. With the exception of 3 patients (2.3%), a clear assignment to one of the two classes was possible. 117 patients were supplied successfully with a sagittal

  16. Standard practice for classification of computed radiology systems

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This practice describes the evaluation and classification of a computed radiography (CR) system, a particular phosphor imaging plate (IP), system scanner and software, in combination with specified metal screens for industrial radiography. It is intended to ensure that the evaluation of image quality, as far as this is influenced by the scanner/IP system, meets the needs of users. 1.2 The practice defines system tests to be used to classify the systems of different suppliers and make them comparable for users. 1.3 The CR system performance is described by signal and noise parameters. For film systems, the signal is represented by gradient and the noise by granularity. The signal-to-noise ratio is normalized by the basic spatial resolution of the system and is part of classification. The normalization is given by the scanning aperture of 100 µm diameter for the micro-photometer, which is defined in Test Method E1815 for film system classification. This practice describes how the parameters shall be meas...

  17. Change Detection Algorithm for the Production of Land Cover Change Maps over the European Union Countries

    Directory of Open Access Journals (Sweden)

    Sebastian Aleksandrowicz

    2014-06-01

    Full Text Available Contemporary satellite Earth Observation systems provide growing amounts of very high spatial resolution data that can be used in various applications. An increasing number of sensors make it possible to monitor selected areas in great detail. However, in order to handle the volume of data, a high level of automation is required. The semi-automatic change detection methodology described in this paper was developed to annually update land cover maps prepared in the context of the Geoland2 project. The proposed algorithm was tailored to work with different very high spatial resolution images acquired over different European landscapes. The methodology is a fusion of various change detection methods: (1) layer arithmetic; (2) vegetation index (NDVI) differencing; (3) texture calculation; and (4) methods based on canonical correlation analysis (multivariate alteration detection, MAD). User intervention during the production of the change map is limited to the selection of the input data, the size of initial segments and, optionally, the threshold for texture classification. To achieve a high level of automation, statistical thresholds were applied in most of the processing steps. Tests showed an overall change recognition accuracy of 89%, and the change type classification methodology can accurately classify transitions between classes.
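
    One ingredient of the fusion, NDVI differencing with a statistical threshold, can be sketched on synthetic two-date reflectance. The scene, band values, and the 2-sigma threshold below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Synthetic red/NIR reflectance for two dates over a 100x100 scene; a
# vegetated patch is cleared before the second acquisition.
red_t1 = np.full((100, 100), 0.05)
nir_t1 = np.full((100, 100), 0.45)
red_t2, nir_t2 = red_t1.copy(), nir_t1.copy()
red_t2[40:60, 40:60] = 0.30  # bare soil reflects more red...
nir_t2[40:60, 40:60] = 0.32  # ...and much less NIR than vegetation

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

diff = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)

# Statistical threshold: flag pixels whose NDVI change deviates from the
# scene mean by more than two standard deviations.
change_mask = np.abs(diff - diff.mean()) > 2 * diff.std()
n_changed = int(change_mask.sum())  # the 20x20 cleared patch: 400 pixels
```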

  18. Biased binomial assessment of cross-validated estimation of classification accuracies illustrated in diagnosis predictions

    Directory of Open Access Journals (Sweden)

    Quentin Noirhomme

    2014-01-01

    Full Text Available Multivariate classification is used in neuroimaging studies to infer brain activation or in medical applications to infer diagnosis. Their results are often assessed through either a binomial or a permutation test. Here, we simulated classification results of generated random data to assess the influence of the cross-validation scheme on the significance of results. Distributions built from classification of random data with cross-validation did not follow the binomial distribution. The binomial test is therefore not adapted. On the contrary, the permutation test was unaffected by the cross-validation scheme. The influence of the cross-validation was further illustrated on real-data from a brain–computer interface experiment in patients with disorders of consciousness and from an fMRI study on patients with Parkinson disease. Three out of 16 patients with disorders of consciousness had significant accuracy on binomial testing, but only one showed significant accuracy using permutation testing. In the fMRI experiment, the mental imagery of gait could discriminate significantly between idiopathic Parkinson's disease patients and healthy subjects according to the permutation test but not according to the binomial test. Hence, binomial testing could lead to biased estimation of significance and false positive or negative results. In our view, permutation testing is thus recommended for clinical application of classification with cross-validation.
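
    The core comparison can be reproduced at small scale: cross-validate a classifier on pure-noise data, then compute both a binomial p-value and a permutation p-value for the observed accuracy. The nearest-centroid classifier and the sample sizes are illustrative choices, not the study's pipeline:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(5)

def cv_accuracy(X, y, n_folds=5):
    """Cross-validated accuracy of a nearest-centroid classifier."""
    n = len(y)
    folds = np.array_split(rng.permutation(n), n_folds)
    correct = 0
    for test_idx in folds:
        train = np.ones(n, dtype=bool)
        train[test_idx] = False
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(X[test_idx] - c1, axis=1)
                < np.linalg.norm(X[test_idx] - c0, axis=1)).astype(int)
        correct += int((pred == y[test_idx]).sum())
    return correct / n

# Pure-noise data: the features carry no information about the labels, so
# any "significant" accuracy is a false positive.
n, d = 40, 10
X = rng.normal(size=(n, d))
y = np.array([0, 1] * (n // 2))
obs_acc = cv_accuracy(X, y)

# Binomial p-value assumes n independent Bernoulli(0.5) trials, an
# assumption that cross-validation violates.
p_binom = float(binom.sf(round(obs_acc * n) - 1, n, 0.5))

# Permutation p-value: re-run the entire CV on shuffled labels.
perm_accs = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(200)])
p_perm = float((np.sum(perm_accs >= obs_acc) + 1) / (200 + 1))
```

    Because the permutation null re-runs the full cross-validation, it captures the dependence between folds that the binomial null ignores, which is the abstract's argument for preferring it.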

  19. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR while comparing to state-of-the-art algorithms.

  20. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret the atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal because the daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved to be quite useful in discriminating the occurrence and development of wildfires as well as the distribution over Portugal of surface climatic variables with impact on wildfire activity such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological

  1. Lymphoma classification update: B-cell non-Hodgkin lymphomas.

    Science.gov (United States)

    Jiang, Manli; Bennani, N Nora; Feldman, Andrew L

    2017-05-01

    Lymphomas are classified based on the normal counterpart, or cell of origin, from which they arise. Because lymphocytes have physiologic immune functions that vary both by lineage and by stage of differentiation, the classification of lymphomas arising from these normal lymphoid populations is complex. Recent genomic data have contributed additional complexity. Areas covered: Lymphoma classification follows the World Health Organization (WHO) system, which reflects international consensus and is based on pathological, genetic, and clinical factors. A 2016 revision to the WHO classification of lymphoid neoplasms was recently reported. The present review focuses on B-cell non-Hodgkin lymphomas, the most common group of lymphomas, and summarizes recent changes most relevant to hematologists and other clinicians who care for lymphoma patients. Expert commentary: Lymphoma classification is a continually evolving field that needs to be responsive to new clinical, pathological, and molecular understanding of lymphoid neoplasia. Among the entities covered in this review, the changes in the 2016 revision of the WHO classification particularly impact the subclassification and genetic stratification of diffuse large B-cell lymphoma and high-grade B-cell lymphomas, and reflect evolving criteria and nomenclature for indolent B-cell lymphomas and lymphoproliferative disorders.

  2. FULLY CONVOLUTIONAL NETWORKS FOR GROUND CLASSIFICATION FROM LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Rizaldy

    2018-05-01

    Full Text Available Deep Learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into images in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % total error, 4.10 % type I error, and 15.07 % type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % total error, 2.15 % type I error and 6.14 % type II error.
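
    The paper's speed-up comes from rasterizing the entire point cloud into one image before any network sees it. A minimal sketch of that idea, assuming a simple min-elevation-per-cell feature and invented names (`rasterize_point_cloud`, `cell_size`), not the authors' actual conversion:

```python
import numpy as np

def rasterize_point_cloud(points, cell_size=1.0):
    """Convert an (N, 3) point cloud into a single min-elevation image.

    Each pixel keeps the lowest z value of the points falling in that
    grid cell (NaN where the cell is empty). Rasterizing the whole
    cloud once avoids converting each point into its own image.
    """
    xy_min = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - xy_min[0]) // cell_size).astype(int)
    rows = ((points[:, 1] - xy_min[1]) // cell_size).astype(int)
    image = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(image[r, c]) or z < image[r, c]:
            image[r, c] = z  # keep the lowest elevation per cell
    return image

cloud = np.array([[0.2, 0.3, 5.0],
                  [0.8, 0.1, 4.0],   # same cell as the point above
                  [1.5, 0.4, 7.0],
                  [0.3, 1.6, 6.0]])
img = rasterize_point_cloud(cloud, cell_size=1.0)
print(img.shape)   # (2, 2)
print(img[0, 0])   # 4.0, the lower of the two points in that cell
```

    In a full pipeline the resulting image (or a stack of such feature channels) would then be fed to the FCN for pixel-wise ground/non-ground labelling.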

  3. DTI measurements for Alzheimer’s classification

    Science.gov (United States)

    Maggipinto, Tommaso; Bellotti, Roberto; Amoroso, Nicola; Diacono, Domenico; Donvito, Giacinto; Lella, Eufemia; Monaco, Alfonso; Scelsi, Marzia Antonella; Tangaro, Sabina; for the Alzheimer's Disease Neuroimaging Initiative

    2017-03-01

    Diffusion tensor imaging (DTI) is a promising imaging technique that provides insight into white matter microstructure integrity, and it has greatly helped in identifying white matter regions affected by Alzheimer’s disease (AD) in its early stages. DTI can therefore be a valuable source of information when designing machine-learning strategies to discriminate between healthy control (HC) subjects, AD patients and subjects with mild cognitive impairment (MCI). Nonetheless, several studies have so far reported conflicting results, especially because of the adoption of biased feature selection strategies. In this paper we first analyzed DTI scans of 150 subjects from the Alzheimer’s disease neuroimaging initiative (ADNI) database. We measured a significant effect of the feature selection bias on the classification performance, and then evaluated the informative content provided by DTI measurements for AD classification. Classification performances, and the biological insight concerning brain regions related to the disease, provided by cross-validation analysis were both confirmed on an independent test set.

  4. Neuromuscular disease classification system

    Science.gov (United States)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by a specialist pathologist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, based on treating the biopsy as a graph, where the nodes represent the fibers and two nodes are connected if the corresponding fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.

  5. Expert consensus statement to guide the evidence-based classification of Paralympic athletes with vision impairment: a Delphi study.

    Science.gov (United States)

    Ravensbergen, H J C Rianne; Mann, D L; Kamper, S J

    2016-04-01

    Paralympic sports are required to develop evidence-based systems that allocate athletes into 'classes' on the basis of the impact of their impairment on sport performance. However, sports for athletes with vision impairment (VI) classify athletes solely based on the WHO criteria for low vision and blindness. One key barrier to evidence-based classification is the absence of guidance on how to address classification issues unique to VI sport. The aim of this study was to reach expert consensus on how issues specific to VI sport should be addressed in evidence-based classification. A four-round Delphi study was conducted with 25 participants who had expertise as a coach, athlete, classifier and/or administrator in Paralympic sport for VI athletes. The experts agreed that the current method of classification does not fulfil the requirements of Paralympic classification, and that the system should be different for each sport to account for the sports' unique visual demands. Instead of relying only on tests of visual acuity and visual field, the panel agreed that additional tests are required to better account for the impact of impairment on sport performance. There was strong agreement that all athletes should not be required to wear a blindfold as a means of equalising the impairment during competition. There is strong support within the Paralympic movement to change the way that VI athletes are classified. This consensus statement provides clear guidance on how the most important issues specific to VI should be addressed, removing key barriers to the development of evidence-based classification. Published by the BMJ Publishing Group Limited.

  6. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    Science.gov (United States)

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
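
    The broken-stick shape underlying the model can be illustrated with a toy fixed-effect fit: profile least squares over candidate change-points and keep the best. Names and data below are invented for illustration; the paper itself embeds the model in a Bayesian mixture with MCMC and a missingness model rather than this grid search:

```python
import numpy as np

def fit_broken_stick(t, y, candidates):
    """Least-squares fit of a two-segment (broken-stick) trajectory.

    For each candidate change-point tau, fit y ~ a + b*t + c*max(t - tau, 0)
    and keep the tau with the smallest residual sum of squares.
    """
    best = None
    for tau in candidates:
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - tau, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(((X @ coef - y) ** 2).sum())
        if best is None or rss < best[0]:
            best = (rss, tau, coef)
    return best  # (rss, change-point, [intercept, slope, slope change])

# Noise-free trajectory with a kink at t = 6 and slope change -1.5:
t = np.arange(0.0, 10.0)
y = np.where(t < 6.0, 2.0 + 0.5 * t, 2.0 + 0.5 * t - 1.5 * (t - 6.0))
rss, tau, coef = fit_broken_stick(t, y, candidates=np.arange(1.0, 9.0))
print(tau)  # 6.0
```

    The study's harder question, how many subjects and how much post-change-point follow-up are needed before such a kink is detectable under noise, missingness and unlabelled sub-groups, is what the simulations address.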

  7. Classification differences and maternal mortality

    DEFF Research Database (Denmark)

    Salanave, B; Bouvier-Colle, M H; Varnoux, N

    1999-01-01

    OBJECTIVES: To compare the ways maternal deaths are classified in national statistical offices in Europe and to evaluate the ways classification affects published rates. METHODS: Data on pregnancy-associated deaths were collected in 13 European countries. Cases were classified by a European panel of experts into obstetric or non-obstetric causes. An ICD-9 code (International Classification of Diseases) was attributed to each case. These were compared to the codes given in each country. Correction indices were calculated, giving new estimates of maternal mortality rates. SUBJECTS: There were... This change was substantial in three countries (P... statistical offices appeared to attribute fewer deaths to obstetric causes. In the other countries, no differences were detected. According to official published data, the aggregated maternal mortality rate for participating countries was 7.7 per...

  8. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... A classification system for hand eczema is proposed. Conclusions: It is suggested that this classification be used in clinical work and in clinical trials....

  9. Adaptive phase k-means algorithm for waveform classification

    Science.gov (United States)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance allows the phase to vary as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain degree of waveform phase variation and is a good tool for seismic facies analysis.
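
    The core of the method is a waveform distance that is insensitive to a phase rotation between a trace and the model trace. A sketch of that idea, assuming rotation via the FFT-based analytic signal and a simple grid search over phases; the paper's exact formulation, which varies the phase sample by sample, is richer than this:

```python
import numpy as np

def phase_rotate(x, phi):
    """Rotate the phase of a real waveform by phi radians.

    Built from the analytic signal (FFT-based Hilbert transform), so
    the trace should contain whole periods for the rotation to be exact.
    """
    n = len(x)
    spectrum = np.fft.fft(x)
    weights = np.zeros(n)          # analytic-signal frequency weights
    weights[0] = 1.0
    if n % 2 == 0:
        weights[n // 2] = 1.0
        weights[1:n // 2] = 2.0
    else:
        weights[1:(n + 1) // 2] = 2.0
    hilbert = np.fft.ifft(spectrum * weights).imag
    return np.cos(phi) * x - np.sin(phi) * hilbert

def adaptive_phase_distance(x, model, n_phases=72):
    """Waveform distance minimised over a grid of phase rotations."""
    phases = np.linspace(-np.pi, np.pi, n_phases, endpoint=False)
    return min(np.linalg.norm(phase_rotate(x, p) - model) for p in phases)

t = 2 * np.pi * np.arange(64) / 64
trace = np.cos(t)
model = np.sin(t)  # same waveform, 90 degrees out of phase
plain = np.linalg.norm(trace - model)            # large: phases differ
adaptive = adaptive_phase_distance(trace, model)  # near zero
print(plain > 1.0, adaptive < 1e-6)
```

    In a k-means loop this distance would replace the Euclidean one when assigning traces to cluster centers, making cluster membership insensitive to horizon-induced phase shifts.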

  10. Music genre classification using temporal domain features

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. Jay

    2004-10-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. In addition to other features, the temporal domain features of a music signal are exploited so as to increase the classification rate in this research. Three temporal techniques are examined in depth. First, the hidden Markov model (HMM) is used to emulate the time-varying properties of music signals. Second, to further increase the classification rate, we propose another feature set that focuses on the residual part of music signals. Third, the overall classification rate is enhanced by classifying smaller segments from a test material individually and making a decision via majority voting. Experimental results are given to demonstrate the performance of the proposed techniques.
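
    The third technique, per-segment classification followed by majority voting, is straightforward to sketch. The per-segment genre labels below are hypothetical outputs of any segment-level classifier:

```python
from collections import Counter

def classify_song(segment_labels):
    """Majority vote over per-segment genre predictions: classify short
    segments independently, then let the most frequent label decide the
    genre of the whole test track."""
    return Counter(segment_labels).most_common(1)[0][0]

# Hypothetical predictions for five segments of one song:
song_genre = classify_song(["jazz", "rock", "jazz", "jazz", "classical"])
print(song_genre)  # jazz
```

    Voting smooths over locally atypical passages (an intro or a bridge) that a whole-song classifier might be misled by.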

  11. Classification and pharmacological treatment of preschool wheezing: changes since 2008

    DEFF Research Database (Denmark)

    Brand, P. L. P.; Caudri, D.; Eber, E.

    2014-01-01

    Since the publication of the European Respiratory Society Task Force report in 2008, significant new evidence has become available on the classification and management of preschool wheezing disorders. In this report, an international consensus group reviews this new evidence and proposes some......, with scheduled close follow-up to monitor treatment effect. The group recommends discontinuing treatment if there is no benefit and taking favourable natural history into account when making decisions about long-term therapy. Oral corticosteroids are not indicated in mild-to-moderate acute wheeze episodes...

  12. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...... that call for inquiries into the theoretical foundation of bibliographic classification theory....

  13. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  14. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    Science.gov (United States)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscope that enables us to perform conventional endoscopic observation and ultramagnified observation at the cell level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis based only on endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis requires extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that classifies neoplastic and non-neoplastic endocytoscopic images. This method consists of two classification steps. At the first step, we classify an input image by a support vector machine. We forward the image to the second step if the confidence of the first classification is low. At the second step, we classify the forwarded image by a convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even for difficult test data.
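
    The two-step hand-off described above, accept a confident first-stage answer, otherwise defer to the second stage, and finally reject, can be sketched generically. The stage functions and thresholds below are illustrative stand-ins, not the paper's trained SVM and CNN:

```python
def cascade_classify(image, stage1, stage2, threshold1=0.9, threshold2=0.9):
    """Two-stage cascade: accept the stage-1 answer when it is confident,
    otherwise forward the image to stage 2; reject it for manual review
    when stage 2 is also unsure. Each stage returns (label, confidence)."""
    label, conf = stage1(image)
    if conf >= threshold1:
        return label
    label, conf = stage2(image)
    if conf >= threshold2:
        return label
    return "rejected"

# Toy stages: pretend each prediction comes with a confidence score.
stage1 = lambda img: ("neoplastic", 0.95) if img == "clear" else ("neoplastic", 0.55)
stage2 = lambda img: ("non-neoplastic", 0.92) if img == "hard" else ("neoplastic", 0.60)

print(cascade_classify("clear", stage1, stage2))      # stage 1 decides
print(cascade_classify("hard", stage1, stage2))       # forwarded to stage 2
print(cascade_classify("ambiguous", stage1, stage2))  # rejected
```

    Tuning the two thresholds trades rejection rate against sensitivity, which is exactly the balance the 93.4% / 9.3% figures report.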

  15. Classification of brain tumours using short echo time 1H MR spectra

    Science.gov (United States)

    Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.

    2004-09-01

    The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas (n = 87), meningiomas (n = 57), metastases (n = 39), and astrocytomas grade II (n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.
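
    The binary classifiers above are scored by the area under the ROC curve. AUC equals the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one, which gives a compact way to compute it; the scores below are made up for illustration:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a positive case scores higher than a negative one,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.7, 0.55]  # classifier scores for one tumour class
neg = [0.6, 0.4, 0.3, 0.2]   # scores for the other class
print(auc(pos, neg))  # 0.9375
```

    Averaging this quantity over the 100 random training/test splits gives the mean test AUC the study reports.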

  16. A simplified immunohistochemical classification of skeletal muscle fibres in mouse

    Directory of Open Access Journals (Sweden)

    M. Kammoun

    2014-06-01

    Full Text Available The classification of muscle fibres is of particular interest for the study of skeletal muscle properties in a wide range of scientific fields, especially animal phenotyping. It is therefore important to define a reliable method for classifying fibre types. The aim of this study was to establish a simplified method for the immunohistochemical classification of fibres in mouse. To this end, we first tested a combination of several anti myosin heavy chain (MyHC) antibodies in order to choose a minimum number of antibodies to implement a semi-automatic classification. Then, we compared the classification of fibres to the MyHC electrophoretic pattern on the same samples. Only two anti MyHC antibodies on serial sections, with fluorescent labeling of Laminin, were necessary to properly classify fibre types in Tibialis Anterior and Soleus mouse muscles in normal physiological conditions. This classification was virtually identical to the classification realized by the electrophoretic separation of MyHC. This immunohistochemical classification can be applied to the total area of Tibialis Anterior and Soleus mouse muscles. Thus, we provide here a useful, simple and time-efficient method for the immunohistochemical classification of fibres, applicable for research in mouse.

  17. PROGRESSIVE DENSIFICATION AND REGION GROWING METHODS FOR LIDAR DATA CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J. L. Pérez-García

    2012-07-01

    Full Text Available At present, airborne laser scanner systems are one of the most frequently used methods to obtain digital terrain elevation models. While having the advantage of direct measurement on the object, the point cloud obtained requires classification of its points according to whether they belong to the ground. This need to classify the raw data has led to the appearance of multiple filters for LiDAR classification. Following this approach, this paper presents a classification method that combines LiDAR data segmentation techniques and progressive densification to locate the points belonging to the ground. The proposed methodology is tested on several datasets with different terrain characteristics and data availability. In all cases, we analyze the advantages and disadvantages obtained compared with applying the individual techniques and, in a special way, the benefits derived from the integration of both classification techniques. In order to provide a more comprehensive quality control of the classification process, the obtained results have been compared with those derived from a manual procedure, which is used as the reference classification. The results are also compared with other automatic classification methodologies included in some commercial software packages, widely relied upon by users for LiDAR data treatment.

  18. Lenke and King classification systems for adolescent idiopathic scoliosis: interobserver agreement and postoperative results

    Directory of Open Access Journals (Sweden)

    Hosseinpour-Feizi H

    2011-12-01

    Full Text Available Hojjat Hosseinpour-Feizi, Jafar Soleimanpour, Jafar Ganjpour Sales, Ali Arzroumchilar; Department of Orthopedics, Shohada Hospital, Faculty of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran. Purpose: The aim of this study was to investigate the interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis, and to compare the results of surgery performed based on classification of the scoliosis according to each of these classification systems. Methods: The study was conducted in Shohada Hospital in Tabriz, Iran, between 2009 and 2010. First, a reliability assessment was undertaken to assess interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis. Second, the postoperative efficacy and safety of surgery performed based on the Lenke and King classifications were compared. Kappa coefficients of agreement were calculated to assess the agreement. Outcomes were compared using bivariate tests and repeated measures analysis of variance. Results: A low to moderate interobserver agreement was observed for the King classification; the Lenke classification yielded mostly high agreement coefficients. The outcome of surgery was not found to be substantially different between the two systems. Conclusion: Based on the results, the Lenke classification method seems advantageous. This takes into consideration the Lenke classification’s priority in providing details of curvatures in different anatomical surfaces to explain the precise intensity of scoliosis, its higher interobserver agreement scores, and the fact that it leads to noninferior postoperative results compared with the King classification method. Keywords: test reliability, scoliosis classification, postoperative efficacy, adolescents
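
    The interobserver agreement reported above is quantified with the kappa coefficient, which corrects raw agreement for the agreement expected by chance. A small sketch with invented ratings; the study's actual radiograph data are not reproduced here:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two observers assigning categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two observers assigning curve types to ten radiographs (made-up data):
a = [1, 1, 2, 2, 3, 3, 1, 2, 3, 1]
b = [1, 1, 2, 3, 3, 3, 1, 2, 2, 1]
print(round(cohens_kappa(a, b), 3))  # 0.697
```

    Values near 1 indicate agreement well beyond chance; the "low to moderate" King versus "mostly high" Lenke contrast is a contrast between such kappa values.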

  19. CLASSIFICATION, DISTRIBUTION AND PRODUCTION OF KNOWLEDGE: THEORETICAL SUMMARY

    Directory of Open Access Journals (Sweden)

    R. A. Tchupin

    2013-01-01

    Full Text Available The paper is devoted to systematizing the main theoretical approaches to the classification, distribution and production of knowledge in the global economy. The author focuses on F. Machlup’s knowledge classification and the concept of useful knowledge by J. Mokyr. The interpersonal and public channels of communication and acquisition of knowledge are observed, taking into consideration the changes caused by the transition from an industrial to a postindustrial economy. The paper provides a comparative analysis of the given model and alternative concepts of knowledge generation: finalization of science, strategic research, post-normal science, academic capitalism, post-academic science, and the triple helix concept. The author maintains that the current concepts of knowledge generation reflect the transformation of the modern institutional and technical environment due to global technological changes and the increasing contribution of knowledge to economic development. Accordingly, the roles of the main participants in this process are changing along with the growing integration of education and science, state and businesses.

  20. Problems of classification in the family Paramyxoviridae.

    Science.gov (United States)

    Rima, Bert; Collins, Peter; Easton, Andrew; Fouchier, Ron; Kurath, Gael; Lamb, Robert A; Lee, Benhur; Maisner, Andrea; Rota, Paul; Wang, Lin-Fa

    2018-05-01

    A number of unassigned viruses in the family Paramyxoviridae need to be classified either as a new genus or placed into one of the seven genera currently recognized in this family. Furthermore, numerous new paramyxoviruses continue to be discovered. However, attempts at classification have highlighted the difficulties that arise when applying historic criteria, or criteria based on sequence alone, to the classification of the viruses in this family. While the recent taxonomic change that elevated the previous subfamily Pneumovirinae into a separate family Pneumoviridae is readily justified on the basis of RNA-dependent RNA polymerase (RdRp, or L protein) sequence motifs, using RdRp sequence comparisons for assignment to lower-level taxa raises problems that would require an overhaul of the current criteria for assignment into genera in the family Paramyxoviridae. Arbitrary cut-off points to delineate genera and species would have to be set if classification were based on the amino acid sequence of the RdRp alone or on pairwise analysis of sequence complementarity (PASC) of all open reading frames (ORFs). While these cut-offs cannot be made consistent with the current classification in this family, resorting to genus-level demarcation criteria with additional input from the biological context may afford a way forward. Such criteria would reflect the increasingly dynamic nature of virus taxonomy, even if it would require a complete revision of the current classification.

  1. Field-Testing a PC Electronic Documentation System using the Clinical Care Classification© System with Nursing Students

    Directory of Open Access Journals (Sweden)

    Jennifer E. Mannino

    2011-01-01

    Full Text Available Schools of nursing are slow in training their students to keep up with the fast approaching era of electronic healthcare documentation. This paper discusses the importance of nursing documentation and describes the field-testing of an electronic health record, the Sabacare Clinical Care Classification (CCC©) system. The PC-CCC©, designed as a Microsoft Access® application, is an evidence-based electronic documentation system available via free download from the internet. A sample of baccalaureate nursing students from a mid-Atlantic private college used this program to document the nursing care they provided to patients during their sophomore-level clinical experience. This paper summarizes the design, training, and evaluation of using the system in practice.

  2. NIM: A Node Influence Based Method for Cancer Classification

    Directory of Open Access Journals (Sweden)

    Yiwen Wang

    2014-01-01

    Full Text Available The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples; the second is to compute the node influence of the training samples; the third is to obtain the similarity between every test sample and each class, using a weighted sum of node influence and the similarity matrix; and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART.
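
    Steps three and four of the method, scoring a test sample against each class by an influence-weighted similarity and picking the best class, can be sketched as follows. The Gaussian similarity and the given influence values are simplifying assumptions; the paper derives influence from its own node influence model rather than taking it as input:

```python
import math

def similarity(u, v):
    """Gaussian similarity between two expression profiles (one simple
    choice for the entries of a sample-similarity matrix)."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-d2)

def classify_nim(test, train, labels, influence):
    """Score each class as the influence-weighted similarity of the
    test sample to that class's training samples; return the best class."""
    scores = {}
    for x, y, w in zip(train, labels, influence):
        scores[y] = scores.get(y, 0.0) + w * similarity(test, x)
    return max(scores, key=scores.get)

# Tiny made-up two-gene profiles with per-sample influence weights:
train = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0), (2.9, 3.2)]
labels = ["tumor", "tumor", "normal", "normal"]
influence = [1.0, 0.8, 1.0, 0.9]
print(classify_nim((0.1, 0.1), train, labels, influence))  # tumor
print(classify_nim((3.1, 3.0), train, labels, influence))  # normal
```

    Weighting by influence means that well-connected, representative training samples count for more than outliers, which is what distinguishes this scheme from a plain similarity-sum classifier.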

  3. A risk informed safety classification for a Nordic NPP

    International Nuclear Information System (INIS)

    Jaenkaelae, K.

    2002-01-01

    The report describes a study to develop a safety classification proposal, or classification recommendations, based on risks for selected equipment of a nuclear power plant. The application plant in this work is Loviisa NPP unit 1. The safety classification proposals are to be considered as an exercise in this pilot study and do not necessarily represent final proposals in a real situation. Comparisons to the original safety classifications and technical specifications were made. The study concludes that it is possible to change safety classes, or safety significances as considered in technical specifications and in in-service inspections, in both directions without endangering safety, or even while improving it. (au)

  4. Genetic parameters for type classification of Nelore cattle on central performance tests at pasture in Brazil.

    Science.gov (United States)

    Lima, Paulo Ricardo Martins; Paiva, Samuel Rezende; Cobuci, Jaime Araujo; Braccini Neto, José; Machado, Carlos Henrique Cavallari; McManus, Concepta

    2013-10-01

    The objective of this study was to characterize Nelore cattle on central performance tests at pasture, ranked by the visual classification method EPMURAS (structure, precocity, muscle, navel, breed, posture, and sexual characteristics), and to estimate genetic and phenotypic correlations between these parameters, including visual as well as production traits (initial and final weight on test, weight gain, and weight corrected for 550 days). The information used in the study was obtained from 21,032 Nelore bulls participating in the central performance test at pasture of the Brazilian Association for Zebu Breeders (ABCZ). Heritabilities ranged from 0.19 to 0.50. Phenotypic correlations were positive, ranging from 0.70 to 0.97 between the weight traits, from 0.65 to 0.74 between visual characteristics, and from 0.29 to 0.47 between visual characteristics and weight traits. The genetic correlations were positive, ranging from 0.80 to 0.98 between the characteristics of structure, precocity and musculature, from 0.13 to 0.64 between the growth characteristics, and from 0.41 to 0.97 between visual scores and weight gains. The heritabilities and genetic correlations indicate that the use of visual scores, along with selection for growth characteristics, can bring positive results in the selection of beef cattle reared on pasture.

  5. Hazard classification methodology

    International Nuclear Information System (INIS)

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility.

  6. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    DEFF Research Database (Denmark)

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  7. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks.

    Science.gov (United States)

    Rajkomar, Alvin; Lingam, Sneha; Taylor, Andrew G; Blum, Michael; Mongan, John

    2017-02-01

    The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100 % (95 % CI 99.73-100 %) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images, and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
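The binary frontal/lateral cutoff described above can be illustrated with a small sketch of Youden-index threshold selection. The function name and toy scores are hypothetical; the paper's networks are not reproduced here.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the score threshold maximizing Youden's J = sensitivity +
    specificity - 1 (equivalently TPR - FPR), for binary labels in {0, 1}."""
    P = (labels == 1).sum()
    N = (labels == 0).sum()
    best_t, best_j = None, -1.0
    for t in np.unique(scores):           # each observed score is a candidate cutoff
        pred = scores >= t
        tpr = (pred & (labels == 1)).sum() / P
        fpr = (pred & (labels == 0)).sum() / N
        if tpr - fpr > best_j:
            best_j, best_t = tpr - fpr, t
    return best_t, best_j
```

On a perfectly separable score vector the chosen cutoff yields J = 1.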

  8. Two Influential Primate Classifications Logically Aligned.

    Science.gov (United States)

    Franz, Nico M; Pier, Naomi M; Reeder, Deeann M; Chen, Mingmin; Yu, Shizhuo; Kianmajd, Parisa; Bowers, Shawn; Ludäscher, Bertram

    2016-07-01

    Classifications and phylogenies of perceived natural entities change in the light of new evidence. Taxonomic changes, translated into Code-compliant names, frequently lead to name:meaning dissociations across succeeding treatments. Classification standards such as the Mammal Species of the World (MSW) may experience significant levels of taxonomic change from one edition to the next, with potential costs to long-term, large-scale information integration. This circumstance challenges the biodiversity and phylogenetic data communities to express taxonomic congruence and incongruence in ways that both humans and machines can process, that is, to logically represent taxonomic alignments across multiple classifications. We demonstrate that such alignments are feasible for two classifications of primates corresponding to the second and third MSW editions. Our approach has three main components: (i) use of taxonomic concept labels, that is name sec. author (where sec. means according to), to assemble each concept hierarchy separately via parent/child relationships; (ii) articulation of select concepts across the two hierarchies with user-provided Region Connection Calculus (RCC-5) relationships; and (iii) the use of an Answer Set Programming toolkit to infer and visualize logically consistent alignments of these input constraints. Our use case entails the Primates sec. Groves (1993; MSW2: 317 taxonomic concepts; 233 at the species level) and Primates sec. Groves (2005; MSW3: 483 taxonomic concepts; 376 at the species level). Using 402 RCC-5 input articulations, the reasoning process yields a single, consistent alignment and 153,111 Maximally Informative Relations that constitute a comprehensive meaning resolution map for every concept pair in the Primates sec. MSW2/MSW3.
The complete alignment, and various partitions thereof, facilitate quantitative analyses of name:meaning dissociation, revealing that nearly one in three taxonomic names are not reliable across treatments

  9. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...

  10. Out-of-Sample Generalizations for Supervised Manifold Learning for Classification.

    Science.gov (United States)

    Vural, Elif; Guillemot, Christine

    2016-03-01

    Supervised manifold learning methods for data classification map high-dimensional data samples to a lower dimensional domain in a structure-preserving way while increasing the separation between different classes. Most manifold learning methods compute the embedding only of the initially available data; however, the generalization of the embedding to novel points, i.e., the out-of-sample extension problem, becomes especially important in classification applications. In this paper, we propose a semi-supervised method for building an interpolation function that provides an out-of-sample extension for general supervised manifold learning algorithms studied in the context of classification. The proposed algorithm computes a radial basis function interpolator that minimizes an objective function consisting of the total embedding error of unlabeled test samples, defined as their distance to the embeddings of the manifolds of their own class, as well as a regularization term that controls the smoothness of the interpolation function in a direction-dependent way. The class labels of test data and the interpolation function parameters are estimated jointly with an iterative process. Experimental results on face and object images demonstrate the potential of the proposed out-of-sample extension algorithm for the classification of manifold-modeled data sets.
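A stripped-down sketch of the out-of-sample idea: fit a Gaussian radial basis function interpolator that maps training samples to their embedding coordinates, then apply it to unseen points. This is an assumption-laden simplification (a plain ridge-stabilized least-squares fit in place of the paper's direction-dependent regularizer and joint label estimation).

```python
import numpy as np

def fit_rbf_interpolator(X, Y, sigma=1.0, reg=1e-6):
    """Fit f(x) = sum_i c_i * exp(-||x - x_i||^2 / (2 sigma^2)) mapping
    training points X to their embedding coordinates Y (one column per
    embedding dimension). A small ridge term stabilizes the solve."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / (2 * sigma ** 2))
    C = np.linalg.solve(K + reg * np.eye(len(X)), Y)
    return X, C

def rbf_extend(model, X_new, sigma=1.0):
    """Out-of-sample extension: embed unseen points with the fitted RBFs."""
    X, C = model
    D2 = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2 * sigma ** 2)) @ C
```

Evaluated at the training points themselves, the interpolator reproduces the given embedding up to the ridge perturbation.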

  11. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end result of data mining is a set of learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed to manipulate the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiments (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
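The bagging idea the study applies to CART can be sketched with depth-1 trees (decision stumps) in place of full CART, on a generic numeric feature matrix rather than tweet features. All names are illustrative; this is not the Weka setup used in the paper.

```python
import numpy as np

def train_stump(X, y):
    """Exhaustive best single-feature threshold split (a depth-1 CART tree)."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for positive in (True, False):
                pred = (X[:, f] >= t).astype(int)
                if not positive:
                    pred = 1 - pred
                acc = (pred == y).mean()
                if best is None or acc > best[0]:
                    best = (acc, f, t, positive)
    return best[1:]

def stump_predict(stump, X):
    f, t, positive = stump
    pred = (X[:, f] >= t).astype(int)
    return pred if positive else 1 - pred

def bagging_fit(X, y, n_estimators=25, seed=0):
    """Train each stump on a bootstrap resample of the training rows."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        stumps.append(train_stump(X[idx], y[idx]))
    return stumps

def bagging_predict(stumps, X):
    """Majority vote over the ensemble."""
    votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
    return (votes >= 0.5).astype(int)
```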

  12. Imaging of juvenile spondyloarthritis. Part I: Classifications and radiographs

    Directory of Open Access Journals (Sweden)

    Iwona Sudoł-Szopińska

    2017-09-01

    Full Text Available Juvenile spondyloarthropathies are manifested mainly by symptoms of peripheral arthritis and enthesitis. By contrast with adults, children rarely present with sacroiliitis and spondylitis. Imaging and laboratory tests allow early diagnosis and treatment. Conventional radiographs visualize late inflammatory lesions and post-inflammatory complications. Early diagnosis is possible with the use of ultrasonography and magnetic resonance imaging. The first part of the article presents classifications of juvenile spondyloarthropathies and discusses their radiographic presentation. Typical radiographic features of individual types of juvenile spondyloarthritis are listed (including ankylosing spondylitis, juvenile psoriatic arthritis, reactive arthritis and arthritis in the course of inflammatory bowel diseases). The second part will describe changes visible on ultrasonography and magnetic resonance imaging. In patients with juvenile spondyloarthropathies, these examinations are conducted to diagnose inflammatory lesions in peripheral joints, tendon sheaths, tendons and bursae. Moreover, magnetic resonance imaging also visualizes early inflammatory changes in the axial skeleton and subchondral bone marrow edema, which is considered an early sign of inflammation.

  13. Hyperspectral Image Classification Using Kernel Fukunaga-Koontz Transform

    Directory of Open Access Journals (Sweden)

    Semih Dinç

    2013-01-01

    images. In the experiments section, the improved performance of the HSI classification technique, K-FKT, has been tested in comparison with other methods such as the classical FKT and three types of support vector machines (SVMs).

  14. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    Science.gov (United States)

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental changes, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. Presently, related studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand environmental changes and evaluate their potential impacts. Here, we proposed a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was described mathematically by derivation. After that, several thresholds of correlation coefficients under different statistical significance levels were chosen, based on which the jump degree could be classified into five levels: no, weak, moderate, strong and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The resulting classifications of jump degree accorded closely with the physical hydrological mechanisms of the series, indicating the practicability of our method.
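A toy sketch of the correlation-coefficient index: correlate the series with a 0/1 step indicator at the jump position, then bin |r| into the five levels. The step-indicator construction and the threshold values below are illustrative assumptions, not the significance-level-derived thresholds of the paper.

```python
import numpy as np

def jump_correlation(series, jump_pos):
    """Correlation between the series and a 0/1 step indicator at jump_pos;
    |r| grows with the mean shift relative to within-segment variability."""
    step = np.zeros(len(series))
    step[jump_pos:] = 1.0
    return np.corrcoef(series, step)[0, 1]

def jump_degree(r, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map |r| onto five levels; these threshold values are illustrative."""
    levels = ("no", "weak", "moderate", "strong", "very strong")
    return levels[int(np.searchsorted(thresholds, abs(r), side="right"))]
```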

  15. Use of circulation types classifications to evaluate AR4 climate models over the Euro-Atlantic region

    Energy Technology Data Exchange (ETDEWEB)

    Pastor, M.A.; Casado, M.J. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-10-15

    This paper presents an evaluation of the multi-model simulations for the 4th Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) in terms of their ability to simulate the ERA40 circulation types over the Euro-Atlantic region in the winter season. Two classification schemes, k-means and SANDRA, have been considered to test the sensitivity of the evaluation results to the classification procedure. The assessment allows different rankings to be established according to the spatial and temporal features of the circulation types. Regarding temporal characteristics, in general, all AR4 models tend to underestimate the frequency of occurrence. The best model for simulating spatial characteristics is the UKMO-HadGEM1, whereas CCSM3, UKMO-HadGEM1 and CGCM3.1(T63) are the best at simulating the temporal features, for both classification schemes. This result agrees with the AR4 model ranking obtained when analysing the ability of the same AR4 models to simulate Euro-Atlantic variability modes. This study has proved the utility of applying such a synoptic climatology approach as a diagnostic tool for model assessment. The ability of the models to properly reproduce the position of ridges and troughs and the frequency of synoptic patterns will therefore improve our confidence in the response of models to future climate changes. (orig.)
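The k-means half of the comparison can be sketched as plain Lloyd's iterations over flattened circulation fields; the data here are synthetic stand-ins, and the SANDRA scheme is not reproduced.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means: each row of X is a flattened daily field;
    the k centroids play the role of circulation types."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # assign each field to its nearest circulation type
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each centroid to the mean of its assigned fields
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids
```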

  16. Classifications of objects on hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    . In the present work a classification method that combines classic image classification approach and MIA is proposed. The basic idea is to group all pixels and calculate spectral properties of the pixel group to be used further as a vector of predictors for calibration and class prediction. The grouping can...... be done with mathematical morphology methods applied to a score image where objects are well separated. In the case of small overlapping a watershed transformation can be applied to disjoint the objects. The method has been tested on several simulated and real cases and showed good results and significant...... improvements in comparison with a standard MIA approach. The results as well as method details will be reported....

  17. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    Directory of Open Access Journals (Sweden)

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, using which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques, but most of them place little emphasis on misclassified instances and storage space. In this paper, a classification model is proposed that takes both misclassified instances and storage space into account. The classification model is efficiently developed using a tree structure to reduce the storage complexity, and uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on the Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied to the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare instances. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiments. Experimental analysis shows that the accuracy of the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes Classifier, Support Vector Machine Classifier, Instance Based Classifier, ID3 Classifier, J48 Classifier, etc.
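The boosting step described above (raising the weights of previously misclassified instances) is the core of discrete AdaBoost. A minimal sketch of one reweighting round, independent of the paper's CCFIS base learner:

```python
import numpy as np

def adaboost_reweight(weights, y_true, y_pred):
    """One round of discrete AdaBoost reweighting: misclassified instances
    gain weight so the next weak learner focuses on them. Returns the
    renormalized weights and the learner's vote alpha."""
    miss = y_true != y_pred
    err = np.average(miss, weights=weights)            # weighted error rate
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # learner's vote
    new_w = weights * np.exp(np.where(miss, alpha, -alpha))
    return new_w / new_w.sum(), alpha
```

With one of four equally weighted instances misclassified, that instance ends up carrying half of the total weight.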

  18. Learning semantic histopathological representation for basal cell carcinoma classification

    Science.gov (United States)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach herein identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The obtained classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average more sensitive than the bag-of-features representation by almost 6%.

  19. Classification Accuracy Increase Using Multisensor Data Fusion

    Science.gov (United States)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.) but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc. and therefore may result in wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of multisource data combination following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy.
The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  20. The Performance of EEG-P300 Classification using Backpropagation Neural Networks

    Directory of Open Access Journals (Sweden)

    Arjon Turnip

    2013-12-01

    Full Text Available Electroencephalogram (EEG) recordings provide an important channel for brain-computer communication, but the accuracy of their classification is very limited due to unforeseeable signal variations relating to artifacts. In this paper, we propose a classification method for time-series EEG-P300 signals using backpropagation neural networks to predict the qualitative properties of a subject's mental tasks by extracting useful information from the highly multivariate non-invasive recordings of brain activity. To test the improvement in EEG-P300 classification performance (i.e., classification accuracy and transfer rate) with the proposed method, comparative experiments were conducted using Bayesian Linear Discriminant Analysis (BLDA). Finally, the results of the experiment showed that the average classification accuracy was 97% and the maximum improvement of the average transfer rate was 42.4%, indicating the considerable potential of using EEG-P300 for the continuous classification of mental tasks.
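A minimal backpropagation network of the kind the paper trains can be sketched in NumPy. The architecture, learning rate, and toy task below are illustrative assumptions; the paper's EEG-P300 feature extraction is out of scope here.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000, seed=1):
    """Tiny one-hidden-layer sigmoid network trained with plain backprop
    on squared error; y is a column vector of 0/1 targets."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                   # forward pass
        out = sig(h @ W2 + b2)
        losses.append(float(np.mean((out - y) ** 2)))
        d_out = (out - y) * out * (1 - out)    # backprop of squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)
    return (W1, b1, W2, b2), losses
```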

  1. Improving Cross-Day EEG-Based Emotion Classification Using Robust Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Yuan-Pin Lin

    2017-07-01

    Full Text Available Constructing a robust emotion-aware analytical framework using non-invasively recorded electroencephalogram (EEG) signals has gained intensive attention nowadays. However, when deploying a laboratory-oriented proof-of-concept study toward real-world applications, researchers face an ecological challenge: the EEG patterns recorded in real life substantially change across days (i.e., day-to-day variability), arguably making a pre-defined predictive model vulnerable to the EEG signals of a separate day. The present work addressed how to mitigate the inter-day EEG variability of emotional responses in an attempt to facilitate cross-day emotion classification, which has been less studied in the literature. This study proposed a robust principal component analysis (RPCA)-based signal filtering strategy and validated its neurophysiological validity and machine-learning practicability on a binary emotion classification task (happiness vs. sadness) using a five-day EEG dataset of 12 subjects who participated in a music-listening task. The empirical results showed that the RPCA-decomposed sparse signals (RPCA-S) enabled filtering off the background EEG activity that contributed more to the inter-day variability, and predominantly captured the EEG oscillations of emotional responses that behaved relatively consistently across days. Through applying a realistic add-day-in classification validation scheme, the RPCA-S progressively exploited more informative features (from 12.67 ± 5.99 to 20.83 ± 7.18) and improved the cross-day binary emotion-classification accuracy (from 58.31 ± 12.33% to 64.03 ± 8.40%) as the EEG signals from one to four recording days were used for training and tested against one unseen subsequent day. The original EEG features (prior to RPCA processing) neither achieved the cross-day classification (the accuracy was around chance level) nor replicated the encouraging improvement due to the inter-day EEG variability. This result
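The RPCA decomposition underlying RPCA-S can be sketched with the standard inexact augmented Lagrange multiplier algorithm for principal component pursuit: split a matrix into a low-rank part (slowly varying background) plus a sparse part (transient responses). This is a generic implementation with common parameter defaults, not necessarily the paper's settings.

```python
import numpy as np

def rpca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal Component Pursuit via inexact ALM: M ~ L (low rank) + S (sparse)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    mu = mu or 1.25 / norm2
    Y = M / max(norm2, np.abs(M).max() / lam)   # dual variable initialization
    L = np.zeros_like(M); S = np.zeros_like(M)
    for _ in range(max_iter):
        # singular-value thresholding for the low-rank update
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1 / mu, 0)) @ Vt
        # elementwise soft thresholding for the sparse update
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0)
        Z = M - L - S
        Y += mu * Z
        mu *= 1.05
        if np.linalg.norm(Z) / np.linalg.norm(M) < tol:
            break
    return L, S
```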

  2. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  3. What should an ideal spinal injury classification system consist of? A methodological review and conceptual proposal for future classifications.

    NARCIS (Netherlands)

    Middendorp, J.J. van; Audige, L.; Hanson, B.; Chapman, J.R.; Hosman, A.J.F.

    2010-01-01

    Since Bohler published the first categorization of spinal injuries based on plain radiographic examinations in 1929, numerous classifications have been proposed. Despite all these efforts, however, only a few have been tested for reliability and validity. This methodological, conceptual review

  4. Polyp morphology: an interobserver evaluation for the Paris classification among international experts.

    Science.gov (United States)

    van Doorn, Sascha C; Hazewinkel, Y; East, James E; van Leerdam, Monique E; Rastogi, Amit; Pellisé, Maria; Sanduleanu-Dascalescu, Silvia; Bastiaansen, Barbara A J; Fockens, Paul; Dekker, Evelien

    2015-01-01

    The Paris classification is an international classification system for describing polyp morphology. Thus far, the validity and reproducibility of this classification have not been assessed. We aimed to determine the interobserver agreement for the Paris classification among seven Western expert endoscopists. A total of 85 short endoscopic video clips depicting polyps were created and assessed by seven expert endoscopists according to the Paris classification. After a digital training module, the same 85 polyps were assessed again. We calculated the interobserver agreement with a Fleiss kappa and as the proportion of pairwise agreement. The interobserver agreement of the Paris classification among the seven experts was moderate, with a Fleiss kappa of 0.42 and a mean pairwise agreement of 67%. The proportion of lesions assessed as "flat" by the experts ranged between 13 and 40%. After the training module, agreement did not change (kappa 0.38, pairwise agreement 60%). Our study is the first to validate the Paris classification for polyp morphology. We demonstrated only a moderate interobserver agreement among international Western experts for this classification system. Our data suggest that, in its current version, the use of this classification system in daily practice is questionable and it is unsuitable for comparative endoscopic research. We therefore suggest the introduction of a simplified version of the classification system.
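Fleiss' kappa, the agreement statistic used above, can be computed directly from an items-by-categories table of rating counts; a compact sketch (function name assumed):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (items x categories) table of rating counts;
    each row must sum to the same number of raters."""
    counts = np.asarray(counts, float)
    n = counts.sum(1)[0]                      # raters per item
    p_cat = counts.sum(0) / counts.sum()      # overall category proportions
    P_i = (counts * (counts - 1)).sum(1) / (n * (n - 1))  # per-item agreement
    P_bar = P_i.mean()                        # observed agreement
    P_e = (p_cat ** 2).sum()                  # chance agreement
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement yields kappa = 1; systematic disagreement between two raters yields kappa = -1.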

  5. Influence of nuclei segmentation on breast cancer malignancy classification

    Science.gov (United States)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast Cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role during the diagnosis process of breast cancer. Out of all cancer diagnostic tools, FNA slides provide the most valuable information about the cancer malignancy grade, which helps to choose an appropriate treatment. This process involves assessing numerous nuclear features and therefore precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy. The studied approaches involve level set segmentation, fuzzy c-means segmentation and textural segmentation based on a co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification purposes four different classifiers were trained and tested with the previously extracted features. The compared classifiers are the Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The presented results show that level set segmentation yields the best results of the three compared approaches and leads to good feature extraction, with the lowest average error rate of 6.51% over four different classifiers. The best performance was recorded for the multilayer perceptron, with an error rate of 3.07% using fuzzy c-means segmentation.

  6. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
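The collaborative-representation idea that GRR extends can be sketched as ridge-regularized (Tikhonov) coding with class-wise residual classification. This is plain CRC; the GRR priors and learned pixel weights are not modeled here.

```python
import numpy as np

def crc_classify(X_train, y_train, x_test, reg=0.01):
    """Collaborative representation: solve min ||x - D a||^2 + reg ||a||^2
    over the whole training dictionary D, then assign the class whose
    training columns reconstruct x with the smallest residual."""
    D = X_train.T                                    # features x samples
    A = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ x_test)
    classes = np.unique(y_train)
    residuals = [np.linalg.norm(x_test - D[:, y_train == c] @ A[y_train == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```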

  7. The World Health Organization Classification of Odontogenic Lesions: A Summary of the Changes of the 2017 (4th) Edition

    Directory of Open Access Journals (Sweden)

    Merva SOLUK-TEKKEŞİN

    2018-01-01

    Full Text Available The 4th edition of the World Health Organization (WHO) Classification of Head and Neck Tumors was published in January 2017. The edition serves to provide an updated classification scheme, and extended genetic and molecular data, that are useful as diagnostic tools for lesions of the head and neck region. This review focuses on the most current update of odontogenic cysts and tumors based on the 2017 WHO edition. The updated classification has some important differences from the 3rd edition (2005), including a new classification of odontogenic cysts, ‘reclassified’ odontogenic tumors, and some new entities.

  8. A NEW CLASSIFICATION OF SMES IN THE DIGITAL ECONOMY CONTEXT

    Directory of Open Access Journals (Sweden)

    Maximilian ROBU

    2013-06-01

    Full Text Available In a highly dynamic and competitive environment such as the online one, SMEs need to adapt and change their behavior, which requires a rethinking of classification criteria. Social media is changing the way people interact, but it is also changing organizations and how they operate. Social networks are no longer just a simple tool to create a network of friends; they have become a destination for business. We can also talk of a new world of business and a new way of working; freelancers account for an increasingly significant share of it. Cloud computing is the support for all changes in the current environment, providing the tools necessary to conduct activities anywhere. Given these arguments, we propose classifying SMEs according to a new set of criteria: operating environment, geographical area, type of employees in the company, and how they organize marketing activities.

  9. Mapping Plant Functional Types over Broad Mountainous Regions: A Hierarchical Soft Time-Space Classification Applied to the Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Danlu Cai

    2014-04-01

    Full Text Available Research on global climate change requires plant functional type (PFT) products. Although several PFT mapping procedures for remote sensing imagery are in use, none of them appears to be specifically designed to map and evaluate PFTs over broad mountainous areas, which are highly relevant regions for identifying and analyzing the response of natural ecosystems. We present a methodology for generating soft classifications of PFTs from remotely sensed time series, based on a hierarchical strategy that integrates time-varying integrated NDVI and phenological information with topography: (i) Temporal variability: a Fourier transform of a vegetation index (MODIS NDVI, 2006 to 2010). (ii) Spatial partitioning: a primary image segmentation based on a small number of thresholds applied to the Fourier amplitude. (iii) Classification: a supervised soft classification step based on a normalized distance metric constructed from a subset of Fourier coefficients and complementary altitude data from a digital elevation model. Applicability and effectiveness are tested for the eastern Tibetan Plateau. A classification nomenclature is determined from temporally stable pixels in the MCD12Q1 time series. Overall accuracy statistics of the resulting classification reveal a gain of about 7%, from 57.7% by the MODIS PFT products to 64.4%.
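Step (i), the Fourier transform of an NDVI time series, can be sketched numerically. The series length (roughly 23 MODIS composites per year over 5 years) and the NDVI values are invented for illustration; the mean level and the annual-cycle amplitude fall out of the zero-frequency and 5-cycle Fourier bins.

```python
# Illustrative sketch of step (i): Fourier amplitudes of a synthetic
# 5-year MODIS-like NDVI series (values are invented).
import numpy as np

t = np.arange(115)                                 # ~23 composites/yr x 5 yr
ndvi = 0.4 + 0.25 * np.sin(2 * np.pi * t / 23.0)   # one seasonal cycle/yr
coeffs = np.fft.rfft(ndvi)
amplitude = np.abs(coeffs) / len(t)

mean_ndvi = amplitude[0]        # zero-frequency term = mean NDVI level
annual = 2 * amplitude[5]       # 5 cycles in 115 steps = annual-cycle term
print(mean_ndvi, annual)
```

Thresholding such amplitudes (step ii) separates, e.g., strongly seasonal vegetation from bare or evergreen surfaces.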

  10. Classification of radioactive self-luminous light sources - approved 1975. NBS Handbook 116

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    The standard establishes the classification of certain radioactive self-luminous light sources according to radionuclide, type of source, activity, and performance requirements. The objectives are to establish minimum prototype testing requirements for radioactive self-luminous light sources, to promote uniformity of marking such sources, and to establish minimum physical performance for such sources. The standard is primarily directed toward assuring adequate containment of the radioactive material. Testing procedures and classification designations are specified for discoloration, temperature, thermal shock, reduced pressure, impact, vibration, and immersion. A range of test requirements is presented according to intended usage and source activity.

  11. The value of laparoscopic classifications in decision on definitive ...

    African Journals Online (AJOL)

    The value of laparoscopic classifications in decision on definitive surgery in patients with nonpalpable testes: our ... present our clinical experience with the laparoscopic approach in patients with nonpalpable testes (NPTs) and .... decision making during the procedure. Gatti and. Ostlie [3] have pointed out that laparoscopic ...

  12. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.

    Science.gov (United States)

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic

    2014-06-01

    In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally and has not been validated before. The aim was to compare the Danish dichotomous mammographic density classification system used from 1991 to 2001 with the BI-RADS density classification, in an attempt to validate the Danish classification system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001 that tested false positive and that were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linear weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The inter-rater agreement assessed by the weighted kappa statistic was substantial (0.75). The dichotomous mammographic density classification system utilized in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.
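The linear weighted kappa statistic used above penalizes disagreements in proportion to how many categories apart the two ratings are. A minimal sketch with scikit-learn, using invented four-level ratings (not the study data):

```python
# Illustrative linear-weighted kappa between two raters on a 4-level
# ordinal scale; the ratings here are invented, not the study data.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 1, 2, 2, 3, 3, 4, 2, 3, 1]
rater_b = [1, 2, 2, 2, 3, 4, 4, 2, 3, 1]
kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
print(round(kappa, 3))
```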

  13. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    Science.gov (United States)

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus. This results in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability of three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. Conclusions: Reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these classification systems show reliability and reproducibility.

  14. Standard classification: Physics

    International Nuclear Information System (INIS)

    1977-01-01

    This is a draft standard classification of physics. The conception is based on the physics part of the systematic catalogue of the Bayerische Staatsbibliothek and on the classification given in standard textbooks. The ICSU-AB classification now used worldwide by physics information services was not taken into account. (BJ) [de

  15. Active Learning of Classification Models with Likert-Scale Feedback.

    Science.gov (United States)

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solver. We show that the combination of our active learning strategy and Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
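The generic active learning loop the paper builds on can be sketched with a simple uncertainty-sampling criterion. Note this is a hedged stand-in: the paper's actual criterion is a Bayesian expected-change measure combined with Likert-scale feedback, not the plain least-confidence rule shown here, and the data are synthetic.

```python
# Sketch of a generic active learning loop (uncertainty sampling), not the
# paper's Bayesian expected-change criterion; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=1)
# Seed set with five labeled examples per class, rest go to the pool.
labeled = sorted(np.flatnonzero(y == 0)[:5].tolist()
                 + np.flatnonzero(y == 1)[:5].tolist())
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(5):                       # five annotation rounds
    clf = SVC(probability=True, random_state=1).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confidence score
    pick = pool.pop(int(np.argmax(uncertainty))) # most uncertain example
    labeled.append(pick)                         # oracle supplies y[pick]

print("labeled set size:", len(labeled))
```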

  16. Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification

    Directory of Open Access Journals (Sweden)

    Xinzheng Zhang

    2016-09-01

    Full Text Available Classification of target microwave images is an important application in many areas, such as security and surveillance. For the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experimental results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.
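The core classification step, ℓ1-regularized non-negative coding followed by a minimum class-wise reconstruction error, can be sketched with scikit-learn's Lasso (`positive=True` enforces non-negativity). This is a simplified stand-in for the paper's ADNNLS solver over its dynamic dictionary; the dictionary and test sample are synthetic.

```python
# Simplified sketch of the classification step: l1-regularized non-negative
# coding over a class-labeled dictionary, then minimum class-wise
# reconstruction error. Data are synthetic, not MSTAR.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
D = np.abs(rng.normal(size=(30, 10)))   # 10 atoms, 30-dim features
labels = np.repeat([0, 1], 5)           # two classes, 5 atoms each
x = D[:, labels == 0] @ np.array([0.5, 0.3, 0.2, 0.0, 0.0])  # class-0 mix

coder = Lasso(alpha=0.01, positive=True, fit_intercept=False, max_iter=10000)
coder.fit(D, x)
coef = coder.coef_                      # non-negative, sparse coefficients

residuals = [np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
             for c in (0, 1)]
print("predicted class:", int(np.argmin(residuals)))
```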

  17. Crop Type Classification Using Vegetation Indices of RapidEye Imagery

    Science.gov (United States)

    Ustuner, M.; Sanli, F. B.; Abdikan, S.; Esetlili, M. T.; Kurucu, Y.

    2014-09-01

    Cutting-edge remote sensing technology plays a significant role in managing natural resources as well as in many other Earth-observation applications. Crop monitoring is one of these applications, since remote sensing provides accurate, up-to-date and cost-effective information about crop types at different temporal and spatial resolutions. In this study, the potential use of three different vegetation indices of RapidEye imagery for crop type classification, as well as the effect of each index on classification accuracy, were investigated. The Normalized Difference Vegetation Index (NDVI), the Green Normalized Difference Vegetation Index (GNDVI), and the Normalized Difference Red Edge Index (NDRE) are the three vegetation indices used in this study, since all of them incorporate the near-infrared (NIR) band. RapidEye imagery is in high demand and preferred for agricultural and forestry applications since it has red-edge and NIR bands. The study area is located in the Aegean region of Turkey. A Radial Basis Function (RBF) kernel was used for the Support Vector Machines (SVM) classification. The original bands of the RapidEye imagery were excluded and classification was performed with only the three vegetation indices. The contribution of each index to classification accuracy was also tested with single-band classification. The highest classification accuracy, 87.46%, was obtained using all three vegetation indices. This accuracy is higher than that of any dual combination of these vegetation indices. The results demonstrate that NDRE contributes most to classification accuracy compared to the other vegetation indices, and that RapidEye imagery can achieve satisfactory classification accuracy without its original bands.
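The three indices share the same normalized-difference form, each pairing the NIR band with a different second band. A minimal sketch with invented reflectance values (the band values are not from the study):

```python
# The three normalized-difference indices, computed from RapidEye-like
# band reflectances; the sample values are invented for illustration.
green, red, red_edge, nir = 0.08, 0.05, 0.18, 0.40  # sample reflectances

ndvi = (nir - red) / (nir + red)
gndvi = (nir - green) / (nir + green)
ndre = (nir - red_edge) / (nir + red_edge)
print(round(ndvi, 3), round(gndvi, 3), round(ndre, 3))
```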

  18. Subordinate-level object classification reexamined.

    Science.gov (United States)

    Biederman, I; Subramaniam, S; Bar, M; Kalocsai, P; Fiser, J

    1999-01-01

    The classification of a table as round rather than square, a car as a Mazda rather than a Ford, a drill bit as 3/8-inch rather than 1/4-inch, and a face as Tom have all been regarded as a single process termed "subordinate classification." Despite the common label, the considerable heterogeneity of the perceptual processing required to achieve such classifications requires, minimally, a more detailed taxonomy. Perceptual information relevant to subordinate-level shape classifications can be presumed to vary on continua of (a) the type of distinctive information that is present, nonaccidental or metric, (b) the size of the relevant contours or surfaces, and (c) the similarity of the to-be-discriminated features, such as whether a straight contour has to be distinguished from a contour of low curvature versus high curvature. We consider three, relatively pure cases. Case 1 subordinates may be distinguished by a representation, a geon structural description (GSD), specifying a nonaccidental characterization of an object's large parts and the relations among these parts, such as a round table versus a square table. Case 2 subordinates are also distinguished by GSDs, except that the distinctive GSDs are present at a small scale in a complex object so the location and mapping of the GSDs are contingent on an initial basic-level classification, such as when we use a logo to distinguish various makes of cars. Expertise for Cases 1 and 2 can be easily achieved through specification, often verbal, of the GSDs. Case 3 subordinates, which have furnished much of the grist for theorizing with "view-based" template models, require fine metric discriminations. Cases 1 and 2 account for the overwhelming majority of shape-based basic- and subordinate-level object classifications that people can and do make in their everyday lives. These classifications are typically made quickly, accurately, and with only modest costs of viewpoint changes. Whereas the activation of an array of

  19. Tongue Images Classification Based on Constrained High Dispersal Network

    Directory of Open Access Journals (Sweden)

    Dan Meng

    2017-01-01

    Full Text Available Computer-aided tongue diagnosis has great potential to play an important role in traditional Chinese medicine (TCM). However, the majority of existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNNs), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and a local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in the TCM study.
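Local response normalization, one of the operations the framework adds, divides each activation by a power of the summed squared activations in neighboring channels. The sketch below uses the common AlexNet-style constants, which are an assumption here, not the paper's settings.

```python
# NumPy sketch of local response normalization across channels; the
# constants (n, k, alpha, beta) follow common AlexNet-style defaults and
# are an assumption, not the paper's settings.
import numpy as np

def local_response_norm(x, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """x: activations with shape (channels, height, width)."""
    c = x.shape[0]
    out = np.empty_like(x)
    for i in range(c):
        # Sum squared activations over a window of n neighboring channels.
        lo, hi = max(0, i - n // 2), min(c, i + n // 2 + 1)
        denom = (k + alpha * (x[lo:hi] ** 2).sum(axis=0)) ** beta
        out[i] = x[i] / denom
    return out

acts = np.ones((8, 4, 4))
print(local_response_norm(acts).shape)
```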

  20. A specialist-generalist classification of the arable flora and its response to changes in agricultural practices

    Science.gov (United States)

    2010-01-01

    Background: Theory in ecology points out the potential link between the degree of specialisation of organisms and their responses to disturbances, and suggests that this could be a key element for understanding the assembly of communities. We evaluated this question for the arable weed flora, as this group has scarcely been the focus of ecological studies so far and because weeds are restricted to habitats characterised by very high degrees of disturbance. As such, weeds offer a case study to ask how specialization relates to abundance and distribution of species in relation to the varying disturbance regimes occurring in arable crops. Results: We used data derived from an extensive national monitoring network of approximately 700 arable fields scattered across France to quantify the degree of specialisation of 152 weed species using six different ecological methods. We then explored the impact of the level of disturbance occurring in arable fields by comparing the degree of specialisation of weed communities in contrasting field situations. The classification of species as specialist or generalist was consistent between different ecological indices. When applied to a large-scale data set across France, this classification highlighted that monocultures harbour significantly more specialists than crop rotations, suggesting that crop rotation increases the abundance of generalist species rather than sets of species that are each specialised to the individual crop types grown in the rotation. Applied to a diachronic dataset, the classification also shows that the proportion of specialist weed species has significantly decreased in cultivated fields over the last 30 years, which suggests a biotic homogenization of agricultural landscapes. Conclusions: This study shows that the concept of generalist/specialist species is particularly relevant to understand the effect of anthropogenic disturbances on the evolution of plant community composition and that ecological theories

  1. Comprehensive Application of the International Classification of Headache Disorders Third Edition, Beta Version

    OpenAIRE

    Kim, Byung-Kun; Cho, Soo-Jin; Kim, Byung-Su; Sohn, Jong-Hee; Kim, Soo-Kyoung; Cha, Myoung-Jin; Song, Tae-Jin; Kim, Jae-Moon; Park, Jeong Wook; Chu, Min Kyung; Park, Kwang-Yeol; Moon, Heui-Soo

    2015-01-01

    The purpose of this study was to test the feasibility and usefulness of the International Classification of Headache Disorders, third edition, beta version (ICHD-3β), and compare the differences with the International Classification of Headache Disorders, second edition (ICHD-2). Consecutive first-visit patients were recruited from 11 headache clinics in Korea. Headache classification was performed in accordance with ICHD-3β. The characteristics of headaches were analyzed and the feasibility ...

  2. WHO/ISUP classification of the urothelial tumors of the urinary bladder

    Directory of Open Access Journals (Sweden)

    Zdenka Ovčak

    2005-09-01

    Full Text Available Background: The authors present the current classification of urothelial neoplasms of the urinary bladder. The 1973 classification of urothelial tumors of the urinary bladder was, despite some imperfections, used relatively successfully for more than thirty years. The three-grade classification of papillary urothelial tumors without invasion has been based on evaluation of variations in the architecture of the covering epithelium and tumor cell anaplasia. As recommended by the International Society of Urological Pathologists (ISUP), the World Health Organisation (WHO) accepted the new WHO/ISUP classification in 1998; it was revised in 2002 and finally published in 2004. With the intention of avoiding unnecessary diagnosis of cancer in patients having papillary urothelial tumors with rare invasive or metastatic growth, this classification introduced a new entity, the papillary urothelial neoplasia of low malignant potential (PUNLMP). The additional change in the classification was the division of invasive urothelial neoplasms into only low-grade and high-grade urothelial carcinomas. Conclusions: The authors’ opinion is that although the old classification is no longer recommended for use, the new one does not resolve the elementary objections to the previous classification, such as terminological unsuitability and insufficient scientific reasoning. Our proposed solution for the classification of papillary urothelial neoplasms would be the application of criteria analogous to those used in the diagnostics of papillary noninvasive tumors of the head and neck or alimentary tract.

  3. Review of Land Use and Land Cover Change research progress

    Science.gov (United States)

    Chang, Yue; Hou, Kang; Li, Xuxiang; Zhang, Yunwei; Chen, Pei

    2018-02-01

    Land Use and Land Cover Change (LUCC) can reflect the pattern of human land use in a region and plays an important role in soil and water conservation. The study of changing land use patterns around the world is of great significance for coping with global climate change and sustainable development. This paper reviews the main research progress of LUCC in China and abroad, and suggests that the research emphasis has shifted from land use planning and management to the impacts and driving factors of land use change. The development of remote sensing technology provides the basis and data for dynamic monitoring and quantitative analysis of LUCC. However, there is no uniform standard for land use classification at present, which brings considerable inconvenience to the collection and analysis of land cover data. Globeland30 is an important milestone contribution to the study of the international LUCC system. More attention should be paid to the accuracy of land use classifications obtained by remote sensing technology and to cross-comparison of their results.

  4. Testing for change in structural elements of forest inventories

    Science.gov (United States)

    Melinda Vokoun; David Wear; Robert Abt

    2009-01-01

    In this article we develop a methodology to test for changes in the underlying relationships between measures of forest productivity (structural elements) and site characteristics, herein referred to as structural changes, using standard forest inventories. Changes in measures of forest growing stock volume and number of trees for both...

  5. Inductive classification of operating data from a fluidized bed calciner

    International Nuclear Information System (INIS)

    O'Brien, B.H.

    1990-01-01

    A process flowsheet expert system for a fluidized bed calciner which solidifies high-level radioactive liquid waste was developed from pilot-plant data using a commercial, inductive classification program. After initial classification of the data, the resulting rules were inspected and adjusted to match existing knowledge of process chemistry. The final expert system predicts performance of process flowsheets based upon the chemical composition of the calciner feed and has been successfully used to identify potential operational problems prior to calciner pilot-plant testing of new flowsheets and to provide starting parameters for pilot-plant tests. By using inductive classification techniques to develop the initial rules from the calciner pilot-plant data and using existing process knowledge to verify the accuracy of these rules, an effective expert system was developed with a minimum amount of effort. This method may be applied for developing expert systems for other processes where numerous operating data are available and only general process chemistry effects are known.
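The induce-then-inspect workflow described above can be sketched with a decision tree, whose learned rules can be printed for expert review. This is a hedged illustration: the original used a commercial induction program, and the feed-composition data and feature names below are invented.

```python
# Hedged sketch of inductive classification from operating data: induce a
# decision tree from synthetic feed-composition records and print the
# learned rules for expert inspection and adjustment.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for pilot-plant operating records.
X, y = make_classification(n_samples=120, n_features=4, random_state=3)
tree = DecisionTreeClassifier(max_depth=3, random_state=3).fit(X, y)

# Human-readable rules, analogous to the induced rules reviewed by experts.
rules = export_text(tree, feature_names=[f"feed_{i}" for i in range(4)])
print(rules)
```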

  6. Full-motion video analysis for improved gender classification

    Science.gov (United States)

    Flora, Jeffrey B.; Lochtefeld, Darrell F.; Iftekharuddin, Khan M.

    2014-06-01

    The ability of computer systems to perform gender classification using the dynamic motion of a human subject has important applications in medicine, human factors, and human-computer interface systems. Previous work in motion analysis has used data from sensors (including gyroscopes, accelerometers, and force plates), radar signatures, and video. However, full-motion video with motion capture and range data provides a dataset of higher temporal and spatial resolution for the analysis of dynamic motion. Work using motion capture data has been limited by small datasets in controlled environments. In this paper, we apply machine learning techniques to a new dataset with a larger number of subjects. Additionally, these subjects move unrestricted through a capture volume, representing a more realistic, less controlled environment. We conclude that existing linear classification methods are insufficient for gender classification on a larger dataset captured in a relatively uncontrolled environment. A method based on a nonlinear support vector machine classifier is proposed to obtain gender classification for the larger dataset. In experimental testing with a dataset consisting of 98 trials (49 subjects, 2 trials per subject), classification rates using leave-one-out cross-validation improved from 73% using linear discriminant analysis to 88% using the nonlinear support vector machine classifier.
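The reported comparison, linear discriminant analysis versus a nonlinear SVM under leave-one-out cross-validation, can be sketched as follows. The motion features here are synthetic stand-ins, since the gait dataset itself is not reproduced.

```python
# Sketch of the reported comparison: LDA vs. an RBF-kernel SVM under
# leave-one-out cross-validation, on synthetic stand-in motion features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=8, n_informative=4,
                           random_state=4)
loo = LeaveOneOut()
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("RBF SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=loo).mean()
    print(f"{name}: {acc:.2%}")
```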

  7. Parametric change point estimation, testing and confidence interval ...

    African Journals Online (AJOL)

    In many applications, like finance, industry and medicine, it is important to consider that the model parameters may undergo changes at an unknown moment in time. This paper deals with estimation, testing and confidence intervals for a change point for a univariate variable which is assumed to be normally distributed. To detect ...
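For the normal-mean setting described above, the maximum-likelihood change point estimate can be sketched in a few lines: the profile log-likelihood over candidate split points is maximized where the pooled residual sum of squares of the two segments is minimized. The data below are simulated; the paper's own estimator and test details are not reproduced here.

```python
# Minimal sketch of ML change-point estimation for a mean shift in a
# normal sequence; the data are simulated (true change point at 50).
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(2.0, 1.0, 50)])

# For each candidate split k, the profile log-likelihood is maximal where
# the combined residual sum of squares of the two segments is minimal.
rss = [((x[:k] - x[:k].mean()) ** 2).sum()
       + ((x[k:] - x[k:].mean()) ** 2).sum()
       for k in range(2, len(x) - 1)]
tau_hat = int(np.argmin(rss)) + 2
print("estimated change point:", tau_hat)
```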

  8. Classification of bacterial contamination using image processing and distributed computing.

    Science.gov (United States)

    Ahmed, W M; Bayraktar, B; Bhunia, A; Hirleman, E D; Robinson, J P; Rajwa, B

    2013-01-01

    Disease outbreaks due to contaminated food are a major concern not only for the food-processing industry but also for the public at large. Techniques for automated detection and classification of microorganisms can be a great help in preventing outbreaks and maintaining the safety of the nation's food supply. Identification and classification of foodborne pathogens using colony scatter patterns is a promising new label-free technique that utilizes image-analysis and machine-learning tools. However, the feature-extraction tools employed for this approach are computationally complex, and choosing the right combination of scatter-related features requires extensive testing with different feature combinations. In the presented work we used computer clusters to speed up the feature-extraction process, which enables us to analyze the contribution of different scatter-based features to the overall classification accuracy. A set of 1000 scatter patterns representing ten different bacterial strains was used. Zernike and Chebyshev moments as well as Haralick texture features were computed from the available light-scatter patterns. The most promising features were first selected using Fisher's discriminant analysis, and subsequently a support-vector-machine (SVM) classifier with a linear kernel was used. With extensive testing we were able to identify a small subset of features that produced the desired results in terms of classification accuracy and execution speed. The use of distributed computing for scatter-pattern analysis, feature extraction, and selection provides a feasible mechanism for large-scale deployment of a light scatter-based approach to bacterial classification.
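The select-then-classify pipeline described above can be sketched with scikit-learn. Note the hedges: univariate ANOVA F-scores stand in for Fisher's discriminant ranking, and the feature matrix is a synthetic stand-in for the Zernike/Chebyshev/Haralick features.

```python
# Sketch of the selection-then-classification pipeline: univariate
# feature ranking (ANOVA F-scores as a stand-in for Fisher's discriminant
# analysis) followed by a linear SVM; features are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=6)
pipe = make_pipeline(SelectKBest(f_classif, k=10), LinearSVC(max_iter=5000))
pipe.fit(X, y)
acc = pipe.score(X, y)
print("training accuracy:", round(acc, 3))
```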

  9. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    Science.gov (United States)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
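The regression half of the approach can be sketched with support vector regression, mapping an easily measured quantity such as wave velocity to RMR. This is a hedged sketch only: RVR is not in scikit-learn, the linear velocity-RMR relation below is invented, and the data are simulated rather than from the Iranian tunnel sites.

```python
# Hedged sketch: SVR mapping a measured quantity (wave velocity) to RMR.
# The velocity-RMR relation (RMR = 15*v + noise) is invented for
# illustration; the study's data and RVR model are not reproduced.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)
velocity = rng.uniform(2.0, 6.0, size=(80, 1))     # km/s, simulated
rmr = 15 * velocity[:, 0] + rng.normal(0, 3, 80)   # invented relation

model = SVR(kernel="rbf", C=100.0).fit(velocity, rmr)
pred = model.predict([[4.0]])[0]
print("predicted RMR at 4.0 km/s:", round(pred, 1))
```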

  10. Evolving cancer classification in the era of personalized medicine: A primer for radiologists

    Energy Technology Data Exchange (ETDEWEB)

    O' Neill, Alibhe C.; Jagannathan, Jyothi P.; Ramaiya, Nikhil H. [Dept. of Imaging, Dana Farber Cancer Institute, Boston (United States)

    2017-01-15

    Traditionally, tumors were classified based on anatomic location, but now specific genetic mutations in cancers are leading to treatment of tumors with molecular targeted therapies. This has led to a paradigm shift in the classification and treatment of cancer. Tumors treated with molecular targeted therapies often show morphological changes rather than changes in size, and are associated with class-specific and drug-specific toxicities different from those encountered with conventional chemotherapeutic agents. It is important for radiologists to be familiar with the new cancer classification and the various treatment strategies employed, in order to effectively communicate and participate in multi-disciplinary care. In this paper we will focus on lung cancer as a prototype of the new molecular classification.

  11. Application of ant colony optimization in NPP classification fault location

    International Nuclear Information System (INIS)

    Xie Chunli; Liu Yongkuo; Xia Hong

    2009-01-01

    A nuclear power plant is a highly complex structural system with high safety requirements, so fault location is particularly important for enhancing its safety. Ant colony optimization is a new type of optimization algorithm, which is used in this paper for fault location and classification in nuclear power plants. Taking the main coolant system of the first loop as the study object and using VB6.0 programming technology, an NPP fault location system was designed and tested against related data from the literature. Test results show that ant colony optimization can be used for accurate fault classification and location in nuclear power plants. (authors)

  12. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural…

  13. Analysis of the Carnegie Classification of Community Engagement: Patterns and Impact on Institutions

    Science.gov (United States)

    Driscoll, Amy

    2014-01-01

    This chapter describes the impact that participation in the Carnegie Classification for Community Engagement had on the institutions of higher learning that applied for the classification. This is described in terms of changes in direct community engagement, monitoring and reporting on community engagement, and levels of student and professor…

  14. Coefficient of variation for use in crop area classification across multiple climates

    Science.gov (United States)

    Whelen, Tracy; Siqueira, Paul

    2018-05-01

    In this study, the coefficient of variation (CV) is introduced as a unitless statistical measurement for the classification of croplands using synthetic aperture radar (SAR) data. As a measurement of change, the CV is able to capture changing backscatter responses caused by cycles of planting, growing, and harvesting, and thus is able to differentiate these areas from more static forest or urban areas. Pixels with CV values above a given threshold are classified as crops, and those below the threshold as non-crops. This paper uses cross-polarized L-band SAR data from the ALOS PALSAR satellite to classify eleven regions across the United States, covering a wide range of major crops and climates. Two separate classifications were performed: the first targeted the optimum classification threshold for each dataset, and the second used a generalized threshold for all datasets to simulate a large-scale operational situation. Overall accuracies ranged from 66% to 81% for the first phase of classification and from 62% to 84% for the second. Visual inspection of the results shows numerous possibilities for improving the classifications while still using the same classification method, including increasing the number and temporal frequency of input images in order to better capture phenological events and mitigate the effects of major precipitation events, as well as using more accurate ground truth data. These improvements would make the CV method a viable tool for monitoring agriculture throughout the year on a global scale.
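    The CV thresholding described above is straightforward to sketch. The backscatter values and the 0.5 threshold below are hypothetical; the paper tunes thresholds per dataset:

```python
from statistics import mean, stdev

def coefficient_of_variation(series):
    """Unitless CV of one pixel's backscatter time series."""
    m = mean(series)
    return stdev(series) / m if m else 0.0

def classify_pixel(series, threshold=0.5):
    """Crop if temporal variability exceeds the threshold, else non-crop.
    The 0.5 threshold is illustrative; the paper tunes it per dataset."""
    return "crop" if coefficient_of_variation(series) > threshold else "non-crop"

# Hypothetical L-band backscatter (linear power) across a growing season.
crop_pixel   = [0.02, 0.08, 0.15, 0.12, 0.03]   # plant/grow/harvest cycle
forest_pixel = [0.10, 0.11, 0.10, 0.09, 0.10]   # static canopy
```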

  15. Dynamic classification system in large-scale supervision of energy efficiency in buildings

    International Nuclear Information System (INIS)

    Kiluk, S.

    2014-01-01

    Highlights:
    • Rough set approximation of classification improves energy efficiency prediction.
    • Dynamic features of diagnostic classification allow for its precise prediction.
    • Indiscernibility in large population enhances identification of process features.
    • Diagnostic information can be refined by dynamic references to local neighbourhood.
    • We introduce data exploration validation based on system dynamics and uncertainty.
    Abstract: Data mining and knowledge discovery applied to billing data provide diagnostic instruments for the evaluation of energy use in buildings connected to a district heating network. To ensure the validity of an algorithm-based classification system, the dynamic properties of a sequence of partitions for consecutive detected events were investigated. The information regarding the dynamic properties of the classification system refers to the similarities between the supervised objects and to migrations that originate from changes in building energy use and loss of similarity to their neighbourhood, and thus represents a refinement of knowledge. In this study, we demonstrate that algorithm-based diagnostic knowledge has dynamic properties that can be exploited with a rough set predictor to evaluate whether the implementation of classification for supervision of energy use aligns with the dynamics of changes in the properties of district heating-supplied buildings. Moreover, we demonstrate the refinement of current knowledge with previous findings, and we present the creation of predictive diagnostic systems based on knowledge dynamics with a satisfactory level of classification error, even for non-stationary data.
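    The rough-set machinery the abstract relies on can be sketched in a few lines: objects are grouped into indiscernibility classes on chosen attributes, and a target set is bracketed by lower and upper approximations. The building descriptions below are hypothetical:

```python
from collections import defaultdict

def indiscernibility_classes(objects, attributes):
    """Group objects that are indistinguishable on the chosen attributes."""
    classes = defaultdict(set)
    for name, description in objects.items():
        classes[tuple(description[a] for a in attributes)].add(name)
    return list(classes.values())

def approximations(objects, attributes, target):
    """Rough-set lower/upper approximation of a target set of objects."""
    lower, upper = set(), set()
    for cls in indiscernibility_classes(objects, attributes):
        if cls <= target:          # wholly inside the target: certain members
            lower |= cls
        if cls & target:           # overlaps the target: possible members
            upper |= cls
    return lower, upper

# Hypothetical buildings described by coarse billing-derived features.
buildings = {
    "b1": {"use": "high", "trend": "rising"},
    "b2": {"use": "high", "trend": "rising"},
    "b3": {"use": "low",  "trend": "flat"},
    "b4": {"use": "high", "trend": "flat"},
    "b5": {"use": "high", "trend": "flat"},
}
lower, upper = approximations(buildings, ["use", "trend"], {"b1", "b2", "b4"})
```

    The gap between `lower` and `upper` captures the uncertainty the paper exploits: `b4` cannot be separated from `b5` on these attributes, so it sits in the boundary region.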

  16. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  17. A Data Mining Classification Approach for Behavioral Malware Detection

    Directory of Open Access Journals (Sweden)

    Monire Norouzi

    2016-01-01

    Data mining techniques have numerous applications in malware detection, and classification is one of the most popular data mining techniques. In this paper we present a data mining classification approach to detecting malware by its behavior. We propose different classification methods in order to detect malware based on the features and behavior of each sample. A dynamic analysis method is presented for identifying malware features, and a program is presented for converting a malware behavior execution-history XML file into suitable input for the WEKA tool. To evaluate performance on training and test data, we apply the proposed approaches to a real case-study data set using the WEKA tool. The evaluation results demonstrate the viability of the proposed data mining approach: it is efficient at detecting malware, and behavioral classification of malware can be useful for detection in a behavioral antivirus.
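    The XML-to-WEKA conversion step can be sketched as a small feature extractor that counts API calls and emits one ARFF row. The XML schema, the feature list, and the attribute names are hypothetical; real sandbox logs differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical behaviour log; real sandbox XML schemas differ.
BEHAVIOR_XML = """<behavior>
  <call api="CreateFile"/>
  <call api="RegSetValue"/>
  <call api="CreateFile"/>
</behavior>"""

FEATURES = ["CreateFile", "RegSetValue", "InternetOpen"]

def xml_to_arff_row(xml_text, label):
    """Count API calls of interest and emit one comma-separated ARFF row."""
    counts = {f: 0 for f in FEATURES}
    for call in ET.fromstring(xml_text).iter("call"):
        if call.get("api") in counts:
            counts[call.get("api")] += 1
    return ",".join(str(counts[f]) for f in FEATURES) + "," + label

# A minimal ARFF header to accompany the data rows.
header = ["@relation behaviour"] \
       + [f"@attribute {f} numeric" for f in FEATURES] \
       + ["@attribute class {malware,benign}", "@data"]
row = xml_to_arff_row(BEHAVIOR_XML, "malware")
```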

  18. Signal classification for acoustic neutrino detection

    International Nuclear Information System (INIS)

    Neff, M.; Anton, G.; Enzenhöfer, A.; Graf, K.; Hößl, J.; Katz, U.; Lahmann, R.; Richardt, C.

    2012-01-01

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signatures, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of 1% is achieved for strong classifiers like Random Forest and Boosting Trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  19. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    Energy Technology Data Exchange (ETDEWEB)

    Jürgens, Björn, E-mail: bjurgens@agenciaidea.es [Agency of Innovation and Development of Andalusia, CITPIA PATLIB Centre (Spain); Herrero-Solana, Victor, E-mail: victorhs@ugr.es [University of Granada, SCImago-UGR (SEJ036) (Spain)

    2017-04-15

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results, since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications for searching nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.

  20. Monitoring nanotechnology using patent classifications: an overview and comparison of nanotechnology classification schemes

    International Nuclear Information System (INIS)

    Jürgens, Björn; Herrero-Solana, Victor

    2017-01-01

    Patents are an essential information source used to monitor, track, and analyze nanotechnology. When it comes to searching nanotechnology-related patents, a keyword search is often incomplete and struggles to cover such an interdisciplinary discipline. Patent classification schemes can reveal far better results, since they are assigned by experts who classify the patent documents according to their technology. In this paper, we present the most important classifications for searching nanotechnology patents and analyze how nanotechnology is covered in the main patent classification systems used in search systems nowadays: the International Patent Classification (IPC), the United States Patent Classification (USPC), and the Cooperative Patent Classification (CPC). We conclude that nanotechnology has significantly better patent coverage in the CPC, since considerably more nanotechnology documents were retrieved than by using the other classifications, and we thus recommend its use for all professionals involved in nanotechnology patent searches.

  1. International Classification of Headache Disorders 3rd edition beta-based field testing of vestibular migraine in China: Demographic, clinical characteristics, audiometric findings and diagnosis status.

    Science.gov (United States)

    Zhang, Yixin; Kong, Qingtao; Chen, Jinjin; Li, Lunxi; Wang, Dayan; Zhou, Jiying

    2016-03-01

    This study explored the clinical characteristics of vestibular migraine in Chinese subjects and performed a field test of the criteria of the International Classification of Headache Disorders 3rd edition beta version. Consecutive patients with vestibular migraine were surveyed and registered in a headache clinic during the study period. The diagnosis of vestibular migraine was made according to the International Classification of Headache Disorders 3rd edition beta version. Assessments included a standardized neuro-otology bedside examination, pure-tone audiogram, bithermal caloric testing, neurological imaging, cervical X-ray or magnetic resonance imaging, Doppler ultrasound of cerebral arteries, and laboratory tests. A total of 67 patients (62 female/5 male, 47.8 ± 10.3 years old) were enrolled in this study. The mean ages of migraine and vertigo onset were 32.2 ± 11.5 and 37.9 ± 10.1 years, respectively. The most common migraine subtype was migraine without aura (79%), followed by migraine with aura (12%) and chronic migraine (9%). The duration of vertigo attacks varied from seconds to days, and 25% of patients had attacks that lasted less than 5 minutes. Among the patients with short-lasting attacks, 75% had ≥5 attacks per day within 72 hours. Auditory symptoms were reported in 36% of the patients. Migraine prophylactic treatments were effective in 77% of the patients. Our study showed that the clinical features of vestibular migraine in China were similar to those in Western studies. The definitions of vertigo episodes and migraine subtypes of vestibular migraine in the International Classification of Headache Disorders 3rd edition beta version might be modified further. More than five vertigo attacks per day within 72 hours might be helpful for identifying vestibular migraine patients with short-lasting attacks. © International Headache Society 2015.
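    The duration boundary discussed above, and the paper's suggested short-attack amendment, can be sketched as simple predicates. Both helpers are hypothetical simplifications of the ICHD-3 beta criteria, not a diagnostic tool:

```python
def meets_duration_criterion(minutes):
    # ICHD-3 beta duration criterion (simplified): vestibular episodes
    # lasting between 5 minutes and 72 hours.
    return 5 <= minutes <= 72 * 60

def short_attack_cluster(attacks_per_day):
    # The paper's proposed amendment (hypothetical helper): for attacks
    # shorter than 5 minutes, five or more attacks per day within
    # 72 hours may still point to vestibular migraine.
    return attacks_per_day >= 5

print(meets_duration_criterion(2))    # below the 5-minute floor
print(short_attack_cluster(6))        # captured by the proposed amendment
```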

  2. Analysis of composition-based metagenomic classification.

    Science.gov (United States)

    Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro

    2012-01-01

    An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values for n result in sparse frequency vectors that represent similar metagenomic fragments differently, also leading to low configuration scores. Regarding the similarity measure, in…
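    The n-mer encoding and similarity comparison at the heart of this framework can be sketched directly. The sequences and the choice of cosine similarity here are illustrative; the paper evaluates several measures:

```python
from collections import Counter
from math import sqrt

def nmer_profile(sequence, n=3):
    """Frequency vector of overlapping n-mers encoding a fragment."""
    return Counter(sequence[i:i + n] for i in range(len(sequence) - n + 1))

def cosine_similarity(p, q):
    """One of many possible similarity measures between two profiles."""
    dot = sum(p[k] * q[k] for k in set(p) | set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm

a = nmer_profile("ATGGCGATGGCA")
b = nmer_profile("ATGGCGATGGCT")   # near-identical fragment
c = nmer_profile("TTTTAAAATTTT")   # very different composition
```

    With very small n, many distinct fragments collapse onto similar profiles; with very large n, the vectors become sparse and even near-identical fragments stop overlapping, which is the trade-off the abstract describes.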

  3. Classification of light sources and their interaction with active and passive environments

    Science.gov (United States)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-03-01

    Emission from a molecular light source depends on its optical and chemical environment. This dependence is different for various sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we have described the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments and are in good agreement with our correspondingly developed theory. Our results require a revision of studies on sources in complex media.

  4. Classification of light sources and their interaction with active and passive environments

    International Nuclear Information System (INIS)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-01-01

    Emission from a molecular light source depends on its optical and chemical environment. This dependence is different for various sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we have described the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments and are in good agreement with our correspondingly developed theory. Our results require a revision of studies on sources in complex media.

  5. ICF-based classification and measurement of functioning.

    Science.gov (United States)

    Stucki, G; Kostanjsek, N; Ustün, B; Cieza, A

    2008-09-01

    If we aim towards a comprehensive understanding of human functioning and the development of comprehensive programs to optimize the functioning of individuals and populations, we need to develop suitable measures. The approval of the International Classification of Functioning, Disability and Health (ICF) in 2001 by the 54th World Health Assembly as the first universally shared model and classification of functioning, disability and health therefore marks an important step in the development of measurement instruments and ultimately in our understanding of functioning, disability and health. The acceptance and use of the ICF as a reference framework and classification has been facilitated by its development in a worldwide, comprehensive consensus process and by the increasing evidence regarding its validity. However, the broad acceptance and use of the ICF as a reference framework and classification will also depend on the resolution of conceptual and methodological challenges relevant to the classification and measurement of functioning. This paper therefore first describes how the ICF categories can serve as building blocks for the measurement of functioning, and then the current state of the development of ICF-based practical tools and international standards such as the ICF Core Sets. Finally, it illustrates how to map the world of measures to the ICF and vice versa, and the methodological principles relevant to the transformation of information obtained with a clinical test or a patient-oriented instrument to the ICF, as well as the development of ICF-based clinical and self-reported measurement instruments.

  6. Extending an emergency classification expert system to the real-time environment

    International Nuclear Information System (INIS)

    Greene, K.R.; Robinson, A.H.

    1990-01-01

    The process of determining the emergency action level (EAL) during real or simulated emergencies at the Trojan nuclear power plant was automated in 1988 with the development of the EM-CLASS expert system. This system serves to replace the manual flip-chart method of determining the EAL. While the task of performing the classification is more reliable when using EM-CLASS, it still takes as long to determine the appropriate EAL with EM-CLASS as it does with the flowchart tracing method currently in use. During a plant emergency, an environment will exist in which there are not enough resources to complete all of the desired tasks. To change this condition, some tasks must be accomplished with greater efficiency. The EM-CLASS application may be improved by taking advantage of the fact that most of the responses to the questions in the emergency classification procedure, EP-001, are available directly from plant measurements. This information could be passed to the expert system electronically. A prototype demonstration of a real-time emergency classification expert system has been developed. It repetitively performs the consultation, acquiring the necessary data electronically when possible and from the user when electronic data are unavailable. The expert system is being tested with scenarios from the drills and graded exercises that have taken place at the Trojan nuclear power plant. The goal of this project is to install the system on the plant simulator and/or the plant computer.

  7. Using ecological zones to increase the detail of Landsat classifications

    Science.gov (United States)

    Fox, L., III; Mayer, K. E.

    1981-01-01

    Changes in classification detail of forest species descriptions were made for Landsat data on 2.2 million acres in northwestern California. Because basic forest canopy structures may exhibit very similar E-M energy reflectance patterns in different environmental regions, classification labels based on Landsat spectral signatures alone become very generalized when mapping large heterogeneous ecological regions. By adding a seven ecological zone stratification, a 167% improvement in classification detail was made over the results achieved without it. The seven zone stratification is a less costly alternative to the inclusion of complex collateral information, such as terrain data and soil type, into the Landsat data base when making inventories of areas greater than 500,000 acres.

  8. The ability of current statistical classifications to separate services and manufacturing

    DEFF Research Database (Denmark)

    Christensen, Jesper Lindgaard

    2013-01-01

    This paper explores the performance of current statistical classification systems in classifying firms and, in particular, their ability to distinguish between firms that provide services and firms that provide manufacturing. We find that a large share of firms, almost 20%, are not classified as expected based on a comparison of their statements of activities with the assigned industry codes. This result is robust to analyses on different levels of aggregation and is validated in an additional survey. It is well known from earlier literature that industry classification systems are not perfect. This paper provides a quantification of the flaws in classifications of firms. Moreover, it is explained why the classifications of firms are imprecise. The increasing complexity of production, inertia in changes to statistical systems and the increasing integration of manufacturing products and services…

  9. Reliability of McConnell's classification of patellar orientation in symptomatic and asymptomatic subjects.

    Science.gov (United States)

    Watson, C J; Propps, M; Galt, W; Redding, A; Dobbs, D

    1999-07-01

    Test-retest reliability study with blinded testers. To determine the intratester reliability of the McConnell classification system and to determine whether the intertester reliability of this system would be improved by one-on-one training of the testers, increasing the variability and numbers of subjects, blinding the testers to the absence or presence of patellofemoral pain syndrome, and adhering to the McConnell classification system as it is taught in the "McConnell Patellofemoral Treatment Plan" continuing education course. The McConnell classification system is currently used by physical therapy clinicians to quantify static patellar orientation. The measurements generated from this system purportedly guide the therapist in the application of patellofemoral tape and in assessment of the efficacy of treatment interventions on changing patellar orientation. Fifty-six subjects (age range, 21-65 years) provided a total of 101 knees for assessment. Seventy-six knees did not produce symptoms. A researcher who did not participate in the measuring process determined that 17 subjects had patellofemoral pain syndrome in 25 knees. Two testers concurrently measured static patellar orientation (anterior/posterior and medial/lateral tilt, medial/lateral glide, and patellar rotation) on subjects, using the McConnell classification system. Repeat measures were performed 3-7 days later. A kappa (κ) statistic was used to assess the degree of agreement within each tester and between testers. The κ coefficients for intratester reliability varied from -0.06 to 0.35. Intertester reliability ranged from -0.03 to 0.19. The McConnell classification system, in its current form, does not appear to be very reliable. Intratester reliability ranged from poor to fair, and intertester reliability was poor to slight. This system should not be used as a measurement tool or as a basis for treatment decisions.
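    The kappa agreement statistic used in this study can be computed directly from two raters' category assignments. The patellar-tilt ratings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    # kappa = (observed agreement - chance agreement) / (1 - chance agreement);
    # undefined when chance agreement equals 1.
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    chance = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - chance) / (1 - chance)

# Hypothetical medial/lateral tilt ratings by two testers on six knees.
t1 = ["lateral", "lateral", "neutral", "medial", "neutral", "lateral"]
t2 = ["lateral", "neutral", "neutral", "medial", "lateral", "lateral"]
kappa = cohens_kappa(t1, t2)
```

    Unlike raw percentage agreement, kappa discounts the agreement two raters would reach by chance alone, which is why the study's near-zero values are so damning for the classification system.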

  10. Accurate crop classification using hierarchical genetic fuzzy rule-based systems

    Science.gov (United States)

    Topaloglou, Charalampos A.; Mylonas, Stelios K.; Stavrakoudis, Dimitris G.; Mastorocostas, Paris A.; Theocharis, John B.

    2014-10-01

    This paper investigates the effectiveness of an advanced classification system for accurate crop classification using very high resolution (VHR) satellite imagery. Specifically, a recently proposed genetic fuzzy rule-based classification system (GFRBCS) is employed, namely, the Hierarchical Rule-based Linguistic Classifier (HiRLiC). HiRLiC's model comprises a small set of simple IF-THEN fuzzy rules, easily interpretable by humans. One of its most important attributes is that its learning algorithm requires minimal user interaction, since the most important learning parameters affecting the classification accuracy are determined by the learning algorithm automatically. HiRLiC is applied in a challenging crop classification task, using a SPOT5 satellite image over an intensively cultivated area in a lake-wetland ecosystem in northern Greece. A rich set of higher-order spectral and textural features is derived from the initial bands of the (pan-sharpened) image, resulting in an input space comprising 119 features. The experimental analysis proves that HiRLiC compares favorably to other interpretable classifiers in the literature, both in terms of structural complexity and classification accuracy. Its testing accuracy was very close to that obtained by complex state-of-the-art classification systems, such as the support vector machine (SVM) and random forest (RF) classifiers. Nevertheless, visual inspection of the derived classification maps shows that HiRLiC is characterized by higher generalization properties, providing more homogeneous classifications than the competitors. Moreover, the runtime requirements for producing the thematic map were orders of magnitude lower than those of the competitors.

  11. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    Science.gov (United States)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

    Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.
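    The conventional supervised maximum likelihood baseline mentioned above can be sketched with independent per-band Gaussians fitted to a minimal training set. The two-band reflectance values and class names are hypothetical:

```python
from math import log, pi
from statistics import mean, variance

def fit_class(training_pixels):
    """Per-band Gaussian parameters (mean, variance) from training sites."""
    return [(mean(band), variance(band)) for band in zip(*training_pixels)]

def log_likelihood(pixel, params):
    """Log of the product of independent per-band Gaussian densities."""
    return sum(-0.5 * (log(2 * pi * v) + (x - m) ** 2 / v)
               for x, (m, v) in zip(pixel, params))

def classify(pixel, models):
    """Assign the class whose model gives the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(pixel, models[c]))

# Hypothetical two-band reflectance values for a minimal training set.
models = {
    "water":  fit_class([(10, 5), (12, 6), (11, 4)]),
    "forest": fit_class([(40, 60), (42, 58), (41, 62)]),
}
```

    With only three training pixels per class, the variance estimates are very noisy, which illustrates why minimal training sets strain this conventional procedure.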

  12. Small-scale classification schemes

    DEFF Research Database (Denmark)

    Hertzum, Morten

    2004-01-01

    Small-scale classification schemes are used extensively in the coordination of cooperative work. This study investigates the creation and use of a classification scheme for handling the system requirements during the redevelopment of a nation-wide information system. This requirements classification inherited a lot of its structure from the existing system and rendered requirements that transcended the framework laid out by the existing system almost invisible. As a result, the requirements classification became a defining element of the requirements-engineering process, though its main effects remained largely implicit. The requirements classification contributed to constraining the requirements-engineering process by supporting the software engineers in maintaining some level of control over the process. This way, the requirements classification provided the software engineers…

  13. Classification of high resolution imagery based on fusion of multiscale texture features

    International Nuclear Information System (INIS)

    Liu, Jinxiu; Liu, Huiping; Lv, Ying; Xue, Xiaojuan

    2014-01-01

    In the classification of high resolution data, combining texture features with spectral bands can effectively improve classification accuracy. However, the window size, which is difficult to choose, is an important factor influencing overall accuracy in textural classification, and current approaches to image texture analysis depend on a single moving window, which ignores the different scale features of various land cover types. In this paper, we propose a new method based on the fusion of multiscale texture features to overcome these problems. The main steps of the new method are the classification of spectral/textural images at fixed window sizes from 3×3 to 15×15 and the comparison of all posterior probability values for every pixel; the class with the highest probability is then assigned to the pixel automatically. The proposed approach is tested on University of Pavia ROSIS data. The results indicate that the new method improves classification accuracy compared to methods based on a fixed window size for textural classification.
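    The per-pixel fusion step, picking the class with the single highest posterior over all window sizes, can be sketched as follows. The window sizes, class names, and posterior values are invented for illustration:

```python
def fuse_multiscale(posteriors_by_window):
    """Fuse fixed-window classifications for one pixel: compare the posterior
    values from every window size and keep the class with the largest one.
    Input: {window_size: {class_name: posterior probability}}."""
    best_class, best_p = None, -1.0
    for posteriors in posteriors_by_window.values():
        for cls, p in posteriors.items():
            if p > best_p:
                best_class, best_p = cls, p
    return best_class

# Hypothetical posteriors for one pixel at three window sizes.
pixel_posteriors = {
    3:  {"road": 0.55, "building": 0.45},
    9:  {"road": 0.40, "building": 0.60},
    15: {"road": 0.30, "building": 0.70},
}
```

    Here the 15×15 window is most confident, so its label wins even though the smallest window disagrees; each land cover type is effectively classified at the scale that suits it best.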

  14. Optimization of Neuro-Fuzzy System Using Genetic Algorithm for Chromosome Classification

    Directory of Open Access Journals (Sweden)

    M. Sarosa

    2013-09-01

    Neuro-fuzzy systems have been shown to provide good performance on chromosome classification but do not offer a simple method for obtaining the accurate parameter values required to yield the best recognition rate. This paper presents a neuro-fuzzy system whose parameters can be automatically adjusted using genetic algorithms. The approach combines the advantages of fuzzy logic theory, neural networks, and genetic algorithms. The structure consists of a four-layer feed-forward neural network that uses a GBell membership function as the output function. The proposed methodology has been applied and tested on banded chromosome classification from the Copenhagen Chromosome Database. Simulation results showed that the proposed neuro-fuzzy system optimized by genetic algorithms offers advantages in setting the parameter values, improves the recognition rate significantly, and decreases the training/testing time, which makes the genetic neuro-fuzzy system suitable for chromosome classification.

  15. A structuralist approach in the study of evolution and classification

    NARCIS (Netherlands)

    Hammen, van der L.

    1985-01-01

    A survey is given of structuralism as a method that can be applied in the study of evolution and classification. The results of a structuralist approach are illustrated by examples from the laws underlying numerical changes, from the laws underlying changes in the chelicerate life-cycle, and from

  16. Testing for structural changes in large portfolios

    OpenAIRE

    Posch, Peter N.; Ullmann, Daniel; Wied, Dominik

    2015-01-01

    Model-free tests for constant parameters often fail to detect structural changes in high dimensions. In practice, this corresponds to a portfolio with many assets and a reasonably long time series. We reduce the dimensionality of the problem by looking at a compressed panel of time series obtained by cluster analysis and the principal components of the data. Using our methodology we are able to extend a test for a constant correlation matrix from a sub-portfolio to whole indices a...

  17. Gynecomastia Classification for Surgical Management: A Systematic Review and Novel Classification System.

    Science.gov (United States)

    Waltho, Daniel; Hatchell, Alexandra; Thoma, Achilleas

    2017-03-01

    Gynecomastia is a common deformity of the male breast, where certain cases warrant surgical management. There are several surgical options, which vary depending on the breast characteristics. To guide surgical management, several classification systems for gynecomastia have been proposed. A systematic review was performed to (1) identify all classification systems for the surgical management of gynecomastia, and (2) determine the adequacy of these classification systems to appropriately categorize the condition for surgical decision-making. The search yielded 1012 articles, and 11 articles were included in the review. Eleven classification systems in total were ascertained, and a total of 10 unique features were identified: (1) breast size, (2) skin redundancy, (3) breast ptosis, (4) tissue predominance, (5) upper abdominal laxity, (6) breast tuberosity, (7) nipple malposition, (8) chest shape, (9) absence of sternal notch, and (10) breast skin elasticity. On average, classification systems included two or three of these features. Breast size and ptosis were the most commonly included features. Based on their review of the current classification systems, the authors believe the ideal classification system should be universal and cater to all causes of gynecomastia; be surgically useful and easy to use; and should include a comprehensive set of clinically appropriate patient-related features, such as breast size, breast ptosis, tissue predominance, and skin redundancy. None of the current classification systems appears to fulfill these criteria.

  18. Audio stream classification for multimedia database search

    Science.gov (United States)

    Artese, M.; Bianco, S.; Gagliardi, I.; Gasparini, F.

    2013-03-01

    Search and retrieval in huge archives of multimedia data is a challenging task. A classification step is often used to reduce the number of entries on which to perform the subsequent search. In particular, when new entries are continuously added to the database, a fast classification based on simple threshold evaluation is desirable. In this work we present a CART-based (Classification And Regression Tree [1]) classification framework for audio streams belonging to multimedia databases. The database considered is the Archive of Ethnography and Social History (AESS) [2], which is mainly composed of popular songs and other audio records describing popular traditions handed down generation by generation, such as traditional fairs and customs. The peculiarities of this database are that it is continuously updated, the audio recordings are acquired in unconstrained environments, and it is difficult for a non-expert user to create the ground-truth labels. In our experiments, half of all the available audio files were randomly extracted and used as the training set; the remaining ones were used as the test set. The classifier was trained to distinguish among three different classes: speech, music, and song. All the audio files in the dataset had previously been labeled manually into these three classes by domain experts.
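A hand-built decision-stump cascade in the spirit of the fast threshold-based classification described above might look like this; the features (zero-crossing rate, harmonic ratio) and thresholds are invented for illustration and are not those learned by the CART framework:

```python
# Invented decision-stump cascade classifying frames into the three
# classes used in the paper: speech, music, song. Both the feature
# choice and the threshold values are hypothetical.

def classify_audio(zero_crossing_rate, harmonic_ratio):
    if zero_crossing_rate > 0.15:    # noisy, consonant-rich frames
        return "speech"
    if harmonic_ratio > 0.6:         # sustained voiced pitch
        return "song"
    return "music"

print(classify_audio(0.20, 0.10))  # speech
print(classify_audio(0.05, 0.70))  # song
print(classify_audio(0.05, 0.20))  # music
```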

  19. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    Full Text Available As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism underlying SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients besides the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegativity constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show which way to incorporate spatial-spectral information efficiently in the regularization framework.

  20. Comparison of feature selection and classification for MALDI-MS data

    Directory of Open Access Journals (Sweden)

    Yang Mary

    2009-07-01

    Full Text Available Abstract Introduction In the classification of Mass Spectrometry (MS) proteomics data, peak detection, feature selection, and learning classifiers are critical to classification accuracy. To better understand which methods are more accurate when classifying data, some publicly available peak detection algorithms for Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS) data were recently compared; however, the issue of different feature selection methods and different classification models as they relate to classification performance has not been addressed. With the application of intelligent computing, much progress has been made in the development of feature selection methods and learning classifiers for the analysis of high-throughput biological data. The main objective of this paper is to compare methods of feature selection and different learning classifiers when applied to MALDI-MS data, and to provide a subsequent reference for the analysis of MS proteomics data. Results We compared a well-known method of feature selection, Support Vector Machine Recursive Feature Elimination (SVMRFE), and a recently developed method, Gradient-based Leave-one-out Gene Selection (GLGS), that effectively performs microarray data analysis. We also compared several learning classifiers, including the K-Nearest Neighbor Classifier (KNNC), Naïve Bayes Classifier (NBC), Nearest Mean Scaled Classifier (NMSC), the uncorrelated normal-based quadratic Bayes Classifier (recorded as UDC), Support Vector Machines, and a distance-metric-learning Large Margin Nearest Neighbor classifier (LMNN) based on Mahalanobis distance. For the comparison, we conducted a comprehensive experimental study using three types of MALDI-MS data. Conclusion Regarding feature selection, SVMRFE outperformed GLGS in classification. As for the learning classifiers, when classification models derived from the best training were compared, SVMs performed the best with respect to the expected testing

  1. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP specifies that harmonised classification will be performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances), for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  2. Promoting consistent use of the communication function classification system (CFCS).

    Science.gov (United States)

    Cunningham, Barbara Jane; Rosenbaum, Peter; Hidecker, Mary Jo Cooley

    2016-01-01

    We developed a Knowledge Translation (KT) intervention to standardize the way speech-language pathologists working in Ontario Canada's Preschool Speech and Language Program (PSLP) used the Communication Function Classification System (CFCS). This tool was being used as part of a provincial program evaluation, and standardizing its use was critical for establishing reliability and validity within the provincial dataset. Two theoretical foundations - Diffusion of Innovations and the Communication Persuasion Matrix - were used to develop and disseminate the intervention to standardize use of the CFCS among a cohort of speech-language pathologists. A descriptive pre-test/post-test study was used to evaluate the intervention. Fifty-two participants completed an electronic pre-test survey, reviewed intervention materials online, and then immediately completed an electronic post-test survey. The intervention improved clinicians' understanding of how the CFCS should be used, their intentions to use the tool in the standardized way, and their abilities to make correct classifications using the tool. Findings from this work will be shared with representatives of the Ontario PSLP, and the intervention may be disseminated to all speech-language pathologists working in the program. This study can be used as a model for developing and disseminating KT interventions for clinicians in paediatric rehabilitation. The CFCS is a new tool that allows speech-language pathologists to classify children's skills into five meaningful levels of function, but there is uncertainty and inconsistent practice in the field about the methods for using it. This study combined two theoretical frameworks to develop an intervention to standardize use of the CFCS among a cohort of speech-language pathologists. The intervention effectively increased clinicians' understanding of the methods for using the CFCS, ability to make correct classifications, and

  3. A simple and robust method for automated photometric classification of supernovae using neural networks

    Science.gov (United States)

    Karpenka, N. V.; Feroz, F.; Hobson, M. P.

    2013-02-01

    A method is presented for automated photometric classification of supernovae (SNe) as Type Ia or non-Ia. A two-step approach is adopted in which (i) the SN light curve flux measurements in each observing filter are fitted separately to an analytical parametrized function that is sufficiently flexible to accommodate virtually all types of SNe and (ii) the fitted function parameters and their associated uncertainties, along with the number of flux measurements, the maximum-likelihood value of the fit and the Bayesian evidence for the model, are used as the input feature vector to a classification neural network that outputs the probability that the SN under consideration is of Type Ia. The method is trained and tested using data released following the Supernova Photometric Classification Challenge (SNPCC), consisting of light curves for 20 895 SNe in total. We consider several random divisions of the data into training and testing sets: for instance, for our sample D_1 (D_4), a total of 10 (40) per cent of the data are involved in training the algorithm and the remainder used for blind testing of the resulting classifier; we make no selection cuts. Assigning a canonical threshold probability of pth = 0.5 on the network output to class an SN as Type Ia, for the sample D_1 (D_4) we obtain a completeness of 0.78 (0.82), purity of 0.77 (0.82) and SNPCC figure of merit of 0.41 (0.50). Including the SN host-galaxy redshift and its uncertainty as additional inputs to the classification network results in a modest 5-10 per cent increase in these values. We find that the quality of the classification does not vary significantly with SN redshift. Moreover, our probabilistic classification method allows one to calculate the expected completeness, purity and figure of merit (or other measures of classification quality) as a function of the threshold probability pth, without knowing the true classes of the SNe in the testing sample, as is the case in the classification of real SNe.
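The completeness, purity and SNPCC figure of merit quoted above can be computed from the network outputs at any threshold pth. A minimal sketch with toy values, assuming the commonly used false-positive weight W = 3 (the paper's probabilistic scheme additionally computes expected values without true labels, which this sketch does not reproduce):

```python
# Toy completeness / purity / figure-of-merit computation as a
# function of the threshold p_th. The weight w = 3 is an assumption;
# probs and labels are invented illustrative values.

def classification_quality(probs, is_ia, p_th, w=3.0):
    n_ia = sum(is_ia)                                      # true Type Ia count
    tp = sum(1 for p, y in zip(probs, is_ia) if p >= p_th and y)
    fp = sum(1 for p, y in zip(probs, is_ia) if p >= p_th and not y)
    completeness = tp / n_ia
    purity = tp / (tp + fp) if tp + fp else 0.0
    fom = tp ** 2 / (n_ia * (tp + w * fp)) if tp + fp else 0.0
    return completeness, purity, fom

probs = [0.9, 0.8, 0.6, 0.4, 0.3]   # network outputs P(Ia)
is_ia = [1, 1, 0, 1, 0]             # true classes (toy)
print(classification_quality(probs, is_ia, p_th=0.5))
```

Sweeping p_th from 0 to 1 and plotting the three quantities is the usual way to pick an operating point.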

  4. The psychosomatic disorders pertaining to dental practice with revised working type classification.

    Science.gov (United States)

    Shamim, Thorakkal

    2014-01-01

    Psychosomatic disorders are defined as disorders characterized by physiological changes that originate partially from emotional factors. This article aims to discuss the psychosomatic disorders of the oral cavity with a revised working type classification. The author has added one more subset to the existing classification, i.e., disorders caused by altered perception of dentofacial form and function, which include body dysmorphic disorder. The author has also inserted delusional halitosis under the miscellaneous disorders classification of psychosomatic disorders and revised the already existing classification proposed for the psychosomatic disorders pertaining to dental practice. After the inclusion of the subset (disorders caused by altered perception of dentofacial form and function), the terminology "psychosomatic disorders of the oral cavity" is modified to "psychosomatic disorders pertaining to dental practice".

  5. Color-discrimination threshold determination using pseudoisochromatic test plates

    Directory of Open Access Journals (Sweden)

    Kaiva eJurasevska

    2014-11-01

    Full Text Available We produced a set of pseudoisochromatic plates for determining individual color-difference thresholds to assess test performance and test properties, and analyzed the results. We report high test validity and classification ability for deficiency type and severity level (comparable to that of the fourth edition of the Hardy–Rand–Rittler (HRR) test). We discuss changes of the acceptable chromatic shifts from the protan and deutan confusion lines along the CIE xy diagram, and the high correlation of individual color-difference thresholds with the red–green discrimination index. Color vision was tested using an Oculus HMC anomaloscope, a Farnsworth D15, and an HRR test on 273 schoolchildren and 57 other subjects with previously diagnosed red–green color-vision deficiency.

  6. ASIST SIG/CR Classification Workshop 2000: Classification for User Support and Learning.

    Science.gov (United States)

    Soergel, Dagobert

    2001-01-01

    Reports on papers presented at the 62nd Annual Meeting of ASIST (American Society for Information Science and Technology) for the Special Interest Group in Classification Research (SIG/CR). Topics include types of knowledge; developing user-oriented classifications, including domain analysis; classification in the user interface; and automatic…

  7. Couinaud's classification v.s. Cho's classification. Their feasibility in the right hepatic lobe

    International Nuclear Information System (INIS)

    Shioyama, Yasukazu; Ikeda, Hiroaki; Sato, Motohito; Yoshimi, Fuyo; Kishi, Kazushi; Sato, Morio; Kimura, Masashi

    2008-01-01

    The objective of this study was to investigate whether the new classification system proposed by Cho is feasible for clinical use compared with the classical Couinaud classification. One hundred consecutive abdominal CT examinations were studied using a 64- or 8-slice multislice CT scanner, and three-dimensional portal vein images were created on a workstation for analysis. We applied both Cho's classification and the classical Couinaud classification to each case according to their definitions. Three diagnostic radiologists assessed feasibility from category one (unable to classify) to five (clear classification in full accordance with the original criteria). In each case, we also judged whether Cho's or the classical Couinaud classification could more easily convey the anatomical information. Analyzers could classify the portal veins clearly (category 5) in 77-80% of cases, and clearly (category 5) or almost clearly (category 4) in 86-93%, with both classifications. In feasibility of classification, there was no statistically significant difference between the two systems. In 15 cases we felt that the Couinaud classification was more convenient than Cho's for transmitting anatomical information to physicians, because in these cases two large portal veins ramified cranially and caudally from the right main portal vein and we therefore could not classify P5 as a branch of the antero-ventral segment (AVS). Conversely, in 17 cases we felt Cho's classification was more convenient, because the right posterior portal vein ramified into several small branches and we could not divide the right posterior branch into P6 and P7. The anterior fissure vein was clearly visible in only 60 cases. Comparing the classical Couinaud classification and Cho's in feasibility of classification, there was no statistically significant difference. We propose routinely reporting hepatic anatomy with the classical Couinaud classification, and in the preoperative cases we

  8. Three-dimensional textural features of conventional MRI improve diagnostic classification of childhood brain tumours.

    Science.gov (United States)

    Fetit, Ahmed E; Novak, Jan; Peet, Andrew C; Arvanitits, Theodoros N

    2015-09-01

    The aim of this study was to assess the efficacy of three-dimensional texture analysis (3D TA) of conventional MR images for the classification of childhood brain tumours in a quantitative manner. The dataset comprised pre-contrast T1- and T2-weighted MRI series obtained from 48 children diagnosed with brain tumours (medulloblastoma, pilocytic astrocytoma and ependymoma). 3D and 2D TA were carried out on the images using first-, second- and higher-order statistical methods. Six supervised classification algorithms were trained with the most influential 3D and 2D textural features, and their performances in the classification of tumour types, using the two feature sets, were compared. Model validation was carried out using the leave-one-out cross-validation (LOOCV) approach, as well as stratified 10-fold cross-validation, in order to provide additional reassurance. McNemar's test was used to test the statistical significance of any improvements demonstrated by 3D-trained classifiers. Supervised learning models trained with 3D textural features showed improved classification performance compared to those trained with conventional 2D features. For instance, a neural network classifier showed a 12% improvement in area under the receiver operator characteristic curve (AUC) and 19% in overall classification accuracy. These improvements were statistically significant for four of the tested classifiers, as per McNemar's tests. This study shows that 3D textural features extracted from conventional T1- and T2-weighted images can improve the diagnostic classification of childhood brain tumours. Long-term benefits of accurate, yet non-invasive, diagnostic aids include a reduction in surgical procedures, improvement in surgical and therapy planning, and support of discussions with patients' families. It remains necessary, however, to extend the analysis to a multicentre cohort in order to assess the scalability of the techniques used. Copyright © 2015 John Wiley & Sons, Ltd.
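McNemar's test, used above to compare the paired 2D- and 3D-trained classifiers, reduces to a simple statistic on the discordant pairs. A minimal sketch with invented counts (b = cases only one classifier got right, c = cases only the other got right):

```python
# McNemar's chi-square statistic with continuity correction (df = 1)
# on discordant pairs of a paired classifier comparison.
# The counts b = 12, c = 3 are invented for illustration.

def mcnemar_stat(b, c):
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_stat(12, 3)
print(round(stat, 3))  # 4.267, above the chi-square critical value 3.841 at alpha = 0.05
```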

  9. Reliability of classification for post-traumatic ankle osteoarthritis.

    Science.gov (United States)

    Claessen, Femke M A P; Meijer, Diederik T; van den Bekerom, Michel P J; Gevers Deynoot, Barend D J; Mallee, Wouter H; Doornberg, Job N; van Dijk, C Niek

    2016-04-01

    The purpose of this study was to identify the most reliable classification system for clinical outcome studies to categorize post-traumatic fracture osteoarthritis. A total of 118 orthopaedic surgeons and residents, gathered in the Ankle Platform Study Collaborative Science of Variation Group, evaluated 128 anteroposterior and lateral radiographs of patients after a bi- or trimalleolar ankle fracture on a Web-based platform in order to rate post-traumatic osteoarthritis according to the classification systems proposed by (1) van Dijk, (2) Kellgren, and (3) Takakura. Reliability was evaluated with the use of Siegel and Castellan's multirater kappa measure. Differences between classification systems were compared using the two-sample Z-test. Interobserver agreement of surgeons who participated in the survey was fair for the van Dijk osteoarthritis scale (k = 0.24), and poor for the Takakura (k = 0.19) and the Kellgren (k = 0.18) systems according to the categorical rating of Landis and Koch. This difference in categorical rating was found to be significant. In conclusion, there was fair interobserver agreement for the van Dijk osteoarthritis scale, and poor interobserver agreement for the Takakura and Kellgren osteoarthritis classification systems. Because of the low interobserver agreement for the van Dijk, Kellgren, and Takakura classification systems, those systems cannot be used for clinical decision-making. Development of diagnostic criteria on the basis of consecutive patients, Level II.

  10. Text mining in the classification of digital documents

    Directory of Open Access Journals (Sweden)

    Marcial Contreras Barrera

    2016-11-01

    Full Text Available Objective: To develop an automated classifier for bibliographic material by means of text mining. Methodology: Text mining was used to develop the classifier, based on a supervised method comprising two phases, learning and recognition. In the learning phase, the classifier learns patterns through the analysis of bibliographic records of classification Z (library science, information sciences and information resources) retrieved from the LIBRUNAM database; this phase yields a classifier capable of recognizing the different subclasses (LC). In the recognition phase the classifier is validated and evaluated through classification tests: bibliographic records of classification Z are taken at random, classified by a cataloguer, and processed by the automated classifier in order to measure its precision. Results: The application of text mining produced the automated classifier via the supervised document-classification method. The precision of the classifier, calculated by comparing the manually and automatically assigned topics, was 75.70%. Conclusions: Text mining facilitated the creation of the automated classifier, yielding a useful technology for the classification of bibliographic material, with the aim of improving and speeding up the organization of digital documents.

  11. Etiological classifications of transient ischemic attacks: subtype classification by TOAST, CCS and ASCO--a pilot study.

    Science.gov (United States)

    Amort, Margareth; Fluri, Felix; Weisskopf, Florian; Gensicke, Henrik; Bonati, Leo H; Lyrer, Philippe A; Engelter, Stefan T

    2012-01-01

    In patients with transient ischemic attacks (TIA), etiological classification systems are not well studied. The Trial of ORG 10172 in Acute Stroke Treatment (TOAST), the Causative Classification System (CCS), and the Atherosclerosis Small Vessel Disease Cardiac Source Other Cause (ASCO) classification may be useful to determine the underlying etiology. We aimed to test the feasibility of each of the 3 systems. Furthermore, we studied and compared their prognostic usefulness. In a single-center TIA registry prospectively ascertained over 2 years, we applied the 3 etiological classification systems. We compared the distribution of underlying etiologies and the rates of patients with determined versus undetermined etiology, and studied whether etiological subtyping distinguished TIA patients with versus without subsequent stroke or TIA within 3 months. The 3 systems were applicable in all 248 patients. A determined etiology with the highest level of causality was assigned similarly often with TOAST (35.9%), CCS (34.3%), and ASCO (38.7%). However, the frequency of undetermined causes differed significantly between the classification systems and was lowest for ASCO (TOAST: 46.4%; CCS: 37.5%; ASCO: 18.5%). With TOAST, CCS, and ASCO, cardioembolism (19.4/14.5/18.5%) was the most common etiology, followed by atherosclerosis (11.7/12.9/14.5%). At 3 months, 33 patients (13.3%, 95% confidence interval 9.3-18.2%) had recurrent cerebral ischemic events. These were strokes in 13 patients (5.2%; 95% confidence interval 2.8-8.8%) and TIAs in 20 patients (8.1%, 95% confidence interval 5.0-12.2%). Patients with a determined etiology (high level of causality) had higher rates of subsequent strokes than those without a determined etiology [TOAST: 6.7% (95% confidence interval 2.5-14.1%) vs. 4.4% (95% confidence interval 1.8-8.9%); CCS: 9.3% (95% confidence interval 4.1-17.5%) vs. 3.1% (95% confidence interval 1.0-7.1%); ASCO: 9.4% (95% confidence interval 4.4-17.1%) vs. 2.6% (95% confidence interval

  12. Reliability of a treatment-based classification system for subgrouping people with low back pain.

    Science.gov (United States)

    Henry, Sharon M; Fritz, Julie M; Trombley, Andrea R; Bunn, Janice Y

    2012-09-01

    Observational, cross-sectional reliability study. To examine the interrater reliability of novice raters in their use of the treatment-based classification (TBC) system for low back pain and to explore the patterns of disagreement in classification errors. Although the interrater reliability of individual test items in the TBC system is moderate to good, some error persists in classification decision making. Understanding which classification errors are common could direct further refinement of the TBC system. Using previously recorded patient data (n = 24), 12 novice raters classified patients according to the TBC schema. These classification results were combined with those of 7 other raters, allowing examination of the overall agreement using the kappa statistic, as well as agreement/disagreement among pairwise comparisons in classification assignments. A chi-square test examined differences in percent agreement between the novice and more experienced raters and differences in classification distributions between these 2 groups of raters. Among 12 novice raters, there was 80.9% agreement in the pairs of classification (κ = 0.62; 95% confidence interval: 0.59, 0.65) and an overall 75.5% agreement (κ = 0.57; 95% confidence interval: 0.55, 0.69) for the combined data set. Raters were least likely to agree on a classification of stabilization (77.5% agreement). The overall percentage of pairwise classification judgments that disagreed was 24.5%, with the most common disagreement being between manipulation and stabilization (11.0%), followed by a mismatch between stabilization and specific exercise (8.2%). Additional refinement is needed to reduce rater disagreement that persists in the TBC decision-making algorithm, particularly in the stabilization category. J Orthop Sports Phys Ther 2012;42(9):797-805, Epub 7 June 2012. doi:10.2519/jospt.2012.4078.
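For context, pairwise agreement statistics of the kind reported above can be sketched as follows. This computes percent agreement and Cohen's kappa for one rater pair (the study itself used a multirater kappa across all raters; this is the two-rater analogue), with toy TBC-style labels:

```python
# Percent agreement and Cohen's kappa for one pair of raters.
# The ratings below are invented illustrative labels, not study data.

from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n                 # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

r1 = ["manip", "stab", "stab", "exercise", "manip", "stab"]
r2 = ["manip", "manip", "stab", "exercise", "manip", "stab"]
print(round(cohen_kappa(r1, r2), 3))  # 0.739
```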

  13. A summary of recent developments in transportation hazard classification activities for ammonium perchlorate

    Science.gov (United States)

    Koller, A. M., Jr.; Hannum, J. A. E.

    1983-01-01

    The transportation hazard classification of ammonium perchlorate (AP) is discussed. A test program was completed and data were forwarded to retain the Class 5.1 (oxidizer) designation for AP shipped internationally. As a follow-on to the initial team effort to conduct AP tests, existing data were examined and a matrix cataloguing test parameters and findings was compiled. A collection of test protocols is being developed to standardize test methods for energetic materials of all types. Summarized are the actions to date; the participating organizations and their roles as presently understood; specific findings on AP (the matrix); and issues, lessons learned, and potential actions of particular interest to the propulsion community which may evolve as a result of future U.N. propellant transportation classification activities.

  14. [Nosological classification and assessment of muscle dysmorphia].

    Science.gov (United States)

    Babusa, Bernadett; Túry, Ferenc

    2011-01-01

    Muscle dysmorphia is a recently described psychiatric disorder characterized by a pathological preoccupation with muscle size. In spite of their large muscles, muscle dysmorphia sufferers believe that they are insufficiently big and muscular, and therefore want to be bigger and more muscular. Male bodybuilders are at high risk for the disorder. The nosological classification of muscle dysmorphia has changed over the years; however, no consensus has emerged so far. Most of the ongoing debate has conceptualized muscle dysmorphia as an eating disorder, an obsessive-compulsive disorder, or a body dysmorphic disorder, and there are a number of arguments for and against each view. In the present study the authors do not take a position on the diagnostic classification of muscle dysmorphia; the purpose of the study is to review the current approaches to the diagnostic classification of muscle dysmorphia. Many different questionnaires have been developed for the assessment of muscle dysmorphia. Currently, there is a lack of assessment methods measuring muscle dysmorphia symptoms in Hungary. As a secondary purpose, the study also presents the Hungarian version of the Muscle Appearance Satisfaction Scale (Mayville et al., 2002).

  15. Hindi vowel classification using QCN-MFCC features

    Directory of Open Access Journals (Sweden)

    Shipra Mishra

    2016-09-01

    Full Text Available In the presence of environmental noise, speakers tend to increase their vocal effort to improve the audibility of their voice. This involuntary adjustment is known as the Lombard effect (LE). Due to LE the signal-to-noise ratio of speech increases, but at the same time the loudness, pitch and duration of phonemes change. Hence, the accuracy of automatic speech recognition systems degrades. In this paper, the effect of unsupervised equalization of the Lombard effect is investigated for a Hindi vowel classification task using the Hindi database designed at TIFR Mumbai, India. The proposed Quantile-based Dynamic Cepstral Normalization MFCC (QCN-MFCC) features, along with baseline MFCC features, have been used for vowel classification. A Hidden Markov Model (HMM) is used as the classifier. It is observed that QCN-MFCC features give a maximum improvement of 5.97% and 5% over MFCC features for the context-dependent and context-independent cases respectively. QCN-MFCC features also give improvements of 13% and 11.5% over MFCC features for context-dependent and context-independent classification of mid vowels.
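The idea behind quantile-based cepstral normalization can be sketched as follows: instead of subtracting the per-coefficient mean (as in conventional cepstral mean normalization), each cepstral trajectory is centered on the midpoint of low and high quantile estimates. The quantile choices and function name below are illustrative assumptions, not the paper's settings:

```python
# Quantile-based centering of one cepstral coefficient track, in the
# spirit of QCN. The 4%/96% quantiles are our own assumption.

def quantile_center(track, q_lo=0.04, q_hi=0.96):
    s = sorted(track)
    lo = s[int(q_lo * (len(s) - 1))]    # low-quantile estimate
    hi = s[int(q_hi * (len(s) - 1))]    # high-quantile estimate
    mid = 0.5 * (lo + hi)               # center on the quantile midpoint
    return [x - mid for x in track]

track = [float(v) for v in range(101)]  # toy coefficient trajectory
print(quantile_center(track)[0])        # -50.0
```

Centering on quantiles rather than the mean makes the normalization less sensitive to the skewed coefficient distributions that Lombard speech produces.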

  16. Coping with Changes in International Classifications of Sectors and Occupations: Application in Skills Forecasting. Research Paper No 43

    Science.gov (United States)

    Kvetan, Vladimir, Ed.

    2014-01-01

    Reliable and consistent time series are essential to any kind of economic forecasting. Skills forecasting needs to combine data from national accounts and labour force surveys, with the pan-European dimension of Cedefop's skills supply and demand forecasts, relying on different international classification standards. Sectoral classification (NACE)…

  17. Classification of sports types from tracklets

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    Automatic analysis of video is important in order to process and exploit large amounts of data, e.g. for sports analysis. Classification of sports types is one of the first steps towards a fully automatic analysis of the activities performed at sports arenas. In this work we test the idea that sports types can be classified from features extracted from short trajectories of the players. From tracklets created by a Kalman filter tracker we extract four robust features: total distance, lifespan, distance span and mean speed. For classification we use a quadratic discriminant analysis. In our experiments we use 30 2-minute thermal video sequences from each of five different sports types. By applying a 10-fold cross validation we obtain a correct classification rate of 94.5%.
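The four tracklet features and the quadratic discriminant analysis step can be sketched as follows; the synthetic tracklets and the two-class setup are invented stand-ins for the five real sports types:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def tracklet_features(track, fps=25.0):
    """track: (n, 2) player positions in consecutive frames."""
    steps = np.linalg.norm(np.diff(track, axis=0), axis=1)
    total_distance = steps.sum()
    lifespan = len(track) / fps                       # seconds
    distance_span = np.linalg.norm(track[-1] - track[0])
    mean_speed = total_distance / lifespan
    return [total_distance, lifespan, distance_span, mean_speed]

rng = np.random.default_rng(1)
X, y = [], []
for label, step_scale in [(0, 0.05), (1, 0.3)]:       # slow vs fast sport
    for _ in range(50):
        n = int(rng.integers(20, 60))
        track = np.cumsum(rng.normal(0.0, step_scale, size=(n, 2)), axis=0)
        X.append(tracklet_features(track))
        y.append(label)

qda = QuadraticDiscriminantAnalysis().fit(X, y)
print(qda.score(X, y))
```

With only four low-dimensional features per tracklet, QDA is cheap to fit and its class-specific covariances capture the different motion statistics of each sport.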

  18. Testing for structural change in the presence of long memory

    OpenAIRE

    Krämer, Walter; Sibbertsen, Philipp

    2000-01-01

    We derive the limiting null distributions of the standard and OLS-based CUSUM tests for structural change of the coefficients of a linear regression model in the context of long memory disturbances. We show that both tests behave fundamentally differently in a long memory environment compared to short memory, and that long memory is easily mistaken for structural change when standard critical values are employed.
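An OLS-based CUSUM statistic of the kind studied above can be sketched as follows. Under i.i.d. short-memory errors its null distribution relates to the supremum of a Brownian bridge (5% critical value around 1.36); the paper's point is precisely that this calibration breaks down under long memory. The level-shift example below is invented for illustration:

```python
import numpy as np

def ols_cusum(y, X):
    """Maximum absolute scaled partial sum of OLS residuals (sketch)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = resid.std(ddof=X.shape[1])
    return np.abs(np.cumsum(resid)).max() / (sigma * np.sqrt(n))

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_stable = X @ np.array([1.0, 2.0]) + rng.normal(size=n)          # no break
y_break = y_stable + np.where(np.arange(n) >= n // 2, 3.0, 0.0)   # level shift

print(ols_cusum(y_stable, X), ols_cusum(y_break, X))
```

A break in the coefficients inflates the partial sums of the residuals, so the statistic for the shifted series far exceeds the short-memory critical value; with long-memory errors the same inflation can occur under the null, which is why standard critical values mislead.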

  19. IMPROVING CLASSIFICATIONS OF ECONOMIC SCIENCES IN A THESAURUS

    Directory of Open Access Journals (Sweden)

    Sergey Vladimirovich Lesnikov

    2013-09-01

    Full Text Available The goal is to study the thesaurus as an instrument to define the classification of economic sciences, to adapt the classification to the increased information flow, to increase the accuracy of allocation of information resources with consideration of users' needs, and to suggest alterations to the classification of economic sciences made by the Institute of Scientific Information for Social Sciences of the Russian Academy of Sciences (INION RAN) in 2001. The authors see the classification of economic sciences as a product of social communications theory, a differentiated aspect of social research. Modern science is subdivided into various aspects with varied subjects and methods; these overlap and form a hierarchy of concepts within the same research subject. The authors stress the importance of information retrieval systems for developing scientific knowledge: such systems can immediately deliver data from different areas of science to the user, who can then integrate the information and obtain a vivid picture of the research subject. Search engines and rubricators are becoming increasingly important, as many Internet users tend towards isolated thinking. The authors have devised an approach to using the thesaurus as a means of classifying the sciences and as a hyper language of science.
The suggested methodological approach to structuring terms and notions via a thesaurus has been tested at Syktyvkar State University and the Syktyvkar branch of Saint-Petersburg Economic University. Methods: deduction, induction, analysis, synthesis, abstraction, classification. Results: the stages and main sections of the information-retrieval thesaurus of the hyperlanguage of economic science have been defined on the basis of existing classification systems of scientific knowledge. Scope of application of results: library services, information technology, education. DOI: http://dx.doi.org/10.12731/2218-7405-2013-8-22

  20. Insensitive Munitions Testing

    Data.gov (United States)

    Federal Laboratory Consortium — Insensitive Munitions Testing at RTC is conducted (IAW MIL-STD-2105) at Test Area 4. Our engineers and technicians obtain data for hazards classification and safety...

  1. Validity range of centrifuges for the regulation of nanomaterials: from classification to as-tested coronas

    Science.gov (United States)

    Wohlleben, Wendel

    2012-12-01

    Granulometry is the regulatory category where the differences between traditional materials and nanomaterials culminate. Reported herein is a careful validation of methods for the quantification of dispersability and size distribution in relevant media, and for the classification according to the EC nanodefinition recommendation. Suspension-based techniques can assess the nanodefinition only if the material in question is reasonably well dispersed. Using dispersed material of several chemical compositions (organic, metal, metal-oxide) as test cases we benchmark analytical ultracentrifugation (AUC), dynamic light scattering (DLS), hydrodynamic chromatography, nanoparticle tracking analysis (NTA) against the known content of bimodal suspensions in the commercially relevant range between 20 nm and a few microns. The results validate fractionating techniques, especially AUC, which successfully identifies any dispersed nanoparticle content from 14 to 99.9 nb% with less than 5 nb% deviation. In contrast, our screening casts severe doubt over the reliability of ensemble (scattering) techniques and highlights the potential of NTA to develop into a counting upgrade of DLS. The unique asset of centrifuges with interference, X-ray or absorption detectors—to quantify the dispersed solid content for each size interval from proteins over individualized nanoparticles up to agglomerates, while accounting for their loose packing—addresses also the adsorption/depletion of proteins and (de-)agglomeration of nanomaterials under cell culture conditions as tested for toxicological endpoints.

  2. Validity range of centrifuges for the regulation of nanomaterials: from classification to as-tested coronas

    International Nuclear Information System (INIS)

    Wohlleben, Wendel

    2012-01-01

    Granulometry is the regulatory category where the differences between traditional materials and nanomaterials culminate. Reported herein is a careful validation of methods for the quantification of dispersability and size distribution in relevant media, and for the classification according to the EC nanodefinition recommendation. Suspension-based techniques can assess the nanodefinition only if the material in question is reasonably well dispersed. Using dispersed material of several chemical compositions (organic, metal, metal-oxide) as test cases we benchmark analytical ultracentrifugation (AUC), dynamic light scattering (DLS), hydrodynamic chromatography, nanoparticle tracking analysis (NTA) against the known content of bimodal suspensions in the commercially relevant range between 20 nm and a few microns. The results validate fractionating techniques, especially AUC, which successfully identifies any dispersed nanoparticle content from 14 to 99.9 nb% with less than 5 nb% deviation. In contrast, our screening casts severe doubt over the reliability of ensemble (scattering) techniques and highlights the potential of NTA to develop into a counting upgrade of DLS. The unique asset of centrifuges with interference, X-ray or absorption detectors—to quantify the dispersed solid content for each size interval from proteins over individualized nanoparticles up to agglomerates, while accounting for their loose packing—addresses also the adsorption/depletion of proteins and (de-)agglomeration of nanomaterials under cell culture conditions as tested for toxicological endpoints.

  3. Renal graft survival according to Banff 2013 classification in indication biopsies

    Directory of Open Access Journals (Sweden)

    Carlos Arias-Cabrales

    2016-11-01

    Conclusions: The Banff 2013 classification facilitates a histological diagnosis in 95% of indication biopsies. While diagnostic category 6 is the most common, a change in the predominant histopathology was observed according to time elapsed since transplantation. Antibody-mediated changes are associated with worse graft survival.

  4. Value of multi-slice CT in the classification diagnosis of hilar cholangiocarcinoma

    International Nuclear Information System (INIS)

    Qian Yi; Zeng Mengsu; Ling Zhiqing; Rao Shengxiang; Liu Yalan

    2008-01-01

    Objective: To evaluate the value of multi-slice CT (MSCT) classification in the assessment of hilar cholangiocarcinoma resectability. Methods: Thirty patients with surgically and histopathologically proven hilar cholangiocarcinomas, who underwent preoperative MSCT and were diagnosed correctly, were included in the present study. Transverse images and reconstructed MPR images were reviewed for the Bismuth-Corlette classification and the morphological classification of hilar cholangiocarcinoma. The MSCT classification was then compared with the findings of surgery and histopathology. Curative resectability of the different types according to the Bismuth-Corlette classification and the morphological classification was analyzed with the chi-square test. Results: In the 30 cases, the numbers of Type I, II, IIIa, IIIb and IV tumours according to the Bismuth-Corlette classification were 1, 3, 4, 5 and 17. Seventeen patients underwent curative resections, among which 1, 2, 1, 4 and 9 belonged to Type I, II, IIIa, IIIb and IV respectively. However, there was no significant difference in curative resectability among the different types of the Bismuth-Corlette classification (χ² = 0.9875, P > 0.05). In the present study, the accuracy of MSCT in the Bismuth-Corlette classification reached 86.7% (26/30). The numbers of periductal infiltrating, mass forming and intraductal growing types were 13, 13 and 4, of which 6, 8 and 3 cases respectively underwent curative resections. There was no significant difference in curative resectability among the different morphological types (χ² = 1.2583, P > 0.05). The accuracy of MSCT in the morphological classification was 100% (30/30) in this study group. Conclusion: MSCT can accurately determine the Bismuth-Corlette classification and the morphological classification, which is helpful in the preoperative resectability assessment of hilar cholangiocarcinoma. (authors)

  5. Effective Packet Number for 5G IM WeChat Application at Early Stage Traffic Classification

    Directory of Open Access Journals (Sweden)

    Muhammad Shafiq

    2017-01-01

    Full Text Available Accurate network traffic classification at an early stage is very important for 5G network applications. During the last few years, researchers have worked hard to propose effective machine learning models for the classification of Internet traffic applications at an early stage with few packets. Nevertheless, this essential problem still needs to be studied profoundly to find out both the effective packet number and an effective machine learning (ML) model. In this paper, we try to solve the above-mentioned problem. For this purpose, five Internet traffic datasets are utilized. Initially, we extract the sizes of the first 20 packets and then carry out mutual information analysis to find the mutual information of each packet with the flow type. Thereafter, we execute 10 well-known machine learning algorithms using a crossover classification method. Two statistical tests, the Friedman and Wilcoxon pairwise tests, are applied to the experimental results; we also apply these statistical tests to the classifiers to find the most effective ML classifier. Our experimental results show that packets 13–19 are the effective packet numbers for early stage network traffic classification of the 5G IM WeChat application. We also find that the Random Forest classifier is the most effective classifier for early stage Internet traffic classification.
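The per-packet mutual-information screening followed by a Random Forest classifier can be sketched as follows; the synthetic flows stand in for the paper's five datasets, and the informative packet positions are planted to mirror the reported 13–19 range:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
n_flows, n_packets = 400, 20
# synthetic flows: sizes of the first 20 packets for two invented app classes;
# only packet positions 13-19 (indices 12-18) carry class information
X = rng.normal(500.0, 80.0, size=(n_flows, n_packets))
y = rng.integers(0, 2, size=n_flows)
X[y == 1, 12:19] += 150.0

mi = mutual_info_classif(X, y, random_state=0)  # per-packet relevance to flow type
informative = np.sort(np.argsort(mi)[-7:])      # 7 most informative packet positions
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:, informative], y)
print(informative, clf.score(X[:, informative], y))
```

On this toy data the mutual-information ranking recovers the planted packet positions, after which the classifier only needs those columns.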

  6. Cattle behaviour classification from collar, halter, and ear tag sensors

    Directory of Open Access Journals (Sweden)

    A. Rahman

    2018-03-01

    Full Text Available In this paper, we summarise the outcome of a set of experiments aimed at classifying cattle behaviour based on sensor data. Each animal carried sensors generating time series accelerometer data, placed on a collar on the neck at the back of the head, on a halter positioned at the side of the head behind the mouth, or on the ear using a tag. The purpose of the study was to determine how well sensor data from the different placements can classify a range of typical cattle behaviours. Data were collected and animal behaviours (grazing, standing or ruminating) were observed over a common time frame. Statistical features were computed from the sensor data and machine learning algorithms were trained to classify each behaviour. Classification accuracies were computed on separate independent test sets. The behaviour classification experiments revealed that the different sensor placements can all achieve good classification accuracy if the feature space (representing motion patterns) is similar between the training and test animals. The paper discusses these analyses in detail and can act as a guide for future studies.

  7. Benchmarking protein classification algorithms via supervised cross-validation

    NARCIS (Netherlands)

    Kertész-Farkas, A.; Dhir, S.; Sonego, P.; Pacurar, M.; Netoteia, S.; Nijveen, H.; Kuzniar, A.; Leunissen, J.A.M.; Kocsor, A.; Pongor, S.

    2008-01-01

    Development and testing of protein classification algorithms are hampered by the fact that the protein universe is characterized by groups vastly different in the number of members, in average protein size, similarity within group, etc. Datasets based on traditional cross-validation (k-fold,

  8. Woven fabric defects detection based on texture classification algorithm

    International Nuclear Information System (INIS)

    Ben Salem, Y.; Nasri, S.

    2011-01-01

    In this paper we compare two well-known texture classification methods for the recognition and classification of defects occurring in textile manufacturing: local binary patterns (LBP) and the co-occurrence matrix. The classifier used is the support vector machine (SVM). The system has been tested using the TILDA database. The results obtained are promising and show that LBP is a good method for defect recognition and classification problems; it also gives a good running time, especially for real-time applications.
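A minimal sketch of the LBP-plus-SVM pipeline on synthetic "fabric" patches (the TILDA images are not reproduced here); the hand-rolled 8-neighbour LBP below is a simplification of the standard operator:

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img):
    """Normalised histogram of a minimal 8-neighbour LBP code (sketch)."""
    c = img[1:-1, 1:-1]
    h, w = img.shape
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour plane
        code |= (nb >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(4)
def fabric_patch(defect):
    img = rng.normal(0.5, 0.1, size=(32, 32))
    img += 0.2 * np.sin(0.8 * np.arange(32))   # regular weave pattern
    if defect:
        img[12:20, :] = 0.5                    # flat band = broken weave
    return img

X = [lbp_histogram(fabric_patch(d)) for d in [0] * 40 + [1] * 40]
y = [0] * 40 + [1] * 40
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))
```

The LBP histogram is cheap to compute per patch, which is consistent with the running-time advantage the abstract reports for real-time use.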

  9. Deep learning for EEG-Based preference classification

    Science.gov (United States)

    Teo, Jason; Hou, Chew Lin; Mountstephens, James

    2017-10-01

    Electroencephalogram (EEG)-based emotion classification is rapidly becoming one of the most intensely studied areas of brain-computer interfacing (BCI). The ability to passively identify yet accurately correlate brainwaves with our immediate emotions opens up truly meaningful and previously unattainable human-computer interactions such as in forensic neuroscience, rehabilitative medicine, affective entertainment and neuro-marketing. One particularly useful yet rarely explored area of EEG-based emotion classification is preference recognition [1], which is simply the detection of like versus dislike. Within the limited investigations into preference classification, all reported studies were based on musically-induced stimuli except for a single study which used 2D images. The main objective of this study is to apply deep learning, which has been shown to produce state-of-the-art results in diverse hard problems such as computer vision, natural language processing and audio recognition, to 3D object preference classification over a larger group of test subjects. A cohort of 16 users was shown 60 bracelet-like objects as rotating visual stimuli on a computer display while their preferences and EEGs were recorded. After training a variety of machine learning approaches, which included deep neural networks, we then attempted to classify the users' preferences for the 3D visual stimuli based on their EEGs. Here, we show that deep learning outperforms a variety of other machine learning classifiers for this EEG-based preference classification task, particularly on a highly challenging dataset with large inter- and intra-subject variability.

  10. Underwater object classification using scattering transform of sonar signals

    Science.gov (United States)

    Saito, Naoki; Weber, David S.

    2017-08-01

    In this paper, we apply the scattering transform (ST), a nonlinear map based on a convolutional neural network (CNN), to the classification of underwater objects using sonar signals. The ST formalizes the observation that the filters learned by a CNN have wavelet-like structure. We achieve effective binary classification both on a real dataset of Unexploded Ordnance (UXO), as well as on synthetically generated examples. We also explore the effects on the waveforms of changes in the object domain (e.g., translation, rotation, acoustic impedance), and examine the consequences of theoretical results for the scattering transform. We show that the scattering transform is capable of excellent classification on both the synthetic and real problems, thanks to quasi-invariance properties that are well suited to translation and rotation of the object.

  11. Classification of Flotation Frothers

    Directory of Open Access Journals (Sweden)

    Jan Drzymala

    2018-02-01

    Full Text Available In this paper, a scheme for classifying flotation frothers is presented. The scheme first indicates the physical system in which a frother is present; four such systems are distinguished: pure state, aqueous solution, aqueous solution/gas, and aqueous solution/gas/solid. As a result, there are numerous classifications of flotation frothers, which can be organized into the scheme described in detail in this paper. A meaningful classification of frothers relies on first choosing the physical system and then the feature, trend, or parameter(s) according to which the classification is performed. The proposed classification can play a useful role in characterizing and evaluating flotation frothers.

  12. Supervised Classification Processes for the Characterization of Heritage Elements, Case Study: Cuenca-Ecuador

    Science.gov (United States)

    Briones, J. C.; Heras, V.; Abril, C.; Sinchi, E.

    2017-08-01

    The proper control of built heritage entails many challenges related to the complexity of heritage elements and the extent of the area to be managed, for which the available resources must be used efficiently. In this scenario, the preventive conservation approach, based on the concept that prevention is better than cure, emerges as a strategy to avoid the progressive and imminent loss of monuments and heritage sites. Regular monitoring appears as a key tool to identify timely changes in heritage assets. This research demonstrates that a supervised learning model (Support Vector Machines, SVM) is an ideal tool to support the monitoring process by detecting visible elements in aerial images such as roof structures, vegetation and pavements. The linear, Gaussian and polynomial kernel functions were tested; the linear function provided better results than the other functions. It is important to mention that, due to the high level of segmentation generated by the classification procedure, it was necessary to apply a generalization process through an opening mathematical morphology operation, which simplified the over-classification of the monitored elements.

  13. Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.

    Science.gov (United States)

    Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck

    2018-04-20

    Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oil according to their corresponding color histograms. For this, we applied color histograms in different color spaces such as red green blue (RGB), grayscale, and hue saturation intensity (HSI) in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Two common solutions for solving a multiclass classification problem were applied: (1) transformation to binary classification problems using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite good performance by the SVM classification model, QDA and LDA provided better results, up to 92% for RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
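The histogram-plus-discriminant-analysis pipeline can be sketched as follows, here with an RGB histogram and LDA only, on synthetic images standing in for the nine motor-oil photographs:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def rgb_histogram(img, bins=8):
    """Concatenated per-channel colour histogram of an RGB image."""
    return np.concatenate([
        np.histogram(img[..., ch], bins=bins, range=(0, 256))[0]
        for ch in range(3)
    ]) / float(img.shape[0] * img.shape[1])

rng = np.random.default_rng(7)
def oil_image(mean_rgb):
    # hypothetical 32x32 photo of an oil sample with a characteristic hue
    return np.clip(rng.normal(mean_rgb, 20.0, size=(32, 32, 3)), 0, 255)

oil_types = {0: [180, 140, 40], 1: [120, 160, 60], 2: [90, 90, 100]}
X = [rgb_histogram(oil_image(c)) for c in oil_types.values() for _ in range(30)]
y = [label for label in oil_types for _ in range(30)]

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))
```

The histogram reduces each image to a short, colour-only feature vector, which is what makes this approach cheap and nondestructive; swapping in QDA or an SVM only changes the last two lines.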

  14. Sentiment classification of Roman-Urdu opinions using Naïve Bayesian, Decision Tree and KNN classification techniques

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal

    2016-07-01

    Full Text Available Sentiment mining is a field of text mining that determines the attitude of people about a particular product, topic or politician in newsgroup posts, review sites, comments on Facebook posts, Twitter, etc. There are many issues involved in opinion mining. One important issue is that opinions can be in different languages (English, Urdu, Arabic, etc.), and tackling each language according to its orientation is a challenging task. Most of the research work in sentiment mining has been done on the English language; currently, limited research is being carried out on the sentiment classification of other languages like Arabic, Italian, Urdu and Hindi. In this paper, three classification models are used for text classification using the Waikato Environment for Knowledge Analysis (WEKA). Opinions written in Roman-Urdu and English were extracted from a blog and documented in text files to prepare a training dataset containing 150 positive and 150 negative opinions as labeled examples. A testing dataset is supplied to the three different models and the results in each case are analyzed. The results show that Naïve Bayesian outperformed Decision Tree and KNN in terms of accuracy, precision, recall and F-measure.
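A minimal sketch of the Naïve Bayesian variant, here in scikit-learn rather than WEKA; the Roman-Urdu/English opinions below are invented stand-ins for the blog data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# invented Roman-Urdu/English opinions standing in for the blog dataset
train = ["zabardast mobile hai", "this phone is great", "kamaal ki cheez",
         "bekaar service hai", "worst phone ever", "bilkul bakwas hai"]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

# bag-of-words counts feed a multinomial Naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train, labels)
print(model.predict(["zabardast phone hai", "bakwas service"]))
```

Because the bag-of-words model is language-agnostic, the same pipeline handles mixed Roman-Urdu and English text without any language-specific preprocessing, which is part of why Naïve Bayes is a common baseline for this task.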

  15. A comparison of the accuracy of pixel based and object based classifications of integrated optical and LiDAR data

    Science.gov (United States)

    Gajda, Agnieszka; Wójtowicz-Nowakowska, Anna

    2013-04-01

    Land cover maps are generally produced on the basis of high-resolution imagery. Recently, LiDAR (Light Detection and Ranging) data have been brought into use in diverse applications, including land cover mapping. In this study we attempted to assess the accuracy of land cover classification using both high-resolution aerial imagery and LiDAR data (airborne laser scanning, ALS), testing two classification approaches: a pixel-based classification and object-oriented image analysis (OBIA). The study was conducted on three test areas (3 km2 each) in the administrative area of Kraków, Poland, along the course of the Vistula River. They represent three different dominating land cover types of the Vistula River valley. Test site 1 had semi-natural vegetation, with riparian forests and shrubs, test site 2 represented a densely built-up area, and test site 3 was an industrial site. Point clouds from ALS and orthophotomaps were both captured in November 2007. Point cloud density was on average 16 pt/m2 and contained additional information about intensity and encoded RGB values. Orthophotomaps had a spatial resolution of 10 cm. From the point clouds two raster maps were generated: (1) intensity and (2) a normalised Digital Surface Model (nDSM), both with a spatial resolution of 50 cm. To classify the aerial data, a supervised classification approach was selected. Pixel-based classification was carried out in ERDAS Imagine software, using the orthophotomaps together with the intensity and nDSM rasters. Fifteen homogeneous training areas representing each cover class were chosen. Classified pixels were clumped to avoid the salt-and-pepper effect. Object-oriented image classification was carried out in eCognition software, which can handle both the optical and ALS data. Elevation layers (intensity, first/last reflection, etc.) were used at the segmentation stage due to

  16. The Future of Classification in Wheelchair Sports; Can Data Science and Technological Advancement Offer an Alternative Point of View?

    Science.gov (United States)

    van der Slikke, Rienk M A; Bregman, Daan J J; Berger, Monique A M; de Witte, Annemarie M H; Veeger, Dirk-Jan H E J

    2017-11-01

    Classification is a defining factor for competition in wheelchair sports, but it is a delicate and time-consuming process with often questionable validity [1]. New inertial sensor based measurement methods, applied in match play and field tests, allow for more precise and objective estimates of the impairment effect on wheelchair mobility performance. It was evaluated whether these measures could offer an alternative point of view for classification. Six standard wheelchair mobility performance outcomes of different classification groups were measured in match play (n=29), as well as best possible performance in a field test (n=47). The match results show a clear relationship between classification and performance level, with increased performance outcomes in each adjacent higher classification group. Three outcomes differed significantly between the low and mid-class groups, and one between the mid and high-class groups. In best performance (field test), a split between the low and mid-class groups appears (5 out of 6 outcomes differed significantly), but there is hardly any difference between the mid and high-class groups. This observed split was confirmed by cluster analysis, revealing the existence of only two performance-based clusters. The use of inertial sensor technology to obtain objective measures of wheelchair mobility performance, combined with a standardized field test, brought alternative views for evidence-based classification. The results of this approach provided arguments for a reduced number of classes in wheelchair basketball. Future use of inertial sensors in match play and in field testing could enhance evaluation of classification guidelines as well as individual athlete performance.

  17. DOE LLW classification rationale

    International Nuclear Information System (INIS)

    Flores, A.Y.

    1991-01-01

    This report presents the rationale behind the US Department of Energy's (DOE) classification of low-level radioactive waste (LLW), which is based on the Nuclear Regulatory Commission's classification system. DOE site operators met to review the qualifications and characteristics of the classification systems; they evaluated performance objectives, developed waste classification tables, and compiled dose limits for the waste. A goal of the LLW classification system was to allow each disposal site the freedom to develop limits on radionuclide inventories and concentrations according to its own site-specific characteristics. This goal was achieved with the adoption of a performance-objectives system based on a performance assessment, with site-specific environmental conditions and engineered disposal systems

  18. On Internet Traffic Classification: A Two-Phased Machine Learning Approach

    Directory of Open Access Journals (Sweden)

    Taimur Bakhshi

    2016-01-01

    Full Text Available Traffic classification utilizing flow measurement enables operators to perform essential network management. Flow accounting methods such as NetFlow are, however, considered inadequate for classification, requiring additional packet-level information, host behaviour analysis, and specialized hardware, which limits their practical adoption. This paper aims to overcome these challenges by proposing a two-phased machine learning classification mechanism with NetFlow as input. The individual flow classes are derived per application through k-means and are further used to train a C5.0 decision tree classifier. As part of validation, the initial unsupervised phase used flow records of fifteen popular Internet applications that were collected and independently subjected to k-means clustering to determine the unique flow classes generated per application. The derived flow classes were afterwards used to train and test a supervised C5.0-based decision tree. The resulting classifier reported an average accuracy of 92.37% on approximately 3.4 million test cases, increasing to 96.67% with adaptive boosting. The classifier specificity factor, which accounted for differentiating content-specific from supplementary flows, ranged between 98.37% and 99.57%. Furthermore, the computational performance and accuracy of the proposed methodology in comparison with similar machine learning techniques lead us to recommend its extension to other applications in achieving highly granular real-time traffic classification.
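The two-phased mechanism, per-application k-means to derive flow classes followed by a decision tree trained on those classes, can be sketched as follows (scikit-learn's CART tree stands in for C5.0, and the NetFlow-style features are synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
# synthetic NetFlow-style features per flow: duration, log(bytes), log(packets)
apps = {"web": [1, 4, 2], "video": [3, 6, 5], "voip": [2, 1, 4]}

# phase 1 (unsupervised): derive flow classes per application with k-means
X_parts, y = [], []
for app, centre in apps.items():
    flows = rng.normal(centre, 0.3, size=(200, 3))
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flows)
    for sub in (0, 1):
        part = flows[km.labels_ == sub]
        X_parts.append(part)
        y += ["%s-%d" % (app, sub)] * len(part)
X = np.vstack(X_parts)

# phase 2 (supervised): train a decision tree on the derived flow classes
# (scikit-learn's CART is a stand-in for the paper's C5.0 classifier)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print(len(set(y)), tree.score(X, y))
```

Splitting each application into k-means-derived sub-classes before the supervised phase lets the tree learn per-application flow behaviours that a single label per application would blur together.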

  19. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Science.gov (United States)

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.

  20. Evaluation of feature selection algorithms for classification in temporal lobe epilepsy based on MR images

    Science.gov (United States)

    Lai, Chunren; Guo, Shengwen; Cheng, Lina; Wang, Wensheng; Wu, Kai

    2017-02-01

    It is important to differentiate temporal lobe epilepsy (TLE) patients from healthy people and to localize the abnormal brain regions of TLE patients. Cortical features and their changes can reveal the unique anatomical patterns of brain regions in structural MR images. In this study, structural MR images were acquired from 28 normal controls (NC), 18 left TLE (LTLE), and 21 right TLE (RTLE) patients, and four types of cortical feature, namely cortical thickness (CTh), cortical surface area (CSA), gray matter volume (GMV), and mean curvature (MCu), were explored for discriminative analysis. Three feature selection methods, independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant regions with significant differences among the compared groups for classification using an SVM classifier. The results showed that SVM-RFE achieved the highest performance (most classifications with more than 92% accuracy), followed by the SCDRM and the t-test. In particular, cortical surface area and gray matter volume exhibited prominent discriminative ability, and the performance of the SVM improved significantly when the four cortical features were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and frontal lobes, including the inferior temporal, entorhinal cortex, fusiform, parahippocampal cortex, middle frontal, and frontal pole. The cortical features thus provide effective information for determining the abnormal anatomical pattern, and the proposed method has the potential to improve the clinical diagnosis of TLE.
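    The best-performing selector, SVM-RFE, can be sketched with scikit-learn's `RFE` wrapper around a linear SVM. The synthetic features below are placeholders for the cortical measures, not the study's MRI data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# 30 synthetic features, of which 5 are informative (stand-ins for
# the dominant cortical regions the study aims to recover).
X, y = make_classification(n_samples=120, n_features=30, n_informative=5,
                           n_redundant=0, random_state=0)

# Recursive feature elimination: repeatedly fit a linear SVM and drop
# the feature with the smallest absolute weight.
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.support_))
```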

  1. Domain Adaptation for Opinion Classification: A Self-Training Approach

    Directory of Open Access Journals (Sweden)

    Yu, Ning

    2013-03-01

    Full Text Available Domain transfer is a widely recognized problem for machine learning algorithms because models built on one data domain generally do not perform well in another. This is especially challenging for tasks such as opinion classification, which often must cope with insufficient quantities of labeled data. This study investigates the feasibility of self-training for the domain transfer problem in opinion classification by leveraging labeled data in non-target data domain(s) and unlabeled data in the target domain. Specifically, self-training is evaluated for effectiveness in sparse data situations and for feasibility in domain adaptation for opinion classification. Three types of Web content are tested: edited news articles, semi-structured movie reviews, and the informal and unstructured content of the blogosphere. The findings suggest that, when labeled data are limited, self-training is a promising approach for opinion classification, although its contribution varies across data domains. Significant improvement was demonstrated for the most challenging data domain, the blogosphere, when a domain transfer-based self-training strategy was implemented.
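    The self-training strategy evaluated here can be sketched with scikit-learn's `SelfTrainingClassifier`, which iteratively labels the unlabelled examples the base learner is most confident about. The synthetic data and choice of base learner are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
y_partial = y.copy()
y_partial[50:] = -1   # -1 marks unlabelled examples (the "target domain")

# The base classifier is retrained as confidently predicted unlabelled
# examples (probability >= threshold) are added to the labelled pool.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                               threshold=0.9)
model.fit(X, y_partial)
acc = model.score(X, y)
print(f"accuracy against the full (true) labels: {acc:.2f}")
```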

  2. Asteroid taxonomic classifications

    International Nuclear Information System (INIS)

    Tholen, D.J.

    1989-01-01

    This paper reports on three taxonomic classification schemes developed and applied to the body of available color and albedo data. Asteroid taxonomic classifications according to two of these schemes are reproduced.

  3. Classification of Farmland Landscape Structure in Multiple Scales

    Science.gov (United States)

    Jiang, P.; Cheng, Q.; Li, M.

    2017-12-01

    Farmland is one of the basic terrestrial resources that support the development and survival of human beings and thus plays a crucial role in the national security of every country. Pattern change is the intuitive spatial representation of variation in the scale and quality of farmland. Through the development of spatial shapes, as well as through changes in system structures, functions, and so on, farmland landscape patterns may indicate the landscape health level. It is still difficult to perform positional analyses of landscape pattern changes that reflect farmland landscape structure variations with an index model. Drawing on spatial properties such as location and adjacency relations, distance decay, and fringe effects, and applying the patch-corridor-matrix model, this study defines a type system of farmland landscape structure at the national, provincial, and city levels. According to this definition, a classification model of farmland landscape-structure type at the pixel scale is developed and validated based on mathematical-morphology concepts and spatial-analysis methods. The laws that govern farmland landscape-pattern change at multiple scales are then analyzed from the perspectives of spatial heterogeneity, spatio-temporal evolution, and function transformation. The results show that the classification model of farmland landscape-structure type can reflect farmland landscape-pattern change and its effects on farmland production function. Moreover, farmland landscape change at different scales displayed significant disparity in zonality, both within specific regions and between urban and rural areas.

  4. Improving the Computational Performance of Ontology-Based Classification Using Graph Databases

    Directory of Open Access Journals (Sweden)

    Thomas J. Lampoltshammer

    2015-07-01

    Full Text Available The increasing availability of very high-resolution remote sensing imagery (i.e., from satellites, airborne laser scanning, or aerial photography) represents both a blessing and a curse for researchers. The manual classification of these images, or other similar geo-sensor data, is time-consuming and leads to subjective and non-deterministic results. Due to this fact, (semi-)automated classification approaches are in high demand in affected research areas. Ontologies provide a proper way of automated classification for various kinds of sensor data, including remotely sensed data. However, the processing of data entities, so-called individuals, is one of the most cost-intensive computational operations within ontology reasoning. Therefore, an approach based on graph databases is proposed to overcome the issue of high time consumption in the classification task. The introduced approach shifts the classification task from the classical Protégé environment and its common reasoners to the proposed graph-based approaches. For validation, the authors tested the approach on a simulation scenario based on a real-world example. The results demonstrate a quite promising improvement of classification speed: up to 80,000 times faster than the Protégé-based approach.

  5. Long range echo classification for minehunting sonars

    NARCIS (Netherlands)

    Theije, P.A.M. de; Groen, J.; Sabel, J.C.

    2006-01-01

    This paper focuses on single-ping classification of sea mines at a range of about 400 m, combining a hull-mounted sonar (HMS) and a propelled variable-depth sonar (PDVS). The developed classifier is trained and tested on a set of simulated realistic echoes of mines and non-mines. As the mines

  6. Classification schemes for knowledge translation interventions: a practical resource for researchers.

    Science.gov (United States)

    Slaughter, Susan E; Zimmermann, Gabrielle L; Nuspl, Megan; Hanson, Heather M; Albrecht, Lauren; Esmail, Rosmin; Sauro, Khara; Newton, Amanda S; Donald, Maoliosa; Dyson, Michele P; Thomson, Denise; Hartling, Lisa

    2017-12-06

    As implementation science advances, the number of interventions to promote the translation of evidence into healthcare, health systems, or health policy is growing. Accordingly, classification schemes for these knowledge translation (KT) interventions have emerged. A recent scoping review identified 51 classification schemes of KT interventions to integrate evidence into healthcare practice; however, the review did not evaluate the quality of the classification schemes or provide detailed information to assist researchers in selecting a scheme for their context and purpose. This study aimed to further examine and assess the quality of these classification schemes of KT interventions, and to provide information to aid researchers when selecting a classification scheme. We abstracted the following information from each of the original 51 classification scheme articles: authors' objectives; purpose of the scheme and field of application; socioecologic level (individual, organizational, community, system); adaptability (broad versus specific); target group (patients, providers, policy-makers); intent (policy, education, practice); and purpose (dissemination versus implementation). Two reviewers independently evaluated the methodological quality of the development of each classification scheme using an adapted version of the AGREE II tool. Based on these assessments, two independent reviewers reached consensus about whether or not to recommend each scheme for researcher use. Of the 51 original classification schemes, we excluded seven that were not specific classification schemes, were not accessible, or were duplicates. Of the remaining 44 classification schemes, nine were not recommended. Of the 35 recommended classification schemes, ten focused on behaviour change and six focused on population health. Many schemes (n = 29) addressed practice considerations. Fewer schemes addressed educational or policy objectives. Twenty-five classification schemes had broad applicability

  7. Combining Blink, Pupil, and Response Time Measures in a Concealed Knowledge Test

    Directory of Open Access Journals (Sweden)

    Travis Seymour

    2013-02-01

    Full Text Available The response time (RT)-based Concealed Knowledge Test (CKT) has been shown to accurately detect participants' knowledge of mock-crime-related information. Tests based on ocular measures such as pupil size and blink rate have sometimes resulted in poor classification or lacked detailed classification analyses. The present study examines the fitness of multiple pupil- and blink-related responses in the CKT paradigm. To maximize classification efficiency, participants' concealed knowledge was assessed using both individual test measures and combinations of measures. Results show that individual pupil-size, pupil-slope, and pre-response blink-rate measures produce efficient classifications. Combining pupil and blink measures yielded more accurate classifications than individual ocular measures. Although RT-based tests proved efficient, combining RT with ocular measures had little incremental benefit. It is argued that covertly assessing ocular measures during RT-based tests may guard against effective countermeasure use in applied settings. A compound classification procedure was used to categorize individual participants and yielded high hit rates and low false-alarm rates without the need for adjustments between test paradigms or subject populations. We conclude that with appropriate test paradigms and classification analyses, ocular measures may prove as effective as other indices, though additional research is needed.

  8. Detection of Dengue Hemorrhagic Fever with a One-Class Classification Approach

    Directory of Open Access Journals (Sweden)

    Zida Ziyan Azkiya

    2017-10-01

    Full Text Available Two-class classification maps input into two target classes. In certain cases, training data is available only for a single class, as with Dengue Hemorrhagic Fever (DHF) patients, where only data from positive patients is available. In this paper, we report our experiment in building a classification model for detecting DHF infection using a One-Class Classification (OCC) approach. The data for this study come from laboratory tests of patients with dengue fever. The OCC methods compared are the One-Class Support Vector Machine (SVM) and One-Class K-Means. The results show the SVM method obtained precision = 1.0, recall = 0.993, F1-score = 0.997, and accuracy of 99.7%, while the K-Means method obtained precision = 0.901, recall = 0.973, F1-score = 0.936, and accuracy of 93.3%. This indicates that the SVM method is slightly superior to K-Means for one-class classification of DHF patients.
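    The one-class setup used here, training on positive-only data, can be sketched with scikit-learn's `OneClassSVM`. The two-dimensional "laboratory" features below are invented for illustration and are not the study's clinical data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# Training data: positive (DHF) patients only, two made-up lab features.
X_pos = rng.normal(loc=[50.0, 48.0], scale=[10.0, 3.0], size=(200, 2))

# nu bounds the fraction of training points treated as outliers.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_pos)

# predict() returns +1 for the trained class, -1 for outliers.
inlier = ocsvm.predict([[52.0, 47.0]])[0]    # looks like training data
outlier = ocsvm.predict([[250.0, 40.0]])[0]  # far from training data
print(inlier, outlier)
```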

  9. Dynamic testing in schizophrenia: does training change the construct validity of a test?

    Science.gov (United States)

    Wiedl, Karl H; Schöttke, Henning; Green, Michael F; Nuechterlein, Keith H

    2004-01-01

    Dynamic testing typically involves specific interventions within a test to assess the extent to which test performance can be modified beyond the level of baseline (static) performance. This study used a dynamic version of the Wisconsin Card Sorting Test (WCST) that is based on cognitive remediation techniques within a test-training-test procedure. From the results of previous studies with schizophrenia patients, we concluded that the dynamic and static versions of the WCST should have different construct validity. This hypothesis was tested by examining the patterns of correlations with measures of executive functioning, secondary verbal memory, and verbal intelligence. Results demonstrated a specific construct validity of WCST dynamic (i.e., posttest) scores as an index of problem solving (Tower of Hanoi) and secondary verbal memory and learning (Auditory Verbal Learning Test), whereas the impact of general verbal capacity and selective attention (Verbal IQ, Stroop Test) was reduced. It is concluded that the construct validity of the test changes with dynamic administration and that this difference helps to explain why the dynamic version of the WCST predicts functional outcome better than the static version.

  10. Understanding recovery: changes in the relationships of the International Classification of Functioning (ICF) components over time.

    Science.gov (United States)

    Davis, A M; Perruccio, A V; Ibrahim, S; Hogg-Johnson, S; Wong, R; Badley, E M

    2012-12-01

    The International Classification of Functioning, Disability and Health (ICF) framework describes human functioning through body structure and function, activity, and participation in the context of a person's social and physical environment. This work tested the temporal relationships of these components. Our hypotheses were: 1) there would be associations among physical impairment, activity limitations, and participation restrictions within time; 2) prior status of a component would be associated with future status; 3) prior status of one component would influence the status of a second component (e.g., prior activity limitations would be associated with current participation restrictions); and 4) the magnitude of the within-time relationships of the components would vary over time. Participants from Canada with primary hip or knee joint replacement (n = 931), an intervention with predictable improvement in pain and disability, completed standardized outcome measures pre-surgery and five times in the first year post-surgery. These included physical impairment (pain), activity limitations, and participation restrictions. ICF component relationships were evaluated cross-sectionally and longitudinally using path analysis, adjusting for age, sex, BMI, hip vs. knee, low back pain, and mood. All component scores improved significantly over time. The path coefficients supported the hypotheses: both within and across time, physical impairment was associated with activity limitation and activity limitation with participation restriction; prior status and change in a component were associated with current status in another component; and the magnitude of the path coefficients varied over time, with stronger associations among components up to three months post-surgery than later in recovery, with the exception of the association between impairment and participation restrictions, which was of similar magnitude at all times. This work enhances understanding of the

  11. LOCAL WEATHER CLASSIFICATIONS FOR ENVIRONMENTAL APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Katarzyna PIOTROWICZ

    2013-03-01

    Full Text Available Two approaches to defining local weather types are presented and illustrated for selected stations in Poland and Hungary. The subjective classification, continuing long traditions especially in Poland, relies on diurnal values of local weather elements. The main types are defined according to temperature, with sub-types considering relative sunshine duration, diurnal precipitation totals, relative humidity, and wind speed. The classification does not distinguish between the seasons of the year, but the occurrence of the classes naturally reflects the annual cycle. Another important feature of this classification is that only a minor part of the theoretically possible combinations of the various types and sub-types occurs at the stations of either country. The objective version of the classification starts from ten candidate weather elements, which are reduced to four by factor analysis based on strong correlations between the elements. This analysis yields 3 to 4 factors depending on the specific selection criteria. The subsequent cluster analysis uses four selected weather elements belonging to different rotated factors: the diurnal mean values of temperature, relative humidity, cloudiness, and wind speed. Among the possible variants of hierarchical cluster analysis (i.e., with no a priori assumption on the number of classes), the method of furthest neighbours is selected; the arguments for this decision are given in the paper. These local weather types are important tools in understanding the role of weather in various environmental indicators, in the climatic generalisation of short samples by stratified sampling, and in the interpretation of climate change.

  12. Time Series of Images to Improve Tree Species Classification

    Science.gov (United States)

    Miyoshi, G. T.; Imai, N. N.; de Moraes, M. V. A.; Tommaselli, A. M. G.; Näsi, R.

    2017-10-01

    Tree species classification provides valuable information for forest monitoring and management. The high floristic variation of tree species is a challenge for classification because vegetation characteristics change with the season. To help monitor this complex environment, imaging spectroscopy has been widely applied since the development of miniaturized sensors attached to Unmanned Aerial Vehicles (UAVs). Considering the seasonal changes in forests and the higher spectral and spatial resolution acquired with UAV-borne sensors, we present the use of a time series of images to classify four tree species. The study area is an Atlantic Forest area in the western part of São Paulo State. Images were acquired in August 2015 and August 2016, generating three data sets: only the image spectra of 2015; only the image spectra of 2016; and the layer stacking of the images from 2015 and 2016. Four tree species were classified using the Spectral Angle Mapper (SAM), Spectral Information Divergence (SID), and Random Forest (RF). The results showed that SAM and SID caused overfitting of the data, whereas RF showed better results, and the use of layer stacking improved the classification, achieving a kappa coefficient of 18.26 %.
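    The Spectral Angle Mapper compared above is simple enough to sketch directly: each pixel is assigned to the reference spectrum with the smallest spectral angle. The four-band spectra below are invented for illustration and do not come from the study.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two spectra, treated as vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical reference spectra for two tree species (4 bands each).
refs = {"species_A": np.array([0.1, 0.2, 0.6, 0.5]),
        "species_B": np.array([0.3, 0.3, 0.2, 0.1])}

pixel = np.array([0.12, 0.22, 0.55, 0.48])   # unknown pixel spectrum
angles = {name: spectral_angle(pixel, r) for name, r in refs.items()}
best = min(angles, key=angles.get)
print("assigned class:", best)
```

    Because the angle ignores vector magnitude, SAM is insensitive to overall illumination differences, which is part of its appeal for airborne imagery.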

  13. Link prediction boosted psychiatry disorder classification for functional connectivity network

    Science.gov (United States)

    Li, Weiwei; Mei, Xue; Wang, Hao; Zhou, Yu; Huang, Jiashuang

    2017-02-01

    Functional connectivity network (FCN) analysis is an effective tool for classifying psychiatric disorders; an FCN represents the cross-correlation of regional blood-oxygenation-level-dependent signals. However, FCNs are often incomplete, suffering from missing and spurious edges. To accurately classify psychiatric disorders versus healthy controls with incomplete FCNs, we first 'repair' the FCN with link prediction, and then extract the clustering coefficients as features to build a weak classifier for every FCN. Finally, we apply a boosting algorithm to combine these weak classifiers to improve classification accuracy. Our method was tested on three psychiatric disorder datasets, covering Alzheimer's Disease, Schizophrenia, and Attention Deficit Hyperactivity Disorder. The experimental results show that our method not only significantly improves the classification accuracy, but also efficiently reconstructs the incomplete FCN.
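    The clustering-coefficient feature step can be sketched directly from an adjacency matrix: for each node, count closed triangles and divide by the possible neighbour pairs. The toy binarised FCN below is an illustrative assumption, not an actual connectivity matrix.

```python
import numpy as np

def clustering_coefficients(A):
    """Per-node clustering coefficient of an undirected 0/1 adjacency matrix."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    # Triangles through each node: diag(A^3) counts closed 3-walks,
    # each triangle being traversed in two directions.
    tri = np.diag(A @ A @ A) / 2.0
    possible = deg * (deg - 1) / 2.0
    return np.divide(tri, possible, out=np.zeros_like(tri),
                     where=possible > 0)

# Toy 4-node network: triangle 0-1-2 plus a pendant node 3 attached to 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
cc = clustering_coefficients(A)
print(cc)   # node 0 sits in 1 of 3 possible triangles; nodes 1,2 in 1 of 1
```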

  14. Prognostic classification of MDS is improved by the inclusion of FISH panel testing with conventional cytogenetics.

    Science.gov (United States)

    Kokate, Prajakta; Dalvi, Rupa; Koppaka, Neeraja; Mandava, Swarna

    2017-10-01

    Cytogenetics is a critical independent prognostic factor in myelodysplastic syndromes (MDS). Conventional cytogenetics (CC) and fluorescence in situ hybridization (FISH) panel testing are extensively used for the prognostic stratification of MDS, although the FISH test is not yet a bona fide component of the International Prognostic Scoring System (IPSS). The present study compares the utility of CC and FISH in detecting chromosomal anomalies and in prognostic categorization. GTG-banding and FISH panel testing, specifically for -5/-5q, -7/-7q, +8 and -20q, were performed on whole blood or bone marrow samples from 136 patients with MDS. Chromosomal anomalies were found in 40 cases by CC, including three novel translocations. FISH identified at least one anomaly in 54/136 (39.7%) cases. More than one anomaly was found in 18/54 (33.3%) cases; overall, FISH identified 75 anomalies, of which 32 (42.6%) were undetected by CC. FISH provided additional information in cases with CC failure and in cases with a normal karyotype. Further, in ten cases with an abnormal karyotype, FISH identified additional anomalies, increasing the number of abnormalities per patient. Although CC is the gold standard in the cytogenetic profiling of MDS, FISH has proven to be an asset in identifying additional abnormalities. The number of anomalies per patient can predict the prognosis in MDS, and hence FISH contributed towards prognostic re-categorization. FISH panel testing should be used as an adjunct to CC, irrespective of the adequacy of the number of metaphases in CC, as it improves the prognostic classification of MDS. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim to use a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Unlike conventional computer-aided diagnosis (CAD) methods for pulmonary emphysema classification, in this paper the dictionary of textons is first learned by applying sparse representation (SR) to image patches in the training dataset. The SR coefficients of the test images over the dictionary are then used to construct histograms for texture representation. Finally, classification is performed using a nearest-neighbour classifier with a histogram dissimilarity measure as the distance. The proposed approach was tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate, and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and of the texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.

  16. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    Science.gov (United States)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Human Epithelial-2 (HEp-2) cell image staining pattern classification has been widely used to identify autoimmune diseases through the anti-nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time-consuming, subjective, and labor-intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Moreover, the available benchmark datasets are small, which makes them ill-suited to deep learning methods; this directly affects the accuracy of cell classification even after data augmentation. To address these issues, this paper presents a highly accurate automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases: image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  17. Significance and Application of Digital Breast Tomosynthesis for the BI-RADS Classification of Breast Cancer.

    Science.gov (United States)

    Cai, Si-Qing; Yan, Jian-Xiang; Chen, Qing-Shi; Huang, Mei-Ling; Cai, Dong-Lu

    2015-01-01

    Full-field digital mammography (FFDM) of dense breasts has a high rate of missed diagnosis, and digital breast tomosynthesis (DBT) can reduce tissue overlap and provide more reliable images for BI-RADS classification. This study aims to explore the application and significance of the COMBO mode (FFDM+DBT) for the BI-RADS classification of breast cancer. We selected 832 patients treated from May 2013 to November 2013. The FFDM and COMBO examinations were classified according to BI-RADS separately and compared, for the same patient, in terms of the judgment of gland content, the display of mass characteristics, and indirect signs. A paired Wilcoxon rank-sum test was applied to 79 breast cancer patients to find differences between the two examination methods. The results indicated that the COMBO mode is able to observe more details of gland distribution when estimating gland content. The paired Wilcoxon rank-sum test showed that the overall classification level of COMBO is significantly higher than that of FFDM for BI-RADS diagnosis and classification of the breast (PBI-RADS classification in breast cancer in clinical.

  18. Statistical methods for segmentation and classification of images

    DEFF Research Database (Denmark)

    Rosholm, Anders

    1997-01-01

    The central matter of the present thesis is Bayesian statistical inference applied to classification of images. An initial review of Markov Random Fields relates to the modeling aspect of the indicated main subject. In that connection, emphasis is put on the relatively unknown sub-class of Pickard...... with a Pickard Random Field modeling of a considered (categorical) image phenomenon. An extension of the fast PRF-based classification technique is presented. The modification introduces auto-correlation into the model of an involved noise process, which previously has been assumed independent. The suitability...... of the extended model is documented by tests on controlled image data containing auto-correlated noise....

  19. Exploiting the systematic review protocol for classification of medical abstracts.

    Science.gov (United States)

    Frunza, Oana; Inkpen, Diana; Matwin, Stan; Klement, William; O'Blenis, Peter

    2011-01-01

    To determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically whether the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers. The test collection is the data used in a large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As the machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest), but it is also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload. For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%.
The per-question method that combines classifiers following the specific protocol of the review leads to better
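    The complement naïve Bayes classifier used in these experiments can be sketched with scikit-learn on a tiny invented include/exclude abstract set; the texts and labels below are illustrative assumptions, not the review's data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import ComplementNB

# Tiny made-up screening set: 1 = include, 0 = exclude.
abstracts = [
    "home care services for elderly patients improve outcomes",
    "dissemination strategy for elderly health services evaluated",
    "crystal structure of a bacterial enzyme determined",
    "quantum dot synthesis at low temperature reported",
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(abstracts)

# ComplementNB estimates each class's parameters from the *complement*
# of that class, which helps with the imbalanced data typical of screening.
clf = ComplementNB().fit(X, labels)

test_doc = vec.transform(["health services for elderly people"])
print("predicted:", clf.predict(test_doc)[0])
```

    The per-question variant would train one such classifier per protocol question and combine their votes; the sketch shows only the single-classifier core.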

  20. CLASSIFICATION OF SPECIALTIES AND QUALIFICATIONS IN REPUBLIC OF BELARUS: TENDENCIES AND PROSPECTS

    Directory of Open Access Journals (Sweden)

    O. A. Oleks

    2016-01-01

    Full Text Available This publication gives brief information on the system of specialties and qualifications functioning in the Republic of Belarus, its features, and its scope of application. The purpose and problems of the revision of the National Classifier of the Republic of Belarus «Specialties and Qualifications» are described, along with its orientation toward reducing the gap between the content of education and the content of the professional activity of graduates of educational establishments. The main tendencies of change to the current classification are revealed: restructuring on the basis of types of economic activity and the International Standard Classification of Education, taking into account the requirements of employers; minimization of the economic costs of education, including through minimization of classification units; and convergence with the educational systems of other states. Prospects for the development of the national system of specialties and qualifications are disclosed. Tendencies and prospects of the expected changes are shown using examples of certain specialties offered by BNTU (Belarusian National Technical University).

  1. Unsupervised feature learning for autonomous rock image classification

    Science.gov (United States)

    Shu, Lei; McIsaac, Kenneth; Osinski, Gordon R.; Francis, Raymond

    2017-09-01

    Autonomous rock image classification can enhance the capability of robots for geological detection and increase scientific returns, both in investigations on Earth and in planetary surface exploration on Mars. Since rock texture images are usually inhomogeneous and hand-crafted features are not always reliable, we propose an unsupervised feature learning method to autonomously learn feature representations for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn a feature representation from a large database of unlabelled rock images of mixed classes. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We show experimental results supporting the feasibility of self-taught learning on rock images.

  2. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2008-11-01

    Full Text Available Abstract: This paper presents a new vehicle classification method and develops a traffic monitoring detector that provides reliable vehicle classification to aid traffic management systems. The basic principle of the approach is to measure the dynamic strain caused by vehicles crossing the pavement, obtain the corresponding vehicle parameters (wheelbase and number of axles) and then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. Given the special arrangement of the sensors and the different times at which a vehicle arrives at each sensor, the vehicle's speed can be estimated accurately, and from it the vehicle's wheelbase and number of axles. Because of measurement errors and vehicle characteristics, vehicle wheelbase patterns overlap considerably, so directly setting a fixed threshold for vehicle classification often yields low accuracy. Machine learning pattern recognition is regarded as one of the most effective tools for this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors and automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments are introduced in this paper. Two SVM classification algorithms (one-against-all, one-against-one) are used to classify single-sensor data and multiple-sensor combination data. Comparison of the two classification results shows that the accuracy is very close whether single or multiple data are used. Our results indicate that using multiclass SVM-based fusion of multiple sensor data significantly improves
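As a rough sketch of the multiclass SVM step described above (the one-against-all and one-against-one decompositions), scikit-learn exposes both strategies directly; the speed, wheelbase and axle-count values below are invented stand-ins, not the authors' sensor data:

```python
# Hypothetical vehicle features: [speed (km/h), wheelbase (m), number of axles].
import numpy as np
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_class(speed, wheelbase, axles, n=40):
    """Draw n synthetic vehicles around an invented class prototype."""
    return np.column_stack([rng.normal(speed, 5.0, n),
                            rng.normal(wheelbase, 0.2, n),
                            np.full(n, axles)])

# Five classes, from small vehicles to combination trucks (invented prototypes).
X = np.vstack([make_class(90, 2.5, 2), make_class(80, 3.5, 2),
               make_class(70, 4.5, 3), make_class(60, 6.0, 4),
               make_class(55, 8.0, 5)])
y = np.repeat(np.arange(5), 40)

svm = lambda: make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
ova = OneVsRestClassifier(svm()).fit(X, y)   # one-against-all
ovo = OneVsOneClassifier(svm()).fit(X, y)    # one-against-one

print("one-against-all accuracy:", ova.score(X, y))
print("one-against-one accuracy:", ovo.score(X, y))
```

On such well-separated synthetic classes the two strategies give nearly identical accuracy, mirroring the paper's observation that the two methods perform very closely.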

  3. Automated classification of cell morphology by coherence-controlled holographic microscopy

    Science.gov (United States)

    Strbkova, Lenka; Zicha, Daniel; Vesely, Pavel; Chmelik, Radim

    2017-08-01

    In the last few years, classification of cells by machine learning has become frequent in biology. However, most approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass; this may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, which enables quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, employing several supervised machine learning algorithms. Most of the classifiers performed better when quantitative phase features were employed. Based on the results, we conclude that the quantitative phase features played an important role in improving classification performance. The methodology could be a valuable help in refining the automated monitoring of live cells. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all the preconditions for accurate automated analysis of live cell behaviour while enabling noninvasive label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.

  4. Deep Galaxy: Classification of Galaxies based on Deep Convolutional Neural Networks

    OpenAIRE

    Khalifa, Nour Eldeen M.; Taha, Mohamed Hamed N.; Hassanien, Aboul Ella; Selim, I. M.

    2017-01-01

    In this paper, a deep convolutional neural network architecture for galaxy classification is presented. A galaxy can be classified based on its features into three main categories: elliptical, spiral, and irregular. The proposed Deep Galaxy architecture consists of 8 layers: one main convolutional layer for feature extraction with 96 filters, followed by two principal fully connected layers for classification. It is trained over 1356 images and achieved 97.272% testing accuracy. A c...

  5. Fundamental Frequency Extraction Method using Central Clipping and its Importance for the Classification of Emotional State

    Directory of Open Access Journals (Sweden)

    Pavol Partila

    2012-01-01

    Full Text Available The paper deals with the classification of emotional state. We implemented a method for extracting the fundamental frequency of the speech signal by means of central clipping and examined the correlation between emotional state and fundamental speech frequency. For this purpose, we applied exploratory data analysis. An ANOVA (analysis of variance) test confirmed that a change in the speaker's emotional state changes the fundamental frequency of the human vocal tract. The main contribution of the paper lies in this ANOVA-based investigation of the central clipping method.
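For illustration, a minimal numpy sketch of the extraction method named above: centre-clip the frame, then pick the dominant autocorrelation peak. This is our own reconstruction of the classic technique, not the authors' code, and the test frame is a synthetic harmonic signal rather than speech:

```python
import numpy as np

def fundamental_freq(signal, fs, clip_ratio=0.3, fmin=50.0, fmax=500.0):
    """Estimate F0 via central clipping followed by autocorrelation."""
    threshold = clip_ratio * np.max(np.abs(signal))
    # Central clipping: zero out small samples, shift the rest toward zero.
    clipped = np.where(signal > threshold, signal - threshold,
                       np.where(signal < -threshold, signal + threshold, 0.0))
    ac = np.correlate(clipped, clipped, mode="full")[len(clipped) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # search plausible pitch lags only
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 8000
t = np.arange(fs // 10) / fs                  # one 100 ms frame
frame = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
print(fundamental_freq(frame, fs))            # close to the true 120 Hz
```

Central clipping suppresses formant structure so that the autocorrelation peak at the pitch period stands out more clearly.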

  6. 32 CFR 2001.15 - Classification guides.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2001.15 Section 2001.15..., NATIONAL ARCHIVES AND RECORDS ADMINISTRATION CLASSIFIED NATIONAL SECURITY INFORMATION Classification § 2001.15 Classification guides. (a) Preparation of classification guides. Originators of classification...

  7. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified.
The user
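The two-model refinement loop described above can be sketched as follows; this is our own minimal reconstruction on synthetic data (a linear SVM as the fast model, an RBF SVM as the slow one, and the distance to the decision boundary as the confidence index), not the flight code:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
fast = LinearSVC(dual=False).fit(X, y)              # coarse, cheap model
slow = SVC(kernel="rbf", gamma="scale").fit(X, y)   # accurate, expensive model

baseline = fast.predict(X)                      # rapid first-pass labelling
confidence = np.abs(fast.decision_function(X))  # distance to the hyperplane

# Progressively refine, least-confident points first; in a real-time system
# this loop would stop whenever the resource budget runs out.
order = np.argsort(confidence)
refined = baseline.copy()
for i in order:
    refined[i] = slow.predict(X[i:i + 1])[0]

print("fraction of labels changed by refinement:", (refined != baseline).mean())
```

Stopping the loop early yields an intermediate classification that is already correct for the points where the fast model was confident, which is the "progressive rendering" analogy.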

  8. FACET CLASSIFICATIONS OF E-LEARNING TOOLS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2013-12-01

    Full Text Available The article deals with the classification of e-learning tools based on the facet method, which separates a parallel set of objects into independent classification groups; no rigid, pre-built classification structure with finite groups is assumed, and classification groups are formed by combining values taken from the relevant facets. An attempt to systematize the existing classifications of e-learning tools from the standpoint of classification theory is made for the first time. Modern Ukrainian and foreign facet classifications of e-learning tools are described, and their positive and negative features compared to classifications based on a hierarchical method are analyzed. An original facet classification of e-learning tools is proposed by the author.

  9. Potential of Different Optical and SAR Data in Forest and Land Cover Classification to Support REDD+ MRV

    Directory of Open Access Journals (Sweden)

    Laura Sirro

    2018-06-01

    Full Text Available The applicability of optical and synthetic aperture radar (SAR data for land cover classification to support REDD+ (Reducing Emissions from Deforestation and Forest Degradation MRV (measuring, reporting and verification services was tested on a tropical to sub-tropical test site. The 100 km by 100 km test site was situated in the State of Chiapas in Mexico. Land cover classifications were computed using RapidEye and Landsat TM optical satellite images and ALOS PALSAR L-band and Envisat ASAR C-band images. Identical sample plot data from Kompsat-2 imagery of one-metre spatial resolution were used for the accuracy assessment. The overall accuracy for forest and non-forest classification varied between 95% for the RapidEye classification and 74% for the Envisat ASAR classification. For more detailed land cover classification, the accuracies varied between 89% and 70%, respectively. A combination of Landsat TM and ALOS PALSAR data sets provided only 1% improvement in the overall accuracy. The biases were small in most classifications, varying from practically zero for the Landsat TM based classification to a 7% overestimation of forest area in the Envisat ASAR classification. Considering the pros and cons of the data types, we recommend optical data of 10 m spatial resolution as the primary data source for REDD MRV purposes. The results with L-band SAR data were nearly as accurate as the optical data but considering the present maturity of the imaging systems and image analysis methods, the L-band SAR is recommended as a secondary data source. The C-band SAR clearly has poorer potential than the L-band but it is applicable in stratification for a statistical sampling when other image types are unavailable.
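For reference, the two headline statistics quoted above, overall accuracy and forest-area bias, follow directly from a forest/non-forest confusion matrix against the sample plots; the counts below are invented for illustration:

```python
import numpy as np

# Rows: reference class (forest, non-forest); columns: mapped class.
confusion = np.array([[450,  50],     # 450 forest plots mapped as forest
                      [ 30, 470]])    # 470 non-forest plots mapped correctly

# Overall accuracy: correctly mapped plots over all plots.
overall_accuracy = np.trace(confusion) / confusion.sum()

mapped_forest = confusion[:, 0].sum()     # plots the map calls forest
reference_forest = confusion[0, :].sum()  # plots that truly are forest
bias = (mapped_forest - reference_forest) / reference_forest  # >0: overestimate

print(f"overall accuracy: {overall_accuracy:.0%}, forest-area bias: {bias:+.1%}")
```

With these invented counts the map is 92% accurate and underestimates forest area by 4%, the same two quantities the study reports per sensor.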

  10. Maxillectomy defects: a suggested classification scheme.

    Science.gov (United States)

    Akinmoladun, V I; Dosumu, O O; Olusanya, A A; Ikusika, O F

    2013-06-01

    The term "maxillectomy" has been used to describe a variety of surgical procedures for a spectrum of diseases involving a diverse anatomical site. Hence, classifications of maxillectomy defects have often made communication difficult. This article highlights this problem, emphasises the need for a uniform system of classification and suggests a classification system which is simple and comprehensive. Articles related to this subject, especially those with specified classifications of maxillary surgical defects were sourced from the internet through Google, Scopus and PubMed using the search terms maxillectomy defects classification. A manual search through available literature was also done. The review of the materials revealed many classifications and modifications of classifications from the descriptive, reconstructive and prosthodontic perspectives. No globally acceptable classification exists among practitioners involved in the management of diseases in the mid-facial region. There were over 14 classifications of maxillary defects found in the English literature. Attempts made to address the inadequacies of previous classifications have tended to result in cumbersome and relatively complex classifications. A single classification that is based on both surgical and prosthetic considerations is most desirable and is hereby proposed.

  11. Classification of hydration status using electrocardiogram and machine learning

    Science.gov (United States)

    Kaveh, Anthony; Chung, Wayne

    2013-10-01

    The electrocardiogram (ECG) has been used extensively in clinical practice for decades to non-invasively characterize the health of heart tissue; however, these techniques are limited to time-domain features. We propose a machine classification system using support vector machines (SVMs) that uses temporal and spectral information to classify health state beyond cardiac arrhythmias. Our method uses single-lead ECG to classify volume depletion (or dehydration) without the lengthy and costly blood analysis tests traditionally used for detecting dehydration status. Our method builds on established clinical ECG criteria for identifying electrolyte imbalances and lends itself to automated, computationally efficient implementation. The method was tested on the MIT-BIH PhysioNet database to validate this purely computational method for expedient disease-state classification. The results show high sensitivity, supporting use as a cost- and time-effective screening tool.
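A hedged sketch of the kind of temporal-plus-spectral feature pipeline the abstract describes: the signals below are synthetic sine mixtures standing in for single-lead records, and the specific feature set (simple time-domain statistics plus band powers) is our own guess, not the authors' criteria:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
fs = 250  # sampling rate, Hz

def features(x):
    """Temporal statistics plus coarse spectral band powers."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    bands = [spec[(freqs >= lo) & (freqs < hi)].sum()
             for lo, hi in [(0, 5), (5, 15), (15, 40)]]
    return np.array([x.mean(), x.std(), np.ptp(x), *bands])

def synth(f0, n=60):
    """n noisy sinusoids standing in for records of one health state."""
    t = np.arange(2 * fs) / fs
    return [np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)
            for _ in range(n)]

X = np.array([features(x) for x in synth(3.0) + synth(12.0)])
y = np.repeat([0, 1], 60)   # two hypothetical health states
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The point of the sketch is the shape of the pipeline: a fixed feature vector mixing time- and frequency-domain information, fed to an SVM, is cheap enough to run as an automated screening step.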

  12. Ultrasonic Testing

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyeong Jun; Kuk, Jeong Han

    2002-02-15

    This book introduces ultrasonic testing: an outline of ultrasonic testing, its principles, the properties of ultrasonic waves, radiographic versus ultrasonic testing, basic theory of ultrasonic testing, mode conversion, transmission and diffraction, ultrasonic flaw detection and probes, standard and reference test pieces per KS (JIS), ASME and ASTM, classification and properties of ultrasonic testing, the straight beam and angle beam methods, ASME Sec. V Art. 5, ASTM A388 and the Korean industrial standard KS B 0817.

  13. The new UN international framework classification for reserves/resources and its relation to uranium resource classification

    International Nuclear Information System (INIS)

    Barthel, F.H.; Kelter, D.

    2001-01-01

    to facilitate investments. The UN Framework Classification provides information about: the stage of geological assessment, subdivided into: Reconnaissance, Prospecting, General Exploration and Detailed Exploration; the stage of feasibility assessment, subdivided into: Geological Study, Prefeasibility Study and Feasibility Study/Mining Report; the degree of economic viability, subdivided into: Economic, Potentially Economic and Intrinsically Economic. The Mineral Reserve is defined as the economically extractable part of the Total Mineral Resource, demonstrated by feasibility assessment. A numerical codification of the eight resource classes available was introduced to facilitate the application. Due to many similarities to the classification of uranium resources used by the NEA and IAEA the new UN Framework Classification can be used to classify uranium resources. In general Reasonably Assured Resources of the lowest cost category (presently economically extractable amounts) are consistent with the UN term Proved Reserve. It is therefore hoped that the UN Framework, which now will be tested internationally for three years, will be accepted by all countries and for all mineral commodities including uranium. (author)

  14. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    Science.gov (United States)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method for emphysema based on textons learned via sparse representation (SR), with new feature histogram maps. First, an overcomplete dictionary of textons is learned via K-SVD from image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and thus speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used to characterize regions of interest (ROIs), instead of the conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with a distance-based histogram dissimilarity measure. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs covering mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.
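As a toy analogue of the texton stage (assumptions: scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, and random patches stand in for CT ROIs), sparse codes over a learned dictionary can be summarised as a texton-usage histogram per region:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 25))   # 500 flattened 5x5 patches (synthetic)

# Learn an overcomplete-style dictionary of "textons" and sparse-code each
# patch with at most 3 active atoms (orthogonal matching pursuit).
dico = MiniBatchDictionaryLearning(n_components=16,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)   # sparse coefficients per patch

# Feature histogram: how often each texton (atom) is used inside the ROI.
usage = (codes != 0).sum(axis=0)
histogram = usage / usage.sum()
print("histogram over", histogram.size, "textons, sums to", histogram.sum())
```

Two ROIs can then be compared by a histogram distance, which is the role the dissimilarity measure plays in the classifier above.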

  15. Constructing criticality by classification

    DEFF Research Database (Denmark)

    Machacek, Erika

    2017-01-01

    " in the bureaucratic practice of classification: Experts construct material criticality in assessments as they allot information on the materials to the parameters of the assessment framework. In so doing, they ascribe a new set of connotations to the materials, namely supply risk, and their importance to clean energy......, legitimizing a criticality discourse.Specifically, the paper introduces a typology delineating the inferences made by the experts from their produced recommendations in the classification of rare earth element criticality. The paper argues that the classification is a specific process of constructing risk....... It proposes that the expert bureaucratic practice of classification legitimizes (i) the valorisation that was made in the drafting of the assessment framework for the classification, and (ii) political operationalization when enacted that might have (non-)distributive implications for the allocation of public...

  16. 12 CFR 403.4 - Derivative classification.

    Science.gov (United States)

    2010-01-01

    ... SAFEGUARDING OF NATIONAL SECURITY INFORMATION § 403.4 Derivative classification. (a) Use of derivative classification. (1) Unlike original classification which is an initial determination, derivative classification... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Derivative classification. 403.4 Section 403.4...

  17. Unspecific chronic low back pain – a simple functional classification tested in a case series of patients with spinal deformities

    Directory of Open Access Journals (Sweden)

    Werkmann Mario

    2009-02-01

    Full Text Available Abstract Background To date, chronic low back pain without radicular symptoms has not been further classified, and is attributed in the international literature as being "unspecific". For specific bracing of this patient group we use simple physical tests to predict the brace type the patient is most likely to benefit from. Based on these physical tests we have developed a simple functional classification of "unspecific" low back pain in patients with spinal deformities. Methods Between January 2006 and July 2007 we tested 130 patients (116 females and 14 males) with spinal deformities (average age 45 years, range 14 to 69) and chronic unspecific low back pain (pain for > 24 months), along with the indication for brace treatment for chronic unspecific low back pain. Some of the patients had symptoms of spinal claudication (n = 16). The "sagittal realignment test" (SRT), a lumbar hyperextension test, was applied, as was the "sagittal delordosation test" (SDT). Additionally, 3 female patients with spondylolisthesis were tested, including one with symptoms of spinal claudication; 2 of these patients were 14 years of age and the other 43 at the time of testing. Results 117 patients reported significant pain relief in the SRT and 13 in the SDT (≥ 2 steps on the Roland & Morris VRS). 3 patients had no significant pain relief in either test. Pain intensity was high (3.29) before performing the physical tests (VRS scale 0–5) and low (1.37) while performing them, for the whole sample of patients. The differences were highly significant in the Wilcoxon test (z = -3.79; p In the 16 patients who did not respond to the SRT in the manual investigation we found hypermobility at L5/S1 or a spondylolisthesis at level L5/S1. In the other patients, who responded well to the SRT, loss of lumbar lordosis was the main issue, a finding which, according to the scientific literature, correlates well with low back pain. The 3 patients who did not

  18. Supernova Photometric Lightcurve Classification

    Science.gov (United States)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from simulated Dark Energy Survey (DES) data created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis in reducing the dimensionality of the dataset for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure of merit for the DES data in the r band was 60% for binary classification (Type I vs. Type II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
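A minimal sketch of the poster's pipeline on synthetic light curves (not DES data): a PCA dimensionality-reduction step feeding a Random Forest for the binary Type I vs. Type II split. The light-curve model and class parameters below are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n, length = 300, 50   # 300 fake light curves sampled at 50 epochs

def light_curve(peak, decay):
    """Noisy exponential-decline stand-in for an r-band light curve."""
    t = np.arange(length)
    return peak * np.exp(-t / decay) + 0.05 * rng.normal(size=length)

X = np.array([light_curve(1.0, 8.0) for _ in range(n // 2)] +   # "Type I"-like
              [light_curve(0.6, 20.0) for _ in range(n // 2)])  # "Type II"-like
y = np.repeat([0, 1], n // 2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=5),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

Reducing a 50-epoch curve to 5 principal components before the forest mirrors the dimensionality-reduction experiment mentioned in the poster.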

  19. Lenke and King classification systems for adolescent idiopathic scoliosis: interobserver agreement and postoperative results.

    Science.gov (United States)

    Hosseinpour-Feizi, Hojjat; Soleimanpour, Jafar; Sales, Jafar Ganjpour; Arzroumchilar, Ali

    2011-01-01

    The aim of this study was to investigate the interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis, and to compare the results of surgery performed based on classification of the scoliosis according to each of these systems. The study was conducted in Shohada Hospital in Tabriz, Iran, between 2009 and 2010. First, a reliability assessment was undertaken to assess interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis. Second, the postoperative efficacy and safety of surgery performed based on the Lenke and King classifications were compared. Kappa coefficients of agreement were calculated to assess agreement. Outcomes were compared using bivariate tests and repeated-measures analysis of variance. A low to moderate interobserver agreement was observed for the King classification; the Lenke classification yielded mostly high agreement coefficients. The outcome of surgery was not found to be substantially different between the two systems. Based on the results, the Lenke classification seems advantageous, considering its greater detail on curvature in the different anatomical planes (allowing the severity of scoliosis to be described precisely), its higher interobserver agreement scores, and postoperative results noninferior to those obtained with the King classification.
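The kappa coefficient used above to quantify interobserver agreement is available directly in scikit-learn; a small example with invented ratings from two observers:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications of 12 curves by two observers (categories 1-3).
observer_a = [1, 1, 2, 2, 3, 3, 1, 2, 3, 1, 2, 3]
observer_b = [1, 1, 2, 2, 3, 3, 1, 2, 3, 2, 2, 1]

# Cohen's kappa corrects raw agreement for agreement expected by chance:
# 10/12 raw agreement here, but chance agreement is 1/3, so kappa = 0.75.
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"kappa = {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance level
```

Conventionally, values around 0.4-0.6 are read as "moderate" and above 0.8 as "high" agreement, which is the scale behind the low-to-moderate vs. mostly-high contrast reported for King and Lenke.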

  20. Project implementation : classification of organic soils and classification of marls - training of INDOT personnel.

    Science.gov (United States)

    2012-09-01

    This is an implementation project for the research completed as part of the following projects: SPR3005, Classification of Organic Soils, and SPR3227, Classification of Marl Soils. The methods developed for the classification of both soi...

  1. Gender differences of athletes in different classification groups of sports and sport disciplines

    Directory of Open Access Journals (Sweden)

    Olena Tarasevych

    2016-04-01

    Full Text Available Purpose: to identify the percentage of masculine, androgynous and feminine personalities in different classification groups of sports and sports disciplines, depending on sport qualification. Material & Methods: the study was conducted at the Kharkiv State Academy of Physical Culture among students representing different sports and holding different levels of athletic skill, using analysis and compilation of the scientific and methodical literature, a survey, S. Bem's "masculinity/femininity" test procedure, and statistical processing of the data. Results: based on S. Bem's testing method, the percentages of masculine, androgynous and feminine personalities among male and female athletes in the various sports classification groups were established, depending on their athletic skill. Conclusions: no feminine personalities were revealed among male and female athletes in the various classification groups of sports; masculine personalities predominate in sports among both men and women; androgyny manifests differently in men and women.

  2. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that the changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  3. 45 CFR 601.5 - Derivative classification.

    Science.gov (United States)

    2010-10-01

    ... CLASSIFICATION AND DECLASSIFICATION OF NATIONAL SECURITY INFORMATION § 601.5 Derivative classification. Distinct... 45 Public Welfare 3 2010-10-01 2010-10-01 false Derivative classification. 601.5 Section 601.5... classification guide, need not possess original classification authority. (a) If a person who applies derivative...

  4. Examination of a size-change test for photovoltaic encapsulation materials

    Science.gov (United States)

    Miller, David C.; Gu, Xiaohong; Ji, Liang; Kelly, George; Nickel, Nichole; Norum, Paul; Shioda, Tsuyoshi; Tamizhmani, Govindasamy; Wohlgemuth, John H.

    2012-10-01

    We examine a proposed test standard that can be used to evaluate the maximum representative change in linear dimensions of sheet encapsulation products for photovoltaic modules (resulting from their thermal processing). The proposed protocol is part of a series of material-level tests being developed within Working Group 2 of the Technical Committee 82 of the International Electrotechnical Commission. The characterization tests are being developed to aid module design (by identifying the essential characteristics that should be communicated on a datasheet), quality control (via internal material acceptance and process control), and failure analysis. Discovery and interlaboratory experiments were used to select particular parameters for the size-change test. The choice of a sand substrate and aluminum carrier is explored relative to other options. The temperature uniformity of ±5 °C for the substrate was confirmed using thermography. Considerations related to the heating device (hot-plate or oven) are explored. The time duration of 5 minutes was identified from the time-series photographic characterization of material specimens (EVA, ionomer, PVB, TPO, and TPU). The test procedure was revised to account for observed effects of size and edges. The interlaboratory study identified typical size-change characteristics, and also verified the absolute reproducibility of ±5% between laboratories.

  5. Application of Convolution Neural Network to the forecasts of flare classification and occurrence using SOHO MDI data

    Science.gov (United States)

    Park, Eunsu; Moon, Yong-Jae

    2017-08-01

    A Convolutional Neural Network (CNN) is one of the best-known deep-learning methods in image processing and computer vision. In this study, we apply CNNs to two kinds of flare forecasting models: flare classification and flare occurrence. For this, we consider several pre-trained models (e.g., AlexNet, GoogLeNet, and ResNet) and customize them by changing several options such as the number of layers, the activation function, and the optimizer. Our inputs are the same number of SOHO/MDI images for each flare class (None, C, M and X) at 00:00 UT from Jan 1996 to Dec 2010 (1600 images in total). Outputs are the results of daily flare forecasting for flare class and occurrence. We build, train, and test the models on TensorFlow, a well-known machine learning software library developed by Google. Our major results from this study are as follows. First, most of the models have accuracies above 0.7. Second, ResNet, developed by Microsoft, has the best accuracies: 0.86 for flare classification and 0.84 for flare occurrence. Third, the accuracies of these models vary greatly with changing parameters. We discuss several possibilities to improve the models.
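These are not the authors' TensorFlow models, but the basic CNN building blocks they tune (convolution, activation, pooling) can be illustrated in a few lines of numpy, applied here to a random array standing in for a small magnetogram patch:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation (the CNN 'convolution' layer)."""
    kh, kw = kernel.shape
    out = np.empty((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols are truncated."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
patch = rng.normal(size=(16, 16))                  # stand-in magnetogram patch
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # a hand-written filter

feature_map = max_pool(relu(conv2d(patch, edge_kernel)))
print(feature_map.shape)   # (7, 7)
```

Stacking many such filtered-and-pooled maps, with the filter weights learned rather than hand-written, is what distinguishes the pre-trained architectures (AlexNet, GoogLeNet, ResNet) compared in the study.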

  6. Effect of e-learning program on risk assessment and pressure ulcer classification - A randomized study.

    Science.gov (United States)

    Bredesen, Ida Marie; Bjøro, Karen; Gunningberg, Lena; Hofoss, Dag

    2016-05-01

    Pressure ulcers (PUs) are a problem in health care. Staff competency is paramount to PU prevention. Education is essential to increase skills in pressure ulcer classification and risk assessment. Currently, no pressure ulcer learning programs are available in Norwegian. Develop and test an e-learning program for assessment of pressure ulcer risk and pressure ulcer classification. Forty-four nurses working in acute care hospital wards or nursing homes participated and were assigned randomly into two groups: an e-learning program group (intervention) and a traditional classroom lecture group (control). Data was collected immediately before and after training, and again after three months. The study was conducted at one nursing home and two hospitals between May and December 2012. Accuracy of risk assessment (five patient cases) and pressure ulcer classification (40 photos [normal skin, pressure ulcer categories I-IV] split in two sets) were measured by comparing nurse evaluations in each of the two groups to a pre-established standard based on ratings by experts in pressure ulcer classification and risk assessment. Inter-rater reliability was measured by exact percent agreement and multi-rater Fleiss kappa. A Mann-Whitney U test was used for continuous sum score variables. An e-learning program did not improve Braden subscale scoring. For pressure ulcer classification, however, the intervention group scored significantly higher than the control group on several of the categories in post-test immediately after training. However, after three months there were no significant differences in classification skills between the groups. An e-learning program appears to have a greater effect on the accuracy of pressure ulcer classification than classroom teaching in the short term. For proficiency in Braden scoring, no significant effect of educational methods on learning results was detected. Copyright © 2016 Elsevier Ltd. All rights reserved.
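The multi-rater Fleiss kappa reported above can be computed compactly; the implementation and the ratings below are our own sketch, not the study's data:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa. counts[i, j] = raters assigning item i to category j;
    every item must be rated by the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                   # raters per item
    p_j = counts.sum(axis=0) / counts.sum()     # overall category proportions
    p_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# 5 photographs, 4 raters, 3 categories (e.g. normal skin, PU cat. I, PU cat. II).
ratings = [[4, 0, 0],
           [0, 4, 0],
           [2, 2, 0],
           [0, 0, 4],
           [1, 3, 0]]
print(round(fleiss_kappa(ratings), 3))
```

Unlike exact percent agreement, the statistic discounts the agreement raters would reach by chance, which is why the study reports both measures side by side.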

  7. Learning features for tissue classification with the classification restricted Boltzmann machine

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2014-01-01

    Performance of automated tissue classification in medical imaging depends on the choice of descriptive features. In this paper, we show how restricted Boltzmann machines (RBMs) can be used to learn features that are especially suited for texture-based tissue classification. We introduce the convo...... outperform conventional RBM-based feature learning, which is unsupervised and uses only a generative learning objective, as well as often-used filter banks. We show that a mixture of generative and discriminative learning can produce filters that give a higher classification accuracy....

  8. Classification of innovations: approaches and consequences

    Directory of Open Access Journals (Sweden)

    Jakub Tabas

    2011-01-01

    Full Text Available Currently, innovations are perceived as the lifeblood of businesses. Even though innovations have the potential to transform companies or entire industries, they are highly risky; at the same time, they have become a necessity for companies' development and survival on the markets. In the theory, it is difficult to find a comprehensive definition of innovation, and settling on a general definition becomes ever more difficult with the growing number of domains in which innovations, or possible innovations, appear in the form of added value to something that already exists. The definition of innovation has gone through a long process of development: from the early definition of Schumpeter, who connected innovation especially with changes in products or production processes, to recent definitions based on the added value for society. One possible approach to defining the content of innovation is to base the definition on a classification of innovations. In this article, the authors analyze existing classifications of innovations in order to define the general content of innovation, and thereby to confirm (or reject) the definition of innovation derived in their previous work, where they state that innovation is a change that leads to gaining profit for an individual, a business entity, or society, where the profit is not only the accounting profit but the economic profit. The article is based especially on secondary research; the authors employ the method of analysis to confront various classification-based definitions of innovation, together with comparison and synthesis.

  9. Track classification within wireless sensor network

    Science.gov (United States)

    Doumerc, Robin; Pannetier, Benjamin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2017-05-01

    In this paper, we present our study on track classification that takes into account environmental information and target estimated states. The tracker uses several motion models adapted to different target dynamics (pedestrian, ground vehicle and SUAV, i.e. small unmanned aerial vehicle) and works in a centralized architecture. The main idea is to exploit both the classification given by heterogeneous sensors and the classification obtained with our fusion module. The fusion module, presented in this paper, assigns a class to each track according to track location, velocity and associated uncertainty. To model the likelihood of each class, a fuzzy approach is used, considering constraints on the target's capability to move in the environment. Then an evidential reasoning approach based on Dempster-Shafer Theory (DST) is used to perform a time integration of this classifier output. The fusion rules are tested and compared on real data obtained with our wireless sensor network. In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of this system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).
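    The record refers to Dempster-Shafer time integration of classifier outputs without detailing the combination step. A minimal sketch of Dempster's rule of combination (the class labels and dictionary representation are illustrative assumptions, not the paper's implementation):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions via Dempster's rule.

    m1, m2: dicts mapping frozenset hypotheses (focal elements) to masses
    that each sum to 1. Conflicting mass is redistributed by normalization.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}
```

    Time integration then amounts to folding each new observation's mass function into the running belief with this rule.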

  10. Feasibility of using training cases from International Spinal Cord Injury Core Data Set for testing of International Standards for Neurological Classification of Spinal Cord Injury items

    DEFF Research Database (Denmark)

    Liu, N; Hu, Z W; Zhou, M W

    2014-01-01

    STUDY DESIGN: Descriptive comparison analysis. OBJECTIVE: To evaluate whether five training cases of International Spinal Cord Injury Core Data Set (ISCICDS) are appropriate for testing the facts within the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI...... include information about zone of partial preservation, sensory score or motor score. CONCLUSION: Majority of the facts related to SL, ML and AIS are included in the five training cases of ISCICDS. Thus, using these training cases, it is feasible to test the above facts within the ISNCSCI. It is suggested...

  11. Latent Partially Ordered Classification Models and Normal Mixtures

    Science.gov (United States)

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  12. Lidar-based individual tree species classification using convolutional neural network

    Science.gov (United States)

    Mizoguchi, Tomohiro; Ishii, Akira; Nakamura, Hiroyuki; Inoue, Tsuyoshi; Takamatsu, Hisashi

    2017-06-01

    Terrestrial lidar is commonly used for detailed documentation in the field of forest inventory investigation. Recent improvements in point cloud processing techniques have enabled efficient and precise computation of individual tree shape parameters, such as breast-height diameter, height, and volume. However, tree species are still manually specified by skilled workers. Previous works on automatic tree species classification mainly focused on aerial or satellite images, and few works have been reported on classification techniques using ground-based sensor data. Several candidate sensors can be considered for classification, such as RGB or multi/hyperspectral cameras. Among these candidates, we use terrestrial lidar because it can obtain a high-resolution point cloud even in a dark forest. We selected bark texture as the classification criterion, since it clearly represents the unique characteristics of each tree and does not change its appearance under seasonal variation and aging. In this paper, we propose a new method for automatic individual tree species classification based on terrestrial lidar using a Convolutional Neural Network (CNN). The key component is the creation of a depth image that describes well the characteristics of each species from a point cloud. We focus on Japanese cedar and cypress, which cover a large part of the domestic forest. Our experimental results demonstrate the effectiveness of the proposed method.
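    The key step named in the record is converting a point cloud to a depth image fed to the CNN. The paper's projection details are not given, so the following is only a generic sketch of such a rasterization (grid resolution, bounds, and the nearest-depth rule are our assumptions):

```python
import math

def depth_image(points, width, height, x_range, y_range):
    """Rasterize a 3D point cloud into a 2D depth image.

    points: iterable of (x, y, z); z is depth from the scanner.
    Each pixel keeps the smallest depth that falls into it (the
    nearest surface, e.g. the bark); empty pixels stay at +inf.
    """
    (x_min, x_max), (y_min, y_max) = x_range, y_range
    img = [[math.inf] * width for _ in range(height)]
    for x, y, z in points:
        if not (x_min <= x < x_max and y_min <= y < y_max):
            continue  # point outside the image footprint
        col = int((x - x_min) / (x_max - x_min) * width)
        row = int((y - y_min) / (y_max - y_min) * height)
        img[row][col] = min(img[row][col], z)
    return img
```

    The resulting grid can then be normalized and passed to a CNN like any grayscale image.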

  13. Automated authorship attribution using advanced signal classification techniques.

    Directory of Open Access Journals (Sweden)

    Maryam Ebrahimpour

    Full Text Available In this paper, we develop two automated authorship attribution schemes, one based on Multiple Discriminant Analysis (MDA) and the other based on a Support Vector Machine (SVM). The classification features we exploit are based on word frequencies in the text. We preprocess each text by stripping it of all characters except a-z and space, in order to increase the portability of the software to different types of texts. We test the methodology on a corpus of undisputed English texts, and use leave-one-out cross-validation to demonstrate classification accuracies in excess of 90%. We further test our methods on the Federalist Papers, which have a partly disputed authorship and a fair degree of scholarly consensus. Finally, we apply our methodology to the question of the authorship of the Letter to the Hebrews by comparing it against a number of original Greek texts of known authorship. These tests identify where some of the limitations lie, motivating a number of open questions for future work. An open source implementation of our methodology is freely available for use at https://github.com/matthewberryman/author-detection.
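    The preprocessing and feature-extraction steps described in the record (strip to a-z and space, then compute word frequencies) can be sketched as follows; the function names and the relative-frequency choice are illustrative assumptions, and the authors' actual code is at the linked repository:

```python
import re
from collections import Counter

def preprocess(text):
    """Lowercase, replace everything outside a-z and space with a space,
    and collapse runs of whitespace."""
    cleaned = re.sub(r"[^a-z ]", " ", text.lower())
    return " ".join(cleaned.split())

def word_frequencies(text, vocabulary):
    """Relative frequency of each vocabulary word in the preprocessed text,
    usable as a feature vector for MDA or an SVM."""
    words = preprocess(text).split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in vocabulary]
```

    With a fixed vocabulary of common function words, each text becomes a fixed-length vector suitable for any standard classifier.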

  14. Classification of smooth Fano polytopes

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    A simplicial lattice polytope containing the origin in the interior is called a smooth Fano polytope, if the vertices of every facet is a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth...... Fano polytopes up to isomorphism. A smooth Fano -polytope can have at most vertices. In case of vertices an explicit classification is known. The thesis contains the classification in case of vertices. Classifications of smooth Fano -polytopes for fixed exist only for . In the thesis an algorithm...... for the classification of smooth Fano -polytopes for any given is presented. The algorithm has been implemented and used to obtain the complete classification for ....

  15. A proposal of criteria for the classification of systemic sclerosis.

    Science.gov (United States)

    Nadashkevich, Oleg; Davis, Paul; Fritzler, Marvin J

    2004-11-01

    Sensitive and specific criteria for the classification of systemic sclerosis are required by clinicians and investigators to achieve higher-quality clinical studies and approaches to therapy. A clinical study of systemic sclerosis patients in Europe and Canada led to a set of criteria that achieves high sensitivity and specificity. Both clinical and laboratory investigations were undertaken of patients with systemic sclerosis, related conditions, and diseases with clinical features that can be mistaken for part of the systemic sclerosis spectrum. Laboratory investigations included the detection of autoantibodies to centromere proteins, Scl-70 (topoisomerase I), and fibrillarin (U3-RNP). Based on the investigation of 269 systemic sclerosis patients and 720 patients presenting with related and confounding conditions, the following set of criteria for the classification of systemic sclerosis was proposed: 1) autoantibodies to centromere proteins, Scl-70 (topo I), or fibrillarin; 2) bibasilar pulmonary fibrosis; 3) contractures of the digital joints or prayer sign; 4) dermal thickening proximal to the wrists; 5) calcinosis cutis; 6) Raynaud's phenomenon; 7) esophageal distal hypomotility or reflux esophagitis; 8) sclerodactyly or non-pitting digital edema; 9) telangiectasias. The classification of definite SSc requires at least three of the above criteria. Preliminary testing has defined the sensitivity and specificity of these criteria as high as 99% and 100%, respectively. Testing and validation of the proposed criteria by other clinical centers is required.
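    The decision rule in the record ("definite SSc requires at least three of the nine criteria") is simple enough to express directly; the identifier names below are our own shorthand for the listed criteria, not terminology from the paper:

```python
# Shorthand labels for the nine proposed criteria (our naming, for illustration)
SSC_CRITERIA = [
    "autoantibodies",              # to centromere proteins, Scl-70, or fibrillarin
    "bibasilar_pulmonary_fibrosis",
    "digital_joint_contractures",  # or prayer sign
    "proximal_dermal_thickening",  # proximal to the wrists
    "calcinosis_cutis",
    "raynauds_phenomenon",
    "esophageal_hypomotility",     # or reflux esophagitis
    "sclerodactyly",               # or non-pitting digital edema
    "telangiectasias",
]

def classify_definite_ssc(findings):
    """Definite systemic sclerosis requires at least three of the nine criteria."""
    present = [c for c in SSC_CRITERIA if c in findings]
    return len(present) >= 3
```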

  16. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders of magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm is presented that offers algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supports performance-scalability, by exploiting the inherent parallelism of volumetric data. A two-stage variant of the classical Hough transform is used for ellipse detection, and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.
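    The record names a two-stage Hough transform for ellipse detection without describing it. One common two-stage formulation (in the style of Xie and Ji; the paper's exact variant is not specified, so this is only an assumed illustration) first hypothesizes a major axis from a point pair and then lets the remaining points vote for the minor half-axis length:

```python
import math

def minor_axis_votes(points, p1, p2, bins=50, b_max=None):
    """Second Hough stage: given a candidate major axis with endpoints
    p1, p2, accumulate votes for the minor half-axis length b from the
    remaining edge points. A peak in the accumulator confirms an ellipse."""
    cx, cy = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    a = math.dist(p1, p2) / 2          # major half-axis from the hypothesis
    b_max = b_max or a
    acc = [0] * bins
    for p in points:
        d = math.dist(p, (cx, cy))
        if not 0 < d < a:
            continue                    # cannot lie on this ellipse
        f = math.dist(p, p2)
        # Angle at the center between the point and the major axis
        cos_tau = (a * a + d * d - f * f) / (2 * a * d)
        sin2 = max(0.0, 1.0 - cos_tau * cos_tau)
        denom = a * a - d * d * cos_tau * cos_tau
        if denom <= 0:
            continue
        b = math.sqrt(a * a * d * d * sin2 / denom)
        if 0 < b < b_max:
            acc[int(b / b_max * bins)] += 1
    return acc
```

    For points actually on an ellipse with the hypothesized major axis, every vote lands near the true b, producing a sharp accumulator peak.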

  17. A Parallel Adaboost-Backpropagation Neural Network for Massive Image Dataset Classification

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao; Wang, Min; Shi, Hao; Tian, Yun

    2016-01-01

    Image classification uses computers to simulate human understanding and cognition of images by automatically categorizing images. This study proposes a faster image classification approach that parallelizes the traditional Adaboost-Backpropagation (BP) neural network using the MapReduce parallel programming model. First, we construct a strong classifier by assembling the outputs of 15 BP neural networks (which are individually regarded as weak classifiers) based on the Adaboost algorithm. Second, we design Map and Reduce tasks for both the parallel Adaboost-BP neural network and the feature extraction algorithm. Finally, we establish an automated classification model by building a Hadoop cluster. We use the Pascal VOC2007 and Caltech256 datasets to train and test the classification model. The results are superior to those obtained using traditional Adaboost-BP neural network or parallel BP neural network approaches. Our approach increased the average classification accuracy rate by approximately 14.5% and 26.0% compared to the traditional Adaboost-BP neural network and parallel BP neural network, respectively. Furthermore, the proposed approach requires less computation time and scales very well as evaluated by speedup, sizeup and scaleup. The proposed approach may provide a foundation for automated large-scale image classification and demonstrates practical value. PMID:27905520
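    The record's strong classifier assembles weak BP-network classifiers via Adaboost. A minimal serial sketch of the Adaboost mechanics, with 1-D decision stumps standing in for the paper's BP neural networks (the MapReduce/Hadoop parallelization is omitted):

```python
import math

def train_adaboost(X, y, rounds=15):
    """AdaBoost with 1-D decision stumps as weak classifiers; labels are +1/-1."""
    n = len(X)
    w = [1.0 / n] * n                       # uniform sample weights
    ensemble = []                           # (alpha, threshold, polarity) triples
    for _ in range(rounds):
        best = None
        for thresh in X:                    # candidate stump thresholds
            for polarity in (1, -1):
                preds = [polarity if x < thresh else -polarity for x in X]
                err = sum(wi for wi, p, t in zip(w, preds, y) if p != t)
                if best is None or err < best[0]:
                    best = (err, thresh, polarity, preds)
        err, thresh, polarity, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err) # weak-classifier weight
        ensemble.append((alpha, thresh, polarity))
        # Re-weight: emphasize the samples this stump got wrong
        w = [wi * math.exp(-alpha * t * p) for wi, p, t in zip(w, preds, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all weak classifiers."""
    score = sum(alpha * (pol if x < th else -pol) for alpha, th, pol in ensemble)
    return 1 if score >= 0 else -1
```

    In the paper's setting, each of the 15 weak learners is a BP network rather than a stump, and the weight updates are distributed as Map and Reduce tasks.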

  18. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
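    The quantity being maximized is the mutual information between classification responses and true labels. For discrete responses it can be estimated directly from co-occurrence counts (the paper works with an entropy-based estimate inside a differentiable objective; this sketch only illustrates the plug-in empirical estimator):

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Empirical mutual information (in nats) between discrete classifier
    responses and true class labels, from co-occurrence counts."""
    n = len(labels)
    joint = Counter(zip(responses, labels))   # joint counts c(r, l)
    n_r = Counter(responses)                  # marginal counts c(r)
    n_l = Counter(labels)                     # marginal counts c(l)
    mi = 0.0
    for (r, l), c in joint.items():
        p_rl = c / n
        # log [ p(r,l) / (p(r) p(l)) ]
        mi += p_rl * math.log(p_rl * n * n / (n_r[r] * n_l[l]))
    return mi
```

    Responses that perfectly determine the label give MI equal to the label entropy; responses independent of the label give MI of zero.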

  19. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  20. 1968 Prototype Diagnostic Test.

    Science.gov (United States)

    Veterans Administration Hospital, Bedford, MA.

    This true-false diagnostic test was used for pretesting of employees at a Veterans Administration Hospital. The test is comprised of 20 items. An alternate test--Classification Questionnaire--was used for testing after remedial training. (For related document, see TM 002 334.) (DB)