Sample records for achieved classification error

  1. A classification of prescription errors.

    Neville, R.G.; Robertson, F.; Livingstone, S.; Crombie, I.K.


    Three independent methods of study of prescription errors led to the development of a classification of errors based on the potential effects and inconvenience to patients, pharmacists and doctors. Four types of error are described: type A (potentially serious to the patient); type B (major nuisance - pharmacist/doctor contact required); type C (minor nuisance - pharmacist must use professional judgement); and type D (trivial). The types and frequencies of errors are detailed for a group of eight pr...

  2. Human error classification and data collection

    Analysis of human error data requires human error classification. As the human factors/reliability field has developed, so too has the topic of human error classification. Classifications vary considerably depending on whether they have been developed from a theoretical, psychological approach to understanding human behavior or error, or from an empirical, practical approach. The latter approach is often adopted by nuclear power plants that need to make practical improvements as soon as possible. This document reviews aspects of human error classification and data collection in order to show where potential improvements could be made. It attempts to show why there are problems with human error classification and data collection schemes, and why these problems will not be easy to resolve. The Annex of this document contains the papers presented at the meeting. A separate abstract was prepared for each of these 12 papers. Refs, figs and tabs

  3. Analysis of thematic map classification error matrices.

    Rosenfield, G.H.


    The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method. (from Author)
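As a rough illustration of the error matrix described above (not the paper's linear-model method), the sketch below cross-tabulates classified against verified category labels and reads overall agreement off the diagonal. The labels and counts are invented for the example.

```python
def error_matrix(classified, verified, n_categories):
    """Cross-tabulate classified vs. verified category labels.

    Rows index the classified category, columns the verified one;
    diagonal cells count agreement, off-diagonal cells disagreement.
    """
    m = [[0] * n_categories for _ in range(n_categories)]
    for c, v in zip(classified, verified):
        m[c][v] += 1
    return m

# Hypothetical labels for eight map sites, three thematic categories.
classified = [0, 0, 1, 1, 2, 2, 2, 1]
verified   = [0, 1, 1, 1, 2, 0, 2, 1]
m = error_matrix(classified, verified, 3)
# Overall agreement = diagonal total / number of sites.
overall_agreement = sum(m[i][i] for i in range(3)) / len(classified)
```

Kappa and other agreement statistics can be computed from the same matrix once it is built.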

  4. Classification error of the thresholded independence rule

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored to a...
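A minimal sketch of a thresholded independence rule, under the usual reading of the term: a diagonal (independence) discriminant in which only variables whose standardized mean difference exceeds a threshold contribute to the rule. The data and threshold below are illustrative, not from the paper.

```python
import math

def fit_thresholded_independence(X1, X2, threshold):
    """Fit a two-group independence rule with hard thresholding.

    The independence rule ignores covariances and uses per-variable
    pooled variances; thresholding zeroes out variables whose
    standardized mean difference is below `threshold`.
    """
    p, n1, n2 = len(X1[0]), len(X1), len(X2)
    mu1 = [sum(x[j] for x in X1) / n1 for j in range(p)]
    mu2 = [sum(x[j] for x in X2) / n2 for j in range(p)]
    var = []
    for j in range(p):
        ss = (sum((x[j] - mu1[j]) ** 2 for x in X1)
              + sum((x[j] - mu2[j]) ** 2 for x in X2))
        var.append(ss / (n1 + n2 - 2))          # pooled variance
    keep = [j for j in range(p)
            if abs(mu1[j] - mu2[j]) / math.sqrt(var[j]) > threshold]
    return mu1, mu2, var, keep

def classify(x, mu1, mu2, var, keep):
    """Positive score -> group 1, negative -> group 2."""
    score = sum((mu1[j] - mu2[j]) * (x[j] - (mu1[j] + mu2[j]) / 2) / var[j]
                for j in keep)
    return 1 if score > 0 else 2

# Toy data: variable 0 separates the groups, variables 1-2 are noise.
X1 = [[2.0, 0.1, 5.0], [2.2, -0.1, 4.8], [1.8, 0.0, 5.2]]
X2 = [[0.0, 0.0, 5.1], [0.2, 0.1, 4.9], [-0.2, -0.1, 5.0]]
mu1, mu2, var, keep = fit_thresholded_independence(X1, X2, threshold=1.0)
```

With p much larger than n, this thresholding is what keeps the noise variables from swamping the informative ones.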

  5. Correcting Classification Error in Income Mobility

    Jesús Pérez Mayo; M.A. Fajardo Caldera


    The observed mobility of a categorical variable can mix two different components: true movement and measurement (classification) error. For instance, observed transitions can hide real immobility, the apparent changes being caused by measurement error. The Latent Mixed Markov Model is proposed in this paper to solve this problem. Income mobility is a well-known example of categorical-variable mobility in Economics, so the authors think that the Latent Mixed Markov Model is a good op...

  6. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Sun Yanni


    Background: Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results: We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions: HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at and at

  7. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik


    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques, and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  8. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Muhsin Hassan


    The aim of this study is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine (SVM) classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive White Gaussian Noise was added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM, and a hybrid of Kalman Filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combination in a single system produces higher accuracy than the other two techniques.
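The denoising stage of such a hybrid can be sketched with a scalar Kalman filter; this is an illustrative assumption, since the paper does not give its filter model. Here the corrosion-depth feature is treated as a slowly varying level observed in additive white Gaussian noise, and the filtered values would then be passed to the SVM.

```python
def kalman_denoise(z, process_var=1e-4, meas_var=0.5):
    """Scalar Kalman filter for a near-constant level in AWGN.

    z is the noisy measurement sequence; returns the sequence of
    filtered state estimates (the classifier's cleaned-up inputs).
    """
    x, p = z[0], 1.0               # initial state estimate and variance
    out = []
    for zk in z:
        p += process_var           # predict: state assumed near-constant
        k = p / (p + meas_var)     # Kalman gain
        x += k * (zk - x)          # update with the noisy measurement
        p *= (1 - k)
        out.append(x)
    return out

# A noisy reading of a true depth of 3.0 (values invented for the demo).
z = [3.4, 2.6, 3.2, 2.9, 3.1, 2.8, 3.3, 2.7]
filtered = kalman_denoise(z)
```

The filtered sequence hugs the underlying level far more tightly than the raw measurements, which is exactly what reduces the downstream SVM's misclassification rate on noisy data.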

  9. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Zhigao Zeng


    This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with the added benefit of robustness in tackling noise.

  10. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando


    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or their (very likely) combination. Accurate detection and identification are of extreme importance for further analysis, because in some cases and for some applications certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex-spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification scheme are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  11. A non-linear learning & classification algorithm that achieves full training accuracy with stellar classification accuracy

    Khogali, Rashid


    A fast, non-linear, non-iterative learning and classification algorithm is synthesized and validated. This algorithm, named the "Reverse Ripple Effect" (R.R.E), achieves 100% learning accuracy but is computationally expensive upon classification. The R.R.E is a (deterministic) algorithm that superimposes Gaussian weighted functions on training points. In this work, the R.R.E algorithm is compared against known learning and classification techniques/algorithms such as: the Perceptron Criterio...


    CHEN Jie; GONG Zi-tong; CHEN Zhi-cheng; TAN Man-zhi


    International concern about the effects of global change on permafrost-affected soils, and about the responses of permafrost terrestrial landscapes to such change, has been increasing in the last two decades. To achieve a variety of goals, including determining soil carbon stocks and dynamics in the Northern Hemisphere, understanding soil degradation, and finding the best ways to protect the fragile ecosystems of permafrost environments, further development of Cryosol classification is in great demand. In this paper, the existing Cryosol classifications contained in three representative soil taxonomies are introduced, and the problems in the practical application of the defining criteria used for category differentiation in these taxonomic systems are discussed. Meanwhile, the resumption and reconstruction of Chinese Cryosol classification within a taxonomic framework is proposed. The advantages that Chinese pedologists have in dealing with Cryosol classification, and the challenges they face, are analyzed. Finally, several suggestions on the further development of a taxonomic framework for Cryosol classification are put forward.

  13. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad


    This study addresses some long-standing questions in the field of writing instruction about the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  14. Artificial intelligence environment for the analysis and classification of errors in discrete sequential processes

    Ahuja, S.B.


    The study evolved over two phases. First, an existing artificial intelligence technique, heuristic state space search, was used to successfully address and resolve significant issues that have prevented automated error classification in the past. A general method was devised for constructing heuristic functions to guide the search process, which successfully avoided the combinatorial explosion normally associated with search paradigms. A prototype error classifier, SLIPS/I, was tested and evaluated using both real-world data from a databank of speech errors and artificially generated random errors. It showed that heuristic state space search is a viable paradigm for conducting domain-independent error classification within practical limits of memory space and processing time. The second phase considered sequential error classification as a diagnostic process in which a set of disorders (elementary errors) is said to be a classification of an observed set of manifestations (local differences between an intended sequence and the errorful sequence) if it provides a regular cover for them. Using a model of abductive logic based on set-covering theory, this new perspective of error classification as a diagnostic process models human diagnostic reasoning in classifying complex errors. A high-level, non-procedural error specification language (ESL) was also designed.
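The set-covering view of diagnosis described above can be sketched with a greedy cover: repeatedly pick the elementary error type that explains the most still-unexplained manifestations. The error-type names and manifestation sets below are hypothetical, purely to show the mechanism.

```python
def greedy_cover(manifestations, causes):
    """Greedy approximation to a minimal set cover.

    `causes` maps each elementary error type to the set of
    manifestations (local sequence differences) it can explain.
    Returns the chosen explanation and any uncovered manifestations.
    """
    uncovered = set(manifestations)
    explanation = []
    while uncovered:
        best = max(causes, key=lambda c: len(causes[c] & uncovered))
        if not causes[best] & uncovered:
            break  # some manifestation has no known cause
        explanation.append(best)
        uncovered -= causes[best]
    return explanation, uncovered

# Hypothetical elementary error types and what each one explains.
causes = {
    "substitution": {"m1", "m2"},
    "transposition": {"m2", "m3", "m4"},
    "omission": {"m5"},
}
explanation, unexplained = greedy_cover({"m1", "m2", "m3", "m4", "m5"}, causes)
```

A "regular cover" in the abductive model adds constraints beyond this greedy sketch, but the covering intuition is the same.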

  15. A proposal for the detection and classification of discourse errors

    Mestre-Mestre, Eva M.; Carrió Pastor, Mª Luisa


    Our interest lies in error from the point of view of language in context; therefore, we focus on errors produced at the discourse level. The main objective of this paper is to detect discourse competence errors and their implications through the analysis of a corpus of English written texts produced by Higher Education students with a B1 level (following the Common European Framework of Reference for Languages). Further objectives are to propose categories which could help us to c...

  16. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki


    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  17. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Steven Kelly


    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as means of presenting methods. These different paradigms may have their own effects on how easily and how well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  18. Modified Minimum Classification Error Learning and Its Application to Neural Networks

    Shimodaira, Hiroshi; Rokui, Jun; Nakai, Mitsuru


    A novel method to improve the generalization performance of Minimum Classification Error (MCE) / Generalized Probabilistic Descent (GPD) learning is proposed. MCE/GPD learning, proposed by Juang and Katagiri in 1992, results in better recognition performance than maximum-likelihood (ML) based learning in various areas of pattern recognition. Despite its superiority in recognition performance, like other learning algorithms it still suffers from the problem of "over-fitting...

  19. Standard Errors of Proportions Used in Reporting Changes in School Performance with Achievement Levels.

    Arce-Ferrer, Alvaro; Frisbie, David A.; Kolen, Michael J.


    Studies of the achievement test results for about 490 school districts at grade 4 and about 420 districts at grade 5 show that the error variance of estimates of change at the school level is large enough to interfere with interpretations of annual change estimates. (SLD)

  20. Classification of error in anatomic pathology: a proposal for an evidence-based standard.

    Foucar, Elliott


    Error in anatomic pathology (EAP) is an appropriate problem to consider using the disease model with which all pathologists are familiar. In analogy to medical diseases, diagnostic errors represent a complex constellation of often-baffling deviations from the "normal" condition. Ideally, one would wish to approach such "diseases of diagnosis" with effective treatments or preventative measures, but interventions in the absence of a clear understanding of pathogenesis are often ineffective or even harmful. Medical therapy has its history of "bleeding and purging," and error-prevention has a history of "blaming and shaming." The urge to take action in dealing with either medical illnesses or diagnostic failings is, of course, admirable. However, the principle of primum non nocere should guide one's action in both circumstances. The first step in using the disease model to address EAP is the development of a valid taxonomy to allow for grouping together of abnormalities that have a similar pathogenesis. It is apparent that disease categories such as "tumor" are not valuable until they are further refined by precise and accurate classification. Likewise, "error" is an impossibly broad concept that must be parsed into meaningful subcategories before it can be understood with sufficient clarity to be prevented. One important EAP subtype that has been particularly difficult to understand and classify is knowledge-based interpretative (KBI) error. Not only is the latter sometimes confused with distinctly different error types such as human lapses, but there is danger of mistaking system-wide problems (eg, imprecise or inaccurate diagnostic criteria) for the KBI errors of individual pathologists. This paper presents a theoretically-sound taxonomic system for classification of error that can be used for evidence-based categorization of individual cases. Any taxonomy of error in medicine must distinguish between the various factors that may produce mistakes, and importantly

  1. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Korn, E L


    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see whether a sampled table belongs to that model will not be of the right significance level. A simple criterion is given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It is shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression is given for the increase in sample size necessary to compensate for this loss of power, and some specific cases are examined.
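The "preserved by misclassification" claim can be checked numerically for the simplest case. Assuming a Bross-type model in which misclassification acts independently on each margin (my reading of the abstract, with invented probabilities), an independent 2x2 table stays independent after misclassification: the observed table is again a product of margins, so its odds ratio remains 1.

```python
def apply_misclassification(p, A, B):
    """Margin-wise misclassification of a two-way probability table.

    q[i2][j2] = sum_{i,j} A[i2][i] * B[j2][j] * p[i][j],
    with A and B column-stochastic misclassification matrices.
    """
    I, J = len(A), len(B)
    q = [[0.0] * J for _ in range(I)]
    for i2 in range(I):
        for j2 in range(J):
            q[i2][j2] = sum(A[i2][i] * B[j2][j] * p[i][j]
                            for i in range(len(p))
                            for j in range(len(p[0])))
    return q

# An independent 2x2 table: p[i][j] = r[i] * c[j].
r, c = [0.3, 0.7], [0.6, 0.4]
p = [[ri * cj for cj in c] for ri in r]
A = [[0.9, 0.2], [0.1, 0.8]]    # illustrative misclassification probs
B = [[0.95, 0.1], [0.05, 0.9]]
q = apply_misclassification(p, A, B)
# Independence is preserved: the cross-product (odds) ratio stays 1.
odds_ratio = (q[0][0] * q[1][1]) / (q[0][1] * q[1][0])
```

Models that involve higher-order interaction terms do not enjoy this closure property, which is why naive tests on misclassified tables can have the wrong significance level.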

  2. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Cohen, Aaron M


    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here. PMID:17947623
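The error-correcting output codes (ECOC) idea mentioned above reduces a multi-class problem to several binary ones and decodes by nearest codeword, so a few binary mistakes can still yield the right class. The code matrix and classifier margins below are invented for illustration; the i2b2 system's actual codes are not given in the abstract.

```python
def ecoc_decode(bit_scores, code_matrix):
    """Decode ECOC outputs: threshold each binary classifier's margin
    to a bit, then pick the class whose codeword is nearest in
    Hamming distance."""
    bits = [1 if s > 0 else 0 for s in bit_scores]

    def hamming(codeword):
        return sum(b != c for b, c in zip(bits, codeword))

    return min(range(len(code_matrix)), key=lambda k: hamming(code_matrix[k]))

# Illustrative 5-class, 7-bit code (one row per smoking-status class).
code_matrix = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 1],
]
# Margins from 7 binary classifiers; the 3rd bit disagrees with
# class 1's codeword, yet decoding still recovers class 1.
predicted = ecoc_decode([-0.4, 0.8, -0.2, 0.9, 0.7, -0.1, -0.3], code_matrix)
```

The redundancy in the codewords is what makes one flipped bit survivable; with well-separated codewords, up to floor((d_min - 1) / 2) binary errors can be corrected.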

  3. Software platform for managing the classification of error- related potentials of observers

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.


    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifier: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. Next, the classifier can be used to classify any EP curve that has been entered into the database.

  4. Evaluating the Type II error rate in a sediment toxicity classification using the Reference Condition Approach.

    Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B


    Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. Forty-seven sediments were identified as reference sites, primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assigning test sediments to one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size based on the Type I error alone does not consider the probability of a Type II error. To address this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance ("simpacted" sediments), and the Type II error rate was measured for each set of simpacted sediments. This procedure quantifies, for each probability ellipse, the frequency with which impairment is identified in sediments with a known level of disturbance. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results, the decision boundary between Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary between Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065

  5. Block-Based Motion Estimation Using the Pixelwise Classification of the Motion Compensation Error

    Jun-Yong Kim


    In this paper, we propose block-based motion estimation (ME) algorithms based on the pixelwise classification of two different motion compensation (MC) errors: 1) the displaced frame difference (DFD) and 2) the brightness constraint constancy term (BCCT). Block-based ME has drawbacks such as unreliable motion vectors (MVs) and blocking artifacts, especially at object boundaries. The proposed block matching algorithm (BMA) based methods attempt to reduce artifacts in object-boundary blocks caused by the incorrect assumption of a single rigid (translational) motion. They yield more appropriate MVs in boundary blocks under the assumption that there exist up to three non-overlapping regions with different motions. The proposed algorithms also reduce the blocking artifact of the conventional BMA, in which overlapped-block motion compensation (OBMC) is employed, especially in the selected regions, to prevent the degradation of details. Experimental results with several test sequences show the effectiveness of the proposed algorithms.
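For context, the baseline the paper improves on is conventional full-search block matching, which picks the displacement minimizing the summed absolute DFD. The sketch below is that baseline on toy integer frames, not the paper's pixelwise-classification method; frame contents and the search range are invented.

```python
def best_motion_vector(prev, cur, bx, by, bsize, search):
    """Full-search block matching.

    For the block of side `bsize` at (bx, by) in `cur`, return the
    displacement (dx, dy) into `prev` that minimizes the sum of
    absolute displaced frame differences (SAD), plus that SAD.
    """
    h, w = len(prev), len(prev[0])
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if not (0 <= by + dy and by + dy + bsize <= h
                    and 0 <= bx + dx and bx + dx + bsize <= w):
                continue  # candidate block falls outside the frame
            sad = sum(abs(cur[by + y][bx + x] - prev[by + dy + y][bx + dx + x])
                      for y in range(bsize) for x in range(bsize))
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

# A 2x2 bright patch moves one pixel to the right between frames.
prev = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (2, 3):
        prev[y][x] = 9
        cur[y][x + 1] = 9
mv, sad = best_motion_vector(prev, cur, bx=3, by=2, bsize=2, search=2)
```

The single-MV-per-block assumption this baseline makes is exactly what fails at object boundaries, motivating the paper's up-to-three-region classification.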

  6. Errors

    Data indicate that about one half of all errors are skill-based. Yet most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed while performing a routine and familiar task: workers went to the wrong unit or component, or otherwise got something wrong. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training; they need to know when they are vulnerable, and they need to know how to think. Self-checking can prevent errors, but only if it is practiced intellectually and with commitment. Skill-based errors are usually the result of using habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury, too, is usually an error. Sometimes injuries are called accidents, but most accidents are the result of inappropriate actions; whether we can explain it or not, cause and effect were there. A proper attitude toward risk, and a proper attitude toward danger, is requisite to avoiding injury; many personal injuries can be avoided by attitude alone. This paper, based on personal experience and interviews, examines the reasons for 'mental lapse' errors, and why some of us become injured. It offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)

  7. Inborn errors of metabolism with 3-methylglutaconic aciduria as discriminative feature: proper classification and nomenclature.

    Wortmann, Saskia B; Duran, Marinus; Anikster, Yair; Barth, Peter G; Sperl, Wolfgang; Zschocke, Johannes; Morava, Eva; Wevers, Ron A


    Increased urinary 3-methylglutaconic acid excretion is a relatively common finding in metabolic disorders, especially in mitochondrial disorders. In most cases 3-methylglutaconic acid is only slightly elevated and accompanied by other (disease-specific) metabolites. There is, however, a group of disorders with significantly and consistently increased 3-methylglutaconic acid excretion, where the 3-methylglutaconic aciduria is a hallmark of the phenotype and the key to diagnosis. Until now these disorders were labelled with Roman numerals (I-V) in the order of discovery, regardless of pathomechanism. In particular, the so-called "unspecified" 3-methylglutaconic aciduria type IV has been ever growing, leading to biochemical and clinical diagnostic confusion. Therefore, we propose the following pathomechanism-based classification and a simplified diagnostic flow chart for these "inborn errors of metabolism with 3-methylglutaconic aciduria as discriminative feature". One should distinguish between "primary 3-methylglutaconic aciduria", formerly known as type I (3-methylglutaconyl-CoA hydratase deficiency, AUH defect), due to defective leucine catabolism, and the three currently known groups of "secondary 3-methylglutaconic aciduria". The latter should be further classified and named by their defective protein or the historical name as follows: i) defective phospholipid remodelling (TAZ defect or Barth syndrome, SERAC1 defect or MEGDEL syndrome) and ii) mitochondrial membrane associated disorders (OPA3 defect or Costeff syndrome, DNAJC19 defect or DCMA syndrome, TMEM70 defect). The remaining patients with significant and consistent 3-methylglutaconic aciduria, in whom the above mentioned syndromes have been excluded, should be referred to as "not otherwise specified (NOS) 3-MGA-uria" until elucidation of the underlying pathomechanism enables proper (possibly extended) classification. PMID:23296368

  8. Further results on fault-tolerant distributed classification using error-correcting codes

    Wang, Tsang-Yi; Han, Yunghsiang S.; Varshney, Pramod K.


    In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing a binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effect of fading channels. The DCFECC-SD approach extends the DCFECC approach by using soft-decision decoding to combat channel fading. However, the performance of a system employing a binary code matrix can be degraded if the distance between different hypotheses cannot be kept large. This situation can occur when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing a D-ary code matrix, where D>2. Simulation results show that the performance of the DCFECC-SD approach employing the D-ary code matrix is better than that of the DCFECC-SD approach employing the binary code matrix. Performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided when the total channel energy output from each sensor node is fixed.

  9. Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs

    Amit, Yali; Walker, Jacob


    We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feed forward connections. These are discrete three state synapses and are updated based on a simple field dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until convergence to a stable state. Classification is indicated by the sub-population that is persistently activated. The contribution of this paper is two-fold. This is the first example of competitive classification rates of real data being achieved through recurrent dynamics in the attractor layer, which is only stable if recurrent inhibition is introduced. Second, we demonstrate that employing three state synapses with feedforward inhibition is essential for achieving the competitive classification rates due to the ability to effectively employ both positive and negative informative features. PMID:22737121

  10. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H


    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms, namely Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA), in terms of their accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
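    PCA-regularized LDA of the kind described is straightforward to assemble with scikit-learn. The sketch below uses synthetic data as a stand-in for fMRI volumes; all sizes (150 volumes, 500 voxels, 3 classes, 20 components) are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in for fMRI data: 150 "volumes" x 500 "voxels", 3 classes.
rng = np.random.default_rng(0)
n_per, n_vox = 50, 500
means = rng.normal(0, 1, (3, n_vox))
X = np.vstack([m + rng.normal(0, 2.0, (n_per, n_vox)) for m in means])
y = np.repeat([0, 1, 2], n_per)

# PCA regularizes LDA by projecting onto the leading components before
# the class-covariance model is fit, avoiding a singular covariance.
clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
clf.fit(X, y)
score = clf.score(X, y)
print(score)
```

In practice one would cross-validate both the accuracy and the number of retained components rather than score on the training set as done here for brevity.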

  11. New classification of operators' human errors at overseas nuclear power plants and preparation of easy-to-use case sheets

    At nuclear power plants, plant operators examine human error cases, including those that occurred at other plants, so that they can learn from such experiences and avoid making similar errors again. Although there is little data available on errors made at domestic plants, nuclear operators in foreign countries report even minor irregularities and signs of faults, so a large amount of data on human errors at overseas plants could be collected and examined. However, these overseas data have not been used effectively because most of them are poorly organized or not properly classified and are often hard to understand. Accordingly, we carried out a study on cases of human errors at overseas power plants in order to help plant personnel clearly understand overseas experiences and avoid repeating similar errors. The study produced the following results, which were put to use at nuclear power plants and other facilities. (1) ''One-Point-Advice'' refers to a practice where a leader gives pieces of advice to his team of operators in order to prevent human errors before starting work. Based on this practice and those used in the aviation industry, we developed a new method of classifying human errors that consists of four basic actions and three applied actions. (2) We used this new classification method to classify human errors made by operators at overseas nuclear power plants. The results show that the most frequent errors were caused not by operators themselves but by insufficient team monitoring, for which superiors and/or colleagues were responsible. We therefore analyzed and classified possible factors contributing to insufficient team monitoring, and demonstrated that these frequent errors have also occurred at domestic power plants. (3) Using the new classification method, we prepared human error case sheets that are easy for plant personnel to understand. The sheets are designed to make data more understandable and easier to remember

  12. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    Cohen, Aaron M.


    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...
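    Error-correcting output codes reduce a multi-class problem to several binary problems, one per code bit; prediction picks the class whose codeword is nearest to the vector of binary outputs. A minimal sketch with scikit-learn's `OutputCodeClassifier`, using synthetic data as a stand-in for the discharge-summary features (all parameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

# Five classes as a stand-in for the five smoking-status labels.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

# code_size=3 gives each class a random 15-bit codeword (3 x 5 classes);
# one binary logistic regression is trained per code bit.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=3, random_state=0)
ecoc.fit(X, y)
acc = ecoc.score(X, y)
print(acc)
```

Redundant code bits let the ensemble recover from some individual binary-classifier mistakes, which is the appeal of ECOC for noisy text classification.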

  13. Stochastic analysis of multiple-passband spectral classifications systems affected by observation errors

    Tsokos, C. P.


    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  14. Medication errors in outpatient setting of a tertiary care hospital: classification and root cause analysis

    Sunil Basukala


    Conclusions: Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Hence, a focus on easy-to-use and inexpensive techniques for medication error reduction should be adopted to achieve the greatest impact. [Int J Basic Clin Pharmacol 2015; 4(6): 1235-1240]

  15. Time Series Analysis of Temporal Data by Classification using Mean Absolute Error

    Swati Soni


    There has been a lot of research on the application of data mining and knowledge discovery technologies to financial market prediction. However, most of the existing research has focused on mining structured or numeric data such as financial reports, historical quotes, etc. Another kind of data source, unstructured data such as financial news articles and experts' comments on financial markets, which is usually of much higher availability, seems to be neglected because it is inconvenient to represent as numeric feature vectors for data mining algorithms. A new hybrid system has been developed for this purpose. It retrieves financial news articles from the internet periodically and uses classification mining techniques to categorize those articles into different categories according to their expected effects on market behavior; the results are then compared with real market data. This classification, with 10-fold cross-validation over a combination of algorithms, can be applied to financial market prediction in the future
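    The described pipeline, categorizing news text and validating with 10-fold cross-validation, can be sketched with scikit-learn. The tiny corpus below is invented for illustration and is not the system's actual data.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy corpus: headlines labeled by expected market effect (1 = up, 0 = down).
texts = ["shares rally on strong earnings", "profit beats forecast, stock surges",
         "record revenue lifts shares", "upbeat guidance boosts stock",
         "growth accelerates, investors cheer",
         "stock plunges on weak outlook", "losses widen, shares tumble",
         "profit warning sinks stock", "revenue misses, shares slide",
         "layoffs announced as sales slump"] * 3
labels = ([1] * 5 + [0] * 5) * 3

# Bag-of-words features + linear classifier, scored with 10-fold CV.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, texts, labels, cv=10)
mean_acc = scores.mean()
print(mean_acc)
```

On real news data the folds should be split by time rather than at random, so that the classifier is never tested on articles older than its training set.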

  16. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Spinnato, J.; Roubaud, M.-C.; Burle, B.; Torrésani, B.


    Objective. The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method involves the linear mixed effects statistical model, the wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
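    The wavelet-domain feature extraction step can be illustrated with a one-level Haar transform followed by a linear classifier. This sketch substitutes a plain LDA for the paper's mixed-model Bayes plug-in classifier, and the synthetic single-channel "trials" are invented for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform (approx, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
    detail = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
    return approx, detail

# Toy single-channel trials: class 1 carries a brief deflection (ErrP stand-in).
rng = np.random.default_rng(1)
n, T = 200, 64
X = rng.normal(0, 1, (n, T))
y = np.repeat([0, 1], n // 2)
X[y == 1, 20:28] += 2.0                  # evoked component on half the trials

approx, detail = haar_dwt(X)
feats = np.hstack([approx, detail])      # wavelet-domain features per trial
clf = LinearDiscriminantAnalysis().fit(feats, y)
acc = clf.score(feats, y)
print(acc)
```

The localized deflection is concentrated in a handful of wavelet coefficients, which is exactly the kind of localized discriminant feature the paper's model is after.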

  17. Noise in remote-sensing systems - The effect on classification error

    Landgrebe, D. A.; Malaret, E.


    Several types of noise in remote-sensing systems are treated. The purpose is to provide enhanced understanding of the relationship of noise sources to both analysis results and sensor design. The context of optical sensors and spectral pattern recognition analysis methods is used to enable tractability for quantitative results. First, the concept of multispectral classification is reviewed. Next, stochastic models are discussed for both signals and noise, including thermal, shot and quantization noise along with atmospheric effects. A model enabling the study of the combined effect of these sources is presented, and a system performance index is defined. Theoretical results showing the interrelated effects of the noise sources on system performance are given. Results of simulations using the system model are presented for several values of system parameters, using some noise parameters of the Thematic Mapper scanner as an illustration. Results show the relative importance of each of the noise sources on system performance, including how sensor noise interacts with atmospheric effects to degrade accuracy.

  18. Classification

    Clary, Renee; Wandersee, James


    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  19. Classification and Analysis of Human Errors Involved in Test and Maintenance-Related Unplanned Reactor Trip Events

    Test and maintenance (T and M) human errors involved in unplanned reactor trip events in Korean nuclear power plants were analyzed according to James Reason's basic error types, and the characteristics of the T and M human errors by error type were delineated by the distinctive nature of major contributing factors, error modes, and the predictability of possible errors. Human errors due to a planning failure where a work procedure is provided are dominated by activities during low-power states or startup operations, and human errors due to a planning failure where a work procedure does not exist are dominated by corrective maintenance activities during full-power states. Human errors during execution of a planned work sequence show conspicuous error patterns; four error modes, 'wrong object', 'omission', 'too little', and 'wrong action', appeared to be dominant. In terms of predictability, human errors due to a planning failure are deemed very difficult to identify in advance, while human errors during execution are sufficiently predictable using human error prediction or human reliability analysis methods with adequate resources

  20. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.


    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  1. Use of Total Precipitable Water Classification of A Priori Error and Quality Control in Atmospheric Temperature and Water Vapor Sounding Retrieval

    Eun-Han KWON; Jun LI; Jinlong LI; B. J. SOHN; Elisabeth WEISZ


    This study investigates the use of dynamic a priori error information according to atmospheric moistness and the use of quality controls in temperature and water vapor profile retrievals from hyperspectral infrared (IR) sounders. Temperature and water vapor profiles are retrieved from Atmospheric InfraRed Sounder (AIRS) radiance measurements by applying a physical iterative method using regression retrieval as the first guess. Based on the dependency of first-guess errors on the degree of atmospheric moistness, the a priori first-guess errors classified by total precipitable water (TPW) are applied in the AIRS physical retrieval procedure. Compared to the retrieval results from a fixed a priori error, boundary layer moisture retrievals appear to be improved via TPW classification of a priori first-guess errors. Six quality control (QC) tests, which check non-converged or bad retrievals, large residuals, high terrain and desert areas, and large temperature and moisture deviations from the first-guess regression retrieval, are also applied in the AIRS physical retrievals. Significantly large errors are found for the retrievals rejected by these six QCs, and the retrieval errors are substantially reduced via QC over land, which suggests the usefulness and high impact of the QCs, especially over land. In conclusion, the use of dynamic a priori error information according to atmospheric moistness, and the use of appropriate QCs dealing with geographical information and the deviation from the first guess as well as the conventional inverse performance, are suggested to improve temperature and moisture retrievals and their applications.

  2. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan


    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
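    The quantity the regularizer maximizes, the mutual information between classification responses and true labels, can be computed empirically for discrete labels. A minimal plug-in estimator (in nats), with invented toy labels:

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Empirical mutual information (nats) between two discrete label vectors."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mi = 0.0
    for a in np.unique(y_true):
        for b in np.unique(y_pred):
            p_ab = np.mean((y_true == a) & (y_pred == b))
            if p_ab > 0:
                p_a, p_b = np.mean(y_true == a), np.mean(y_pred == b)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

y = np.array([0, 0, 1, 1])
mi_self = mutual_information(y, y)             # log(2): response determines label
mi_zero = mutual_information(y, [0, 1, 0, 1])  # 0: response carries no information
print(mi_self, mi_zero)
```

Maximizing this quantity during training pushes the classifier's responses to be maximally informative about the true class, complementing the usual error term.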

  3. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi `SWARS'

    Kumar, Somesh; Pratap Singh, Manu; Goel, Rajkumar; Lavania, Rajesh


    In this work, the performance of a feedforward neural network with a descent gradient of distributed error and the genetic algorithm (GA) is evaluated for the recognition of handwritten 'SWARS' of Hindi curve script. The performance index for the feedforward multilayer neural network is considered here with distributed instantaneous unknown error, i.e. a different error for each layer. The objective of the GA is to make the search process more efficient in determining the optimal weight vectors from the population. The GA is applied with the distributed error. The fitness function of the GA is taken as the mean of the squared distributed error, which is different for each layer. Hence convergence is obtained only when the minimum of the different errors is determined. Analysis shows that the proposed method of a descent gradient of distributed error with the GA, known as a hybrid distributed evolutionary technique, for the multilayer feedforward neural network performs better in terms of accuracy, epochs and the number of optimal solutions for the given training and test pattern sets of the pattern recognition problem.
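    The GA component, evolving network weights directly with a squared-error fitness, can be sketched compactly. This toy uses selection plus Gaussian mutation with elitism (no crossover) and a single output-error fitness rather than the paper's layer-wise distributed error; the XOR task and all hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 tanh network on XOR; the GA searches the 17-dim weight vector.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])
n_w = 2 * 4 + 4 + 4 * 1 + 1          # weights and biases, flattened

def forward(w, X):
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    return (np.tanh(X @ W1 + b1) @ W2).ravel() + b2

def fitness(w):
    # Mean squared output error (lower is better).
    return np.mean((forward(w, X) - t) ** 2)

pop = rng.normal(0, 1, (60, n_w))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[:20]]                     # selection
    pop = elite[rng.integers(0, 20, 60)] + rng.normal(0, 0.3, (60, n_w))
    pop[:20] = elite                                         # elitism

best = min(pop, key=fitness)
best_mse = fitness(best)
print(best_mse)
```

In the hybrid scheme described above, a gradient step would refine the GA's candidates between generations, and the fitness would average the per-layer distributed errors instead of a single output error.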

  4. The Effects of Motor Coordination Error Duration on Reaction Time and Motivational Achievement Tasks among Young Romanian Psychology Students

    Mihai Aniţei; Mihaela Chraif


    The present study focuses on highlighting the effects of motor coordination error duration on reaction time to multiple stimuli, on motivation from competition and on motivation from personal goals among young psychology students. Method: the participants were 65 undergraduate students, aged between 19 and 24 years old (m = 21.65; S.D. = 1.49), 32 male and 33 female, all from the Faculty of Psychology and Educational Sciences, University of Bucharest, Romania. Instruments were the Determination...

  5. Comparison of maintenance worker's human error events occurred at United States and domestic nuclear power plants. The proposal of the classification method with insufficient knowledge and experience and the classification result of its application

    Human errors by maintenance workers in U.S. nuclear power plants were compared with those in Japanese nuclear power plants for the same period in order to identify the characteristics of such errors. The U.S. cases were events during 2006 selected from the Nuclear Information Database of the Institute of Nuclear Safety System, while the Japanese cases from the same period were extracted from the Nuclear Information Archives (NUCIA) owned by JANTI. The most common cause of human errors was 'insufficient knowledge or experience', accounting for about 40% of U.S. cases and 50% or more of cases in Japan. To break down 'insufficient knowledge', we classified the contents of knowledge into five categories, 'method', 'nature', 'reason', 'scope' and 'goal', and classified the level of knowledge into four categories: 'known', 'comprehended', 'applied' and 'analytic'. Using this classification, the patterns of combination of each content item and knowledge level were compared. In the U.S. cases, errors due to insufficient knowledge of 'nature' and of 'method' were prevalent, while the three other items, 'reason', 'scope' and 'goal', which involve work conditions, rarely occurred. In Japan, errors arising from 'nature' not being comprehended were rather prevalent, while other cases were distributed evenly across all categories, including the work conditions. For addressing 'insufficient knowledge or experience', we consider the following approaches valid: according to the knowledge level required for the work, knowledge should be reflected in procedures and education materials, and training with confirmation of the level of understanding, virtual practice and instruction through experience should be implemented. As for knowledge of the work conditions, it is necessary to enter the work conditions in the procedures and education materials while conducting training or education. (author)

  6. Collection and classification of human error and human reliability data from Indian nuclear power plants for use in PSA

    Complex systems such as NPPs involve a large number of Human Interactions (HIs) in every phase of plant operations. Human Reliability Analysis (HRA) in the context of a PSA attempts to model the HIs and evaluate/predict their impact on safety and reliability using human error/human reliability data. A large number of HRA techniques have been developed for modelling and integrating HIs into PSA, but there is a significant lack of HRA data. In the face of insufficient data, human reliability analysts have had to resort to expert judgement methods in order to extend the insufficient data sets. In this situation, the generation of data from plant operating experience assumes importance. The development of a HRA data bank for Indian nuclear power plants was therefore initiated as part of the programme of work on HRA. Later, with the establishment of the coordinated research programme (CRP) on collection of human reliability data and use in PSA by IAEA in 1994-95, the development was carried out under the aegis of the IAEA research contract No. 8239/RB. The work described in this report covers the activities of development of a data taxonomy and a human error reporting form (HERF) based on it, data structuring, review and analysis of plant event reports, collection of data on human errors, analysis of the data and calculation of human error probabilities (HEPs). Analysis of plant operating experience does yield a good amount of qualitative data, but obtaining quantitative data on human reliability in the form of HEPs is seen to be more difficult. The difficulties have been highlighted and some ways to bring about improvements in the data situation have been discussed. The implementation of a data system for HRA is described and useful features that can be incorporated in future systems are also discussed. (author)

  7. Discriminative Structured Dictionary Learning for Image Classification

    王萍; 兰俊花; 臧玉卫; 宋占杰


    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.
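    One plausible form of such a joint objective, with assumed notation not given in the abstract (X: data matrix, D: structured dictionary, A: sparse codes, W: linear classifier, H: class-label matrix, Q: a mask selecting the entries of A associated with other classes' sub-dictionary atoms), might be:

```latex
\min_{D,\,W,\,A}\;
\underbrace{\lVert X - DA \rVert_F^2}_{\text{reconstruction error}}
\;+\; \alpha\, \underbrace{\lVert H - WA \rVert_F^2}_{\text{classification error}}
\;+\; \beta\, \underbrace{\lVert Q \odot A \rVert_F^2}_{\text{inhomogeneous representation error}}
\;+\; \lambda\, \lVert A \rVert_1
```

Penalizing $Q \odot A$ pushes each sample's code onto its own class's atoms, which is one way to make same-class codes similar and cross-class codes dissimilar; this is a hedged reconstruction for illustration, and the paper's exact terms and weights may differ.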

  8. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.


    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  9. Sampling method for monitoring classification of cultivated land in county area based on Kriging estimation error%基于Kriging估计误差的县域耕地等级监测布样方法

    杨建宇; 汤赛; 郧文聚; 张超; 朱德海; 陈彦清


    China, an agricultural country, has a large population but not enough cultivated land. Until 2011, the cultivated land per capita was 1.38 mu (0.09 ha), only 40% of the world average, and the situation is worsening with industrialization and urbanization. The next task for the Ministry of Land and Resources is dynamic monitoring of cultivated land classification, in which a number of counties will be sampled; in each county, a sample-based monitoring network would be established that reflects the distribution of cultivated land classification in the county and its tendency, and supports estimates at non-sampled locations. Due to the correlation among samples, traditional methods such as simple random sampling, stratified sampling, and systematic sampling are insufficient to achieve this goal. Therefore, in this paper we introduce a spatial sampling method based on the Kriging estimation error. For our case, natural classifications of cultivated land identified from the last Land Resource Survey and Cultivated Land Evaluation are regarded as the true values, and classifications of non-sampled cultivated land are predicted by interpolating the sample data. Finally, the RMSE (root-mean-square error) of the Kriging interpolation is redefined to measure the performance of the network. To be specific, five steps are needed for the monitoring network. First, the optimal sample size is determined by analyzing the trend between the number and the accuracy of samples. Then, a basic monitoring network is set up using square grids. A suitable grid size can be chosen by comparing grid sizes and the corresponding RMSEs from Kriging interpolation of the sample data. Because some centers of grids do not overlap the area of cultivated land, the third step is to add some points near the centers of grids to create the global monitoring network. These points are selected from centroids of cultivated land spots which are closest to the centers and inside the searching circles around the
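    The role of interpolation error in evaluating a sampling network can be sketched with a leave-one-out RMSE. The toy below substitutes inverse-distance weighting for Kriging (no variogram fitting) and an invented smooth surface for the cultivated-land class values; it only illustrates how a denser network lowers the estimation error used to compare candidate grids.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation (a simple stand-in for Kriging)."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power
    return float(w @ z_known / w.sum())

def loo_rmse(xy, z):
    """Leave-one-out RMSE of the interpolator over the sample network."""
    errs = [z[i] - idw_predict(np.delete(xy, i, 0), np.delete(z, i), xy[i])
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))

def sample_grid(step):
    # Square-grid sample of an invented smooth "class index" surface.
    g = np.arange(0.0, 10.0, step)
    xy = np.array([(x, y) for x in g for y in g])
    z = np.sin(xy[:, 0] / 3) + np.cos(xy[:, 1] / 3)
    return xy, z

coarse = loo_rmse(*sample_grid(2.5))
fine = loo_rmse(*sample_grid(1.0))
print(coarse, fine)   # denser network -> smaller estimation error
```

Comparing such error curves across grid sizes is the kind of trade-off the paper's second step resolves, with the Kriging variance playing the role of the error measure.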

  10. Classification with High-Dimensional Sparse Samples

    Huang, Dayu


    The task of the binary classification problem is to determine which of two distributions has generated a length-$n$ test sequence. The two distributions are unknown; however, two training sequences of length $N$, one from each distribution, are observed. The distributions share an alphabet of size $m$, which is significantly larger than $n$ and $N$. How do $N,n,m$ affect the probability of classification error? We characterize the achievable error rate in a high-dimensional setting in which $N,n,m$ all tend to infinity and $\max\{n,N\}=o(m)$. The results are: * There exists an asymptotically consistent classifier if and only if $m=o(\min\{N^2,Nn\})$. * The best achievable probability of classification error decays as $-\log(P_e)=J \min\{N^2, Nn\}(1+o(1))/m$ with $J>0$ (shown by achievability and converse results). * A weighted coincidence-based classifier has a non-zero generalized error exponent $J$. * The $\ell_2$-norm based classifier has a zero generalized error exponent.

  11. Achieving the "triple aim" for inborn errors of metabolism: a review of challenges to outcomes research and presentation of a new practice-based evidence framework.

    Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania


    Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases. PMID:23222662

  12. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.
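    The plaintext quantity the secure protocol evaluates, a Hamming distance between prediction vectors, is simple; the paper's contribution is computing it without revealing either vector (e.g., cryptographically). A sketch of the quantity itself, with invented toy labels:

```python
import numpy as np

def hamming(u, v):
    """Number of positions where two label vectors disagree: the quantity
    the secure protocol computes without revealing either vector."""
    u, v = np.asarray(u), np.asarray(v)
    return int(np.sum(u != v))

# One party holds held-out labels, the other holds a model's predictions.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
d = hamming(y_true, y_pred)
print(d / len(y_true))   # empirical generalization error: 2/6
```

Evaluating this distance securely for several candidate models (or attribute subsets) is exactly what enables privacy-preserving model and attribute selection.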

  13. Achievements in mental health outcome measurement in Australia: Reflections on progress made by the Australian Mental Health Outcomes and Classification Network (AMHOCN)

    Burgess Philip


    Abstract Background Australia's National Mental Health Strategy has emphasised the quality, effectiveness and efficiency of services, and has promoted the collection of outcomes and casemix data as a means of monitoring these. All public sector mental health services across Australia now routinely report outcomes and casemix data. Since late 2003, the Australian Mental Health Outcomes and Classification Network (AMHOCN) has received, processed, analysed and reported on outcome data at a national level, and played a training and service development role. This paper documents the history of AMHOCN's activities and achievements, with a view to providing lessons for others embarking on similar exercises. Method We conducted a desktop review of relevant documents to summarise the history of AMHOCN. Results AMHOCN has operated within a framework that has provided an overarching structure to guide its activities but has been flexible enough to allow it to respond to changing priorities. With no precedents to draw upon, it has undertaken activities in an iterative fashion with an element of 'trial and error'. It has taken a multi-pronged approach to ensuring that data are of high quality: developing innovative technical solutions; fostering 'information literacy'; maximising the clinical utility of data at a local level; and producing reports that are meaningful to a range of audiences. Conclusion AMHOCN's efforts have contributed to routine outcome measurement gaining a firm foothold in Australia's public sector mental health services.

  14. Band Selection and Classification of Hyperspectral Images using Mutual Information: An algorithm based on minimizing the error probability using the inequality of Fano

    Sarhrouni, ELkebir; Hammouch, Ahmed; Aboutajdine, Driss


    A hyperspectral image is a set of more than a hundred images, called bands, of the same region, taken at juxtaposed frequencies. The reference image of the region is called the Ground Truth map (GT). The problem is how to find the bands best suited to classifying the pixels of the regions, because the bands can be not only redundant but also a source of confusion, thereby decreasing classification accuracy. Some methods use Mutual Information (MI) and a threshold to select relevant bands. ...

  15. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi


    To address the low classification accuracy and low utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features, and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
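The decision rule at the end of this pipeline, classification by smallest reconstruction error, can be sketched with a drastically simplified one-atom-per-class "dictionary" (the real SRC method solves an l1-regularized sparse coding problem over a redundant dictionary; class names here are hypothetical):

```python
import math

def project_residual(x, atom):
    """Residual norm after projecting x onto the line spanned by atom."""
    dot = sum(a * b for a, b in zip(x, atom))
    nrm2 = sum(a * a for a in atom)
    coef = dot / nrm2
    resid = [xi - coef * ai for xi, ai in zip(x, atom)]
    return math.sqrt(sum(r * r for r in resid))

def src_classify(x, dictionaries):
    """Assign x to the class whose atom reconstructs it with least error."""
    return min(dictionaries, key=lambda c: project_residual(x, dictionaries[c]))

# Toy 2-band "spectra" for two hypothetical land-cover classes.
dictionaries = {"water": [1.0, 0.1], "vegetation": [0.1, 1.0]}
print(src_classify([0.9, 0.2], dictionaries))   # -> water
print(src_classify([0.05, 1.1], dictionaries))  # -> vegetation
```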

  16. Análisis y Clasificación de Errores Cometidos por Alumnos de Secundaria en los Procesos de Sustitución Formal, Generalización y Modelización en Álgebra (Secondary Students´ Error Analysis and Classification in Formal Substitution, Generalization and Modelling Process in Algebra

    Raquel M. Ruano


    Full Text Available We present a study with secondary school students on three specific processes of algebraic language: formal substitution, generalization, and modelling. From the responses to a questionnaire, we classify the errors made and analyze their possible origins. Finally, we draw some didactic implications from these results.

  17. Ovarian Cancer Classification based on Mass Spectrometry Analysis of Sera

    Baolin Wu


    Full Text Available In our previous study [1], we have compared the performance of a number of widely used discrimination methods for classifying ovarian cancer using Matrix Assisted Laser Desorption Ionization (MALDI mass spectrometry data on serum samples obtained from Reflectron mode. Our results demonstrate good performance with a random forest classifier. In this follow-up study, to improve the molecular classification power of the MALDI platform for ovarian cancer disease, we expanded the mass range of the MS data by adding data acquired in Linear mode and evaluated the resultant decrease in classification error. A general statistical framework is proposed to obtain unbiased classification error estimates and to analyze the effects of sample size and number of selected m/z features on classification errors. We also emphasize the importance of combining biological knowledge and statistical analysis to obtain both biologically and statistically sound results. Our study shows improvement in classification accuracy upon expanding the mass range of the analysis. In order to obtain the best classification accuracies possible, we found that a relatively large training sample size is needed to obviate the sample variations. For the ovarian MS dataset that is the focus of the current study, our results show that approximately 20-40 m/z features are needed to achieve the best classification accuracy from MALDI-MS analysis of sera. Supplementary information can be found at

  18. The Sources of Error in Spanish Writing.

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.


    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  19. Modulation classification based on spectrogram


    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated and evaluated by theoretical analysis and extensive simulation studies. The method taking moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arriving sequence of symbols.
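The spectrogram these methods start from is just the magnitude of windowed short-time spectra. A minimal pure-Python sketch (naive DFT for clarity; a real implementation would use an FFT, and the frame parameters here are arbitrary):

```python
import cmath
import math

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectra of Hann-windowed frames (naive DFT)."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_len)
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        seg = [signal[start + n] * window[n] for n in range(frame_len)]
        spectrum = [abs(sum(seg[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                            for n in range(frame_len)))
                    for k in range(frame_len // 2)]
        frames.append(spectrum)
    return frames

fs = 800.0
sig = [math.sin(2 * math.pi * 100.0 * t / fs) for t in range(512)]
S = spectrogram(sig)
peak_bin = max(range(len(S[0])), key=lambda k: S[0][k])
print(peak_bin * fs / 64)  # spectral peak of the first frame, near 100 Hz
```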

  20. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U.


    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classif...
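The harmonic product spectrum mentioned here multiplies the magnitude spectrum with its downsampled copies so that the fundamental's harmonics reinforce each other. A small sketch under simplifying assumptions (naive DFT, a synthetic harmonic test tone, three harmonics):

```python
import cmath
import math

def magnitude_spectrum(x):
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N // 2)]

def hps_pitch(x, fs, harmonics=3):
    """Harmonic product spectrum: argmax over k of prod_h |X(h*k)|."""
    mag = magnitude_spectrum(x)
    limit = len(mag) // harmonics
    best_k = max(range(2, limit),
                 key=lambda k: math.prod(mag[h * k] for h in range(1, harmonics + 1)))
    return best_k * fs / len(x)

fs, f0, N = 8000.0, 500.0, 512
# Synthetic tone with three harmonics at 500, 1000, 1500 Hz.
sig = [sum(a * math.sin(2 * math.pi * h * f0 * n / fs)
           for h, a in ((1, 1.0), (2, 0.5), (3, 0.3)))
       for n in range(N)]
print(hps_pitch(sig, fs))  # -> 500.0
```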

  1. An Analysis of the Classification of Psychological Verb Errors of Thai Students Learning Chinese



    Based on a large corpus of error data, this paper presents a qualitative and quantitative study of the types of errors made by Thai students learning Chinese psychological verbs. The analysis identifies two categories of errors: lexical errors and collocation errors. The paper focuses on the former, namely omission, redundancy, and wrong substitution of psychological verbs.

  2. Output and error messages

    This document describes the output data and output files that are produced by the SYVAC A/C 1.03 computer program. It also covers the error messages generated by incorrect input data, and the run classification procedure. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  3. A New Classification Approach Based on Multiple Classification Rules

    Zhongmei Zhou


    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  4. Earthquake classification, location, and error analysis in a volcanic environment: implications for the magmatic system of the 1989-1990 eruptions at redoubt volcano, Alaska

    Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.


    Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the hypoellipse earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. 
After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of

  5. The Usability-Error Ontology

    Elkin, Peter L.; Beuscart-zephir, Marie-Catherine; Pelayo, Sylvia;


    ... ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  6. Error estimation for pattern recognition

    Braga Neto, U


    This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include: * The latest results on the accuracy of error estimation * Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches * Highly interactive computer-based exercises and end-of-chapter problems
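The optimistic bias of resubstitution versus cross-validation, a central theme of this book, is easy to demonstrate: a 1-NN classifier has zero resubstitution error by construction (each point is its own nearest neighbor), while leave-one-out gives an honest estimate. A small sketch on synthetic overlapping classes:

```python
import random

def nn_predict(train, x, exclude=None):
    """1-NN prediction; optionally exclude one training index (leave-one-out)."""
    best = min((i for i in range(len(train)) if i != exclude),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i][0], x)))
    return train[best][1]

random.seed(0)
# Two overlapping Gaussian classes in 2-D, 30 samples each.
train = [([random.gauss(m, 1.0), random.gauss(m, 1.0)], label)
         for label, m in ((0, 0.0), (1, 1.0)) for _ in range(30)]

resub = sum(nn_predict(train, x) != y for x, y in train) / len(train)
loo = sum(nn_predict(train, x, exclude=i) != y
          for i, (x, y) in enumerate(train)) / len(train)
print(resub, loo)  # resubstitution error is 0; leave-one-out is larger
```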

  7. Agricultural Land Use classification from Envisat MERIS

    Brodsky, L.; Kodesova, R.


    This study focuses on the evaluation of crop classification from middle-resolution images (Envisat MERIS) at a national level. The main goal of such a Land Use product is to provide spatial data for optimising the monitoring of surface and groundwater pollution in the Czech Republic caused by pesticide use in agriculture. As there is a lack of spatial data on pesticide use and distribution, localisation can be done according to the crop cover on arable land derived from remote sensing images. High-resolution data are often used for agricultural Land Use classification, but only at a regional or local level. Envisat MERIS data, due to the wide satellite swath, can be used also at a national level. The high temporal and spectral resolution of MERIS data is an indisputable advantage for crop classification. A methodology for pixel-based MERIS classification applying an artificial neural network (ANN) technique was proposed and performed at a national level for the Czech Republic. Five crop groups, among them winter crops, spring crops, summer crops and other crops, were finally selected to be classified. Classification models included a linear model, a radial basis function (RBF) network and a multi-layer perceptron (MLP) ANN, with 50 networks tested in training. The training data set consisted of about 200 samples per class, on which bootstrap resampling was applied. Selection of a subset of independent variables (MERIS spectral channels) was used in the procedure. The best selected ANN model (MLP: 3 in, 13 hidden, 3 out) resulted in very good performance (correct classification rate 0.974, error 0.103) applying the three-crop-type data set. In the next step the data set with five crop types was evaluated. The ANN model (MLP: 5 in, 12 hidden, 5 out) performance was also very good (correct classification rate 0.930, error 0.370). The study showed that while an accuracy of about 80 % was achieved at pixel level when classifying only three crops, an accuracy of about 70 % was achieved for five crop
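The "correct classification rate" reported above is the diagonal fraction of a confusion matrix. A minimal sketch (the matrix values below are hypothetical, not from the study; note the study's "error" figures of 0.103 and 0.370 are network training errors, not 1 minus the rate):

```python
def correct_classification_rate(confusion):
    """Overall accuracy from a square confusion matrix (rows: true, cols: predicted)."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return correct / total

# Hypothetical 3-class confusion matrix (e.g. winter / spring / summer crops).
conf = [[95, 3, 2],
        [4, 90, 6],
        [1, 5, 94]]
rate = correct_classification_rate(conf)
print(rate, 1 - rate)  # accuracy 0.93 and misclassification error 0.07
```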

  8. Research on Software Error Behavior Classification Based on Software Failure Chain%基于软件失效链的软件错误行为分类研究

    刘义颖; 江建慧


    Software is now used widely, and reliability requirements are ever higher. It is necessary to study the software defect-error-failure process, to prevent failures in advance, and to reduce the losses that failures cause. Studying the attributes that describe software error behavior helps to describe each error behavior uniquely, aids communication in this field, and provides a basis for building software fault pattern libraries, software fault prediction, and software fault injection. Based on software failure chain theory, this paper analyzes the causal chain formed by software defects, errors and failures, and, from the causal relations in the defect-error-failure chain, further analyzes the relations between the attribute sets describing the anomalies at each stage. Building on the existing IEEE software anomaly classification standard, the error attribute set is derived from the defect attribute set and the failure attribute set, a classification method for software error behaviors is given together with the attribute sets and reference values, and the rationality of the attributes is verified experimentally with an attribute reduction algorithm based on the criteria of minimal correlation and maximal dependency.

  9. Sparse group lasso and high dimensional multinomial classification

    Vincent, Martin; Hansen, N.R.


    The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso ...

  10. Nominal classification

    Senft, G.


    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.

  11. Error analysis in laparoscopic surgery

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.


    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  12. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U


    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft...

  13. Signal Classification for Acoustic Neutrino Detection

    Neff, M; Enzenhöfer, A; Graf, K; Hößl, J; Katz, U; Lahmann, R; Richardt, C


    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signatures, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosted Trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  14. On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data

    Richards, Joseph W; Butler, Nathaniel R; Bloom, Joshua S; Brewer, John M; Crellin-Quick, Arien; Higgins, Justin; Kennedy, Rachel; Rischard, Maxime


    With the coming data deluge from synoptic surveys, there is a growing need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly observed variables based on a small number of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics ("features"), detail methods to robustly estimate periodic light-curve features, introduce tree-ensemble methods for accurate variable star classification, and show how to rigorously evaluate the classification results using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% overall classification error using the random forest classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying sam...

  15. Refractive Errors

    A refractive error means the shape of the eye keeps you from focusing well. The cause could be the length of the eyeball (longer or shorter), changes in the shape of the cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  16. Medication Errors


  17. Robust Transmission of H.264/AVC Video Using Adaptive Slice Grouping and Unequal Error Protection

    Thomos, Nikolaos; Argyropoulos, Savvas; Nikolaos V. Boulgouris; Michael G. Strintzis


    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to effectively protect the streams. The optimal classification of macroblocks into slice groups and the optimal channel rate allocation are achieved by iterating two interdependent steps. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms...

  18. Sparse group lasso and high dimensional multinomial classification

    Vincent, Martin


    We present a coordinate gradient descent algorithm for solving the sparse group lasso optimization problem with a broad class of convex loss functions. Convergence of the algorithm is established, and we use it to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples we find that multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. For the current implementation the time to compute the sparse group lasso solution is of the same order of magnitude as for the multinomial lasso algorithm as implemented in the R-package glmnet, and the implementation scales well with the problem size. One of the examples considered is a 50 class classification problem with 10k features, which amounts to estimating 500k parameters. The implementation is provided as an R package.

  19. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.


    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been imple...

  20. Error calculations statistics in radioactive measurements

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. The concept of error, and its classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions, the Bernoulli, Poisson, Gauss and Student's t distributions, the chi-squared test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, chi-squared test table.
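The error propagation described here has a standard worked case in radioactive counting: for Poisson statistics the standard deviation of N counts is sqrt(N), and the variances of independent gross and background measurements add. A small sketch with illustrative numbers:

```python
import math

def net_rate_and_sigma(gross_counts, bkg_counts, t_gross, t_bkg):
    """Net count rate and its standard deviation under Poisson statistics:
    var(N/t) = N / t**2, and variances of independent terms add."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    var = gross_counts / t_gross**2 + bkg_counts / t_bkg**2
    return rate, math.sqrt(var)

rate, sigma = net_rate_and_sigma(gross_counts=3600, bkg_counts=900,
                                 t_gross=60.0, t_bkg=60.0)
print(rate, sigma)  # 45.0 counts/s net rate, ~1.12 counts/s uncertainty
```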

  1. Multiple sparse representations classification

    Plenge, Esben; Klein, Stefan; Niessen, Wiro; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small...

  2. Multiple Sparse Representations Classification

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surro...

  3. Bayesian Classification in Medicine: The Transferability Question *

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann


    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...
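The Bayesian classification used here combines prior etiology probabilities with finding likelihoods via Bayes' rule. A minimal sketch under a conditional-independence (naive Bayes) assumption; all numbers below are hypothetical illustrations, not values from the cited study:

```python
def bayes_posteriors(priors, likelihoods, findings):
    """Posterior P(class | findings) assuming conditionally independent findings."""
    scores = {}
    for c in priors:
        p = priors[c]
        for f in findings:
            p *= likelihoods[c][f]
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Hypothetical stroke-etiology priors and finding likelihoods.
priors = {"thrombotic": 0.6, "embolic": 0.3, "hemorrhagic": 0.1}
likelihoods = {
    "thrombotic":  {"sudden_onset": 0.3, "atrial_fib": 0.1},
    "embolic":     {"sudden_onset": 0.8, "atrial_fib": 0.7},
    "hemorrhagic": {"sudden_onset": 0.6, "atrial_fib": 0.1},
}
post = bayes_posteriors(priors, likelihoods, ["sudden_onset", "atrial_fib"])
print(max(post, key=post.get))  # -> embolic
```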

  4. Game Design Principles based on Human Error

    Guilherme Zaffari


    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, for core game design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error a challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  5. Errors in practical measurement in surveying, engineering, and technology

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  6. Single-trial classification of gait intent from human EEG



    Full Text Available Neuroimaging studies provide evidence of cortical involvement immediately before and during gait and during gait-related behaviors such as stepping in place or motor imagery of gait. Here we attempt to perform single-trial classification of gait intent from another movement plan (point intent) or from standing in place. Subjects walked naturally from a starting position to a designated ending position, pointed at a designated position from the starting position, or remained standing at the starting position. The 700 ms of recorded EEG before movement onset was used for single-trial classification of trials based on action type and direction (left walk, forward walk, right walk, left point, right point, and stand) as well as action type regardless of direction (stand, walk, point). Classification using regularized LDA was performed on a PCA-reduced feature space composed of level 1-9 coefficients from a discrete wavelet decomposition using the Daubechies 4 wavelet. We achieved significant classification for all conditions, with errors as low as 17% when averaged across nine subjects. LDA and PCA highly weighted frequency ranges that included MRPs, with smaller contributions from frequency ranges that included mu and beta idle motor rhythms. Additionally, error patterns suggested a spatial structure to the EEG signal. Future applications of the cortical gait intent signal may include an additional dimension of control for prosthetics, preemptive corrective feedback for gait disturbances, or human computer interfaces.
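The discrete wavelet decomposition feeding the PCA/LDA pipeline can be sketched with the Haar wavelet, the simplest member of the Daubechies family (the study uses Daubechies 4; Haar keeps the example short). One orthonormal level splits the signal into approximation and detail coefficients while preserving energy:

```python
import math

def haar_dwt_level(x):
    """One level of the orthonormal Haar DWT: pairwise averages (approximation)
    and pairwise differences (detail), each scaled by 1/sqrt(2)."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

x = [4.0, 2.0, 5.0, 7.0]
a, d = haar_dwt_level(x)
# Orthonormality preserves energy: the sum of squared coefficients is unchanged.
print(sum(v * v for v in a) + sum(v * v for v in d), sum(v * v for v in x))
```

Applying the same split recursively to the approximation coefficients yields the multi-level coefficients (levels 1-9 in the study) used as classification features.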

  7. Rademacher Complexity in Neyman-Pearson Classification

    Min HAN; Di Rong CHEN; Zhao Xu SUN


    The Neyman-Pearson (NP) criterion is one of the most important criteria in hypothesis testing. It is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.
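For a finite function class, the empirical Rademacher average these bounds rely on, E_sigma[sup_f (1/n) sum_i sigma_i f(x_i)] with independent random signs sigma_i, can be estimated by Monte Carlo. A toy sketch with two fixed classifiers evaluated on eight sample points (all values hypothetical):

```python
import random

def empirical_rademacher(function_values, n_draws=2000, seed=1):
    """Monte Carlo estimate of the empirical Rademacher average of a finite
    function class, given each function's values on the n sample points."""
    rng = random.Random(seed)
    n = len(function_values[0])
    total = 0.0
    for _ in range(n_draws):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * fv for s, fv in zip(sigma, f)) / n
                     for f in function_values)
    return total / n_draws

# Two fixed {0,1}-valued classifiers on 8 sample points.
fvals = [[0, 1, 1, 0, 1, 0, 1, 1],
         [1, 1, 0, 0, 0, 1, 1, 0]]
print(empirical_rademacher(fvals))  # small positive value; shrinks as n grows
```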

  8. A deep learning approach to the classification of 3D CAD models

    Fei-wei QIN; Lu-ye LI; Shu-ming GAO; Xiao-ling YANG; Xiang CHEN


    Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone. At the same time, automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are first selected and extracted from 3D CAD models, and then pre-processed as high dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.

  9. Error Analysis in Composition of Iranian Lower Intermediate Students

    Taghavi, Mehdi


    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  11. Automated valve condition classification of a reciprocating compressor with seeded faults: experimentation and validation of classification strategy

    Lin, Yih-Hwang; Liu, Huai-Sheng; Wu, Chung-Yung


    This paper deals with automatic valve condition classification of a reciprocating compressor with seeded faults. The seeded faults are chosen based on observation of valve faults in practice. They include the misplacement of valve and spring plates, incorrect tightness of the bolts for the valve cover or valve seat, softening of the spring plate, and cracked or broken spring plates or valve plates. The seeded faults represent various stages of machine health condition, and it is crucial to classify the conditions correctly so that preventive maintenance can be performed before catastrophic breakdown of the compressor occurs. Considering the non-stationary characteristics of the system, time-frequency analysis techniques are applied to obtain the vibration spectrum as time develops. A data reduction algorithm is subsequently employed to extract the fault features from the formidable amount of time-frequency data, and finally a probabilistic neural network is utilized to automate the classification process without the intervention of human experts. This study shows that the use of modification indices, as opposed to the original indices, greatly reduces the classification error, from about 80% down to about 20% misclassification for the 15 fault cases. Correct condition classification can be further enhanced if the use of similar fault cases is avoided: 6.67% classification error is achievable using the short-time Fourier transform and the mean variation method for the case of seven seeded faults with 10 training samples, and 100% correct classification can even be realized when the neural network is well trained with 30 training samples.
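    The final classification stage described above uses a probabilistic neural network, which is essentially a Parzen-window density classifier. A minimal sketch under toy assumptions: 2-D feature vectors and invented fault labels stand in for the real time-frequency fault indices.

```python
import math

def pnn_classify(train, labels, x, sigma=1.0):
    """Minimal probabilistic neural network (Parzen-window) classifier.
    Each training vector contributes a Gaussian kernel; the class whose
    summed kernel response at query point `x` is largest wins."""
    scores = {}
    for vec, lab in zip(train, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(vec, x))
        scores[lab] = scores.get(lab, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

# Toy 2-D "fault features"; real inputs would be reduced
# time-frequency indices, and the labels are invented here.
train = [(0.0, 0.0), (0.2, 0.1), (3.0, 3.0), (3.1, 2.8)]
labels = ["healthy", "healthy", "cracked-plate", "cracked-plate"]
pred = pnn_classify(train, labels, (2.9, 3.2))
```

    No iterative training is needed, which is one reason PNNs are attractive for automating such condition monitoring.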

  12. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.


    Errors associated with the measurement of wind speed are major sources of uncertainty in power performance testing of wind turbines. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. A number of approaches have been implemented in the classification process in order to assess the robustness of methods. The results of the analysis are presented as classification indices, which are compared and discussed.

  13. Medication errors: prescribing faults and prescription errors

    Velo, Giampaolo P; Minuz, Pietro


    Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and ...

  14. Network error correction with unequal link capacities

    Kim, Sukwon; Ho, Tracey; Effros, Michelle; Avestimehr, Amir Salman


    We study network error correction with unequal link capacities. Previous results on network error correction assume unit link capacities. We consider network error correction codes that can correct arbitrary errors occurring on up to z links. We find the capacity of a network consisting of parallel links, and a generalized Singleton outer bound for any arbitrary network. We show by example that linear coding is insufficient for achieving capacity in general. In our example...

  15. PSG-Based Classification of Sleep Phases

    Králík, M.


    This work focuses on the classification of sleep phases using an artificial neural network. An unconventional approach was used to calculate classification features from polysomnographic (PSG) data of real patients. This approach makes it possible to increase the time resolution of the analysis and, thus, to achieve more accurate classification results.

  16. Audio Classification from Time-Frequency Texture

    Yu, Guoshen; Slotine, Jean-Jacques


    Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.

  17. Six-port error propagation

    Stelzer, Andreas; Diskus, Christian G.


    In this contribution the various influences on the accuracy of a near-range precision radar are described. The front-end is a monostatic design operating at 34-36.2 GHz. The hardware configuration enables different modes of operation, including FM-CW and interferometric modes. To achieve a highly accurate distance measurement, attention must be paid to various error sources. Due to the use of a six-port, it is rather complicated to determine the corresponding error propagation. In the following, the results of investigations on how to achieve an exceptional accuracy of ±0.1 mm are described.

  18. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  19. Expected energy-based restricted Boltzmann machine for classification.

    Elfwing, S; Uchibe, E; Doya, K


    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy-based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector as the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output as the negative expected energy (EE-RBM) instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. PMID:25318375
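    The EE-RBM output is simple to state in code: with z_j the total input to hidden unit j, the negative expected energy is b·v + Σ_j σ(z_j)·z_j, evaluated once per candidate class vector. A sketch with untrained toy weights (all names and values are illustrative; the gradient-descent training loop is omitted):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ee_rbm_output(v, W, hid_bias, vis_bias):
    """Negative expected energy of an RBM for visible vector v
    (input features concatenated with a one-hot class vector):
    b.v + sum_j sigmoid(z_j) * z_j, with z_j the input to hidden j."""
    out = sum(b * x for b, x in zip(vis_bias, v))
    for w_row, b_j in zip(W, hid_bias):
        z = b_j + sum(w * x for w, x in zip(w_row, v))
        out += sigmoid(z) * z
    return out

def classify(x, n_classes, W, hid_bias, vis_bias):
    """Score each candidate one-hot class vector and return the class
    with the largest negative expected energy."""
    best, best_out = None, None
    for c in range(n_classes):
        v = list(x) + [1.0 if i == c else 0.0 for i in range(n_classes)]
        o = ee_rbm_output(v, W, hid_bias, vis_bias)
        if best_out is None or o > best_out:
            best, best_out = c, o
    return best

# Toy network: 2 features + 2 class units, one hidden unit whose
# weights happen to favor class 0 for input (1, 0).
pred = classify((1.0, 0.0), 2, [[1.0, 0.0, 5.0, 0.0]], [0.0], [0.0] * 4)
```

    The FE-RBM variant differs only in the scored quantity, using the negative free energy -F(v) = b·v + Σ_j log(1 + exp(z_j)) instead.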

  20. Memory efficient hierarchical error diffusion

    He, Zhen; Fan, Zhigang


    Hierarchical Error Diffusion (HED) developed in [14] yields high-quality color halftones by explicitly designing three critical factors: dot overlapping, positioning, and coloring. However, HED requires a larger error memory buffer than conventional error diffusion algorithms, since the pixel error is diffused in the dot-color domain instead of the colorant domain. This can potentially be an issue for certain low-cost hardware implementations. This paper develops a memory-efficient HED algorithm (MEHED). To achieve this goal, the pixel error in the dot-color domain is converted backward and diffused to future pixels in the input colorant domain, say CMYK for print applications. Since the error-augmented pixel value is no longer bounded within the range [0, 1.0], the dot overlapping control algorithm developed in [14] needs to be generalized to coherently handle pixel densities outside the normal range. The key is to carefully split the modified pixel density into three parts: negative, regular, and surplus densities. The determination of regular and surplus densities must depend on the density of the K channel, in order to maintain local color and avoid halftone texture artifacts. The resulting dot-color densities serve as the input to the hierarchical thresholding and coloring steps to generate the final halftone output. Experimental results demonstrate that MEHED achieves image quality similar to HED.
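    MEHED's dot-color-domain bookkeeping aside, the underlying loop is conventional error diffusion. As a reference point only, a minimal grayscale Floyd-Steinberg sketch, not the HED/MEHED algorithm itself:

```python
def floyd_steinberg(image):
    """Grayscale Floyd-Steinberg error diffusion on values in [0, 1].
    Each pixel is thresholded to 0 or 1 and the quantization error is
    pushed to unprocessed neighbours with the classic weights."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]        # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # 7/16, 3/16, 5/16, 1/16 to right, lower-left, lower, lower-right
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# A flat 25%-gray patch should halftone to roughly 25% "on" dots.
gray = [[0.25] * 16 for _ in range(16)]
dots = floyd_steinberg(gray)
```

    HED generalizes this loop to diffuse error per dot color rather than per colorant channel, which is where the extra buffer cost the paper removes comes from.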

  1. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine taking into account data mining.

  2. Strategic Classification

    Hardt, Moritz; Megiddo, Nimrod; Papadimitriou, Christos; Wootters, Mary


    Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior...

  3. Robust characterization of leakage errors

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph


    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  4. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.


    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of the methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  5. A qualitative description of human error

    Human error makes an important contribution to the risk of reactor operation. Insight and analytical models are the main components of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event centers on the erroneous action and its unfavorable result. Based on the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action, and an erroneous human action may be generated at any stage of this process. Natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational, and environmental factors, are also listed.

  6. Automated compound classification using a chemical ontology

    Bobach Claudia


    Abstract Background Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and, more recently, ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools, and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results In the present work we implement principles and methods to construct a chemical ontology of classes that shall support automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before. The proposed structure-based reasoning…
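    The AND/OR/NOT structure of such class definitions can be mimicked without a cheminformatics toolkit. A sketch in which plain string predicates stand in for SMARTS matches (the feature strings, class names, and rules are invented for illustration; real matching would need something like RDKit):

```python
def has_feature(feature):
    """Predicate factory: true if the compound lists `feature`.
    Stands in for a SMARTS substructure match."""
    return lambda compound: feature in compound["features"]

def all_of(*preds):
    return lambda c: all(p(c) for p in preds)

def any_of(*preds):
    return lambda c: any(p(c) for p in preds)

def none_of(*preds):
    return lambda c: not any(p(c) for p in preds)

# Toy class definitions in the spirit of the ontology: e.g. a
# "primary amine" class requiring an amine feature AND NOT an amide.
CLASS_RULES = {
    "primary amine": all_of(has_feature("N-H2"),
                            none_of(has_feature("C(=O)N"))),
    "carboxylic acid": has_feature("C(=O)OH"),
}

def classify_compound(compound):
    """Return every ontology class whose rule the compound satisfies."""
    return sorted(name for name, rule in CLASS_RULES.items()
                  if rule(compound))

glycine = {"name": "glycine", "features": ["N-H2", "C(=O)OH"]}
```

    Because rules are ordinary composable predicates, the NOT branch cleanly excludes compounds that would otherwise match a broader class, which is the precision gain the abstract describes.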

  7. Does an awareness of differing types of spreadsheet errors aid end-users in identifying spreadsheets errors?

    Purser, Michael


    The research presented in this paper establishes a valid, and simplified, revision of previous spreadsheet error classifications. This investigation is concerned with the results of a web survey and two web-based, gender- and domain-knowledge-free spreadsheet error identification exercises. The participants of the survey and exercises were a test group of professionals (all of whom regularly use spreadsheets) and a control group of students from the University of Greenwich (UK). The findings show that over 85% of users are also the spreadsheet's developer, supporting the revised spreadsheet error classification. The findings also show that spreadsheet error identification ability is directly affected both by spreadsheet experience and by error-type awareness. In particular, spreadsheet error-type awareness significantly improves the user's ability to identify the more surreptitious, qualitative errors.

  8. COMPARE: classification of morphological patterns using adaptive regional elements.

    Fan, Yong; Shen, Dinggang; Gur, Ruben C; Gur, Raquel E; Davatzikos, Christos


    This paper presents a method for classification of structural brain magnetic resonance (MR) images, using a combination of deformation-based morphometry and machine learning methods. A morphological representation of the anatomy of interest is first obtained using a high-dimensional mass-preserving template warping method, which results in tissue density maps that constitute local tissue volumetric measurements. Regions that display strong correlations between tissue volume and classification (clinical) variables are extracted using a watershed segmentation algorithm, taking into account the regional smoothness of the correlation map, which is estimated by a cross-validation strategy to achieve robustness to outliers. A volume increment algorithm is then applied to these regions to extract regional volumetric features, and a feature selection technique using support vector machine (SVM)-based criteria selects the most discriminative features according to their effect on the upper bound of the leave-one-out generalization error. Finally, SVM-based classification is applied using the best set of features and tested using a leave-one-out cross-validation strategy. The results on MR brain images of healthy controls and schizophrenia patients demonstrate not only high classification accuracy (91.8% for female subjects and 90.8% for male subjects), but also good stability with respect to the number of features selected and the size of the SVM kernel used. PMID:17243588

  9. Vietnamese Document Representation and Classification

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.


    With the coming data deluge from synoptic surveys, there is a need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly observed variables based on small numbers of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics (features), detail methods to robustly estimate periodic features, introduce tree-ensemble methods for accurate variable-star classification, and show how to rigorously evaluate a classifier using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% error rate using the random forest (RF) classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying samples of specific science classes: for pulsational variables used in Milky Way tomography we obtain a discovery efficiency of 98.2% and for eclipsing systems we find an efficiency of 99.1%, both at 95% purity. The RF classifier is superior to other methods in terms of accuracy, speed, and relative immunity to irrelevant features; the RF can also be used to estimate the importance of each feature in classification. Additionally, we present the first astronomical use of hierarchical classification methods to incorporate a known class taxonomy in the classifier, which reduces the catastrophic error rate from 8% to 7.8%. Excluding low-amplitude sources, the overall error rate improves to 14%, with a catastrophic error rate of 3.5%.
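    The tree ensemble itself needs a library, but the cross-validation protocol the abstract insists on for rigorous error-rate evaluation is short. A sketch of k-fold error estimation with a nearest-centroid learner standing in for the random forest (the function names and toy data are ours):

```python
import random

def kfold_error(data, labels, learner, k=5, seed=0):
    """Estimate a classifier's error rate by k-fold cross validation:
    split the sample into k folds, train on k-1 folds, test on the
    held-out fold, and pool the misclassifications."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    mistakes = 0
    for f, test_fold in enumerate(folds):
        train = [i for g, fold in enumerate(folds) if g != f for i in fold]
        model = learner([data[i] for i in train], [labels[i] for i in train])
        mistakes += sum(model(data[i]) != labels[i] for i in test_fold)
    return mistakes / len(data)

def nearest_centroid(train_x, train_y):
    """Stand-in learner: classify by the nearest class centroid."""
    sums = {}
    for x, y in zip(train_x, train_y):
        s, n = sums.get(y, ([0.0] * len(x), 0))
        sums[y] = ([a + b for a, b in zip(s, x)], n + 1)
    cents = {y: [a / n for a in s] for y, (s, n) in sums.items()}
    return lambda x: min(cents, key=lambda y: sum((c - v) ** 2
                                                  for c, v in zip(cents[y], x)))

# Two well-separated toy "feature" clusters, e.g. two variability classes.
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
        (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
labels = ["pulsator"] * 4 + ["eclipser"] * 4
err = kfold_error(data, labels, nearest_centroid, k=4)
```

    Holding every fold out exactly once is what makes the reported 22.8% error rate an honest estimate rather than a training-set figure.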

  11. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul


    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.

  12. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis


    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. mfERG records were obtained from the eyes of healthy individuals and patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as one potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for each record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.

  13. Learning from Errors

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine


    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  14. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Lev V. Utkin


    A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  15. Evaluation criteria for software classification inventories, accuracies, and maps

    Jayroe, R. R., Jr.


    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, and on the agreement of the classification maps with ground truth maps, and it reduces back to the original information normally found in a contingency table.
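    The classification error (contingency) table that this record and the thematic-map entries above build on reduces to a few lines of code. A sketch with invented land-cover categories:

```python
def error_matrix(classified, truth, categories):
    """Build the classification error (contingency) table: rows are
    classified categories, columns the ground-truth verification."""
    m = {c: {t: 0 for t in categories} for c in categories}
    for c, t in zip(classified, truth):
        m[c][t] += 1
    return m

def overall_accuracy(m):
    """Fraction of counts on the diagonal, i.e. agreement between the
    classified categories and their verification."""
    total = sum(v for row in m.values() for v in row.values())
    return sum(m[c][c] for c in m) / total

# Toy per-pixel labels; real inputs would come from a classified map
# and a ground-truth map of the same scene.
cats = ["forest", "water", "urban"]
classified = ["forest", "forest", "water", "urban", "water", "forest"]
truth      = ["forest", "water",  "water", "urban", "water", "forest"]
M = error_matrix(classified, truth, cats)
```

    Off-diagonal cells such as M["forest"]["water"] localize which confusions dominate, which is exactly the per-category information the modified contingency criteria operate on.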

  16. Transporter Classification Database (TCDB)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  17. Refined Error Bounds for Several Learning Algorithms

    Hanneke, Steve


    This article studies the achievable guarantees on the error rates of certain learning algorithms, with particular focus on refining logarithmic factors. Many of the results are based on a general technique for obtaining bounds on the error rates of sample-consistent classifiers with monotonic error regions, in the realizable case. We prove bounds of this type expressed in terms of either the VC dimension or the sample compression size. This general technique also enables us to derive several ...

  18. Tissue Classification

    Van Leemput, Koen; Puonti, Oula


    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now included in widely used software packages such as SPM, FSL, and FreeSurfer.

  19. Classifying Classification

    Novakowski, Janice


    This article describes the experience of a group of first-grade teachers as they tackled the science process of classification, a targeted learning objective for the first grade. While the two-year process was not easy and required teachers to teach in a new, more investigation-oriented way, the benefits were great. The project helped teachers and…

  20. Neuromuscular disease classification system

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen


    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies from muscle biopsy images obtained by fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, obtained by treating the biopsy as a graph in which the nodes are the fibers and two nodes are connected if the corresponding fibers are adjacent. Feature selection using sequential forward selection and sequential backward selection methods, classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.

  1. Detection and Classification of Whale Acoustic Signals

    Xian, Yin

    This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve high classification rates on the whale…

  2. Robust Model Selection for Classification of Microarrays

    Ikumi Suzuki


    Recently, microarray-based cancer diagnosis systems have been increasingly investigated. However, cost reduction and reliability assurance of such diagnosis systems remain problems in real clinical settings. To reduce the cost, we need a supervised classifier involving the smallest number of genes, as long as the classifier is sufficiently reliable. To achieve a reliable classifier, we should assess candidate classifiers and select the best one. In the selection process of the best classifier, however, the assessment criterion must involve large variance because of the limited number of samples and non-negligible observation noise. Therefore, even if a classifier with a very small number of genes exhibited the smallest leave-one-out (LOO) cross-validation error rate, it would not necessarily be reliable, because classifiers based on a small number of genes tend to show large variance. We propose a robust model selection criterion, the min-max criterion, based on a resampling bootstrap simulation to assess the variance of estimation of classification error rates. We applied our assessment framework to four published real gene expression datasets and one synthetic dataset. We found that a state-of-the-art procedure, weighted voting classifiers with the LOO criterion, had a non-negligible risk of selecting extremely poor classifiers and, on the other hand, that the new min-max criterion could eliminate that risk. These findings suggest that our criterion presents a safer procedure to design a practical cancer diagnosis system.
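The idea of penalizing high-variance error estimates can be sketched roughly as follows (a simplified illustration, not the paper's exact min-max criterion): bootstrap the per-sample 0/1 losses of each candidate classifier and score it by its worst-case resampled error rate, so a classifier validated on few samples pays for its variance.

```python
import random

def worst_case_error(losses, n_boot=200, seed=0):
    """Max error rate over bootstrap resamples of per-sample 0/1 losses."""
    rng = random.Random(seed)
    n = len(losses)
    worst = 0.0
    for _ in range(n_boot):
        rate = sum(losses[rng.randrange(n)] for _ in range(n)) / n
        worst = max(worst, rate)
    return worst

# a classifier with 10% error on 10 samples vs. 12% error on 100 samples
small_sample = [1] + [0] * 9        # smaller point estimate, high variance
large_sample = [1] * 12 + [0] * 88  # larger point estimate, low variance
```

Despite its smaller point estimate, the small-sample classifier has the larger worst-case resampled error, so a min-max style selection prefers the better-validated one.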

  3. Field error lottery

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))


    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  4. Error And Error Analysis In Language Study

    Zakaria, Teuku Azhari


    Students make mistakes during their language learning course, whether in speaking, writing, listening or reading comprehension. Making mistakes is inevitable and considered natural in one's interlanguage process. Believed to be part of the learning process, making errors and mistakes is not a bad thing, as everybody experiences the same. Both students and teachers will benefit from the event, as both will learn what has been done well and what needs to be reviewed and rehearsed. Understanding error and th...

  5. The Error in Total Error Reduction

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.


    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons i...

  6. Inborn errors of metabolism

    ... metabolism. A few of them are: fructose intolerance, galactosemia, maple syrup urine disease (MSUD) and phenylketonuria (PKU). ...

  7. Multiple Sparse Representations Classification.

    Plenge, Esben; Klein, Stefan; Niessen, Wiro J; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
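The minimum-residual decision rule at the heart of SRC can be sketched in a drastically simplified form (trivial 1-sparse coding over tiny hypothetical dictionaries; real SRC uses overcomplete dictionaries and multi-atom sparse coding):

```python
def residual_energy(x, atom):
    """Energy left after projecting x onto a unit-norm atom."""
    c = sum(xi * ai for xi, ai in zip(x, atom))
    return sum((xi - c * ai) ** 2 for xi, ai in zip(x, atom))

def src_classify(x, dictionaries):
    """Assign x to the class whose dictionary yields the minimum
    residual energy (here with trivial 1-sparse coding)."""
    return min(dictionaries,
               key=lambda label: min(residual_energy(x, a)
                                     for a in dictionaries[label]))

# toy 2-D "patches": one unit-norm atom per class (labels are made up)
dicts = {"lumen": [(1.0, 0.0)], "background": [(0.0, 1.0)]}
label = src_classify((0.9, 0.1), dicts)  # → "lumen"
```

mSRC would instead draw several independent sparse representations per dictionary and pool the resulting residual energies into an enhanced statistic before deciding.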

  8. Medical errors in neurosurgery

    John D Rolston


    23.7-27.8% were technical, related to the execution of the surgery itself, highlighting the importance of systems-level approaches to protecting patients and reducing errors. Conclusions: Overall, the magnitude of medical errors in neurosurgery and the lack of focused research emphasize the need for prospective categorization of morbidity with judicious attribution. Ultimately, we must raise awareness of the impact of medical errors in neurosurgery, reduce the occurrence of medical errors, and mitigate their detrimental effects.

  9. Network error correction with unequal link capacities

    Kim, Sukwon; Effros, Michelle; Avestimehr, Amir Salman


    This paper studies the capacity of single-source single-sink noiseless networks under adversarial or arbitrary errors on no more than z edges. Unlike prior papers, which assume equal capacities on all links, arbitrary link capacities are considered. Results include new upper bounds, network error correction coding strategies, and examples of network families where our bounds are tight. An example is provided of a network where the capacity is 50% greater than the best rate that can be achieved with linear coding. While coding at the source and sink suffices in networks with equal link capacities, in networks with unequal link capacities, it is shown that intermediate nodes may have to do coding, nonlinear error detection, or error correction in order to achieve the network error correction capacity.

  10. Programming Errors in APL.

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  11. Unsupervised classification of operator workload from brain signals

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin


    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC and cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC was significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.

  12. Extreme Entropy Machines: Robust information theoretic classification

    Czarnecki, Wojciech Marian; Tabor, Jacek


    Most of the existing classification methods are aimed at minimization of empirical risk (through some simple point-based error measured with a loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating the applicability of entropy measures as a classification model objective function. We focus on Renyi's quadratic entropy and the connected Cauchy-Schwarz divergence, which lead to the construction of Extreme Entropy Machines (EEM). ...

  13. Deep neural networks for spam classification

    Kasmani, Mohamed Khizer


    This project elucidates the development of a spam filtering method using deep neural networks. A classification model employing algorithms such as Error Back Propagation (EBP) and Restricted Boltzmann Machines (RBM) is used to identify spam and non-spam emails. Moreover, a spam classification system employing deep neural network algorithms is developed, which has been tested on the Enron email dataset to help users manage large volumes of email and their email folders. The ...

  14. Distributed Maintenance Error Information, Investigation and Intervention

    Zolla, George; Boex, Tony; Flanders, Pat; Nelson, Doug; Tufts, Scott; Schmidt, John K.


    This paper describes a safety information management system designed to capture maintenance factors that contribute to aircraft mishaps. The Human Factors Analysis and Classification System-Maintenance Extension taxonomy (HFACS-ME), an effective framework for classifying and analyzing the presence of maintenance errors that lead to mishaps, incidents, and personal injuries, is the theoretical foundation. An existing desktop mishap application is updated, a prototype we...

  15. Error-prone signalling.

    Johnstone, R A; Grafen, A


    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  16. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Richard Zavalas


    Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study with the application of LiDAR and underwater video groundtruth data using automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were primarily accurate (>70%), with varying results for the classification of individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful for differentiating the canopy structure of macroalgae communities (i.e., canopy structure classification), such as canopy forming kelp versus erect fine branching algae. In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.
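The producer accuracies quoted above come from a classification error matrix. A minimal sketch of the computation (the counts below are invented to match the reported 74% and 93%):

```python
def accuracy_summary(matrix, labels):
    """matrix[i][j]: reference pixels of class i assigned to class j.
    Returns producer accuracy, user accuracy and overall accuracy."""
    n = len(labels)
    total = sum(sum(row) for row in matrix)
    overall = sum(matrix[i][i] for i in range(n)) / total
    producer = {labels[i]: matrix[i][i] / sum(matrix[i]) for i in range(n)}
    user = {labels[j]: matrix[j][j] / sum(matrix[i][j] for i in range(n))
            for j in range(n)}
    return producer, user, overall

labels = ["mixed brown algae", "sediment"]
matrix = [[74, 26],   # 100 reference pixels of mixed brown algae
          [7, 93]]    # 100 reference pixels of sediment
producer, user, overall = accuracy_summary(matrix, labels)
```

Producer accuracy divides by the reference (row) totals, user accuracy by the mapped (column) totals; the distinction is what separates omission from commission error.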

  17. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei


    Background The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by the lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns for potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, the KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of further away correspondences, which were more likely noise from inter-subject variability. Finally, the Bayes classifier was applied on the distribution parameters for the classification of AD. Results Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.

  18. 28 CFR 524.73 - Classification procedures.


    ... of Prisons from state or territorial jurisdictions. All state prisoners while solely in service of... classification may be made at any level to achieve the immediate effect of requiring prior clearance for...

  19. Soft Classification of Diffractive Interactions at the LHC

    Multivariate machine learning techniques provide an alternative to the rapidity gap method for event-by-event identification and classification of diffraction in hadron-hadron collisions. Traditionally, such methods assign each event exclusively to a single class, producing classification errors in overlap regions of the data space. As an alternative to this so-called hard classification approach, we propose estimating posterior probabilities of each diffractive class and using these estimates to weigh event contributions to physical observables. It is shown with a Monte Carlo study that such a soft classification scheme is able to reproduce observables such as multiplicity distributions and relative event rates with a much higher accuracy than hard classification.
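The hard-versus-soft distinction can be sketched in a few lines (toy numbers; the class labels "SD"/"ND" are hypothetical stand-ins for the diffractive classes): hard classification sends each event to its most probable class, while soft classification lets each event contribute its posterior probability to every class.

```python
def hard_counts(posteriors):
    """Each event contributes 1 to its most probable class."""
    counts = {}
    for p in posteriors:
        c = max(p, key=p.get)
        counts[c] = counts.get(c, 0) + 1
    return counts

def soft_counts(posteriors):
    """Each event contributes its posterior probability to every class."""
    counts = {}
    for p in posteriors:
        for c, prob in p.items():
            counts[c] = counts.get(c, 0.0) + prob
    return counts

# two events near the class boundary (toy posterior probabilities)
events = [{"SD": 0.6, "ND": 0.4}, {"SD": 0.3, "ND": 0.7}]
```

For these two events the hard counts are 1 and 1, while the soft counts are 0.9 and 1.1; in overlap regions the soft weights track the expected class fractions that hard assignment distorts.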

  20. Classification in Australia.

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  1. Classification in context

    Mai, Jens Erik


    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focus on contextual information as the guide for the design and construction of classification schemes.

  2. Multi-borders classification

    Mills, Peter


    The number of possible methods of generalizing binary classification to multi-class classification increases exponentially with the number of class labels. Often, the best method of doing so will be highly problem dependent. Here we present classification software in which the partitioning of multi-class classification problems into binary classification problems is specified using a recursive control language.

  3. Development of a classification system for cup anemometers - CLASSCUP

    Friis Pedersen, Troels


    objectives to quantify the errors associated with the use of cup anemometers, to determine the requirements for an optimum design of a cup anemometer, and to develop a classification system for quantification of systematic errors of cup anemometers. The present report describes this proposed classification system. A classification method for cup anemometers has been developed, which proposes general external operational ranges to be used. A normal category range connected to ideal sites of the IEC power performance standard was made, and another extended category range for complex terrain was proposed. General classification indices were proposed for all types of cup anemometers. As a result of the classification, the cup anemometer will be assigned to a certain class: 0.5, 1, 2, 3 or 5 with corresponding intrinsic errors (%) as a vector instrument (3D) or as a horizontal instrument (2D). The...

  4. A gender-based analysis of Iranian EFL learners' types of written errors

    Faezeh Boroomand


    Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male and 50 female), presents different classifications and subdivisions of errors, and carries out an analysis of these errors. Detecting the most committed errors in each classification, the findings reveal significant differences between the error frequencies of the male and female groups (more errors in female written productions).

  5. Achieving Standardization

    Henningsson, Stefan


    International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems.

  7. Uncorrected refractive errors

    Naidoo, Kovin S; Jyoti Jaggernath


    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error S...

  8. Correction for quadrature errors

    Netterstrøm, A.; Christensen, Erik Lintz


    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...

  9. Error coding simulations

    Noble, Viveca K.


    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
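The 16-bit CRC used for error detection in the CCSDS telemetry standard is, to our understanding, of the CCITT family (polynomial x^16 + x^12 + x^5 + 1, initial value 0xFFFF, no bit reflection, no final XOR). A bitwise sketch of that variant:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1,
    initial value 0xFFFF, no reflection, no final XOR."""
    crc = init
    for byte in data:
        crc ^= byte << 8          # bring the next byte into the register
        for _ in range(8):
            if crc & 0x8000:      # MSB set: shift and reduce by the polynomial
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

The conventional check input b"123456789" yields 0x29B1 for this variant, which is a quick way to verify an implementation against published CRC catalogues.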

  10. Confident Predictability: Identifying reliable gene expression patterns for individualized tumor classification using a local minimax kernel algorithm

    Berry Damon


    Background Molecular classification of tumors can be achieved by global gene expression profiling. Most machine learning classification algorithms furnish global error rates for the entire population. A few algorithms provide an estimate of probability of malignancy for each queried patient, but the degree of accuracy of these estimates is unknown. On the other hand, local minimax learning provides such probability estimates with best finite sample bounds on expected mean squared error on an individual basis for each queried patient. This allows a significant percentage of the patients to be identified as confidently predictable, a condition that ensures that the machine learning algorithm possesses an error rate below the tolerable level when applied to the confidently predictable patients. Results We devise a new learning method that implements: (i) feature selection using the k-TSP algorithm and (ii) classifier construction by local minimax kernel learning. We test our method on three publicly available gene expression datasets and achieve significantly lower error rate for a substantial identifiable subset of patients. Our final classifiers are simple to interpret and they can make prediction on an individual basis with an individualized confidence level. Conclusions Patients that were predicted confidently by the classifiers as cancer can receive immediate and appropriate treatment, whilst patients that were predicted confidently as healthy will be spared from unnecessary treatment. We believe that our method can be a useful tool to translate gene expression signatures into clinical practice for personalized medicine.
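The gating idea behind "confidently predictable" can be sketched as follows (a simplified illustration, not the local minimax bound itself): given each patient's probability estimate together with a bound on its error, accept a prediction only when the whole interval lies on one side of the decision threshold.

```python
def confidently_predictable(estimates, threshold=0.5):
    """estimates: list of (p, half_width) pairs, where p is the predicted
    probability of malignancy and half_width bounds its estimation error.
    A patient is confidently predictable when the interval
    p +/- half_width falls entirely on one side of the threshold."""
    return [i for i, (p, h) in enumerate(estimates)
            if p - h > threshold or p + h < threshold]

patients = [(0.90, 0.05),  # confident: (0.85, 0.95) lies above 0.5
            (0.55, 0.20),  # not confident: (0.35, 0.75) straddles 0.5
            (0.10, 0.30)]  # confident: (-0.20, 0.40) lies below 0.5
idx = confidently_predictable(patients)  # → [0, 2]
```

Only patients in the accepted subset receive a prediction; error rates are then reported on that subset, which is what keeps them below the tolerable level.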

  11. Evaluation of drug administration errors in a teaching hospital

    Berdot Sarah


    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps to undertake designed interventions.
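The two reported error rates follow from simple counting: excluding wrong time errors removes the 312 wrong time cases but keeps the ten administrations where a wrong time error co-occurred with another error type.

```python
opportunities = 1501
with_error = 415            # administrations with one or more errors
wrong_time = 312            # wrong time errors
wrong_time_plus_other = 10  # wrong time errors co-occurring with another type

rate_all = with_error / opportunities
# 415 - 312 + 10 = 113 administrations still count as erroneous
rate_without_wrong_time = (with_error - wrong_time
                           + wrong_time_plus_other) / opportunities

print(round(100 * rate_all, 1))                 # 27.6
print(round(100 * rate_without_wrong_time, 1))  # 7.5
```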


    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Brink, Henrik; Crellin-Quick, Arien [Astronomy Department, University of California, Berkeley, CA 94720-3411 (United States); Butler, Nathaniel R., E-mail: [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States)


    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  14. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien


    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28-class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
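
The abstract stresses probability calibration without detailing the procedure; the sketch below illustrates one simple, generic approach (histogram binning on a validation set). This is an assumption for illustration, not the catalog's actual method.

```python
# Illustrative sketch of probability calibration by histogram binning -- not
# the catalog's exact procedure.  Raw classifier scores are replaced by the
# empirical fraction of positives among validation examples with similar scores.
def fit_binning_calibrator(scores, labels, n_bins=10):
    """Return a function mapping a raw score to a calibrated probability."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for s, y in zip(scores, labels):
        b = min(int(s * n_bins), n_bins - 1)
        sums[b] += y
        counts[b] += 1
    # Calibrated probability per bin: fraction of positives (fall back to bin centre).
    probs = [sums[b] / counts[b] if counts[b] else (b + 0.5) / n_bins
             for b in range(n_bins)]
    return lambda s: probs[min(int(s * n_bins), n_bins - 1)]

# Toy usage: an over-confident raw score of 0.95 is pulled back toward the
# rate actually observed in its score bin (2/3 here).
scores = [0.1, 0.2, 0.9, 0.95, 0.92, 0.15]
labels = [0, 0, 1, 0, 1, 0]
calibrate = fit_binning_calibrator(scores, labels, n_bins=5)
print(calibrate(0.95))
```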

  15. We need to talk about error: causes and types of error in veterinary practice.

    Oxtoby, C; Ferguson, E; White, K; Mossop, L


    Patient safety research in human medicine has identified the causes and common types of medical error and has subsequently informed the development of interventions that mitigate harm, such as the WHO's safe surgery checklist. No such evidence is available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group = 8) with vets, nurses and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error. PMID:26489997

  16. Classification and knowledge

    Kurtz, Michael J.


    Automated procedures to classify objects are discussed. The classification problem is reviewed, and the relation of epistemology and classification is considered. The classification of stellar spectra and of resolved images of galaxies is addressed.

  17. Hazard classification methodology

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  18. Remote Sensing Information Classification

    Rickman, Douglas L.


    This viewgraph presentation reviews the classification of Remote Sensing data in relation to epidemiology. Classification is a way to reduce the dimensionality and precision to something a human can understand. Classification changes SCALAR data into NOMINAL data.


    Li Jun; Zhang Shunyi; Lu Yanqing; Yan Junrong


    Accurate, real-time classification of network traffic is important for network operation and management tasks such as QoS differentiation, traffic shaping and security surveillance. However, with many newly emerged P2P applications using dynamic port numbers, masquerading techniques and payload encryption to avoid detection, traditional classification approaches have become ineffective. In this paper, we present a layered hybrid system for classifying current Internet traffic, motivated by the variety of network activities and their differing requirements for traffic classification. The proposed method achieves fast and accurate traffic classification with low overhead and is robust enough to accommodate both known and unknown/encrypted applications. Furthermore, it is feasible for use in real-time traffic classification. Our experimental results show the distinct advantages of the proposed classification system compared with a one-step Machine Learning (ML) approach.
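
As a rough illustration of such a layered hybrid design (the layer choices, ports and signatures below are assumptions, not the paper's exact system), cheap deterministic checks can be tried first, with a statistical classifier as the fallback for flows that dynamic ports or encryption make opaque:

```python
# Hypothetical layered traffic classifier in the spirit described above.
WELL_KNOWN_PORTS = {80: "http", 53: "dns", 25: "smtp"}
SIGNATURES = {b"BitTorrent protocol": "bittorrent"}

def ml_fallback(flow):
    # Stand-in for a trained model over flow statistics (packet sizes, timing).
    return "p2p" if flow.get("mean_pkt_size", 0) > 1000 else "unknown"

def classify_flow(flow):
    # Layer 1: port-based lookup -- fast, but defeated by dynamic ports.
    if flow["dst_port"] in WELL_KNOWN_PORTS:
        return WELL_KNOWN_PORTS[flow["dst_port"]]
    # Layer 2: payload signature matching -- defeated by encryption.
    for sig, proto in SIGNATURES.items():
        if sig in flow.get("payload", b""):
            return proto
    # Layer 3: machine-learned classifier on flow features.
    return ml_fallback(flow)

print(classify_flow({"dst_port": 80}))                                           # http
print(classify_flow({"dst_port": 6881, "payload": b"\x13BitTorrent protocol"}))  # bittorrent
print(classify_flow({"dst_port": 51413, "mean_pkt_size": 1400}))                 # p2p
```

The layering mirrors the paper's motivation: each layer handles the flows it can classify cheaply and passes the rest down, so the expensive ML stage sees only the hard cases.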

  20. AR-based Method for ECG Classification and Patient Recognition

    Branislav Vuksanovic


    Full Text Available The electrocardiogram (ECG) is the recording of heart activity obtained by measuring signals from electrical contacts placed on the skin of the patient. By analyzing the ECG, it is possible to detect the rate and consistency of heartbeats and identify possible irregularities in heart operation. This paper describes a set of techniques employed to pre-process ECG signals and extract a set of features, autoregressive (AR) signal parameters, used to characterise the ECG signal. The extracted parameters are used here to accomplish two tasks. First, the AR features belonging to each ECG signal are classified into groups corresponding to three different heart conditions: normal, arrhythmia and ventricular arrhythmia. The classification results obtained indicate accurate, zero-error classification of patients according to their heart condition using the proposed method. The sets of extracted AR coefficients are then extended by an additional parameter, the power of the AR modelling error, and the suitability of the developed technique for individual patient identification is investigated. Individual feature sets for each group of detected QRS sections are classified into p clusters, where p represents the number of patients in each group. The developed system has been tested using ECG signals available in the MIT/BIH and Politecnico di Milano VCG/ECG databases. The achieved recognition rates indicate that patient identification using ECG signals could be considered a possible approach in some applications using the system developed in this work. The pre-processing stages, the applied parameter extraction techniques and some intermediate and final classification results are described and presented in this paper.
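
To make the AR-feature idea concrete, the sketch below estimates AR(2) coefficients from a signal via the Yule-Walker equations; it is a generic illustration of AR parameter extraction, not the paper's exact pre-processing pipeline.

```python
import random

# Generic AR(2) feature extraction via the Yule-Walker equations -- an
# illustration of the kind of AR parameters used as features above.
def autocorr(x, lag):
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - lag] - m) for t in range(lag, n))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def ar2_coefficients(x):
    """Solve the 2x2 Yule-Walker system for x[t] = a1*x[t-1] + a2*x[t-2] + e."""
    r1, r2 = autocorr(x, 1), autocorr(x, 2)
    a1 = r1 * (1 - r2) / (1 - r1 ** 2)
    a2 = (r2 - r1 ** 2) / (1 - r1 ** 2)
    return a1, a2

# Toy usage on a synthetic AR(2) process x[t] = 0.75*x[t-1] - 0.5*x[t-2] + noise.
random.seed(0)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + random.gauss(0, 1))
a1, a2 = ar2_coefficients(x)
print(round(a1, 2), round(a2, 2))  # roughly 0.75 and -0.5
```

In the paper's setting, such coefficient vectors (one per signal segment) become the feature vectors fed to the classifier.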

  1. Medical error and disclosure.

    White, Andrew A; Gallagher, Thomas H


    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370


    Frederique Robert-Inacio


    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application to the video-surveillance of a seaport entrance is presented, and more particularly the different steps that enable moving shapes to be classified. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the observed moving object in terms of shape and speed.

  3. Adaptive Error Resilience for Video Streaming

    Lakshmi R. Siruvuri


    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although it requires extra parity bits to be transmitted, thus consuming more bandwidth. This cost can be reduced, however, if the transmitter tailors the parity data rate to its knowledge of current channel conditions, which can be achieved via feedback from the receiver. This paper describes a channel emulation system, comprising a server/proxy/client combination, that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
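
A minimal sketch of the feedback-driven adaptation idea: the sender sizes the Reed-Solomon parity budget to the loss rate the client reports, since an RS(n, k) code corrects up to n - k erasures. The function, margin and block sizes below are illustrative assumptions, not the paper's values.

```python
# Hypothetical controller for the feedback loop described above.
def parity_symbols(reported_loss_rate, data_symbols=223, margin=1.5, max_total=255):
    """Choose parity symbols so expected losses (with a safety margin) are correctable.

    An RS(n, k) code over GF(256) corrects up to (n - k) erasures per block,
    so parity should cover the expected number of lost symbols.
    """
    expected_losses = reported_loss_rate * data_symbols * margin
    parity = min(max_total - data_symbols, max(2, round(expected_losses)))
    return parity

print(parity_symbols(0.01))  # light losses -> small parity budget: 3
print(parity_symbols(0.10))  # heavy losses -> budget capped at n - k = 32
```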

  4. A predictive cognitive error analysis technique for emergency tasks

    This paper introduces an analysis framework and procedure to support the cognitive error analysis of emergency tasks in nuclear power plants. The framework provides a new perspective on the use of error factors in error prediction and is characterized by two features. First, the error factors that affect the occurrence of human error are classified into three groups, 'task characteristics factors (TCF)', 'situation factors (SF)' and 'performance assisting factors (PAF)', and are used in error prediction. This classification supports error prediction from the viewpoint of assessing the adequacy of the PAF under given TCF and SF. Second, error factors are assessed with respect to the performance of each cognitive function. In this way, the error factors are assessed in an integrative manner rather than independently. Furthermore, this enables analysts to identify vulnerable cognitive functions and error factors, and to obtain specific error reduction strategies. Finally, the framework and procedure were applied to the error analysis of the 'bleed and feed operation' among the emergency tasks.

  5. KMRR thermal power measurement error estimation

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method and compared with the errors obtained by other methods, including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if commercial RTDs are used to measure the coolant temperatures of the secondary cooling system, and that the error can be reduced below the requirement if the commercial RTDs are replaced by precision RTDs. The possible range of thermal power control operation has been identified as 100% to 20% of full power.
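
The statistical Monte Carlo approach mentioned above can be sketched in a few lines: sample the sensor errors, propagate them through the power balance and read off the spread. All numeric values below are illustrative assumptions, not KMRR data.

```python
import random

# Minimal Monte Carlo sketch: propagate RTD temperature-measurement errors
# into the thermal power Q = m_dot * cp * (T_out - T_in).
random.seed(1)

m_dot, cp = 300.0, 4.18      # kg/s, kJ/(kg.K): assumed coolant conditions
t_in, t_out = 35.0, 45.0     # deg C: an assumed 10 K temperature rise
sigma_rtd = 0.3              # assumed per-sensor RTD error std dev, K

q_nominal = m_dot * cp * (t_out - t_in)
samples = []
for _ in range(20000):
    dt = ((t_out + random.gauss(0.0, sigma_rtd))
          - (t_in + random.gauss(0.0, sigma_rtd)))
    samples.append(m_dot * cp * dt)

mean_q = sum(samples) / len(samples)
std_q = (sum((s - mean_q) ** 2 for s in samples) / len(samples)) ** 0.5

# With these assumed numbers, a 0.3 K sensor error over a 10 K rise already
# gives a ~4% power error, showing why sensor precision dominates whether a
# 5% target can be met.
print(f"relative power error ~ {std_q / q_nominal:.1%}")
```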

  6. Correlated errors can lead to better performance of quantum codes

    A formulation for evaluating the performance of quantum error correcting codes for a general error model is presented. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. We classify correlated errors using the system-bath interaction: local versus nonlocal and two-body versus many-body interactions. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors depending on the timing of the error recovery. We also find this timing to be an important factor in the design of a coding system for achieving higher fidelities

  7. Document region classification using low-resolution images: a human visual perception approach

    Chacon Murguia, Mario I.; Jordan, Jay B.


    This paper describes the design of a document region classifier. The regions of a document are classified as large text regions (LTR) and non-LTR regions. The foundations of the classifier are derived from theories of human visual perception: texture discrimination based on textons, and perceptual grouping. Based on these theories, the classification task is stated as a texture discrimination problem and is implemented as a preattentive process. Once the foundations of the classifier are defined, engineering techniques are developed to extract features for deciding the class of information contained in the regions. The feature derived from the human visual perception theories is a measurement of the periodicity of the blobs in text regions. This feature is used to design a statistical classifier, based on the minimum-probability-of-error criterion, to perform the classification of LTR and non-LTR. The method was tested on free-format, low-resolution document images, achieving 93% correct recognition.

  8. Uncorrected refractive errors

    Kovin S Naidoo


    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology referred to as the Refractive Error Study in Children (RESC) were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high among children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  9. Errors and violations

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident-pathogen view of accident causation and seeks to identify the various system pathways along which errors and violations may be propagated.

  10. The analysis of human errors in nuclear power plant operation

    There are basically three different methods known for approaching human factors in NPP operation: probabilistic error analysis; analysis of human errors in real plant incidents; and job task analysis. The analysis of human errors occurring during operation, and job analysis, can easily be converted into operational improvements. From the analysis of human errors and their causes on the one hand, and from the analysis of possible problems on the other, it is possible to derive requirements either for modifications of existing working systems or for the design of a new nuclear power plant. It is of great importance to have an established classification system for the error analysis, such that requirements can be derived from a set of elements of a matrix. (authors)

  11. Errors in imaging patients in the emergency setting.

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca


    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955

  12. Minimax Optimal Rates of Convergence for Multicategory Classifications

    Di Rong CHEN; Xu YOU


    In the problem of classification (or pattern recognition), given a set of n samples, we attempt to construct a classifier g_n with a small misclassification error. It is important to study the convergence rates of the misclassification error as n tends to infinity. It is known that such a rate cannot exist for the set of all distributions. In this paper we obtain the optimal convergence rates for a class of distributions D(λ,ω) in multicategory classification and nonstandard binary classification.
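
In the standard statistical-learning notation (stated here as background, not quoted from the paper), the quantities involved are:

```latex
% Misclassification error of a classifier g_n built from n samples,
% and the excess risk whose convergence rate is studied:
L(g_n) = \mathbb{P}\{\, g_n(X) \neq Y \,\}, \qquad
\mathcal{E}(g_n) = \mathbb{E}\, L(g_n) - L^\ast,
\quad L^\ast = \inf_g \mathbb{P}\{\, g(X) \neq Y \,\}.
% No rate holds uniformly over all distributions; rates such as
% \mathcal{E}(g_n) = O(n^{-r}) are obtained only over restricted
% classes of distributions such as D(\lambda, \omega).
```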

  13. Classification of the web

    Mai, Jens Erik


    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  14. Error mode prediction.

    Hollnagel, E; Kaarstad, M; Lee, H C


    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  15. Pronominal Case-Errors

    Kaper, Willem


    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  16. Errors in energy bills

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some big, errors were discovered in the bills of 42 businesses.

  17. Detecting Errors in Spreadsheets

    Ayalew, Yirsaw; Clermont, Markus; Mittermeir, Roland T.


    The paper presents two complementary strategies for identifying errors in spreadsheet programs. The strategies presented are grounded on the assumption that spreadsheets are software, albeit of a different nature than conventional procedural software. Correspondingly, strategies for identifying errors have to take into account the inherent properties of spreadsheets as much as they have to recognize that the conceptual models of 'spreadsheet programmers' differ from the conceptual models of c...

  18. Thermodynamics of Error Correction

    Sartori, Pablo; Pigolotti, Simone


    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  19. Smoothing error pitfalls

    T. von Clarmann


    Full Text Available The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
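
For reference, the conventional construct under critique here can be written in the usual optimal-estimation notation (averaging kernel A, a priori state x_a, assumed state covariance S_a); this is the textbook definition, stated as background rather than taken from the paper.

```latex
% Smoothing-error term of a constrained retrieval \hat{x} of a true state x,
% and its covariance:
e_s = (\mathbf{A} - \mathbf{I})\,(x - x_a),
\qquad
\mathbf{S}_s = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_a\,(\mathbf{A} - \mathbf{I})^{\mathsf{T}}.
% The paper's objection: x must itself be sampled on a finite grid, so e_s
% describes deviation from a smoothed reference, not from the true state.
```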

  20. Neural Correlates of Reach Errors

    Diedrichsen, Jörn; Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza


    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g., when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showe...

  1. Multinomial mixture model with heterogeneous classification probabilities

    Holland, M.D.; Gray, B.R.


    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulation that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  2. Motion error compensation of multi-legged walking robots

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei


    Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot will diverge from the ideal trajectory during movement. Since existing error compensation is usually applied to the control of manipulator arms, the error compensation of multi-legged robots has seldom been explored. In order to reduce the kinematic error of such robots, a feedforward motion error compensation method for multi-legged mobile robots is proposed to improve their motion precision. The locus error of the robot body is measured while the robot moves along a given track. The error in the driven joint variables is obtained from an error calculation model in terms of the locus error of the robot body. This error value is used to compensate the driven joint variables and modify the control model of the robot, which then drives the robot according to the modified control model. A model of the relation between the robot's locus errors and its kinematic variable errors is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and an equation set is obtained that expresses the relation among the error of the driven joint variables, the structure parameters and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied for the robot moving along a straight-line tread. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate value of the robot centroid in the x-direction and z-direction were reduced more than twofold. The kinematic errors of the robot body are reduced effectively by the proposed feedforward motion error compensation method.
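
The compensation chain described above (measure the locus error, convert it to a driven-joint-variable error, subtract it from the command) can be sketched on a planar two-link stand-in; the geometry and numbers below are illustrative assumptions, not MiniQuad parameters.

```python
import math

# Feedforward compensation sketch on a planar two-link arm stand-in.
# A measured workspace locus error is mapped through the inverse Jacobian
# to a joint-variable error, which is subtracted from the commanded joints.
L1, L2 = 1.0, 1.0  # assumed link lengths

def forward(q1, q2):
    """Tip position of the ideal (error-free) kinematic model."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def jacobian(q1, q2):
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def solve2(J, e):
    """Solve J * dq = e for a 2x2 Jacobian (Cramer's rule)."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((J[1][1] * e[0] - J[0][1] * e[1]) / det,
            (J[0][0] * e[1] - J[1][0] * e[0]) / det)

q = (0.6, 0.8)                 # commanded joint variables, rad
locus_error = (0.02, -0.015)   # measured tip locus error, m

dq = solve2(jacobian(*q), locus_error)    # estimated joint-variable error
q_comp = (q[0] - dq[0], q[1] - dq[1])     # feedforward-compensated command

# To first order, the compensated command shifts the tip by -locus_error,
# cancelling the measured deviation.
x0, y0 = forward(*q)
x1, y1 = forward(*q_comp)
```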

  3. Integrating TM and Ancillary Geographical Data with Classification Trees for Land Cover Classification of Marsh Area

    NA Xiaodong; ZHANG Shuqing; ZHANG Huaiqing; LI Xiaofeng; YU Huan; LIU Chunyue


    The main objective of this research is to determine the capability of land cover classification that combines the spectral and textural features of Landsat TM imagery with ancillary geographical data in the wetlands of the Sanjiang Plain, Heilongjiang Province, China. Semi-variograms and Z-test values were calculated to assess the separability of grey-level co-occurrence texture measures and to maximize the difference between land cover types. The degree of spatial autocorrelation showed that window sizes of 3×3 and 11×11 pixels were most appropriate for Landsat TM image texture calculations. The texture analysis showed that the co-occurrence entropy, dissimilarity and variance texture measures, derived from the Landsat TM spectral bands and vegetation indices, provided the most significant statistical differentiation between land cover types. Subsequently, a Classification and Regression Tree (CART) algorithm was applied to three different combinations of predictors: 1) TM imagery alone (TM-only); 2) TM imagery plus image texture (TM+TXT model); and 3) all predictors, including TM imagery, image texture and additional ancillary GIS information (TM+TXT+GIS model). Compared with traditional Maximum Likelihood Classification (MLC) supervised classification, the three classification-tree predictive models reduced the overall error rate significantly. Image texture measures and ancillary geographical variables suppressed speckle noise effectively and markedly reduced the classification error rate for marsh. For the classification-tree model making use of all available predictors, the omission error rate for marsh was 12.90% and the commission error rate was 10.99%. The developed method is portable, relatively easy to implement and should be applicable in other settings and over larger extents.

  4. Error monitoring in musicians

    Clemens Maidhof


    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies have reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances such as the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  5. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María


    Nowadays, scientific writers are required to have not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified in order to propose a classification of the categories they contain. This study…

  6. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc


    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue.…

  7. Effect of dose ascertainment errors on observed risk

    Inaccuracies in dose assignments can lead to misclassification in epidemiological studies. The extent of this misclassification is examined for different error functions, classification intervals, and actual dose distributions. The error function model is one which results in a truncated lognormal distribution of the assigned dose for each actual dose. The error function may vary as the actual dose changes. The effect of misclassification on the conclusions about dose effect relationships is examined for the linear and quadratic dose effect models. 10 references, 9 figures, 8 tables
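
The kind of misclassification described above can be illustrated with a small simulation, assuming (purely for illustration) a lognormal actual-dose distribution and a multiplicative lognormal error on the assigned dose:

```python
import numpy as np

rng = np.random.default_rng(1)
# actual doses drawn from a lognormal distribution (illustrative parameters)
true_dose = rng.lognormal(mean=0.0, sigma=0.5, size=10000)
# assigned dose = actual dose times a lognormal error factor (geometric SD ~1.3),
# so each actual dose yields a lognormal distribution of assigned doses
assigned = true_dose * rng.lognormal(mean=0.0, sigma=np.log(1.3), size=true_dose.size)

# classification intervals for the epidemiological study
bins = np.array([0.0, 0.5, 1.0, 2.0, np.inf])
true_cls = np.digitize(true_dose, bins)
assigned_cls = np.digitize(assigned, bins)
misclassified = np.mean(true_cls != assigned_cls)
print(f"fraction misclassified: {misclassified:.3f}")
```

Narrower classification intervals or a wider error function would raise the misclassified fraction, which is the mechanism by which dose ascertainment errors distort observed dose-effect relationships.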

  8. Automatic web services classification based on rough set theory

    陈立; 张英; 宋自林; 苗壮


    With the development of web services technology, the number of services on the internet grows day by day. In order to achieve automatic and accurate services classification, which can benefit service-related tasks, a rough set theory based method for services classification was proposed. First, the service descriptions were preprocessed and represented as vectors. Inspired by the discernibility-matrix-based attribute reduction in rough set theory, and taking into account the characteristics of the decision table for services classification, a method based on continuous discernibility matrices was proposed for dimensionality reduction. Finally, services were classified automatically. In the experiment, the proposed method achieved satisfactory classification results in all five testing categories. The experimental results show that the proposed method is accurate and could be used in practical web services classification.
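
The discernibility-matrix idea behind the attribute reduction step can be sketched on a tiny, made-up decision table (the attributes and service categories are hypothetical; the paper's continuous-discernibility-matrix variant is not reproduced here):

```python
from itertools import combinations

# tiny decision table: rows = services, columns = condition attributes, last = class
table = [
    # a0, a1, a2, decision
    (0, 1, 0, "finance"),
    (0, 1, 1, "finance"),
    (1, 0, 1, "travel"),
    (1, 1, 0, "travel"),
]
attrs = range(3)

# discernibility matrix: for each pair of rows with different decisions,
# the set of condition attributes on which the pair differs
disc = [
    {a for a in attrs if r1[a] != r2[a]}
    for r1, r2 in combinations(table, 2)
    if r1[-1] != r2[-1]
]

def is_reduct(subset):
    # a subset of attributes suffices if it hits every discernibility entry
    return all(subset & entry for entry in disc)

# smallest sufficient attribute subset (brute force; fine for tiny tables)
reduct = next(
    set(c) for k in range(1, 4) for c in combinations(attrs, k) if is_reduct(set(c))
)
print(reduct)  # → {0}: attribute a0 alone discerns the two classes
```

Dropping the attributes outside the reduct is exactly the dimensionality reduction the abstract describes, applied here to a discrete toy table.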

  9. Feature extraction and classification in automatic weld seam radioscopy

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system: an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, with improved detail recognizability and a sufficient contrast difference. (orig./MM)

  10. Texture Classification Based on Texton Features

    U Ravi Babu


    Texture analysis plays an important role in the interpretation, understanding and recognition of terrain, biomedical or microscopic images. To achieve high classification accuracy, the present paper proposes a new method based on textons. Every texture analysis method depends upon how well the selected texture features characterize the image; whenever a new texture feature is derived, it is tested for whether it classifies the textures precisely. Not only the texture features themselves but also the way in which they are applied is significant for precise and accurate texture classification and analysis. The present paper proposes a new texton-based method for efficient, rotationally invariant texture classification. The proposed Texton Features (TF) evaluate the relationship between the values of neighboring pixels. The proposed classification algorithm evaluates histogram-based techniques on TF for a precise classification. The experimental results on various stone textures indicate the efficacy of the proposed method when compared to other methods.
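
A drastically simplified texton-style feature, which likewise summarizes relationships between neighboring pixel values as a histogram, might look like the following (this is an illustrative sketch, not the paper's actual TF definition):

```python
import numpy as np

def texton_histogram(img):
    # toy texton-style feature: encode each pixel's relation to its right and
    # bottom neighbours as a 2-bit code, then summarize as a normalized histogram
    right = (img[:, 1:] > img[:, :-1]).astype(int)
    down = (img[1:, :] > img[:-1, :]).astype(int)
    codes = 2 * right[:-1, :] + down[:, :-1]          # code in {0, 1, 2, 3}
    hist = np.bincount(codes.ravel(), minlength=4).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
smooth = rng.normal(0, 0.01, (32, 32)).cumsum(axis=1)  # horizontally correlated texture
noisy = rng.normal(0, 1.0, (32, 32))                   # uncorrelated texture
print(texton_histogram(smooth), texton_histogram(noisy))
```

Classification then reduces to comparing such histograms (e.g. by nearest neighbour on a histogram distance), which is the "histogram-based techniques on TF" step the abstract refers to.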

  11. Error Correction in Classroom

    Dr. Grace Zhang


    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on empirical data of a large scale. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect correction, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students did not mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  12. Tanks for liquids: calibration and errors assessment

    After a brief reference to some of the problems raised by tanks calibration, two methods, theoretical and experimental are presented, so as to achieve it taking into account measurement errors. The method is applied to the transfer of liquid from one tank to another. Further, a practical example is developed. (author)

  13. Decomposing model systematic error

    Keenlyside, Noel; Shen, Mao-Lin


    Seasonal forecasts made with a single model are generally overconfident. The standard approach to improving forecast reliability is to account for structural uncertainties through a multi-model ensemble (i.e., an ensemble of opportunity). Here we analyse a multi-model set of seasonal forecasts available through the ENSEMBLES and DEMETER EU projects. We partition forecast uncertainties into initial-value and structural uncertainties, as a function of lead time and region. Statistical analysis is used to investigate sources of initial condition uncertainty, and which regions and variables lead to the largest forecast error. Similar analysis is then performed to identify common elements of model error. Results of this analysis will be used to discuss possibilities to reduce forecast uncertainty and improve models. In particular, a better understanding of error growth will be useful for the design of interactive multi-model ensembles.
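
The partition of forecast uncertainty into initial-value and structural components can be illustrated with a one-way, ANOVA-style variance decomposition over a synthetic multi-model ensemble (the numbers are invented; the ENSEMBLES/DEMETER data are not used here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_members = 5, 9
# synthetic forecasts: a per-model offset (structural error) plus
# per-member noise (initial-condition uncertainty)
model_bias = rng.normal(0.0, 1.0, (n_models, 1))
forecasts = model_bias + rng.normal(0.0, 0.5, (n_models, n_members))

grand_mean = forecasts.mean()
model_means = forecasts.mean(axis=1, keepdims=True)

# one-way variance partition: between-model vs. within-model spread
structural = np.mean((model_means - grand_mean) ** 2)
initial_value = np.mean((forecasts - model_means) ** 2)
total = np.mean((forecasts - grand_mean) ** 2)
print(structural, initial_value, total)
```

The cross terms vanish by construction, so the between-model and within-model components sum exactly to the total variance; repeating this per lead time and region gives the kind of partition the abstract describes.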

  14. Random errors revisited

    Jacobsen, Finn


    It is well known that the random errors of sound intensity estimates can be much larger than the theoretical minimum value determined by the BT-product, in particular under reverberant conditions and when there are several sources present. More than ten years ago it was shown that one can predict...... the random errors of estimates of the sound intensity in, say, one-third octave bands from the power and cross power spectra of the signals from an intensity probe determined with a dual channel FFT analyser. This is not very practical, though. In this paper it is demonstrated that one can predict the...... random errors from the power and cross power spectra determined with the same spectral resolution as the sound intensity itself....

  15. Synthesis of approximation errors

    Bareiss, E.H.; Michel, P.


    A method is developed for the synthesis of the error in approximations in the large of regular and irregular functions. The synthesis uses a small class of dimensionless elementary error functions which are weighted by the coefficients of the expansion of the regular part of the function. The question of whether a computer can determine the analytical nature of a solution by numerical methods is answered. It is shown that continuous least-squares approximations of irregular functions can be replaced by discrete least-squares approximations, and how to select the discrete points. The elementary error functions are used to show how the classical convergence criteria can be markedly improved. Eight numerical examples, 30 figures and 74 tables are included.
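
The replacement of a continuous least-squares approximation by a discrete one, as described above, can be sketched for an irregular (non-smooth) function (the function and fit degree are arbitrary choices for illustration):

```python
import numpy as np

# continuous least-squares approximation of f on [0, 1] replaced by a
# discrete least-squares fit on sampled points
f = lambda x: np.abs(x - 0.5)          # an "irregular" (non-smooth) function
x = np.linspace(0.0, 1.0, 201)         # the chosen discrete points
coeffs = np.polyfit(x, f(x), deg=4)    # degree-4 polynomial least-squares fit
approx = np.polyval(coeffs, x)
max_err = np.max(np.abs(approx - f(x)))
print(f"max approximation error: {max_err:.4f}")
```

The residual is dominated by the kink at x = 0.5, which is the kind of behaviour the paper's elementary error functions are designed to synthesize.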

  16. Errors in Neonatology

    Antonio Boldrini


    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. Learning Interpretable SVMs for Biological Sequence Classification

    Sonnenburg Sören; Rätsch Gunnar; Schäfer Christin


    Background: Support Vector Machines (SVMs) – using a variety of string kernels – have been successfully applied to biological sequence classification problems. While SVMs achieve high classification accuracy, they lack interpretability. In many applications, it does not suffice that an algorithm just detects a biological signal in the sequence; it should also provide means to interpret its solution in order to gain biological insight. Results: We propose novel and efficient algorith...
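
One common string kernel in this line of work is the spectrum (k-mer) kernel; a minimal sketch with a precomputed kernel in scikit-learn might look like this (the sequences and labels are invented):

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

def spectrum(seq, k=2):
    # k-mer count vector (the "spectrum") of a sequence
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kernel(seqs_a, seqs_b, k=2):
    # spectrum kernel: inner product of k-mer count vectors
    specs_a = [spectrum(s, k) for s in seqs_a]
    specs_b = [spectrum(s, k) for s in seqs_b]
    return np.array([[sum(sa[m] * sb[m] for m in sa) for sb in specs_b]
                     for sa in specs_a], dtype=float)

train = ["ATATATAT", "ATATATTA", "GCGCGCGC", "GCGCCGCG"]
labels = [0, 0, 1, 1]   # toy classes: AT-rich vs GC-rich
clf = SVC(kernel="precomputed").fit(kernel(train, train), labels)

test = ["ATATATAA", "GCGCGCCG"]
print(clf.predict(kernel(test, train)))  # → [0 1]
```

Because the kernel is an explicit inner product over k-mer counts, the learned weights can be mapped back onto k-mers, which is the kind of interpretability the paper pursues.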

  18. Achieving empowerment through information.

    Parmalee, J C; Scholomiti, T O; Whitman, P; Sees, M; Smith, D; Gardner, E; Bastian, C


    Despite the problems we encountered, which are not uncommon with the development and implementation of any data system, we are confident that our success in achieving our goals is due to the following: establishing a reliable information database connecting several related departments; interfacing with registration and billing systems to avoid duplication of data and chance for error; appointing a qualified Systems Manager devoted to the project; developing superusers to include intensive training in the operating system (UNIX), parameters of the information system, and the report writer. We achieved what we set out to accomplish: the development of a reliable database and reports on which to base a variety of hospital decisions; improved hospital utilization; reliable clinical data for reimbursement, quality management, and credentialing; enhanced communication and collaboration among departments; and an increased profile of the departments and staff. Data quality specialists, Utilization Management and Quality Management coordinators, and the Medical Staff Credentialing Supervisor and their managers are relied upon by physicians and administrators to provide timely information. The staff are recognized for their knowledge and expertise in their department-specific information. The most significant reward is the potential for innovation. Users are no longer restricted to narrow information corridors. UNIX programming encourages creativity without demanding a degree in computer science. The capability to reach and use diverse hospital database information is no longer a dream. PMID:10139109

  19. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Huang Kai


    Background: Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results: We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks; support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions; and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower-bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors.
Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average
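
A majority-voting ensemble over heterogeneous classifiers, evaluated with ten-fold cross validation as in the study, can be sketched with scikit-learn (synthetic features stand in for the Subcellular Location Feature sets):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier, BaggingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# synthetic stand-in for subcellular-location feature vectors (3 patterns)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

# hard majority voting over heterogeneous base classifiers
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf")),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("bag", BaggingClassifier(random_state=0)),
    ],
    voting="hard",
)
scores = cross_val_score(ensemble, X, y, cv=10)   # ten-fold cross validation
print(f"mean accuracy: {scores.mean():.3f}")
```

In the study each base classifier was first tuned individually before forming the voting ensemble; here the defaults are used to keep the sketch short.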

  20. Realizing Low-Energy Classification Systems by Implementing Matrix Multiplication Directly Within an ADC.

    Wang, Zhuo; Zhang, Jintao; Verma, Naveen


    In wearable and implantable medical-sensor applications, low-energy classification systems are important for deriving high-quality inferences locally within the device. Given that sensor instrumentation is typically followed by A-D conversion, this paper presents a system implementation wherein the majority of the computations required for classification are implemented within the ADC. To achieve this, first an algorithmic formulation is presented that combines linear feature extraction and classification into a single matrix transformation. Second, a matrix-multiplying ADC (MMADC) is presented that enables multiplication between an analog input sample and a digital multiplier, with negligible additional energy beyond that required for A-D conversion. Two systems mapped to the MMADC are demonstrated: (1) an ECG-based cardiac arrhythmia detector; and (2) an image-pixel-based facial gender detector. The RMS error over all multiplications performed, normalized to the RMS of the ideal multiplication results, is 0.018. Further, compared to idealized versions of conventional systems, the energy savings obtained are estimated to be 13× and 29×, respectively, while achieving a similar level of performance. PMID:26849205
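
The key algorithmic step, folding linear feature extraction and linear classification into a single matrix transformation, can be verified numerically (the dimensions and weights below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)            # one frame of sensor samples
W_feat = rng.normal(size=(8, 64))  # linear feature extraction (e.g. a projection)
w_clf = rng.normal(size=8)         # linear classifier weights on the features

# two-stage pipeline: extract features, then compute the classification score
score_two_stage = w_clf @ (W_feat @ x)

# folded into a single matrix transformation, as computed inside the MMADC
w_combined = w_clf @ W_feat        # precomputed offline, once
score_single = w_combined @ x
print(np.isclose(score_two_stage, score_single))
```

By associativity the two scores are identical, so the per-sample multiplications can be absorbed into the A-D conversion step with no loss in the (linear) classifier's output.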

  1. Introduction to precision machine design and error assessment

    Mekid, Samir


    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and Control: The book begins with a brief overview of precision engineering and applications before introdu

  2. Note on Bessaga-Klee classification

    Cúth, Marek; Kalenda, Ondřej F. K.


    We collect several variants of the proof of the third case of the Bessaga-Klee relative classification of closed convex bodies in topological vector spaces. We were motivated by the fact that we have not found anywhere in the literature a complete correct proof. In particular, we point out an error in the proof given in the book of C. Bessaga and A. Pełczyński (1975). We further provide a simplified version of T. Dobrowolski's proof of the smooth classification of smooth convex bodies in ...

  3. Classification systems for natural resource management

    Kleckner, Richard L.


    Resource managers employ various types of resource classification systems in their management activities, such as inventory, mapping, and data analysis. Classification is the ordering or arranging of objects into groups or sets on the basis of their relationships, and as such it provides resource managers with a structure for organizing their needed information. In addition to conforming to certain logical principles, resource classifications should be flexible, widely applicable to a variety of environmental conditions, and usable with minimal training. The process of classification may be approached from the bottom up (aggregation) or the top down (subdivision), or a combination of both, depending on the purpose of the classification. Most resource classification systems in use today focus on a single resource and are used for a single, limited purpose. However, resource managers now must employ the concept of multiple use in their management activities. What they need is an integrated, ecologically based approach to resource classification which would fulfill multiple-use mandates. In an effort to achieve resource-data compatibility and data sharing among Federal agencies, an interagency agreement has been signed by five Federal agencies to coordinate and cooperate in the area of resource classification and inventory.

  4. Facts about Refractive Errors

    ... the cornea, or aging of the lens can cause refractive errors. What is refraction? Refraction is the bending of ... for objects at any distance, near or far. Astigmatism is a condition in ... This can cause images to appear blurry and stretched out. Presbyopia ...

  5. Team errors: definition and taxonomy

    In error analysis and error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  6. Hand eczema classification

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M;


    the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... classification system for hand eczema is proposed. Conclusions It is suggested that this classification be used in clinical work and in clinical trials....

  7. Classification techniques based on AI application to defect classification in cast aluminum

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael


    This paper describes the Artificial Intelligence techniques applied to the interpretation of images of cast aluminum surfaces presenting different defects. The whole process includes on-line defect detection, feature extraction and defect classification; these topics are discussed in depth throughout the paper. The data preprocessing process, as well as segmentation and feature extraction, are described, and the algorithms employed along with the descriptors used are shown. A syntactic filter has been developed to model the information and to generate the input vector to the classification system. Classification of defects is achieved by means of rule-based systems, fuzzy models and neural nets, with different classification subsystems working together to solve the pattern recognition problem (hybrid systems). First, syntactic methods are used to obtain the filter that reduces the dimension of the input vector to the classification process. Rule-based classification is achieved by associating a grammar with each defect type; the knowledge base is formed by the information derived from the syntactic filter along with the inferred rules. The fuzzy classification subsystem uses production rules with fuzzy antecedents whose consequents are membership rates for each defect type. Different architectures of neural nets have been implemented, with different results, as shown throughout the paper. At the higher classification level, the information given by the heterogeneous systems, as well as the history of the process, is supplied to an Expert System in order to drive the casting process.
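
The fuzzy subsystem's rule structure, production rules with fuzzy antecedents whose consequents score each defect type, can be sketched as follows (the defect types, descriptors and membership functions are invented for illustration):

```python
def triangular(x, a, b, c):
    # triangular fuzzy membership function with support [a, c] and peak at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical rules mapping defect descriptors to defect types;
# each rule's consequent is weighted by the fuzzy membership of its antecedent
def classify(area, elongation):
    scores = {
        "pore":  triangular(area, 0, 5, 15) * triangular(elongation, 0.8, 1.0, 1.2),
        "crack": triangular(area, 5, 30, 80) * triangular(elongation, 2.0, 5.0, 9.0),
    }
    return max(scores, key=scores.get)

print(classify(area=4, elongation=1.0))   # small, round -> "pore"
print(classify(area=25, elongation=5.0))  # large, elongated -> "crack"
```

In the hybrid system described above, such fuzzy scores would be combined with the rule-based and neural-net outputs at the higher classification level.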

  8. Classification of articulators.

    Rihani, A


    A simple classification in familiar terms with definite, clear characteristics can be adopted. This classification system is based on the number of records used and the adjustments necessary for the articulator to accept these records. The classification divides the articulators into nonadjustable, semiadjustable, and fully adjustable articulators (Table I). PMID:6928204

  9. Automated classification of patients with chronic lymphocytic leukemia and immunocytoma from flow cytometric three-color immunophenotypes.

    Valet, G K; Höffkes, H G


    The goal of this study was the discrimination between chronic lymphocytic leukemia (B-CLL), the clinically more aggressive lymphoplasmocytoid immunocytoma (LP-IC), and other low-grade non-Hodgkin's lymphomas (NHL) of the B-cell type by automated analysis of the flow cytometric immunophenotypes CD45/14/20, CD4/8/3, kappa/CD19/5, lambda/CD19/5 and CD10/23/19 from peripheral blood and bone marrow aspirate leukocytes using the multiparameter classification program CLASSIF1. The immunophenotype list mode files were exhaustively evaluated by combined lymphocyte, monocyte, and granulocyte (LMG) analysis. The results were introduced into databases and automatically classified in a standardized way. The resulting triple matrix classifiers are laboratory and instrument independent, error tolerant, and robust in the classification of unknown test samples. Practically 100% correct individual patient classification was achievable, and most manually unclassifiable patients were unambiguously classified. It is of interest that the single lambda/CD19/5 antibody triplet provided practically the same information as the full set of five antibody triplets. This demonstrates that standardized classification can be used to optimize immunophenotype panels. On-line classification of test samples is accessible on the Internet: Immunophenotype panels are usually devised for the detection of the frequency of abnormal cell populations. As shown by computer classification, most of the highly discriminant information is, however, not contained in the percentage frequency values of cell populations, but rather in the total antibody binding, antibody binding ratios, and relative antibody surface density parameters of various lymphocyte, monocyte, and granulocyte cell populations. PMID:9440819

  10. Improving Accuracy of Image Classification Using GIS

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal which reaches the sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrition content, inorganic content, water content etc.), plant (height, architecture, leaf area index, mean canopy inclination etc.), canopy closure status and atmospheric effects, and from this we want to find, say, characteristics of vegetation. If the sensor on board the satellite makes measurements in n bands (n of n×1 dimension) and the number of classes in an image is c (f of c×1 dimension), then under linear mixture modeling the pixel classification problem could be written as n = m·f + ε, where m is the transformation matrix of (n×c) dimension and ε represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, getting back individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this puts a limit on the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint mostly practiced in solving such a situation, which suffers from the handicaps of an assumed Gaussian distribution and the random nature of pixels (in fact there is high autocorrelation among the pixels of a specific class, and further high autocorrelation among the pixels in sub-classes where the homogeneity would be high among pixels). Due to this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for different classes could be assigned to the ML classifier in more realistic terms and (ii) the purity of training sets for different thematic classes could be better ascertained.
To what extent this could improve the accuracy of classification in ML classifier
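
Under the linear mixture model n = m·f + ε described above, the unconstrained inversion for f is a least-squares problem; a minimal sketch (signatures, fractions and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_classes = 6, 3
m = rng.uniform(0.0, 1.0, (n_bands, n_classes))    # class signatures (columns of m)
f_true = np.array([0.6, 0.3, 0.1])                 # true class fractions in one pixel
n_obs = m @ f_true + rng.normal(0, 0.01, n_bands)  # observed pixel: n = m f + noise

# unconstrained least-squares inversion; this is the ill-posed step — practical
# unmixing adds constraints such as f >= 0 and sum(f) = 1
f_hat, *_ = np.linalg.lstsq(m, n_obs, rcond=None)
print(f_hat)
```

With noise present the recovered fractions only approximate f_true, and the amplification of noise by the inversion is one concrete face of the accuracy limit the abstract describes.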

  11. Control by model error estimation

    Likins, P. W.; Skelton, R. E.


    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  12. Normalization Benefits Microarray-Based Classification

    Chen Yidong


    When using cDNA microarrays, normalization to correct labeling bias is a common preliminary step before further data analysis is applied, its objective being to reduce the variation between arrays. To date, assessment of the effectiveness of normalization has mainly been confined to the ability to detect differentially expressed genes. Since a major use of microarrays is the expression-based phenotype classification, it is important to evaluate microarray normalization procedures relative to classification. Using a model-based approach, we model the systemic-error process to generate synthetic gene-expression values with known ground truth. These synthetic expression values are subjected to typical normalization methods and passed through a set of classification rules, the objective being to carry out a systematic study of the effect of normalization on classification. Three normalization methods are considered: offset, linear regression, and Lowess regression. Seven classification rules are considered: 3-nearest neighbor, linear support vector machine, linear discriminant analysis, regular histogram, Gaussian kernel, perceptron, and multiple perceptron with majority voting. The results of the first three are presented in the paper, with the full results being given on a complementary website. The conclusion from the different experiment models considered in the study is that normalization can have a significant benefit for classification under difficult experimental conditions, with linear and Lowess regression slightly outperforming the offset method.

  13. Normalization Benefits Microarray-Based Classification

    Edward R. Dougherty


    When using cDNA microarrays, normalization to correct labeling bias is a common preliminary step before further data analysis is applied, its objective being to reduce the variation between arrays. To date, assessment of the effectiveness of normalization has mainly been confined to the ability to detect differentially expressed genes. Since a major use of microarrays is the expression-based phenotype classification, it is important to evaluate microarray normalization procedures relative to classification. Using a model-based approach, we model the systemic-error process to generate synthetic gene-expression values with known ground truth. These synthetic expression values are subjected to typical normalization methods and passed through a set of classification rules, the objective being to carry out a systematic study of the effect of normalization on classification. Three normalization methods are considered: offset, linear regression, and Lowess regression. Seven classification rules are considered: 3-nearest neighbor, linear support vector machine, linear discriminant analysis, regular histogram, Gaussian kernel, perceptron, and multiple perceptron with majority voting. The results of the first three are presented in the paper, with the full results being given on a complementary website. The conclusion from the different experiment models considered in the study is that normalization can have a significant benefit for classification under difficult experimental conditions, with linear and Lowess regression slightly outperforming the offset method.

  14. Classification Accuracy and Consistency under Item Response Theory Models Using the Package classify

    Chris Wheadon


    The R package classify presents a number of useful functions which can be used to estimate the classification accuracy and consistency of assessments. Classification accuracy refers to the probability that an examinee's achieved grade classification on an assessment reflects their true grade. Classification consistency refers to the probability that an examinee will be classified into the same grade classification under repeated administrations of an assessment. Understanding the classificatio...
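The two quantities can be estimated by simulation under a simple true-score model. The sketch below uses hypothetical grade cut-points and a hypothetical measurement-error SD, not the IRT machinery of the classify package itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true-score model: true scores, grade cut-points, and a
# measurement-error SD (illustrative values, not from the package).
n = 20000
true = rng.normal(50, 10, n)
cuts = np.array([40, 50, 60])

def grade(score):
    return np.searchsorted(cuts, score)  # grade index 0..3

obs1 = true + rng.normal(0, 4, n)  # two independent administrations
obs2 = true + rng.normal(0, 4, n)

accuracy = np.mean(grade(obs1) == grade(true))     # observed vs. true grade
consistency = np.mean(grade(obs1) == grade(obs2))  # repeat administrations
print(round(accuracy, 3), round(consistency, 3))
```

Consistency is estimated from two noisy administrations while accuracy compares one noisy administration with the truth, so consistency comes out somewhat lower than accuracy.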

  15. Error analysis and data reduction for interferometric surface measurements

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it is often the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram determines the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
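Estimating random errors from the variation in repeated data, as described above, can be sketched with simulated surface maps (the surface size, noise level, and number of repeats below are all hypothetical).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical repeated interferometric maps: 10 measurements of the same
# 64x64 surface (in waves), each with independent random noise.
surface = rng.normal(0, 0.01, (64, 64))
maps = surface + rng.normal(0, 0.002, (10, 64, 64))

# Random error per pixel from the variation across repeats; averaging
# N maps reduces the random error of the mean map by sqrt(N).
pixel_std = maps.std(axis=0, ddof=1)
mean_map_err = pixel_std.mean() / np.sqrt(maps.shape[0])
print(round(pixel_std.mean(), 4), round(mean_map_err, 5))
```

The per-pixel standard deviation recovers the injected noise level, and the averaged map's random error is smaller by the square root of the number of repeats.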

  16. Photon level chemical classification using digital compressive detection

    Highlights: ► A new digital compressive detection strategy is developed. ► Chemical classification demonstrated using as few as ∼10 photons. ► Binary filters are optimal when taking few measurements. - Abstract: A key bottleneck to high-speed chemical analysis, including hyperspectral imaging and monitoring of dynamic chemical processes, is the time required to collect and analyze hyperspectral data. Here we describe, both theoretically and experimentally, a means of greatly speeding up the collection of such data using a new digital compressive detection strategy. Our results demonstrate that detecting as few as ∼10 Raman scattered photons (in as little time as ∼30 μs) can be sufficient to positively distinguish chemical species. This is achieved by measuring the Raman scattered light intensity transmitted through programmable binary optical filters designed to minimize the error in the chemical classification (or concentration) variables of interest. The theoretical results are implemented and validated using a digital compressive detection instrument that incorporates a 785 nm diode excitation laser, digital micromirror spatial light modulator, and photon counting photodiode detector. Samples consisting of pairs of liquids with different degrees of spectral overlap (including benzene/acetone and n-heptane/n-octane) are used to illustrate how the accuracy of the present digital compressive detection method depends on the correlation coefficients of the corresponding spectra. Comparisons of measured and predicted chemical classification score plots, as well as linear and non-linear discriminant analyses, demonstrate that this digital compressive detection strategy is Poisson photon noise limited and outperforms total least squares-based compressive detection with analog filters.
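The binary-filter idea can be sketched with toy data: made-up Gaussian "spectra" stand in for real Raman measurements, and each detected photon lands in a spectral channel at random (the Poisson-limited regime the paper describes).

```python
import numpy as np

rng = np.random.default_rng(3)

# Two hypothetical normalized spectra over 100 channels.
x = np.arange(100)
spec_a = np.exp(-(x - 30) ** 2 / 50.0); spec_a /= spec_a.sum()
spec_b = np.exp(-(x - 60) ** 2 / 50.0); spec_b /= spec_b.sum()

# Binary filter: pass only the channels where species A out-emits B.
filt = (spec_a > spec_b).astype(float)

def classify(spectrum, n_photons=10):
    # Each photon falls in a channel drawn from the spectrum; count how
    # many pass the binary filter.
    channels = rng.choice(len(x), size=n_photons, p=spectrum)
    passed = filt[channels].sum()
    return 'A' if passed / n_photons > 0.5 else 'B'

trials = 500
acc = np.mean([classify(spec_a) == 'A' for _ in range(trials)] +
              [classify(spec_b) == 'B' for _ in range(trials)])
print(acc)  # high accuracy from only ~10 photons per measurement
```

With well-separated spectra, ~10 photons are enough to classify almost perfectly; as the abstract notes, accuracy degrades as the spectral correlation between the two species increases.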

  17. Stellar classification from single-band imaging using machine learning

    Kuntzer, T.; Tewes, M.; Courbin, F.


    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is entirely possible.
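The PCA-plus-classifier pipeline can be sketched on stand-in data. Below, synthetic 1-D profiles whose shape drifts with class play the role of PSF images, and a nearest-centroid rule in PCA space replaces the paper's ANN; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in "star image": a profile whose peak shifts with spectral class.
def star(cls, n_px=50):
    x = np.linspace(0, 1, n_px)
    return np.exp(-((x - 0.3 - 0.02 * cls) ** 2) / 0.01) + rng.normal(0, 0.05, n_px)

classes = np.repeat(np.arange(13), 40)      # 13 subclasses, 40 stars each
X = np.array([star(c) for c in classes])

# PCA by SVD of the mean-centred data; keep 5 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Nearest-centroid classifier in PCA space (the paper uses an ANN);
# for brevity the centroids are fitted and evaluated on the same data.
centroids = np.array([Z[classes == c].mean(axis=0) for c in range(13)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

err = np.abs(pred - classes)
print(np.mean(err <= 1))  # fraction within one spectral subclass
```

Reporting the fraction within one subclass mirrors the paper's "typical errors of half a spectral class" metric.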

  18. Forward error correction in optical ethernet communications

    Oliveras Boada, Jordi


    One way of increasing the amount of information sent through an optical fibre is ud-WDM (ultra dense Wavelength Division Multiplexing). The problem is that the sensitivity of the receiver requires a certain SNR (Signal-to-Noise Ratio) that is only achieved over short distances, so to extend the reach a coding scheme called FEC (Forward Error Correction) can be used. This should reduce the BER (Bit Error Rate) at the receiver, allowing the signal to be transmitted over longer distances. Another pro...
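The abstract does not specify which FEC code is used; a Hamming(7,4) code is a minimal concrete example of how forward error correction reduces BER by correcting channel errors at the receiver.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hamming(7,4): 4 data bits + 3 parity bits, corrects any single-bit error.
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])   # generator matrix [I | A^T]
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])   # parity-check matrix [A | I]

def encode(d):
    return d @ G % 2

def decode(r):
    s = H @ r % 2                 # syndrome
    if s.any():                   # nonzero: flip the matching position
        pos = np.where((H.T == s).all(axis=1))[0][0]
        r = r.copy()
        r[pos] ^= 1
    return r[:4]                  # systematic code: data bits come first

d = rng.integers(0, 2, 4)
c = encode(d)
c_err = c.copy()
c_err[rng.integers(0, 7)] ^= 1    # flip one random bit in the channel
print((decode(c_err) == d).all()) # True: the error is corrected
```

Real optical links use far stronger codes (e.g. Reed–Solomon or LDPC), but the mechanism is the same: redundant parity bits let the receiver correct errors instead of merely detecting them.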

  19. Manson's triple error.

    Delaporte, F.


    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  20. Minimum Error Tree Decomposition

    Liu, L.; Ma, Y.; Wilkins, D.; Bian, Z.; Ying, X.


    This paper describes a generalization of previous methods for constructing tree-structured belief networks with hidden variables. The major new feature of the described method is the ability to produce a tree decomposition even when there are errors in the correlation data among the input variables. This is an important extension of existing methods since the correlation coefficients usually cannot be measured with precision. The technique involves using a greedy search algorithm that locall...

  1. Semiparametric Bernstein–von Mises for the error standard deviation

    Jonge, de, B.; Zanten, van, M.


    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure, using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  2. Semiparametric Bernstein-von Mises for the error standard deviation

    Jonge, de, B.; Zanten, van, M.


    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure, using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  3. Error Analysis and Its Implication



    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  4. Progression in nuclear classification

    This book summarizes the author's achievements over the last 30 years in classifying nuclei by a new method, through which a new fundamental law of nuclear shell structure in the material world is claimed to have been found. It is explained with the hypothesis that a nucleus is made up of two kinds of nucleon clusters, deuterons and tritons. The concrete content is as follows: a new method is advanced which analyzes data on nuclei of natural abundance using the relationship between the numbers of protons and neutrons. The relationships for each nucleus increase to four sets: S+H=Z, H+Z=N, Z+N=A, and S-H=K. The similarity between proton and neutron is expanded to a similarity among p, n, deuteron, triton, and He-5 clusters. According to the distribution law of nuclei of the same kind, it is obtained that the upper limits of the stable region should both be '44s'. The new fundamental law of the nuclear system is 1, 2, 4, 8, 16, 8, 4, 2, 1. To explain the new law, the hypothesis that the nucleus is made up of deuterons and tritons is developed, and a nuclear field of whole numbers is built up. This is related to the unity of matter in motion: the most fundamental form of the atomic nuclear system is said to be similar to the chromosome numbers of mankind. The author argues that these achievements will shake the foundations of traditional nuclear science, supply new tasks in developing nuclear theory, shake the ground on which the magic numbers are the basis of nuclear science, and open up a new field of foundational research. The book is intended to supply new knowledge for researchers, teachers and students in universities and polytechnic schools, and for scientific workers in research and technical development; it can be stocked in the libraries and laboratories of societies and universities. In the present day of building up the nation through science and education, the book is readable for workers in scientific technology and amateurs of natural science.

  5. Characterization of the error budget of Alba-NOM

    The Alba-NOM instrument is a high accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of different sources of errors that limit the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example application that has benefited from the achieved accuracy.
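When the contributing error sources of an instrument are independent and zero-mean, the usual way to combine them into a total error budget is in quadrature (root sum of squares). The numbers below are purely illustrative, not Alba-NOM's actual budget.

```python
import math

# Hypothetical slope-error budget (nrad rms), assuming independent,
# zero-mean contributions.
sources = {
    'scanning system / pentaprism': 40,
    'environment': 25,
    'autocollimator optics': 30,
}
total = math.sqrt(sum(v ** 2 for v in sources.values()))
print(round(total, 1))  # -> 55.9
```

Note that the RSS total is dominated by the largest contribution, which is why characterizing and reducing the dominant source pays off most.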

  6. On the Foundations of Adversarial Single-Class Classification

    El-Yaniv, Ran


    Motivated by authentication, intrusion and spam detection applications we consider single-class classification (SCC) as a two-person game between the learner and an adversary. In this game the learner has a sample from a target distribution and the goal is to construct a classifier capable of distinguishing observations from the target distribution from observations emitted from an unknown other distribution. The ideal SCC classifier must guarantee a given tolerance for the false-positive error (false alarm rate) while minimizing the false negative error (intruder pass rate). Viewing SCC as a two-person zero-sum game we identify both deterministic and randomized optimal classification strategies for different game variants. We demonstrate that randomized classification can provide a significant advantage. In the deterministic setting we show how to reduce SCC to two-class classification where in the two-class problem the other class is a synthetically generated distribution. We provide an efficient and practi...
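Guaranteeing a false-positive tolerance from target-distribution data alone can be sketched with a quantile threshold on a one-class score. The score, distributions, and tolerance below are hypothetical stand-ins, not the paper's game-theoretic construction.

```python
import numpy as np

rng = np.random.default_rng(6)

# The learner only sees a sample from the target distribution, plus a
# desired false-alarm tolerance delta.
target = rng.normal(0, 1, 5000)
delta = 0.05

# One-class score: negative distance from the sample mean. Thresholding
# at the delta-quantile rejects ~delta of target points.
scores = -np.abs(target - target.mean())
thresh = np.quantile(scores, delta)

def accept(x):
    return -np.abs(x - target.mean()) >= thresh

# False-alarm rate on fresh target data; intruder pass rate on an unknown
# other distribution (hypothetical: a shifted Gaussian).
fresh = rng.normal(0, 1, 5000)
intruders = rng.normal(4, 1, 5000)
print(round(1 - accept(fresh).mean(), 3), round(accept(intruders).mean(), 3))
```

The false-alarm rate lands near the chosen tolerance, while the intruder pass rate depends entirely on how much the unknown distribution overlaps the acceptance region.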

  7. Sparse Partial Least Squares Classification for High Dimensional Data

    Chung, Dongjun; Keles, Sunduz


    Partial least squares (PLS) is a well known dimension reduction method which has been recently adapted for high dimensional classification problems in genome biology. We develop sparse versions of the recently proposed two PLS-based classification methods using sparse partial least squares (SPLS). These sparse versions aim to achieve variable selection and dimension reduction simultaneously. We consider both binary and multicategory classification. We provide analytical and simulation-based i...

  8. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu; Wang Xiaosheng


    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  9. Classification of ASKAP VAST Radio Light Curves

    Rebbapragada, Umaa; Lo, Kitty; Wagstaff, Kiri L.; Reed, Colorado; Murphy, Tara; Thompson, David R.


    The VAST survey is a wide-field survey that observes with unprecedented instrument sensitivity (0.5 mJy or lower) and repeat cadence (a goal of 5 seconds) that will enable novel scientific discoveries related to known and unknown classes of radio transients and variables. Given the unprecedented observing characteristics of VAST, it is important to estimate source classification performance, and determine best practices prior to the launch of ASKAP's BETA in 2012. The goal of this study is to identify light curve characterization and classification algorithms that are best suited for archival VAST light curve classification. We perform our experiments on light curve simulations of eight source types and achieve best case performance of approximately 90% accuracy. We note that classification performance is most influenced by light curve characterization rather than classifier algorithm.

  10. Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    Gnanasivam, P


    A novel method of gender classification from fingerprints is proposed based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The classification is achieved by extracting the energy computed from all the sub-bands of the DWT combined with the spatial features of non-zero singular values obtained from the SVD of fingerprint images. A K-nearest neighbor (KNN) classifier is used. The method is tested on an internal database of 3570 fingerprints, of which 1980 were male fingerprints and 1590 were female. Finger-wise gender classification achieves 94.32% for the left-hand little fingers of female subjects and 95.46% for the left-hand index fingers of male subjects. Gender classification for any finger is attained at 91.67% for male subjects and 84.69% for female subjects, respectively. An overall classification rate of 88.28% has been achieved.
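The DWT-energy part of the pipeline can be sketched with a hand-rolled one-level Haar transform and a 1-NN rule. The synthetic "fingerprints" below are made-up textures with different ridge frequencies; the SVD features of the actual method are omitted for brevity.

```python
import numpy as np

def haar2d(img):
    # One level of the 2-D Haar transform: LL, LH, HL, HH sub-bands.
    a = (img[0::2] + img[1::2]) / 2.0   # rows: low-pass
    d = (img[0::2] - img[1::2]) / 2.0   # rows: high-pass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def energy_features(img):
    return np.array([np.mean(b ** 2) for b in haar2d(img)])

rng = np.random.default_rng(7)

# Stand-in "fingerprints": two classes with different ridge frequencies,
# so their sub-band energies differ.
def fake_print(freq):
    x = np.arange(64)
    return np.sin(2 * np.pi * freq * x[:, None] / 64) + rng.normal(0, 0.3, (64, 64))

X = np.array([energy_features(fake_print(f)) for f in [4] * 30 + [12] * 30])
y = np.array([0] * 30 + [1] * 30)

# 1-NN (the paper uses KNN) with a leave-one-out check.
def knn_predict(i):
    d = ((X - X[i]) ** 2).sum(axis=1)
    d[i] = np.inf
    return y[np.argmin(d)]

acc = np.mean([knn_predict(i) == y[i] for i in range(len(y))])
print(acc)
```

Because the two textures concentrate energy in different sub-bands, the four energy features separate the classes almost perfectly even with noise.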

  11. Medical error and systems of signaling: conceptual and linguistic definition.

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco


    In recent years the issue of patient safety has been the subject of detailed investigations, particularly as a result of increasing attention from patients and the public on the problem of medical error. The purpose of this work is firstly to define the classification of medical errors, which are distinguished from two perspectives: personal errors, and errors caused by the system. Furthermore, we briefly review some of the main methods used by healthcare organizations to identify and analyze errors. This discussion establishes that, in order to mount a practical, coordinated and shared action to counteract error, it is necessary to promote an analysis that considers all elements (human, technological and organizational) that contribute to the occurrence of a critical event. Therefore, it is essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to error. In conclusion, we underline that in healthcare it is essential to affirm a systems view that considers error as a learning source and as a result of the interaction between the individual and the organization. In this way, one should encourage a no-blame discussion of evident errors and of those which are not immediately identifiable, in order to create the conditions for recognizing and correcting error even before it produces negative consequences. PMID:25034521

  12. Learning from errors (Aprender de los errores)

    Pacheco, José Miguel


    It is interesting to note that errors, always regrettable, frequently teach us more than repeated successes. We discuss an example, with the aim of offering a reflection to mathematics teachers in general. Analysis of students' exercises revealed the lack of a critical view of teaching at certain key points, as will be set out in this article.

  13. Recursive heuristic classification

    Wilkins, David C.


    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  14. Security classification of information

    Quist, A.S.


    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  15. Classification and data acquisition with incomplete data

    Williams, David P.

    In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform ( e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data. 
We also address the
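The simplest baseline for classification with incomplete data is to impute the missing features and then classify. The sketch below uses a hypothetical two-sensor data set with 30% of one feature missing and a nearest-centroid rule; the dissertation's own methods are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical two-sensor data: each point has two features, but sensor 2
# only covered ~70% of the points, so 30% of rows miss feature 2 (NaN).
X = rng.normal(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[rng.random(200) < 0.3, 1] = np.nan

# Baseline incomplete-data strategy: impute missing values with the
# observed feature mean, then apply a nearest-centroid rule.
col_mean = np.nanmean(X, axis=0)
X_imp = np.where(np.isnan(X), col_mean, X)

c0 = X_imp[y == 0].mean(axis=0)
c1 = X_imp[y == 1].mean(axis=0)
pred = (((X_imp - c1) ** 2).sum(1) < ((X_imp - c0) ** 2).sum(1)).astype(int)
acc = np.mean(pred == y)
print(acc)
```

Mean imputation keeps accuracy well above chance but throws away the information that a feature is missing, which is exactly the gap that dedicated incomplete-data classifiers aim to close.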

  16. Graded Achievement, Tested Achievement, and Validity

    Brookhart, Susan M.


    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  17. Cost-sensitive classification for rare events: an application to the credit rating model validation for SMEs

    Raffaella Calabrese


    Receiver Operating Characteristic (ROC) curve is used to assess the discriminatory power of credit rating models. To identify the optimal threshold on the ROC curve, the iso-performance lines are used. The ROC curve and the iso-performance line assume equal classification error costs and that the two classification groups are relatively balanced. These assumptions are unrealistic in the application to credit risk. In order to remove these hypotheses, the curve of Classification Error Costs is...
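Dropping the equal-cost, balanced-class assumptions amounts to choosing the threshold that minimizes expected misclassification cost. This sketch uses made-up scores, class proportions, and costs, not the paper's Classification Error Costs curve.

```python
import numpy as np

rng = np.random.default_rng(9)

# Imbalanced synthetic credit data: few defaults (positives), and missing
# a default is assumed 10x costlier than a false alarm.
n_bad, n_good = 50, 950
scores = np.concatenate([rng.normal(1.0, 1, n_bad), rng.normal(-1.0, 1, n_good)])
labels = np.concatenate([np.ones(n_bad), np.zeros(n_good)])
c_fn, c_fp = 10.0, 1.0

def expected_cost(t):
    fn = ((scores < t) & (labels == 1)).sum() * c_fn   # missed defaults
    fp = ((scores >= t) & (labels == 0)).sum() * c_fp  # false alarms
    return fn + fp

# Scan candidate thresholds and keep the cost-minimizing one, instead of
# the ROC operating point implied by equal costs and balanced classes.
thresholds = np.unique(scores)
best = min(thresholds, key=expected_cost)
print(round(best, 2), expected_cost(best))
```

With asymmetric costs and a rare positive class, the cost-optimal threshold generally differs from the one an unweighted iso-performance line would pick.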

  18. Emotion Classification from Noisy Speech - A Deep Learning Approach

    Rana, Rajib


    This paper investigates the performance of Deep Learning for speech emotion classification when the speech is compounded with noise. It reports on the classification accuracy and concludes with the future directions for achieving greater robustness for emotion recognition from noisy speech.

  19. Classification of titanium dioxide

    In this work the X-ray diffraction (XRD), scanning electron microscopy (SEM) and X-ray dispersive energy spectroscopy techniques are used with the purpose of achieving a complete identification of the phases and phase mixtures of a crystalline material such as titanium dioxide. The problem to be solved consists of being able to distinguish a sample of titanium dioxide from a titanium dioxide pigment. A standard sample of titanium dioxide with a NIST certificate is used, which indicates a purity of 99.74% for the TiO2. The following procedure is recommended: a) analyze both the titanium dioxide pigment sample and the titanium dioxide standard by X-ray diffraction, where no differences are expected to be found; b) perform a chemical analysis by X-ray dispersive energy spectroscopy in a microscope, taking advantage of the high vacuum since it is oxygen which is analysed; if the aluminium oxide content is found to be greater than 1%, it is established that the sample is a titanium dioxide pigment, but if it is lower, then it is only titanium dioxide. This type of analysis is an application of nuclear techniques useful for the tariff classification of merchandise considered difficult to recognize. (Author)

  20. Carotid and Jugular Classification in ARTSENS.

    Sahani, Ashish Kumar; Shah, Malay Ilesh; Joseph, Jayaraj; Sivaprakasam, Mohanasankar


    Over the past few years our group has been working on the development of a low-cost device, ARTSENS, for measurement of the local arterial stiffness (AS) of the common carotid artery (CCA). It uses a single-element ultrasound transducer to obtain A-mode frames from the CCA. It is designed to be fully automatic in its operation, such that a general medical practitioner can use the device without any prior knowledge of the ultrasound modality. Placement of the probe over the CCA and identification of the echo positions corresponding to its two walls are critical steps in the measurement of AS. We had reported an algorithm to locate the CCA walls based on their characteristic motion. Unfortunately, in the supine position, the internal jugular vein (IJV) expands in the carotid triangle and pulsates in a manner that confounds the existing algorithm and leads to wrong measurements of the AS. The jugular venous pulse (JVP), in its own right, is a very important physiological signal for diagnosis of morbidities of the right side of the heart, and there is a lack of noninvasive methods for its accurate estimation. We integrated an ECG device into the existing hardware of ARTSENS and developed a method, based on the physiology of the vessels, which now enables us to segregate the CCA pulse (CCP) and the JVP. The false identification rate is less than 4%. To retain the capability of ARTSENS to operate without ECG, we designed another method where the classification can be achieved without an ECG, albeit with slightly higher errors. These improvements enable ARTSENS to perform automatic measurement of AS even in the supine position and make it a unique and handy tool for JVP analysis. PMID:25700474

  1. Predicting sample size required for classification performance

    Figueroa, Rosa L.


    Abstract. Background. Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods. We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results. A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions. This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
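Fitting an inverse power law learning curve by weighted nonlinear least squares can be sketched as follows; the observed accuracies, weighting scheme, and parameter values below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(10)

# Inverse power law learning curve: error(n) = a * n**(-b) + c,
# so accuracy(n) = 1 - error(n).
def acc_model(n, a, b, c):
    return 1 - (a * n ** (-b) + c)

# Hypothetical observed accuracies at small annotated sample sizes
# (synthetic ground truth a=2, b=0.5, c=0.05, plus a little noise).
n_obs = np.array([50, 100, 150, 200, 300, 400], dtype=float)
acc_obs = acc_model(n_obs, 2.0, 0.5, 0.05) + rng.normal(0, 0.005, n_obs.size)

# Weighted nonlinear least squares: smaller sigma means more weight, so
# later (larger-sample) points are trusted more.
sigma = 1.0 / np.sqrt(n_obs)
popt, _ = curve_fit(acc_model, n_obs, acc_obs, p0=[1.0, 0.5, 0.1], sigma=sigma)

# Extrapolate the fitted curve to a larger annotation budget.
pred_2000 = acc_model(2000.0, *popt)
print(round(pred_2000, 3))
```

The fitted curve is then read off at candidate budgets to decide how many more samples to annotate before the expected gain flattens out.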

  2. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Fangyu Pan


    To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration procedure are presented in this paper. The error model is built using the theory of multi-body systems and characteristic matrices, which can establish the relationship between the cutting tool and the workpiece in theory. Accurate calibration is difficult to achieve, but with laser instruments (a laser interferometer and a laser tracker) the errors can be measured accurately, which is beneficial for later compensation.
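The characteristic-matrix idea can be sketched with small-angle homogeneous transforms: one 4x4 matrix per axis, whose product gives the tool-to-workpiece error. The per-axis error parameters below are hypothetical.

```python
import numpy as np

def T(dx=0, dy=0, dz=0, rx=0, ry=0, rz=0):
    # First-order (small-angle) homogeneous error transform for one axis:
    # rotation errors rx, ry, rz plus translation errors dx, dy, dz.
    return np.array([[1, -rz, ry, dx],
                     [rz, 1, -rx, dy],
                     [-ry, rx, 1, dz],
                     [0, 0, 0, 1.0]])

# Hypothetical error parameters (rad / mm) for three stacked axes; the
# tool-to-workpiece error is the product of the per-axis matrices.
chain = T(dx=0.002, rz=1e-5) @ T(dy=-0.001, rx=2e-5) @ T(dz=0.003, ry=-1e-5)

tool_point = np.array([0, 0, 100.0, 1])   # tool tip in its local frame (mm)
error = (chain @ tool_point - tool_point)[:3]
print(np.round(error, 4))  # positional deviation at the tool tip
```

Note how the small rotation errors are amplified by the 100 mm lever arm of the tool, which is why angular errors usually dominate the budget of a long tool.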

  3. New methodology in biomedical science: methodological errors in classical science.

    Skurvydas, Albertas


    The following methodological errors are observed in the biomedical sciences: paradigmatic errors; those of an exaggerated search for certainty; the dehumanisation of science; errors of determinism and linearity; those of drawing conclusions; errors of reductionism, of decomposing a quality, or of exaggerated enlargement; errors connected with discarding odd, unexpected or awkward facts; those of exaggerated mathematization; the isolation of science; the error of "common sense"; the error of the ceteris paribus ("other things being equal") law; "youth" and common sense; inflexibility in the criteria of truth; errors of restricting the sources of truth and the ways of searching for truth; the error connected with wisdom gained post factum; errors of wrongly interpreting the research mission; "laziness" to repeat an experiment; as well as errors of coordination of errors. One of the basic aims for present-day scholars of biomedicine is, therefore, mastering the new non-linear, holistic, complex way of thinking that will, undoubtedly, enable one to make fewer errors in doing research. The aim of "scientific travelling" will be achieved with greater probability if the "travelling" itself is performed with great probability. PMID:15687745

  4. Classifications for Proliferative Vitreoretinopathy (PVR): An Analysis of Their Use in Publications over the Last 15 Years

    Salvatore Di Lauro


    Purpose. To evaluate the use and suitability of current proliferative vitreoretinopathy (PVR) classifications in clinical publications related to treatment. Methods. A PubMed search was undertaken using the term "proliferative vitreoretinopathy therapy". Outcome parameters were the reported PVR classification and PVR grades. The way the classifications were used in comparison to the original description was analyzed. Classification errors were also recorded. It was also noted whether classifications were used for comparison before and after pharmacological or surgical treatment. Results. 138 papers were included. 35 of them (25.4%) presented no classification reference or did not use any. 103 publications (74.6%) used a standardized classification. The updated Retina Society Classification, the first Retina Society Classification, and the Silicone Study Classification were cited in 56.3%, 33.9%, and 3.8% of papers, respectively. Furthermore, 3 authors (2.9%) used modified or customized classifications, and 4 classification errors (3.8%) were identified. When the updated Retina Society Classification was used, only 10.4% of authors used a full grade C description. Finally, only 2 authors reported PVR grade before and after treatment. Conclusions. Our findings suggest that current classifications are of limited value in clinical practice due to their inconsistent and limited use, and that it may be of benefit to produce a revised classification.

  5. Payment Error Rate Measurement (PERM)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  6. Skylab water balance error analysis

    Leonard, J. I.


    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  7. Classiology and soil classification

    Rozhkov, V. A.


    Classiology can be defined as a science studying the principles and rules of classification of objects of any nature. The development of the theory of classification and the particular methods for classifying objects are the main challenges of classiology; to a certain extent, they are close to the challenges of pattern recognition. The methodology of classiology integrates a wide range of methods and approaches: from expert judgment to formal logic, multivariate statistics, and informatics. Soil classification assumes generalization of available data and practical experience, formalization of our notions about soils, and their representation in the form of an information system. As an information system, soil classification is designed to predict the maximum number of a soil's properties from the position of this soil in the classification space. The existing soil classification systems do not completely satisfy the principles of classiology. The violation of logical basis, poor structuring, low integrity, and inadequate level of formalization make these systems verbal schemes rather than classification systems sensu stricto. The concept of classification as listing (enumeration) of objects makes it possible to introduce the notion of the information base of classification. For soil objects, this is the database of soil indices (properties) that might be applied for generating target-oriented soil classification system. Mathematical methods enlarge the prognostic capacity of classification systems; they can be applied to assess the quality of these systems and to recognize new soil objects to be included in the existing systems. The application of particular principles and rules of classiology for soil classification purposes is discussed in this paper.

  8. Efficient Pairwise Multilabel Classification

    Loza Mencía, Eneldo


    Multilabel classification learning is the task of learning a mapping between objects and sets of possibly overlapping classes and has gained increasing attention in recent times. A prototypical application scenario for multilabel classification is the assignment of a set of keywords to a document, a frequently encountered problem in the text classification domain. With upcoming Web 2.0 technologies, this domain is extended by a wide range of tag suggestion tasks and the trend definitely...

  9. Efficient multivariate sequence classification

    Kuksa, Pavel P.


    Kernel-based approaches for sequence classification have been successfully applied to a variety of domains, including text categorization, image classification, speech analysis, biological sequence analysis, time series and music classification, where they show some of the most accurate results. Typical kernel functions for sequences in these domains (e.g., bag-of-words, mismatch, or subsequence kernels) are restricted to discrete univariate (i.e., one-dimensional) string data, such ...
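
    One of the simplest members of the kernel family mentioned above is the k-spectrum kernel, which compares two sequences through their shared k-mer counts. The sketch below is a generic illustration of that idea, not the multivariate extension this record proposes; the example strings are arbitrary.

```python
# Sketch of a k-spectrum string kernel: the kernel value is the inner
# product of the sequences' k-mer count vectors.
from collections import Counter

def spectrum_features(seq, k=3):
    # Count every contiguous substring (k-mer) of length k.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    # Inner product in k-mer count space; only keys of f1 can contribute.
    f1, f2 = spectrum_features(s1, k), spectrum_features(s2, k)
    return sum(f1[kmer] * f2[kmer] for kmer in f1)

k_self = spectrum_kernel("GATTACA", "GATTACA")   # 5 distinct 3-mers
k_cross = spectrum_kernel("GATTACA", "TACAGAT")  # shared: GAT, TAC, ACA
```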

  10. Classifier in Age classification

    B. Santhi; R.Seethalakshmi


    Face is an important feature of human beings. We can derive various properties of a human by analyzing the face. The objective of this study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This st...

  11. Error forecasting schemes of error correction at receiver

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
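
    The core of the packet combining idea can be sketched in a few lines; the bit-string representation and example packets below are illustrative assumptions. When two received copies of the same packet differ, the differing bit positions are the candidate error locations, which the receiver can then trial-flip until an error check passes.

```python
# Sketch: locate candidate error positions by comparing two erroneous
# copies of the same packet (an XOR of the two bit strings).
def candidate_error_positions(copy1: str, copy2: str):
    # Positions where the copies disagree are where at least one copy
    # was corrupted; identical corrupted positions go undetected, which
    # is exactly the failure mode of the basic PC scheme.
    return [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]

sent = "10110100"
copy1 = "10010100"  # bit 2 flipped in transit
copy2 = "10111100"  # bit 4 flipped in transit
positions = candidate_error_positions(copy1, copy2)
```

    Note that if both copies had been hit at the same position, the comparison would report nothing there, matching the abstract's observation that PC fails for identical error locations.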

  12. Sensitivity analysis of DOA estimation algorithms to sensor errors

    Li, Fu; Vaccaro, Richard J.


    A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.
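
    As a concrete illustration of one of the analyzed algorithms, the sketch below runs MUSIC (multiple signal classification) on simulated snapshots from a uniform linear array with half-wavelength spacing. The array size, snapshot count, noise level, and source angle are arbitrary choices for the example, not values from the paper.

```python
# Sketch: MUSIC direction-of-arrival estimation for one narrowband source.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 200            # sensors, snapshots
true_doa = 20.0          # degrees

def steering(theta_deg, m=M):
    # Steering vector of a ULA with d = lambda/2 spacing.
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

# Simulated snapshots: one source plus complex white noise.
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.outer(steering(true_doa), s) + 0.1 * (
    rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                 # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
En = eigvecs[:, :-1]                   # noise subspace (one source assumed)

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.arange(-90.0, 90.0, 0.5)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])
estimated_doa = grid[np.argmax(spectrum)]
```

    Sensor errors of the kind the paper studies would perturb the steering vectors, biasing the peak location; the analysis above quantifies that mean-squared error analytically.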

  13. Error Detection in ESL Teaching

    Rogoveanu Raluca


    This study investigates the role of error correction in the larger paradigm of ESL teaching and learning. It conceptualizes error as an inevitable variable in the process of learning and as a frequently occurring element in written and oral discourses of ESL learners. It also identifies specific strategies in which error can be detected and corrected and makes reference to various theoretical trends and their approach to error correction, as well as to the relation between language instructor...

  14. On the Arithmetic of Errors

    Markov, Svetoslav; Hayes, Nathan


    An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly an error, which is a (real) non-negative number. To compute with approximate numbers, the arithmetic operations on errors must be well defined. To model computations with errors, one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers starting from fa...
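
    The (value, error) pairs described above can be given concrete arithmetic. The propagation rules in this sketch are the standard worst-case bounds for absolute errors, assumed here for illustration; the paper's own algebraic treatment may differ in detail.

```python
# Sketch: arithmetic over approximate numbers (value, non-negative error).
from dataclasses import dataclass

@dataclass
class Approx:
    value: float
    error: float  # non-negative bound on the absolute error

    def __add__(self, other):
        # |(x + y) - (a + b)| <= e_x + e_y
        return Approx(self.value + other.value, self.error + other.error)

    def __mul__(self, other):
        # |xy - ab| <= |a| e_y + |b| e_x + e_x e_y
        e = (abs(self.value) * other.error +
             abs(other.value) * self.error +
             self.error * other.error)
        return Approx(self.value * other.value, e)

x = Approx(2.0, 0.1)
y = Approx(3.0, 0.2)
s = x + y
p = x * y
```

    Note that the errors themselves form a monoid under addition on the non-negative reals, which is the algebraic structure the abstract begins to discuss.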

  15. Aspects de la classification

    Mari, Jean-François; Napoli, Amedeo


    Numerical classification techniques have always been present in pattern recognition. Neural networks demonstrate every day their (very?) good classification properties, and classification is increasingly present in knowledge representation. This report therefore presents, for introductory purposes, the mathematical, statistical, neuromimetic and cognitive aspects of classification.

  16. Ontologies vs. Classification Systems

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne


    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing metadata sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and metadata taxonomies, should be based on ontologies.

  17. Automatic classification of background EEG activity in healthy and sick neonates

    Löfhede, Johan; Thordstein, Magnus; Löfgren, Nils; Flisberg, Anders; Rosa-Zurera, Manuel; Kjellmer, Ingemar; Lindecrantz, Kaj


    The overall aim of our research is to develop methods for a monitoring system to be used at neonatal intensive care units. When monitoring a baby, a range of different types of background activity needs to be considered. In this work, we have developed a scheme for automatic classification of background EEG activity in newborn babies. EEG from six full-term babies who were displaying a burst suppression pattern while suffering from the after-effects of asphyxia during birth was included along with EEG from 20 full-term healthy newborn babies. The signals from the healthy babies were divided into four behavioural states: active awake, quiet awake, active sleep and quiet sleep. By using a number of features extracted from the EEG together with Fisher's linear discriminant classifier we have managed to achieve 100% correct classification when separating burst suppression EEG from all four healthy EEG types and 93% true positive classification when separating quiet sleep from the other types. The other three sleep stages could not be classified. When the pathological burst suppression pattern was detected, the analysis was taken one step further and the signal was segmented into burst and suppression, allowing clinically relevant parameters such as suppression length and burst suppression ratio to be calculated. The segmentation of the burst suppression EEG works well, with a probability of error around 4%.
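
    The classifier used above, Fisher's linear discriminant, can be sketched on toy data. The two scalar "EEG features" below are hypothetical stand-ins (the paper extracts its own feature set), and the class separations are chosen only so the example is well behaved.

```python
# Sketch: Fisher's linear discriminant separating two classes of
# hypothetical EEG-epoch feature vectors.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 2-D feature vectors per EEG epoch for two classes.
burst = rng.normal([0.0, 0.0], 0.5, size=(100, 2))     # burst suppression
healthy = rng.normal([3.0, 3.0], 0.5, size=(100, 2))   # healthy background

mu0, mu1 = burst.mean(axis=0), healthy.mean(axis=0)
Sw = np.cov(burst.T) + np.cov(healthy.T)        # within-class scatter
w = np.linalg.solve(Sw, mu1 - mu0)              # Fisher direction
threshold = w @ (mu0 + mu1) / 2.0               # midpoint decision rule

pred_healthy = (healthy @ w > threshold).mean()  # fraction correct
pred_burst = (burst @ w <= threshold).mean()     # fraction correct
```

    With well-separated features, as the paper's 100% burst-suppression result suggests, the discriminant classifies nearly all epochs correctly; the harder sleep-stage distinctions correspond to heavily overlapping feature clouds.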

  18. Uncertainty quantification and error analysis

    Higdon, Dave M. [Los Alamos National Laboratory]; Anderson, Mark C. [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [Ohio State University]; Covey, Curt [LLNL]; Ghattas, Omar [University of Texas]; Graziani, Carlo [University of Chicago]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC Berkeley]; Stewart, James [SNL]


    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  19. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification

    R. Sathya


    This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-learning algorithms, and in the present study we found that, although the error back-propagation algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model also offers an efficient solution and classification.

  20. Support vector classification algorithm based on variable parameter linear programming

    Xiao Jianhua; Lin Jian


    To solve the problems of SVMs in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable-parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the classification optimization problem, decreasing the computation time and reducing complexity compared with the original model. The adjusted punishment parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.
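
    The general idea of casting a support vector classifier as a linear program can be sketched as follows. This is a generic L1-regularized soft-margin formulation, not the paper's variable-parameter model; the data, regularization weight `lam`, and solver choice are assumptions for the example.

```python
# Sketch: soft-margin linear classifier solved as a linear program.
# Variables z = [w+ (d), w- (d), b+, b-, xi (n)], all >= 0, with
# w = w+ - w-, b = b+ - b-.  Constraints: y_i (w.x_i + b) >= 1 - xi_i.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.6, (30, 2)), rng.normal(2, 0.6, (30, 2))])
y = np.array([-1.0] * 30 + [1.0] * 30)
n, d = X.shape
lam = 0.01  # regularization weight (illustrative choice)

# Objective: lam * ||w||_1 + sum of slacks.
c = np.concatenate([lam * np.ones(2 * d), [0.0, 0.0], np.ones(n)])
# Margin constraints rearranged into A_ub @ z <= -1 form.
A = np.hstack([-(y[:, None] * X), y[:, None] * X,
               -y[:, None], y[:, None], -np.eye(n)])
b_ub = -np.ones(n)
res = linprog(c, A_ub=A, b_ub=b_ub, method="highs")

z = res.x
w = z[:d] - z[d:2 * d]
b = z[2 * d] - z[2 * d + 1]
accuracy = (np.sign(X @ w + b) == y).mean()
```

    An LP solver replaces the quadratic program of a standard SVM, which is the source of the computational saving the abstract claims.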

  1. Firewall Configuration Errors Revisited

    Wool, Avishai


    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general, that survey indicated that corporate firewalls were often enforcing poorly written rule-sets containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger and, for the first time, includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  2. Concepts of Classification and Taxonomy. Phylogenetic Classification

    Fraix-Burnet, Didier


    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited to any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1. Why phylogenetic tools in astrophysics? 1.1. History of classification. The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical, since it was intended to distinguish between edible and toxic foods, or kind and dangerous animals. Simple resemblance was used, and has been used for centuries. Basically, until the XVIIIth...

  3. Error-backpropagation in temporally encoded networks of spiking neurons

    Bohte, Sander; La Poutré, Han; Kok, Joost


    For a network of spiking neurons that encodes information in the timing of individual spikes, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perf...

  4. Active Dictionary Learning in Sparse Representation Based Classification

    Xu, Jin; He, Haibo; Man, Hong


    Sparse representation, which uses dictionary atoms to reconstruct input vectors, has been studied intensively in recent years. A proper dictionary is key to the success of sparse representation. In this paper, an active dictionary learning (ADL) method is introduced, in which classification error and reconstruction error are considered as the active learning criteria in selecting atoms for dictionary construction. The learned dictionaries are calculated in sparse representation based...

  5. Register file soft error recovery

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.


    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  6. An SMP soft classification algorithm for remote sensing

    Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.


    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification, at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.

  7. Library Classification 2020

    Harris, Christopher


    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  8. Musings on galaxy classification

    Classification schemes and their utility are discussed with a number of examples, particularly for cD galaxies. Data suggest that primordial turbulence rather than tidal torques is responsible for most of the presently observed angular momentum of galaxies. Finally, some of the limitations on present-day schemes for galaxy classification are pointed out. 54 references, 4 figures, 3 tables

  9. Common errors in disease mapping

    Ricardo Ocaña-Riola


    Many morbidity-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and a negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.

  10. The effect of artificial neural networks structure in critical heat flux prediction error

    CHF (critical heat flux) is an important parameter for the design of nuclear reactors. Although much experimental and theoretical research has been performed, there is no single correlation to predict CHF, because many parameters influence it. These parameters are based on inlet, local and outlet conditions. Recently, attempts have been made to predict critical heat flux by different methods; correlation-based and neural-network-based predictors are the two major approaches. ANNs (artificial neural networks) are powerful tools for prediction, data modeling and classification. Some studies have shown that trained artificial neural networks predict CHF better than any conventional correlation method. We trained two types of neural networks with the experimental CHF data and compared their CHF prediction errors. These types are RBF (radial basis function) and MLP (multi-layer perceptron) networks. Predicting CHF under local conditions (pressure, mass flux rate and equilibrium quality), the RBF network provided an accuracy of ±7.9% and the MLP network ±8.3%. The effects of network structure on the prediction error are also evaluated and the results reported. (authors)
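
    Of the two network types compared above, the RBF network is the simpler to sketch, since with fixed centers its output weights can be fitted by linear least squares. The example below uses a 1-D toy function rather than CHF data, and the center count and kernel width are arbitrary choices.

```python
# Sketch: radial basis function (RBF) network fitted by least squares.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(40)  # noisy target

centers = np.linspace(0.0, 1.0, 10)  # fixed Gaussian centers
width = 0.1

def rbf_design(x):
    # Design matrix: one Gaussian basis function per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2)
                  / (2 * width ** 2))

Phi = rbf_design(x)
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output-layer weights

y_hat = rbf_design(x) @ weights
rmse = np.sqrt(np.mean((y_hat - y) ** 2))
```

    An MLP, by contrast, requires iterative gradient training of all layers, which is one reason the two architectures can land on different prediction errors for the same data.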

  11. Neural Networks for Emotion Classification

    Sun, Yafei


    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network (i.e., the learning rate, activation function parameter, momentum, the number of nodes in one layer, etc.), we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back-propagation: instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  12. The 13 errors.

    Flower, J


    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717


    Narra Gopal


    Operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: these are avoidable and preventable events. The people affected by the consequences of surgical mistakes suffered temporary injury in 60% of cases, permanent injury in 33% and death in 7%. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer from preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body quantified the number of surgeries taking place every year globally at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. 50% of never events are preventable. Evidence suggests that up to one in ten hospital admissions results in an adverse incident; this incident rate would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. Even with such a system complete prevention may not be possible, but we can reduce the error percentage [2]. To change the present concept of the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  14. Support Vector classifiers for Land Cover Classification

    Pal, Mahesh


    Support vector machines represent a promising development in machine learning research that is not yet widely used within the remote sensing community. This paper reports results on multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.

  15. Cluster Based Text Classification Model

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock


    We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the … classifier is trained on each cluster having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. Our model also outperforms the A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.


    Camelia, CHIRILA


    Nowadays the accurate translation of legal texts has become highly important, as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts into other languages faces many difficulties, and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts, and the consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian) that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.

  17. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery


    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on a Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, to reach the same accuracy, supervised classification methods such as Support Vector Machines or Object-Oriented Classification must be provided with much more labeled data.
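
    The per-class generative idea can be sketched in its simplest form: fit one Gaussian per class from a small labeled subset, then assign the remaining points to the class with the higher log-likelihood. This single-component special case (fitted in closed form rather than by EM) and the synthetic two-class data are assumptions for illustration; the paper fits full multi-component GMMs.

```python
# Sketch: per-class Gaussian likelihood classification with few labels.
import numpy as np

rng = np.random.default_rng(4)
class_a = rng.normal(0.0, 1.0, (200, 2))
class_b = rng.normal(5.0, 1.0, (200, 2))

def fit_gaussian(X):
    # Closed-form fit of a single Gaussian (mean and covariance).
    return X.mean(axis=0), np.cov(X.T)

def log_pdf(X, mu, cov):
    # Multivariate normal log-density, evaluated row-wise.
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    quad = np.einsum('ij,jk,ik->i', d, inv, d)
    return -0.5 * (quad + logdet + 2 * np.log(2 * np.pi))

# Only 20 labeled samples per class, as in the semi-supervised setting.
mu_a, cov_a = fit_gaussian(class_a[:20])
mu_b, cov_b = fit_gaussian(class_b[:20])

# Classify the remaining (treated as unlabeled) points by likelihood.
unlabeled = np.vstack([class_a[20:], class_b[20:]])
truth = np.array([0] * 180 + [1] * 180)
pred = (log_pdf(unlabeled, mu_b, cov_b) >
        log_pdf(unlabeled, mu_a, cov_a)).astype(int)
accuracy = (pred == truth).mean()
```

    With multi-component mixtures, EM on the unlabeled pool can further refine each class model, which is where the semi-supervised gain over the plain supervised baselines comes from.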

  18. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.


    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increasing classification accuracy over using imagery alone, and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification of the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.

  19. Random errors in egocentric networks.

    Almquist, Zack W


    The systematic errors that are induced by a combination of human memory limitations and common survey design and implementation have long been studied in the context of egocentric networks. Despite this, little if any work exists in the area of random error analysis on these same networks; this paper offers a perspective on the effects of random errors on egonet analysis, as well as the effects of using egonet measures as independent predictors in linear models. We explore the effects of false-positive and false-negative error in egocentric networks on both standard network measures and on linear models through simulation analysis on a ground-truth egocentric network sample based on Facebook friendships. Results show that 5-20% error rates, which are consistent with error rates known to occur in ego network data, can cause serious misestimation of network properties and regression parameters. PMID:23878412
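
    The false-positive/false-negative perturbation the paper simulates can be sketched as independent edge flips over all node pairs; the edge-set representation and the rates below are illustrative, not the authors' exact procedure.

```python
import random

def perturb(edges, nodes, fp=0.05, fn=0.05, rng=None):
    # edges: set of frozenset({u, v}) ties; fp adds absent ties with
    # probability fp, fn drops present ties with probability fn.
    rng = rng or random.Random(0)
    out = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            e = frozenset((u, v))
            if e in edges:
                if rng.random() >= fn:       # survive a false-negative check
                    out.add(e)
            elif rng.random() < fp:          # spurious false-positive tie
                out.add(e)
    return out

def density(edges, n):
    # Fraction of possible ties that are present.
    return 2 * len(edges) / (n * (n - 1))
```

    Comparing `density(perturb(edges, nodes), n)` against `density(edges, n)` over many draws is the kind of misestimation experiment the abstract describes.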

  20. [Error factors in spirometry].

    Quadrelli, S A; Montiel, G C; Roncoroni, A J


    Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites in order to approximate the real values sought, as well as to interpret the results adequately. Recommendations are made to: 1) establish quality control; 2) define abnormality; 3) classify the change from normal and its degree; 4) define reversibility. In relation to quality control, several criteria are pointed out, such as end of test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference-value equation (in view of the differences in predicted values), and it is advisable to validate the selection against normal values from the local population. In relation to defining a defect as restrictive or obstructive, the limitations of vital capacity (VC) for establishing restriction when obstruction is also present are defined, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) for estimating reversibility after bronchodilators are evaluated, and the value of the different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is commented upon. Clinical spirometric studies, in order to be valuable, should be performed with the same technical rigour as any other more complex studies. PMID:7990690
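
    The back-extrapolation quality-control check mentioned above can be sketched numerically: draw the tangent at the steepest point of the volume-time curve, extend it back to zero volume, and read the measured volume at that "new time zero" as the extrapolated volume (EV). The 0.15 L / 5%-of-FVC threshold is an assumption drawn from common ATS-style practice, not from this abstract.

```python
def back_extrapolated_volume(times, volumes):
    # Tangent at the steepest segment of the volume-time curve, extended
    # back to zero volume; the measured volume at that time is the EV.
    slopes = [(volumes[i + 1] - volumes[i]) / (times[i + 1] - times[i])
              for i in range(len(times) - 1)]
    k = max(range(len(slopes)), key=lambda i: slopes[i])
    t0 = times[k] - volumes[k] / slopes[k]      # back-extrapolated time zero
    for i in range(len(times) - 1):             # interpolate measured curve at t0
        if times[i] <= t0 <= times[i + 1]:
            f = (t0 - times[i]) / (times[i + 1] - times[i])
            return volumes[i] + f * (volumes[i + 1] - volumes[i])
    return 0.0

def acceptable(ev, fvc):
    # EV must stay below 5% of FVC or 0.15 L, whichever is greater
    # (illustrative threshold).
    return ev <= max(0.15, 0.05 * fvc)
```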

  1. School Size, Achievement, and Achievement Gaps

    Bradley J. McMillen


    In order to examine the relationship between school size and achievement, a study was conducted using longitudinal achievement data from North Carolina for three separate cohorts of public school students (one elementary, one middle and one high school). Results revealed several interactions between size and student characteristics, all of which indicated that the achievement gaps typically existing between certain subgroups (i.e., more versus less advantaged, lower versus higher achieving) were larger in larger schools. Results varied across the grade-level cohorts and across subjects, but in general effects were more common in mathematics than in reading, and were more pronounced at the high school level. Study results are discussed in the context of educational equity and cost-effectiveness.

  2. Architecture design for soft errors

    Mukherjee, Shubu


    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  3. A chance to avoid mistakes human error

    Trying to give an answer to the lack of public information in the industry regarding the different tools used in the nuclear field to minimize human error, a group of workers from different sections of the St. Maria de Garona NPP (Quality Assurance / Organization and Human Factors) decided to embark on a challenging and exciting project: to write a book collecting all the knowledge accumulated during their daily activities, often while reading external information received from different organizations within the nuclear industry (INPO, WANO...), but also while visiting different NPPs, holding meetings and participating in training courses related to Human and Organizational Factors. The main objective of the book is to present, in a practical way, the different tools that are used and fostered in the nuclear industry, so that their assimilation and implementation in other industries becomes possible and achievable in an efficient manner. After one year of work, our project is a reality. We presented an abstract during the last Spanish Nuclear Society meeting in Sevilla, last October... and best of all, the book is on the market for everybody in web-site: The book is structured in the following areas: 'Errare humanum est': presents to the reader what human error is, its origin and the different barriers against it. The message is that the reader should see error as something continuously present in our lives... even more frequently than we think. By studying its origin, barriers can be established to avoid it or at least minimize it. 'Error's bitter face': shows the possible consequences of human errors. What better than presenting real experiences that have occurred in the industry? In the book, accidents in the nuclear industry, like Three Mile Island NPP and Chernobyl NPP, and incidents like Davis Besse NPP in the past, help the reader to reflect on the

  4. Classification in Medical Image Analysis Using Adaptive Metric KNN

    Chen, Chen; Chernoff, Konstantin; Karemore, Gopal Raghunath; Lo, Pechin Chien Pau; Nielsen, Mads; Lauze, Francois Bernard


    the assumption that images are drawn from Brownian Image Model (BIM), the normalized metric based on variance of the data, the empirical metric is based on the empirical covariance matrix of the unlabeled data, and an optimized metric obtained by minimizing the classification error. The spectral...

  5. A theory of human error

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.


    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  6. [Classifications in forensic medicine and their logical basis].

    Kovalev, A V; Shmarov, L A; Ten'kov, A A


    The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals. PMID:25764904

  7. Developing control charts to review and monitor medication errors.

    Ciminera, J L; Lease, M P


    There is a need to monitor reported medication errors in a hospital setting. Because the quantity of errors varies due to external reporting, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in time rather than sample averages, and when many successive differences may be zero. PMID:10116719
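
    The moving-range scheme the authors describe can be sketched as an individuals control chart, where sigma is estimated from the mean absolute successive difference divided by the standard d2 constant 1.128. This is a generic textbook sketch, not the authors' exact modification for many zero differences.

```python
def control_limits(counts):
    # Individuals (I-MR) chart: centre line is the mean of the monthly
    # counts; sigma is estimated from the average moving range
    # |x[i] - x[i-1]| divided by d2 = 1.128.
    diffs = [abs(a - b) for a, b in zip(counts[1:], counts)]
    mr_bar = sum(diffs) / len(diffs)
    sigma = mr_bar / 1.128
    centre = sum(counts) / len(counts)
    return centre - 3 * sigma, centre, centre + 3 * sigma
```

    A month whose error count falls outside the returned limits would be flagged for review rather than treated as routine variation.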

  8. Utilizing web data in identification and correction of OCR errors

    Taghva, Kazem; Agarwal, Shivam


    In this paper, we report on our experiments for detection and correction of OCR errors with web data. More specifically, we utilize Google search to access the big data resources available to identify possible candidates for correction. We then use a combination of the Longest Common Subsequences (LCS) and Bayesian estimates to automatically pick the proper candidate. Our experimental results on a small set of historical newspaper data show a recall and precision of 51% and 100%, respectively. The work in this paper further provides a detailed classification and analysis of all errors. In particular, we point out the shortcomings of our approach in its ability to suggest proper candidates to correct the remaining errors.
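
    The LCS-based candidate ranking can be sketched as below; the optional word-frequency prior is a hypothetical stand-in for the Bayesian estimate, and all names are illustrative.

```python
def lcs_len(a, b):
    # Classic O(len(a)*len(b)) dynamic programme for the length of the
    # longest common subsequence of two strings.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def best_candidate(token, candidates, prior=None):
    # Score each web-derived candidate by LCS similarity to the OCR token,
    # optionally weighted by a frequency prior (hypothetical stand-in for
    # the Bayesian estimate), and return the best one.
    prior = prior or {}
    def score(c):
        sim = lcs_len(token, c) / max(len(token), len(c))
        return sim * prior.get(c, 1.0)
    return max(candidates, key=score)
```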

  9. Estimating achievement from fame

    Simkin, M. V.; Roychowdhury, V. P.


    We report a method for estimating people's achievement based on their fame. Earlier we discovered (cond-mat/0310049) that fame of fighter pilot aces (measured as number of Google hits) grows exponentially with their achievement (number of victories). We hypothesize that the same functional relation between achievement and fame holds for other professions. This allows us to estimate achievement for professions where an unquestionable and universally accepted measure of achievement does not exi...


    薛建中; 郑崇勋; 闫相国


    Objective This paper presents classifications of mental tasks based on EEG signals using an adaptive Radial Basis Function (RBF) network with optimal centers and widths for the Brain-Computer Interface (BCI) schemes. Methods Initial centers and widths of the network are selected by a cluster estimation method based on the distribution of the training set. Using a conjugate gradient descent method, they are optimized during training phase according to a regularized error function considering the influence of their changes to output values. Results The optimizing process improves the performance of RBF network, and its best cognition rate of three task pairs over four subjects achieves 87.0%. Moreover, this network runs fast due to the fewer hidden layer neurons. Conclusion The adaptive RBF network with optimal centers and widths has high recognition rate and runs fast. It may be a promising classifier for on-line BCI scheme.
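
    A much-simplified sketch of an RBF-style classifier in the spirit of this abstract: one Gaussian prototype per class, with the width set from the within-class spread. The cluster-based initialization and conjugate-gradient optimization of the paper are omitted, and all names are illustrative.

```python
import math

def rbf(x, centre, width):
    # Gaussian radial basis unit response.
    d2 = sum((a - b) ** 2 for a, b in zip(x, centre))
    return math.exp(-d2 / (2 * width ** 2))

def fit_prototypes(samples):
    # samples: {class: [feature tuples]}. One prototype (the class mean)
    # per class; the width is the mean distance of members to the mean.
    protos = {}
    for label, xs in samples.items():
        dim = len(xs[0])
        centre = [sum(x[d] for x in xs) / len(xs) for d in range(dim)]
        spread = sum(math.dist(x, centre) for x in xs) / len(xs) or 1.0
        protos[label] = (centre, spread)
    return protos

def predict(x, protos):
    # Class whose prototype unit responds most strongly.
    return max(protos, key=lambda c: rbf(x, protos[c][0], protos[c][1]))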

  11. Classification system for reporting events involving human malfunctions

    The report describes a set of categories for reporting industrial incidents and events involving human malfunction. The classification system aims at ensuring information adequate for the improvement of human work situations and man-machine interface systems, and for attempts to quantify ''human error'' rates. The classification system has a multifaceted, non-hierarchical structure, and its compatibility with Ispra's ERDS classification is described. The collection of the information, in general and for quantification purposes, is discussed. 24 categories, 12 of which are human factors-oriented, are listed with their respective subcategories, and comments are given. Underlying models of human data processing and its typical malfunctions, and of a human decision sequence, are described. The work reported is a joint contribution to the CSNI Group of Experts on Human Error Data and Assessment

  12. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity.

    Spüler, Martin; Niethammer, Christian


    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  13. Uncertainty in 2D hydrodynamic models from errors in roughness parameterization based on aerial images

    Straatsma, Menno; Huthoff, Fredrik


    In The Netherlands, 2D-hydrodynamic simulations are used to evaluate the effect of potential safety measures against river floods. In the investigated scenarios, the floodplains are completely inundated, thus requiring realistic representations of the hydraulic roughness of floodplain vegetation. The current study aims at providing better insight into the uncertainty of flood water levels due to uncertain floodplain roughness parameterization. The study focuses on three key elements in the uncertainty of floodplain roughness: (1) classification error of the land-cover map, (2) within-class variation of vegetation structural characteristics, and (3) mapping scale. To assess the effect of the first error source, new realizations of ecotope maps were made based on the current floodplain ecotope map and an error matrix of the classification. For the second error source, field measurements of vegetation structure were used to obtain uncertainty ranges for each vegetation structural type. The scale error was investigated by reassigning roughness codes on a smaller spatial scale. It is shown that a classification accuracy of 69% leads to an uncertainty range of predicted water levels on the order of decimeters. The other error sources are less relevant. The quantification of the uncertainty in water levels can help to make better decisions on suitable flood protection measures. Moreover, the relation between uncertain floodplain roughness and the error bands in water levels may serve as a guideline for the desired accuracy of floodplain characteristics in hydrodynamic models.
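
    The first error source — generating new map realizations from a classification error matrix — can be sketched as a simple Monte Carlo draw. This is a hypothetical minimal version of the idea, not the authors' actual procedure; all names are illustrative.

```python
import random

def sample_realization(class_map, confusion, rng=None):
    # class_map: list of mapped labels, one per map cell.
    # confusion[mapped] = {true_label: probability} from the error matrix.
    # For each cell, draw a plausible "true" label, giving one Monte Carlo
    # realization of the map under classification uncertainty.
    rng = rng or random.Random(42)
    out = []
    for mapped in class_map:
        r, acc = rng.random(), 0.0
        for label, p in confusion[mapped].items():
            acc += p
            if r < acc:
                out.append(label)
                break
        else:
            out.append(mapped)   # guard against rounding in the row sums
    return out
```

    Running the hydrodynamic model on many such realizations yields the water-level uncertainty range the abstract reports.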

  14. Deep Reconstruction Models for Image Set Classification.

    Hayat, Munawar; Bennamoun, Mohammed; An, Senjian


    Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single image based classification, it offers more promises and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods. PMID:26353289
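
    The minimum-reconstruction-error voting step can be illustrated with a toy stand-in: a per-class mean template replaces the learnt class-specific DRM, and each image in the set votes for the class that reconstructs it with the smallest error. All names and the template choice are illustrative assumptions.

```python
def class_templates(train):
    # train: {label: [feature vectors]}. The per-class mean acts as a
    # stand-in for the learnt class-specific reconstruction model.
    return {c: [sum(v[d] for v in xs) / len(xs) for d in range(len(xs[0]))]
            for c, xs in train.items()}

def recon_error(x, template):
    # Squared reconstruction error against a class template.
    return sum((a - b) ** 2 for a, b in zip(x, template))

def classify_set(images, templates):
    # Majority vote: each image in the set votes for the class whose
    # template reconstructs it best; the set takes the winning label.
    votes = {}
    for x in images:
        best = min(templates, key=lambda c: recon_error(x, templates[c]))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)
```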

  15. A Fuzzy Logic Based Sentiment Classification



    Sentiment classification aims to detect information such as opinions and explicit or implicit feelings expressed in text. Most existing approaches are able to detect either explicit or implicit expressions of sentiment in the text, but only separately. The proposed framework detects both implicit and explicit expressions available in meeting transcripts. It classifies positive, negative and neutral words, and also identifies the topic of a particular meeting transcript by using fuzzy logic. This paper aims to add some additional features to improve the classification method. The quality of the sentiment classification is improved using the proposed fuzzy logic framework, which includes features such as fuzzy rules and the Fuzzy C-means algorithm. The quality of the output is evaluated using parameters such as precision, recall and F-measure, and the Fuzzy C-means clustering is measured in terms of purity and entropy. The data set was validated using the 10-fold cross-validation method, and a 95% confidence interval was observed between the accuracy values. Finally, the proposed fuzzy logic method produced more than 85% accurate results, and its error rate is very low compared to existing sentiment classification techniques.
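
    The Fuzzy C-means component can be sketched by its membership update, u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1)); below is a minimal one-dimensional version of that single step, not the paper's full sentiment pipeline.

```python
def fcm_memberships(xs, centres, m=2.0):
    # Fuzzy C-means membership update for fixed cluster centres:
    # each point gets a degree of membership in every cluster,
    # and the memberships of a point sum to one.
    u = []
    for x in xs:
        dists = [abs(x - c) or 1e-12 for c in centres]   # avoid divide-by-zero
        row = [1.0 / sum((d / dk) ** (2 / (m - 1)) for dk in dists)
               for d in dists]
        u.append(row)
    return u
```

    In a full run, centres and memberships are updated alternately until convergence; here only the membership step is shown.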

  16. Quantum learning: optimal classification of qubit states

    Guta, Madalin


    Pattern recognition is a central topic in Learning Theory with numerous applications such as voice and text recognition, image analysis, computer diagnosis. The statistical set-up in classification is the following: we are given an i.i.d. training set $(X_{1},Y_{1}),... (X_{n},Y_{n})$ where $X_{i}$ represents a feature and $Y_{i}\\in \\{0,1\\}$ is a label attached to that feature. The underlying joint distribution of $(X,Y)$ is unknown, but we can learn about it from the training set, and we aim at devising low-error classifiers $f:X\\to Y$ used to predict the label of new incoming features. Here we solve a quantum analogue of this problem, namely the classification of two arbitrary unknown qubit states. Given a number of `training' copies from each of the states, we would like to `learn' about them by performing a measurement on the training set. The outcome is then used to design measurements for the classification of future systems with unknown labels. We find the asymptotically optimal classification strategy a...

  17. Classification of hematologic malignancies using texton signatures.

    Tuzel, Oncel; Yang, Lin; Meer, Peter; Foran, David J


    We describe a decision support system to distinguish among hematology cases directly from microscopic specimens. The system uses an image database containing digitized specimens from normal and four different hematologic malignancies. Initially, the nuclei and cytoplasmic components of the specimens are segmented using a robust color gradient vector flow active contour model. Using a few cell images from each class, the basic texture elements (textons) for the nuclei and cytoplasm are learned, and the cells are represented through texton histograms. We propose to use support vector machines on the texton histogram based cell representation and achieve major improvement over the commonly used classification methods in texture research. Experiments with 3,691 cell images from 105 patients which originated from four different hospitals indicate more than 84% classification performance for individual cells and 89% for case based classification for the five class problem. PMID:19890460

  18. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and may even belong to the same material (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method effectively strengthens the relative discrimination of the constructed dictionary, and that incorporating the majority voting scheme generally achieves improved prediction performance
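
    The contextual majority-voting idea can be sketched as a vote over each pixel's 4-neighbourhood; this toy stand-in omits the sparse-coding stage entirely and uses an illustrative label grid.

```python
from collections import Counter

def smooth_labels(grid):
    # Replace each pixel's label with the majority vote over itself and
    # its 4-neighbourhood -- a minimal contextual smoothing pass.
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(h):
        for j in range(w):
            votes = Counter([grid[i][j]])
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < h and 0 <= j + dj < w:
                    votes[grid[i + di][j + dj]] += 1
            out[i][j] = votes.most_common(1)[0][0]
    return out
```

    An isolated misclassified pixel surrounded by a coherent region is overruled by its neighbours, which is the effect the abstract's voting scheme exploits.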

  19. Text Classification Using Sentential Frequent Itemsets

    Shi-Zhu Liu; He-Ping Hu


    Text classification techniques mostly rely on single-term analysis of the document data set, while more concepts, especially specific ones, are usually conveyed by sets of terms. To achieve a more accurate text classifier, more informative features, including frequently co-occurring words in the same sentence and their weights, are particularly important in such scenarios. In this paper, we propose a novel approach using sentential frequent itemsets, a concept from association rule mining, for text classification. It views a sentence rather than a document as a transaction, and uses a variable-precision rough set based method to evaluate each sentential frequent itemset's contribution to the classification. Experiments over the Reuters and newsgroup corpora are carried out, which validate the practicability of the proposed system.
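
    Treating each sentence as a transaction can be sketched with frequent word pairs (2-itemsets); this is a minimal illustration of the mining step only, without the variable-precision rough set weighting the paper uses.

```python
from itertools import combinations
from collections import Counter

def sentential_itemsets(docs, min_support=2):
    # Each sentence is one transaction; keep word pairs that co-occur
    # within a sentence in at least `min_support` sentences.
    counts = Counter()
    for doc in docs:
        for sent in doc.split('.'):
            words = sorted(set(sent.lower().split()))
            for pair in combinations(words, 2):
                counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}
```

    The surviving pairs become features whose sentence-level co-occurrence carries more of the concept than either word alone.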

  20. Learning Apache Mahout classification

    Gupta, Ashish


    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  1. Classification in Medical Imaging

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition, a good metric is required to measure distance or similarity between feature points so that the classification becomes feasible. Furthermore, in order to build a successful classifier, one needs to deeply understand how classifiers work. This thesis focuses on these three aspects of classification ... detection in a cardiovascular disease study. The third focus is to deepen the understanding of the classification mechanism by visualizing the knowledge learned by a classifier; more specifically, to build the most typical patterns recognized by Fisher's linear discriminant rule, with applications ...

  2. Inhibition in multiclass classification

    Huerta, Ramón; Vembu, Shankar; Amigó, José M.; Nowotny, Thomas; Elkan, Charles


    The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and ...

  3. Twitter content classification

    Dann, Stephen


    This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial and phatic communications with a split-level classification scheme that offers broad categorization and specific subcategories for deeper insight into the real-world application of the service.

  4. Text classification method review

    Mahinovs, Aigars; Tiwari, Ashutosh; Roy, Rajkumar; Baxter, David


    With the explosion of information fuelled by the growth of the World Wide Web it is no longer feasible for a human observer to understand all the data coming in or even classify it into categories. With this growth of information and simultaneous growth of available computing power automatic classification of data, particularly textual data, gains increasingly high importance. This paper provides a review of generic text classification process, phases of that process and met...

  5. Automatic Arabic Text Classification

    Al-harbi, S; Almuhareb, A.; Al-Thubaity , A; Khorsheed, M. S.; Al-Rajeh, A.


    Automated document classification is an important text mining task especially with the rapid growth of the number of online documents present in Arabic language. Text classification aims to automatically assign the text to a predefined category based on linguistic features. Such a process has different useful applications including, but not restricted to, e-mail spam detection, web page content filtering, and automatic message routing. This paper presents the results of experiments on documen...

  6. Classification of Sleep Disorders

    Michael J. Thorpy


    The classification of sleep disorders is necessary to discriminate between disorders and to facilitate an understanding of symptoms, etiology, and pathophysiology that allows for appropriate treatment. The earliest classification systems, largely organized according to major symptoms (insomnia, excessive sleepiness, and abnormal events that occur during sleep), were unable to be based on pathophysiology because the cause of most sleep disorders was unknown. These 3 symptom-based categories ar...

  7. Latent classification models

    Langseth, Helge; Nielsen, Thomas Dyhre


    ... parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions of ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.

  8. Classifications of Software Transfers

    Wohlin, Claes; Smite, Darja


    Many companies have development sites around the globe. This inevitably means that development work may be transferred between the sites. This paper defines a classification of software transfer types; it divides transfers into three main types: full, partial and gradual transfers to describe the context of a transfer. The differences between transfer types, and hence the need for a classification, are illustrated with staffing curves for two different transfer types. The staffing curves are ...

  9. Human Error: A Concept Analysis

    Hansen, Frederick D.


    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  10. Human errors and safety culture

    The purpose of this paper is to focus attention on an apparent paradox: human errors can be a valuable source of safety experience if properly treated. From punitive management and a climate of fear around reporting human errors to a safety culture is a long way, but it is the right direction. (Author)

  11. Sandbox Learning: Try without error?

    Müller-Schloer, Christian


    Adaptivity is enabled by learning. Natural systems learn differently from technical systems. In particular, technical systems must not make errors. On the other hand, learning seems to be impossible without occasional errors. We propose a 3-level architecture for learning in adaptive technical systems and show its applicability in the domains of traffic control and communication network control.

  12. Barriers to medical error reporting

    Jalal Poorolajal


    Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamadan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), among less experienced personnel (58.7%), at the MSc educational level (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  13. Dual Processing and Diagnostic Errors

    Norman, Geoff


    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  14. Supernova Photometric Lightcurve Classification

    Zaidi, Tayeb; Narayan, Gautham


    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data and applying a Random Forest classifier to distinguish between supernova types. We examine the use of Principal Component Analysis to reduce the dimensionality of the dataset for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure of merit for the DES data in the r band was 60% for binary classification (Type I vs. II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
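    The pipeline described above — summarize each unevenly sampled light curve as a fixed-length feature vector, then feed the vectors to a Random Forest — can be sketched as follows. Everything here is a synthetic stand-in (fabricated exponential light curves, ad hoc features), not DES data or the poster's actual representation:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def extract_features(times, fluxes):
        """Fixed-length summary of an unevenly sampled light curve
        (peak flux, time of peak, peak-to-mean ratio, baseline length)."""
        peak = fluxes.max()
        t_peak = times[np.argmax(fluxes)]
        return [peak, t_peak, peak / (fluxes.mean() + 1e-9), times[-1] - times[0]]

    def make_curve(slow):
        """Toy supernova: 'type II'-like curves decline slower than 'type I'."""
        t = np.sort(rng.uniform(0, 100, 30))        # uneven sampling
        tau = 40.0 if slow else 15.0                 # decline timescale
        f = np.exp(-np.abs(t - 30) / tau) + rng.normal(0, 0.02, t.size)
        return extract_features(t, f)

    X = np.array([make_curve(slow=i % 2 == 0) for i in range(200)])
    y = np.array([i % 2 for i in range(200)])        # binary class labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
    acc = clf.score(X[150:], y[150:])                # held-out accuracy
    print(f"held-out accuracy: {acc:.2f}")
    ```

    The peak-to-mean ratio carries most of the signal here, since a slower decline raises the mean flux relative to the peak; real classifiers work from far richer representations.
    
    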

  15. Reflection error correction of gas turbine blade temperature

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan


    Accurate measurement of gas turbine blade temperature is one of the greatest challenges in gas turbine temperature measurement. Within an enclosed gas turbine environment containing surfaces of varying temperature and low emissivity, radiation thermometers face an additional challenge: the problem of reflection error. A method for correcting this error is proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations considered targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulation showed that an error of less than 1% is achievable, while the experiment reduced the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
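    The core of such a correction can be illustrated with a single-reflection gray-body model. The sketch below is deliberately simplified and is not the paper's method: it assumes an opaque diffuse target, an enclosure approximated as a blackbody at one known temperature, and total radiance via the Stefan-Boltzmann law rather than a spectral band; all the numbers are hypothetical.

    ```python
    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

    def apparent_temperature(radiance_sum):
        """Temperature a thermometer would report if it assumed emissivity 1."""
        return (radiance_sum / SIGMA) ** 0.25

    def corrected_temperature(measured, eps_target, t_surround):
        """Subtract the reflected term (1 - eps) * sigma * T_s^4, then invert
        the emitted term eps * sigma * T^4 for the true temperature."""
        reflected = (1.0 - eps_target) * SIGMA * t_surround**4
        return ((measured - reflected) / (eps_target * SIGMA)) ** 0.25

    # Blade at 1100 K with emissivity 0.3; hotter surroundings at 1400 K.
    t_true, eps, t_s = 1100.0, 0.3, 1400.0
    measured = eps * SIGMA * t_true**4 + (1 - eps) * SIGMA * t_s**4
    print(apparent_temperature(measured))              # biased high by reflection
    print(corrected_temperature(measured, eps, t_s))   # recovers the true 1100 K
    ```

    With a low-emissivity target in a hotter enclosure, the uncorrected reading is dominated by the reflected component, which is why the correction matters most exactly in the regime the paper studies.
    
    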

  16. On Metrics for Error Correction in Network Coding

    Silva, Danilo


    The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a modified subspace metric, which is closely related to, but different from, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the modified metric is shown to correct more errors than a minimum subspace distance decoder.

  17. Error localization in RHIC by fitting difference orbits

    Liu C.; Minty, M.; Ptitsyn, V.


    The presence of realistic errors in an accelerator, or in the model used to describe it, means that a measured beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (phase space parameters at some point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position monitor (BPM) readings. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be localized more precisely by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
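    The forward-propagation half of this procedure can be sketched in a simplified one-dimensional linear lattice. All parameters below are hypothetical (constant beta function of 10 m, alpha = 0 at every BPM, a fabricated 0.1 mrad kick after BPM 5); the sketch only shows fitting the initial conditions to the first few BPMs and locating where the propagated fit departs from the measurements:

    ```python
    import numpy as np

    def transfer(beta0, beta, dmu):
        """2x2 transport matrix between two lattice points, assuming
        alpha = 0 at both points (simplified Twiss parametrization)."""
        return np.array([
            [np.sqrt(beta / beta0) * np.cos(dmu), np.sqrt(beta * beta0) * np.sin(dmu)],
            [-np.sin(dmu) / np.sqrt(beta * beta0), np.sqrt(beta0 / beta) * np.cos(dmu)],
        ])

    betas = np.full(10, 10.0)            # beta function at 10 BPMs [m]
    mus = np.linspace(0.0, 4.5, 10)      # phase advance at each BPM [rad]

    # Simulated difference orbit: a 0.1 mrad kick occurs just after BPM 5.
    x0 = np.array([1e-3, 0.2e-3])        # true initial (x [m], x' [rad])
    readings = []
    for i in range(10):
        x = transfer(betas[0], betas[i], mus[i]) @ x0
        if i > 5:                        # downstream of the error source
            x = x + transfer(betas[5], betas[i], mus[i] - mus[5]) @ np.array([0.0, 1e-4])
        readings.append(x[0])            # BPMs measure position only
    readings = np.array(readings)

    # Fit (x, x') at the start from the first 4 (error-free) BPMs only.
    A = np.array([transfer(betas[0], betas[i], mus[i])[0] for i in range(4)])
    fit, *_ = np.linalg.lstsq(A, readings[:4], rcond=None)

    # Propagate the fitted orbit; it deviates starting at the error location.
    predicted = np.array([(transfer(betas[0], betas[i], mus[i]) @ fit)[0] for i in range(10)])
    residual = np.abs(predicted - readings)
    print("first deviating BPM index:", int(np.argmax(residual > 1e-9)))
    ```

    Back-propagating a second fit from the downstream BPMs, as the abstract describes, would bracket the error location from both sides.
    
    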

  18. Provider error prevention: online total parenteral nutrition calculator.

    Lehmann, Christoph U.; Conner, Kim G.; Cox, Jeanne M.


    OBJECTIVE: 1. To reduce errors in the ordering of total parenteral nutrition (TPN) in the Newborn Intensive Care Unit (NICU) at the Johns Hopkins Hospital (JHH). 2. To develop a pragmatic low-cost medical information system to achieve this goal. METHODS: We designed an online total parenteral nutrition order entry system (TPNCalculator) using Internet technologies. Total development time was three weeks. Utilization, impact on medical errors and user satisfaction were evaluated. RESULTS: Duri...

  19. Hierarchical classification of social groups

    Витковская, Мария


    Classification problems are important for every science, and for sociology as well. Social phenomena can be examined more deeply through the classification of social groups. At present no single common classification of groups exists. This article offers a hierarchical classification of social groups.

  20. Reducing latent errors, drift errors, and stakeholder dissonance.

    Samaras, George M


    Healthcare information technology (HIT) is being offered as a transformer of modern healthcare delivery systems. Some believe that it has the potential to improve patient safety, increase the effectiveness of healthcare delivery, and generate significant cost savings. In other industrial sectors, information technology has dramatically influenced quality and profitability - sometimes for the better and sometimes not. Quality improvement efforts in healthcare delivery have not yet produced the dramatic results obtained in other industrial sectors. This may be because previously successful quality improvement experts do not possess the requisite domain knowledge (clinical experience and expertise). It also appears related to a continuing misconception regarding the origins and meaning of work errors in healthcare delivery. The focus here is on system use errors rather than individual user errors. System use errors originate in both the development and the deployment of technology. Not recognizing stakeholders and their conflicting needs, wants, and desires (NWDs) may lead to stakeholder dissonance. Mistakes translating stakeholder NWDs into development or deployment requirements may lead to latent errors. Mistakes translating requirements into specifications may lead to drift errors. At the sharp end, workers encounter system use errors or, recognizing the risk, expend extensive and unanticipated resources to avoid them. PMID:22317001

  1. Prioritising interventions against medication errors

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard;


    …errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication… …experts appointed by 13 healthcare, professional and scientific organisations in Denmark. Test of definition: The definition was applied to historic data from a somatic hospital (2003; 64 patients) [2] and further, prospectively tested in comparable studies of medication errors in a psychiatric hospital… …studies of prescribing errors. In addition, it contributes to identifying medication errors related to high-risk processes and drugs. The definition can therefore be considered a relevant tool for decision makers in modern healthcare for prioritising interventional strategies. 1) Lisby M, Nielsen LP, Brock B…

  2. A theory of human error

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.


    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  3. Photometric Supernova Classification with Machine Learning

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.


    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.

  4. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity

    Hussain, Shaista; Basu, Arindam


    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  5. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses using Structural Plasticity

    Hussain, Shaista; Basu, Arindam


    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  6. Hyperspectral Data Classification Using Factor Graphs

    Makarau, A.; Müller, R.; Palubinskas, G.; Reinartz, P.


    Accurate classification of hyperspectral data remains a challenging task, and new classification methods are being developed to meet the demands of hyperspectral data use. The objective of this paper is to develop a new method for hyperspectral data classification that ensures desirable model properties such as transferability, generalization and probabilistic interpretation. Although factor graphs (undirected graphical models) are not yet widely employed in remote sensing, these models can represent complex systems and are well suited to modelling estimation and decision-making tasks. In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows factorization of a more complex function, leading to the definition of variables (employed to store input data), latent variables (which bridge abstract classes to the data), and factors (defining prior probabilities for spectral features and abstract classes, mapping input data to a mixture of spectral features, and further bridging the mixture to an abstract class). Latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuring (learning) the model on training data yields a parameter set that bridges the input data to a class. The classification algorithm is as follows: spectral bands are separately pre-processed (by unsupervised clustering) to be defined on a finite domain (alphabet), leading to a representation of the data as a multinomial distribution. The represented hyperspectral data are used as input evidence (the evidence vector is selected pixelwise) in the configured factor graph, and inference is run, resulting in the posterior probability. Variational inference (mean field) yields plausible results with low computation time. Calculating the posterior probability for each class…

  7. Error Estimation for Indoor 802.11 Location Fingerprinting

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene;


    …that is inherent to 802.11-based positioning systems can be estimated. Knowing the position error is crucial for many applications that rely on position information: end users could be informed about the estimated position error to avoid frustration in case the system gives faulty position information; service providers could adapt their delivered services based on the estimated position error to achieve a higher service quality; and system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position…

  8. Product Classification in Supply Chain

    Xing, Lihong; Xu, Yaoxuan


    Oriflame is a famous international direct-sales cosmetics company with a complicated supply chain operation, but it lacks a product classification system. It is vital to design a product classification method in order to support Oriflame's global supply planning and improve supply chain performance. This article aims to investigate and design multi-criteria product classification, propose a classification model, suggest application areas for the classification results, and intro...

  9. Reducing INDEL calling errors in whole genome and exome sequencing data

    Fang, Han; Wu, Yiyang; Narzisi, Giuseppe; O’Rawe, Jason A; Barrón, Laura T Jimenez; Rosenbaum, Julie; Ronemus, Michael; Iossifov, Ivan; Schatz, Michael C.; Lyon, Gholson J


    Background INDELs, especially those disrupting protein-coding regions of the genome, have been strongly associated with human diseases. However, there are still many errors with INDEL variant calling, driven by library preparation, sequencing biases, and algorithm artifacts. Methods We characterized whole genome sequencing (WGS), whole exome sequencing (WES), and PCR-free sequencing data from the same samples to investigate the sources of INDEL errors. We also developed a classification schem...

  10. Evaluating bias due to data linkage error in electronic healthcare records.

    Harron, K.; WADE, A.; Gilbert, R.; Muller-Pebody, B; Goldstein, H.


    Background Linkage of electronic healthcare records is becoming increasingly important for research purposes. However, linkage error due to mis-recorded or missing identifiers can lead to biased results. We evaluated the impact of linkage error on estimated infection rates using two different methods for classifying links: highest-weight (HW) classification using probabilistic match weights and prior-informed imputation (PII) using match probabilities. Methods A gold-standard dataset was crea...

  11. R-Peak Detection using Daubechies Wavelet and ECG Signal Classification using Radial Basis Function Neural Network

    Rai, H. M.; Trivedi, A.; Chatterjee, K.; Shukla, S.


    This paper employed the Daubechies wavelet transform (WT) for R-peak detection and a radial basis function neural network (RBFNN) to classify electrocardiogram (ECG) signals. Five types of ECG beats were classified: normal beat, paced beat, left bundle branch block (LBBB) beat, right bundle branch block (RBBB) beat and premature ventricular contraction (PVC). 500 QRS complexes were arbitrarily extracted from 26 records in the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, which is available on the Physionet website. Each QRS complex was represented by 21 points, p1 to p21, and the QRS complexes of each record were categorized according to beat type. System performance was computed using four evaluation metrics: sensitivity, positive predictivity, specificity and classification error rate. The experimental results show that the average sensitivity, positive predictivity, specificity and classification error rate are 99.8%, 99.60%, 99.90% and 0.12%, respectively, with the RBFNN classifier. The overall accuracies achieved for the back propagation neural network (BPNN), multilayered perceptron (MLP), support vector machine (SVM) and RBFNN classifiers are 97.2%, 98.8%, 99% and 99.6%, respectively. The accuracy and processing time of the RBFNN are higher than or comparable with those of the BPNN, MLP and SVM classifiers.
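    The four evaluation metrics named above reduce to simple ratios over a per-class confusion matrix. The counts below are hypothetical, chosen only to illustrate the arithmetic, not taken from the paper:

    ```python
    def beat_metrics(tp, fn, fp, tn):
        """Per-class evaluation metrics for beat classification:
        tp/fn/fp/tn are true/false positives and negatives for one class."""
        sensitivity = tp / (tp + fn)             # fraction of real beats caught
        positive_predictivity = tp / (tp + fp)   # fraction of detections correct
        specificity = tn / (tn + fp)             # fraction of non-beats rejected
        error_rate = (fp + fn) / (tp + fn + fp + tn)
        return sensitivity, positive_predictivity, specificity, error_rate

    # Hypothetical counts for one beat class out of 500 QRS complexes.
    se, pp, sp, err = beat_metrics(tp=98, fn=1, fp=1, tn=400)
    print(f"Se={se:.3f} +P={pp:.3f} Sp={sp:.3f} Err={err:.4f}")
    ```

    Note that with strongly imbalanced classes (400 negatives vs. 99 positives here), specificity and error rate stay high/low almost automatically, which is why sensitivity and positive predictivity are usually the more informative pair.
    
    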

  12. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    Bechet, P.; Mitran, R.; Munteanu, M.


    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to measure the heart rate accurately. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to assess the heart rate accurately over an 8-28 s time interval. The performance of the processing algorithm was validated by minimizing the mean heart-rate error in simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured with a classic contact-based measurement system.
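    A minimal sketch of the MUSIC idea, using a synthetic noisy sinusoid as a stand-in for a Doppler heart-beat signal (all parameters here — sampling rate, subspace dimension, frequency grid — are illustrative assumptions, not the authors' setup): build a sample covariance from signal snapshots, split off the noise subspace, and find the frequency whose steering vector is most orthogonal to it.

    ```python
    import numpy as np

    def music_freq(x, n_sources, m, freqs, fs):
        """Dominant-frequency estimate from the MUSIC pseudospectrum.
        x: real-valued signal, m: snapshot/covariance dimension."""
        # m x m sample covariance from overlapping snapshots.
        snaps = np.array([x[i:i + m] for i in range(len(x) - m)])
        R = snaps.T @ snaps / len(snaps)
        w, V = np.linalg.eigh(R)                 # eigenvalues ascending
        En = V[:, : m - 2 * n_sources]           # noise subspace (2 dims per real tone)
        p = []
        for f in freqs:
            a = np.exp(2j * np.pi * f * np.arange(m) / fs)   # steering vector
            p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return freqs[int(np.argmax(p))]          # peak of the pseudospectrum

    fs = 20.0                       # Hz (hypothetical sensor sampling rate)
    t = np.arange(0, 20, 1 / fs)    # 20 s observation window
    hr_hz = 1.2                     # synthetic heart rate: 1.2 Hz = 72 bpm
    rng = np.random.default_rng(1)
    x = np.sin(2 * np.pi * hr_hz * t) + 0.3 * rng.standard_normal(t.size)

    grid = np.arange(0.8, 3.0, 0.01)
    est = music_freq(x, n_sources=1, m=20, freqs=grid, fs=fs)
    print(f"estimated heart rate: {est * 60:.0f} bpm")
    ```

    The paper's point is the choice of signal-subspace dimension (`n_sources` and `m` here): sized correctly, the subspace split stays stable even for short observation windows.
    
    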

  13. Improved VSM for Incremental Text Classification

    Yang, Zhen; Lei, Jianjun; Wang, Jian; Zhang, Xing; Guo, Jim


    As a simple classification method, VSM has been widely applied in the field of text information processing. Traditional VSM has difficulty selecting a refined vector-model representation that trades off complexity against performance, especially for incremental text mining. To solve these problems, this paper discusses several improvements, such as VSM based on improved TF, TF-IDF and BM25. Maximum mutual information feature selection is then introduced to achieve a low-dimension VSM with less complexity while keeping acceptable precision. Experimental results on spam filtering and short-message classification show that the algorithm achieves higher precision than existing algorithms under the same conditions.
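    A toy vector space model with TF-IDF weighting and cosine similarity, applied to a fabricated four-document spam-filtering corpus. Nearest-neighbour labelling is used here for brevity; the paper's improved TF/BM25 weightings and mutual-information feature selection are not reproduced:

    ```python
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """TF-IDF weighted sparse vectors for a small corpus (basic VSM)."""
        n = len(docs)
        df = Counter(term for doc in docs for term in set(doc))   # document frequency
        idf = {t: math.log(n / df[t]) + 1.0 for t in df}          # smoothed IDF
        return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

    def cosine(u, v):
        """Cosine similarity between two sparse term->weight dicts."""
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Fabricated labelled corpus for spam filtering.
    train = [("win cash prize now".split(), "spam"),
             ("cheap prize click now".split(), "spam"),
             ("meeting agenda for monday".split(), "ham"),
             ("project meeting notes attached".split(), "ham")]
    vecs = tfidf_vectors([d for d, _ in train])

    query = "win a cash prize".split()
    qv = dict(Counter(query))                     # raw term frequencies for the query
    best = max(range(len(train)), key=lambda i: cosine(qv, vecs[i]))
    print("predicted label:", train[best][1])
    ```

    Incremental variants of this scheme mainly differ in how `df`/`idf` statistics are updated as new documents arrive, rather than being recomputed over the whole corpus.
    
    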

  14. Controlling errors in unidosis carts

    Inmaculada Díaz Fernández; Clara Fernández-Shaw Toda; David García Marco


    Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service tracked medication either returned or missing from the unidosis carts, both in the pharmacy and on the wards. Results: Unrevised unidosis carts showed a 0.9% medication error rate (264 errors) versus 0.6% (154) for carts that had previously been revised. In unrevised carts, 70.83% of errors arose mainly when setting up the carts; the rest were due to a lack of stock or una...

  15. Concepts of Classification and Taxonomy Phylogenetic Classification

    Fraix-Burnet, D.


    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited to any kind of evolving objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  16. Surface errors in the course of machining precision optics

    Biskup, H.; Haberl, A.; Rascher, R.


    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools that smooth the surface. The required accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface, and so-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work addresses the formation of surface errors from grinding to polishing by analysing the surfaces after each machining step using non-contact interferometric methods. Errors on the surface can be classified as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed such that only defined spatial frequencies appear in the surface plot. It can be observed that some frequencies may already form in the early machining steps, such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technology, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  17. Retrace error reconstruction based on point characteristic function.

    He, Yiwei; Hou, Xi; Quan, Haiyang; Song, Weihong


    Figure-measuring interferometers generally work in the null condition, i.e., the reference rays share the same optical path as the test rays through the imaging system. In this case, apart from field distortion, the effect of other aberrations cancels out and does not produce measurable systematic error. However, for spatial-carrier interferometry and non-null aspheric testing, the null condition typically cannot be achieved, and the resulting excess measurement error is referred to as retrace error. Previous studies of retrace error fall broadly into two categories: those based on a 4th-order aberration formalism, and those based on ray tracing through a model of the interferometer. In this paper, the point characteristic function (PCF) is used to analyze retrace error in a Fizeau interferometer working in a high spatial-carrier condition. We present in detail the process of reconstructing retrace error with and without element error. Our results are compared with those obtained by ray tracing through an interferometer model; the small difference between them (less than 3%) shows that our method is effective. PMID:26561092

  18. Coded modulation with unequal error protection

    Wei, Lee-Fang


    It is always desirable to maintain communications in difficult situations, even though fewer messages can get across. This paper provides such capabilities for one-way broadcast media, such as the envisioned terrestrial broadcasting of digital high-definition television signals. In this television broadcasting, the data from video source encoders are not equally important. It is desirable that the important data be recovered by each receiver even under poor receiving conditions. Two approaches for providing such unequal error protection to different classes of data are presented. Power-efficient and bandwidth-efficient coded modulation is used in both approaches. The first approach is based on novel signal constellations with nonuniformly spaced signal points. The second uses time division multiplexing of different conventional coded modulation schemes. Both approaches can provide error protection for the important data to an extent that can hardly be achieved using conventional coded modulation with equal error protection. For modest amounts of important data, the first approach has, additionally, the potential of providing immunity from impulse noise through simple bit or signal-point interleaving.

  19. Acquisition of the orthographic system: proficiency in written expression and classification of orthographic errors

    Clarice Costa Rosa


    Full Text Available PURPOSE: to analyze proficiency in written expression and to classify the orthographic errors produced during the first four grades of elementary school, identifying the most frequent errors, describing their evolution, and comparing them by grade and by gender. METHOD: a cross-sectional study was conducted in the population of students from 1st to 4th grade of a state school in the city of Porto Alegre. 214 subjects were assessed by means of the word dictation of the writing subtest of the School Performance Test. RESULTS: higher levels of sufficiency in written expression were observed in the initial grades; 4th-grade subjects showed difficulty in mastering accentuation rules. Analysis of the dictation showed that errors of multiple representations (14.76%) were the most frequent in this population. When the different types of orthographic errors across the four grades were compared, a significant difference between them was observed over the grades (P…

  20. Fast Wavelet-Based Visual Classification

    Yu, Guoshen; Slotine, Jean-Jacques


    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a si...

  1. 3-PRS serial-parallel machine tool error calibration and parameter identification

    ZHAO Jun-wei; DAI Jun; HUANG Jun-jie


    The 3-PRS serial-parallel machine tool consists of a 3-degree-of-freedom (DOF) implementation platform and a 2-DOF X-Y platform. Error modeling and parameter identification methods were derived for the 3-PRS serial-parallel machine tool. Error analysis, error modeling, identification of the error parameters, and the measurement equipment used for error measurement were investigated. In order to achieve geometric parameter calibration and error compensation of the serial-parallel machine tool, the nominal structural parameters in the controller were adjusted by identifying the structure of the machine tool. With the establishment of a vector dimension chain, error analysis, error modeling, error measurement and error compensation can be carried out.

  2. The paradox of atheoretical classification

    Hjørland, Birger


    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: should the claim that classifications ideally are natural and…

  3. Information gathering for CLP classification

    Ida Marcello


    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified on the basis of available information, according to the requirements of Annex I of the CLP Regulation. CLP stipulates that harmonised classification is performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances) and for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  4. Quantum error correction beyond qubits

    Aoki, Takao; Takahashi, Go; Kajiya, Tadashi; Yoshikawa, Jun-Ichi; Braunstein, Samuel L.; van Loock, Peter; Furusawa, Akira


    Quantum computation and communication rely on the ability to manipulate quantum states robustly and with high fidelity. To protect fragile quantum-superposition states from corruption through so-called decoherence noise, some form of error correction is needed. Therefore, the discovery of quantum error correction (QEC) was a key step to turn the field of quantum information from an academic curiosity into a developing technology. Here, we present an experimental implementation of a QEC code for quantum information encoded in continuous variables, based on entanglement among nine optical beams. This nine-wave-packet adaptation of Shor's original nine-qubit scheme enables, at least in principle, full quantum error correction against an arbitrary single-beam error.

  5. Error-Correcting Data Structures

    de Wolf, Ronald


    We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.

  6. Orbital and Geodetic Error Analysis

    Felsentreger, T.; Maresca, P.; Estes, R.


    Results that previously required several runs are determined in a more computer-efficient manner. Multiple runs are performed only once with GEODYN and stored on tape. ERODYN then performs the matrix partitioning and linear algebra required for each individual error-analysis run.

  7. Comprehensive Error Rate Testing (CERT)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  8. Numerical optimization with computational errors

    Zaslavski, Alexander J


    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...
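    The book's central theme, that an algorithm still reaches a good approximate solution when every step is perturbed by a bounded computational error, can be illustrated with a minimal sketch. The quadratic objective, box constraint, step size and error bound below are illustrative assumptions, not taken from the book:

```python
import numpy as np

def noisy_projected_gradient(grad, project, x0, step, delta, iters, rng):
    """Projected gradient descent where each gradient evaluation is
    corrupted by an error of norm at most delta (illustrative setup)."""
    x = x0
    for _ in range(iters):
        e = rng.standard_normal(x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # error bounded by delta
        x = project(x - step * (grad(x) + e))
    return x

rng = np.random.default_rng(0)
grad = lambda x: 2 * x                               # f(x) = ||x||^2
project = lambda x: np.clip(x, -1.0, 1.0)            # feasible set: box [-1, 1]^n
x = noisy_projected_gradient(grad, project, np.ones(5), 0.1, 1e-3, 200, rng)
print(np.linalg.norm(x))  # small: within O(delta) of the minimizer 0
```

With the error norm bounded by delta, the iterates settle in a neighbourhood of the minimizer whose radius is proportional to delta, matching the book's qualitative conclusion.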

  9. Quantum learning: asymptotically optimal classification of qubit states

    Pattern recognition is a central topic in learning theory, with numerous applications such as voice and text recognition, image analysis and computer diagnosis. The statistical setup in classification is the following: we are given an i.i.d. training set (X1, Y1), ..., (Xn, Yn), where Xi represents a feature and Yi ∈ {0, 1} is a label attached to that feature. The underlying joint distribution of (X, Y) is unknown, but we can learn about it from the training set, and we aim at devising low-error classifiers f: X→Y used to predict the label of new incoming features. In this paper, we solve a quantum analogue of this problem, namely the classification of two arbitrary unknown mixed qubit states. Given a number of 'training' copies from each of the states, we would like to 'learn' about them by performing a measurement on the training set. The outcome is then used to design measurements for the classification of future systems with unknown labels. We find the asymptotically optimal classification strategy and show that typically it performs strictly better than a plug-in strategy, which consists of estimating the states separately and then discriminating between them using the Helstrom measurement. The figure of merit is given by the excess risk, equal to the difference between the probability of error and the probability of error of the optimal measurement for known states. We show that the excess risk scales as n⁻¹ and compute the exact constant of the rate.
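    The Helstrom measurement mentioned above attains the minimum error probability for discriminating two known states, P_err = ½(1 − ‖p₀ρ₀ − p₁ρ₁‖₁). A small numerical sketch, where the two example states are an illustrative choice:

```python
import numpy as np

def helstrom_error(rho0, rho1, p0=0.5):
    """Minimum error probability for discriminating two known states:
    1/2 * (1 - trace_norm(p0*rho0 - p1*rho1))."""
    gamma = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()  # sum of |eigenvalues|
    return 0.5 * (1 - trace_norm)

# Two pure qubit states |0> and |+>, chosen for illustration
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho1 = np.outer(plus, plus)

err = helstrom_error(rho0, rho1)
print(err)  # 1/2 * (1 - 1/sqrt(2)) ≈ 0.1464
```

For equal priors and pure states this reduces to the familiar ½(1 − √(1 − |⟨ψ₀|ψ₁⟩|²)).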

  10. Vertebral fracture classification

    de Bruijne, Marleen; Pettersen, Paola C.; Tankó, László B.; Nielsen, Mads


    A novel method for classification and quantification of vertebral fractures from X-ray images is presented. Using pairwise conditional shape models trained on a set of healthy spines, the most likely unfractured shape is estimated for each of the vertebrae in the image. The difference between the true shape and the reconstructed normal shape is an indicator for the shape abnormality. A statistical classification scheme with the two shapes as features is applied to detect, classify, and grade various types of deformities. In contrast with the current (semi-)quantitative grading strategies this method takes the full shape into account, it uses a patient-specific reference by combining population-based information on biological variation in vertebra shape and vertebra interrelations, and it provides a continuous measure of deformity. Good agreement with manual classification and grading is demonstrated on 204 lateral spine radiographs with in total 89 fractures.

  11. Supernova Photometric Classification Challenge

    Kessler, Richard; Jha, Saurabh; Kuhlmann, Stephen


    We have publicly released a blinded mix of simulated SNe, with types (Ia, Ib, Ic, II) selected in proportion to their expected rate. The simulation is realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point spread function and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia type SNe are based on spectroscopically confirmed light curves that include unpublished non-Ia samples donated from the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). We challenge scientists to run their classification algorithms and report a type for each SN. A spectroscopically confirmed subset is provided for training. The goals of this challenge are to (1) learn the relative strengths and weaknesses of the different classification algorithms, (2) use the results to improve classification algorithms, and (3) understand what spectroscopically confirmed sub-...

  12. Bosniak classification system

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;


    BACKGROUND: The Bosniak classification was originally based on computed tomographic (CT) findings. Magnetic resonance (MR) and contrast-enhanced ultrasonography (CEUS) imaging may demonstrate findings that are not depicted at CT, and there may not always be a clear correlation between the findings at MR and CEUS imaging and those at CT. PURPOSE: To compare diagnostic accuracy of MR, CEUS, and CT when categorizing complex renal cystic masses according to the Bosniak classification. MATERIAL AND METHODS: From February 2011 to June 2012, 46 complex renal cysts were prospectively evaluated by three readers. Each mass was categorized according to the Bosniak classification and CT was chosen as gold standard. Kappa was calculated for diagnostic accuracy and data was compared with pathological results. RESULTS: CT images found 27 BII, six BIIF, seven BIII, and six BIV. Forty-three cysts could...

  13. Acoustic classification of dwellings

    Berardi, Umberto; Rasmussen, Birgit


    Schemes for the classification of dwellings according to different building performances have been proposed in recent years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms of descriptors, number of classes, and class intervals occurred between national schemes. However, a proposal “acoustic classification scheme for dwellings” has been developed recently in the European COST Action TU0901 with 32 member countries. This proposal has been accepted as an ISO work item. This paper...

  14. Errors in Chemical Sensor Measurements

    Artur Dybko


    Various types of errors during the measurements of ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  15. Mixed Burst Error Correcting Codes

    Sethi, Amita


    In this paper, we construct codes which improve on the previously known blockwise burst error correcting codes in terms of their error correcting capabilities. Along with different bursts in different sub-blocks, the given codes also correct overlapping bursts of a given length in two consecutive sub-blocks of a code word. Such codes are called mixed burst correcting (mbc) codes.

  16. Job Mobility and Measurement Error

    Bergin, Adele


    This thesis consists of essays investigating job mobility and measurement error. Job mobility, captured here as a change of employer, is a striking feature of the labour market. In empirical work on job mobility, researchers often depend on self-reported tenure data to identify job changes. There may be measurement error in these responses and consequently observations may be misclassified as job changes when truly no change has taken place and vice versa. These observations serve as a starti...

  17. Legal aspects of medical errors

    Vučetić Čedomir S.; Vukašinović Zoran S.; Tulić Goran Dž.; Dulić Borivoje V.; Dimitrijević K.; Kalezić Nevena K.


    Healing and medical care together form a highly organized technological system with substantial expert, ethical and legal regulation. Providing medical care is a very sensitive area that reaches deep into a person's integrity, so the law is necessary as a regulator in this area. The aim of this work is to present medical errors from the legal aspect and from clinical practice. Errors, negligent conduct during medical treatment and bad treatment outcomes are categories that can easily be sw...

  18. Land-cover classification with an expert classification algorithm using digital aerial photographs

    José L. de la Cruz


    The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result showed that the images of digital airborne sensors hold considerable promise for the future of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered with high-resolution images, while achieving reliabilities better than those of traditional methods.

  19. Quantum error correction for beginners

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  20. Measuring verification device error rates

    A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, ''a crate of biased coins''. This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix
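    The "crate of biased coins" model described above can be simulated directly: each identity is a coin with its own error rate, the pooled rate is estimated by averaging over identities, and the standard error must reflect both intra- and inter-identity variation. The Beta distribution and the counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Crate of biased coins": each identity has its own Type I error rate,
# drawn here (illustrative assumption) from a Beta distribution
n_identities, trials_each = 50, 40
rates = rng.beta(2, 48, size=n_identities)       # mean rate = 0.04
errors = rng.binomial(trials_each, rates)        # error count per identity

# Pooled Type I error rate: average over all identities and trials
pooled_hat = errors.sum() / (n_identities * trials_each)

# The standard error must capture both within- and between-identity
# variation, so estimate it from the per-identity error fractions
per_id = errors / trials_each
se = per_id.std(ddof=1) / np.sqrt(n_identities)
print(f"pooled rate ~ {pooled_hat:.3f} +/- {1.96 * se:.3f}")
```

Estimating the standard error from raw trial counts alone would understate it, since it would ignore the inter-identity spread of the rate distribution.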

  1. Classification of syringomyelia.

    Milhorat, T H


    Syringomyelia poses special challenges for the clinician because of its complex symptomatology, uncertain pathogenesis, and multiple options of treatment. The purpose of this study was to classify intramedullary cavities according to their most salient pathological and clinical features. Pathological findings obtained in 175 individuals with tubular cavitations of the spinal cord were correlated with clinical and magnetic resonance (MR) imaging findings in a database of 927 patients. A classification system was developed in which the morbid anatomy, cause, and pathogenesis of these lesions are emphasized. The use of a disease-based classification of syringomyelia facilitates diagnosis and the interpretation of MR imaging findings and provides a guide to treatment. PMID:16676921

  2. Classification des rongeurs

    Mignon, Jacques; Hardouin, Jacques


    Readers of the BEDIM Bulletin sometimes seem to have difficulty with the scientific classification of the animals known as "rodents" in everyday language. Given the disputes that still surround this classification today, this is hardly surprising. The brief synthesis that follows concerns animals that are, or may become, part of mini-livestock farming. The note aims at providing the main characteristics of the principal families of rodents relevan...

  3. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Mark D McDonnell

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
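    The core of the ELM approach described above, a fixed random hidden layer whose output weights are solved in closed form by least squares, can be sketched on toy data. The synthetic Gaussian data, layer size and sparsity mask below are illustrative assumptions, not the paper's MNIST setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian classes in 10 dimensions (stand-in for image pixels)
X = np.vstack([rng.normal(-1, 1, (100, 10)), rng.normal(1, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)
Y = np.eye(2)[y]                       # one-hot targets

# ELM: random sparse input weights (a crude analogue of the paper's random
# "receptive fields": ~90% of weights forced to zero), then the output
# weights are found in closed form by least squares -- no backpropagation
n_hidden = 50
W = rng.standard_normal((10, n_hidden))
W *= rng.random(W.shape) > 0.9         # keep roughly 10% of the weights
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                 # fixed random hidden layer
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The absence of iterative training on the input weights is what makes ELM training so fast: the only fitting step is one linear least-squares solve.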

  4. Basic Hand Gestures Classification Based on Surface Electromyography

    Aleksander Palkowski; Grzegorz Redlarski


    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the prop...

  5. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Hongxia Li


    With the development of computer science and information technology, libraries are moving toward digitization and networking. The library digitization process converts books into digital information, whose high-quality preservation and management are achieved by computer technology as well as text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward the ICA semantic clustering algorithm. It...

  6. Classification Accuracy of Neural Networks with PCA in Emotion Recognition

    Novakovic Jasmina; Minic Milomir; Veljovic Alempije


    This paper presents classification accuracy of neural network with principal component analysis (PCA) for feature selections in emotion recognition using facial expressions. Dimensionality reduction of a feature set is a common preprocessing step used for pattern recognition and classification applications. PCA is one of the popular methods used, and can be shown to be optimal using different optimality criteria. Experiment results, in which we achieved a recognition rate of approximately 85%...
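    The PCA preprocessing step described above can be sketched with NumPy: centre the feature vectors, take the SVD, and project onto the leading components before handing the reduced features to a classifier. The synthetic low-rank data and the choice k = 5 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors: 200 samples of 50 correlated features
# (rank-5 signal plus a little noise, standing in for facial features)
X = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
X += 0.1 * rng.standard_normal((200, 50))

# PCA via SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()  # variance fraction per component

k = 5                                  # keep the top-k principal components
Z = Xc @ Vt[:k].T                      # reduced features for the classifier
print(f"{explained[:k].sum():.1%} of variance kept in {k} components")
```

Feeding Z instead of X to the neural network shrinks the input layer by a factor of 10 here while discarding almost no variance, which is the point of the preprocessing step.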

  7. Basic Hand Gestures Classification Based on Surface Electromyography.

    Palkowski, Aleksander; Redlarski, Grzegorz


    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method. PMID:27298630

  8. Basic Hand Gestures Classification Based on Surface Electromyography

    Aleksander Palkowski


    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method.

  9. Basic Hand Gestures Classification Based on Surface Electromyography

    Palkowski, Aleksander; Redlarski, Grzegorz


    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method. PMID:27298630

  10. Compensatory neurofuzzy model for discrete data classification in biomedical

    Ceylan, Rahime


    Biomedical data falls into two main categories: signals and discrete data; accordingly, studies in this area address either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. Likewise, many models exist in the literature for the classification of discrete data, such as sample values obtained from blood analysis or biopsy in the medical process. No single algorithm achieves a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset, and a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  11. Flare classification with X-ray, particles, and radio bursts

    Some important works are concisely reviewed on flare classification with the observational results from various satellites during the last solar maximum. First, the observational definitions of impulsive and gradual types of flares are given. Next, the phenomena pertaining to meter-wave bursts are described and explained. When these meter-wave phenomena are taken into account, it is shown that clear classification can be achieved. Basically, all the flares are classified into two types of events, that is, impulsive and gradual flares. This simple classification may help to understand the relationships among the various phenomena on the sun and those in the interplanetary space. (author)

  12. Words semantic orientation classification based on HowNet

    LI Dun; MA Yong-tao; GUO Jian-li


    Based on the text orientation classification, a new measurement approach to semantic orientation of words was proposed. According to the integrated and detailed definition of words in HowNet, seed sets including the words with intense orientations were built up. The orientation similarity between the seed words and the given word was then calculated using the sentiment weight priority to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the measurement approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.
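    The seed-set idea described above, scoring a word by its similarity to positive seeds versus negative seeds, can be sketched as follows. As a stand-in for HowNet's sememe-based similarity, this toy example uses cosine similarity between made-up word vectors; all words and vectors are hypothetical:

```python
import numpy as np

# Toy stand-in for HowNet similarity: words as 2-d vectors, cosine
# similarity in place of sememe overlap (illustrative assumption)
vecs = {
    "excellent": np.array([1.0, 0.2]), "wonderful": np.array([0.9, 0.3]),
    "terrible":  np.array([-1.0, 0.1]), "awful":    np.array([-0.9, 0.2]),
    "good":      np.array([0.8, 0.1]),  "poor":     np.array([-0.7, 0.3]),
}
pos_seeds, neg_seeds = ["excellent", "wonderful"], ["terrible", "awful"]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def orientation(word):
    """Positive score => positive orientation (seed-set comparison)."""
    p = max(cos(vecs[word], vecs[s]) for s in pos_seeds)
    n = max(cos(vecs[word], vecs[s]) for s in neg_seeds)
    return p - n

print(orientation("good"), orientation("poor"))  # positive, negative
```

The paper additionally weights the comparison by sentiment priority and combines the word score with context; this sketch shows only the bare seed-set mechanism.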

  13. Sequence Classification: 890773 [

    oline as sole nitrogen source; deficiency of the human homolog causes HPII, an autosomal recessive inborn error of metabolism; Put2p || ...

  14. Semi-automatic classification of glaciovolcanic landforms: An object-based mapping approach based on geomorphometry

    Pedersen, G. B. M.


    A new object-oriented approach is developed to classify glaciovolcanic landforms (Procedure A) and their landform elements boundaries (Procedure B). It utilizes the principle that glaciovolcanic edifices are geomorphometrically distinct from lava shields and plains (Pedersen and Grosse, 2014), and the approach is tested on data from Reykjanes Peninsula, Iceland. The outlined procedures utilize slope and profile curvature attribute maps (20 m/pixel) and the classified results are evaluated quantitatively through error matrix maps (Procedure A) and visual inspection (Procedure B). In procedure A, the highest obtained accuracy is 94.1%, but even simple mapping procedures provide good results (> 90% accuracy). Successful classification of glaciovolcanic landform element boundaries (Procedure B) is also achieved and this technique has the potential to delineate the transition from intraglacial to subaerial volcanic activity in orthographic view. This object-oriented approach based on geomorphometry overcomes issues with vegetation cover, which has been typically problematic for classification schemes utilizing spectral data. Furthermore, it handles complex edifice outlines well and is easily incorporated into a GIS environment, where results can be edited or fused with other mapping results. The approach outlined here is designed to map glaciovolcanic edifices within the Icelandic neovolcanic zone but may also be applied to similar subaerial or submarine volcanic settings, where steep volcanic edifices are surrounded by flat plains.
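    The quantitative evaluation through error matrices mentioned above is conventional confusion-matrix accounting: overall accuracy is the diagonal fraction, and Cohen's kappa corrects it for chance agreement. The counts below are hypothetical, not the paper's data:

```python
import numpy as np

# Error (confusion) matrix: rows = classified, columns = reference.
# Hypothetical counts for three landform classes.
cm = np.array([[48,  2,  0],
               [ 3, 40,  2],
               [ 0,  1, 44]])

overall = np.trace(cm) / cm.sum()        # fraction of agreement on the diagonal

# Cohen's kappa: agreement corrected for chance, using the marginals
n = cm.sum()
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
kappa = (overall - expected) / (1 - expected)
print(f"overall accuracy {overall:.1%}, kappa {kappa:.3f}")
```

Per-class producer's and user's accuracies follow the same pattern, dividing each diagonal entry by its column or row sum respectively.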

  15. Quantitative Stellar Spectral Classification

    Jurgen Stock; María Jeanette Stock


    Equivalent widths of 19 absorption lines in CCD slit spectra of 490 stars are compared with their respective (B-V) colors and their absolute magnitudes derived from Hipparcos parallaxes. Algorithms are found which yield the absolute magnitudes for all spectral types with an average error of 0.26 magnitudes. The (B-V) colors can be reproduced with an average error of 0.020 magnitudes.

  16. Super pixel density based clustering automatic image classification method

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu


    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, a superpixel density-based cluster-centre algorithm is applied to automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is designed to select cluster centres automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention and is computationally faster than the density clustering algorithm, while effectively performing automated classification and outlier extraction.
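    The density-and-distance idea described above follows the density-peaks pattern: a cluster centre has high local density and lies far from any point of higher density. A minimal sketch on toy "pixel" data, where the synthetic points, kernel width and number of centres are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated toy "pixel" clusters in (row, col, gray) space
pts = np.vstack([rng.normal(0, 0.3, (60, 3)), rng.normal(3, 0.3, (60, 3))])

# Density-peaks quantities:
#   rho_i   = local density (Gaussian kernel over pairwise distances)
#   delta_i = distance to the nearest point of higher density
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rho = np.exp(-(d / 0.5) ** 2).sum(axis=1)
delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = rho > rho[i]
    delta[i] = d[i, higher].min() if higher.any() else d[i].max()

# Cluster centres combine high density with an unusually large delta;
# low-rho, high-delta points would instead be flagged as outliers
centers = np.argsort(rho * delta)[-2:]
labels = np.argmin(d[:, centers], axis=1)  # assign each point to nearest centre
print("cluster sizes:", np.bincount(labels))
```

The superpixel preprocessing in the paper serves to shrink the pairwise-distance matrix, which is quadratic in the number of points and dominates the cost of this computation.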

  17. Co-occurrence Models in Music Genre Classification

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan


    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider … difficult 11 genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models…

  18. Iris Image Classification Based on Hierarchical Visual Codebook.

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang


    Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection. PMID:26353275

  19. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Hongxia Li


    Full Text Available With the development of computer science and information technology, libraries are moving toward digitisation and networked access. The library digitisation process converts books into digital information, whose high-quality preservation and management are achieved by computer technology together with text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, which realizes independent component analysis of complex network text classification. Through ICA clustering of independent components, the feature words of text classes are clustered and extracted, improving the visualization of text retrieval. Finally, we compare the collocation algorithm and the ICA clustering algorithm in a text classification and keyword search experiment, reporting clustering degree and accuracy figures. The simulation analysis shows that the ICA clustering algorithm improves the text classification clustering degree by 1.2% and accuracy by up to 11.1%, improving the efficiency and accuracy of text classification retrieval and providing a theoretical reference for the text retrieval classification of eBooks.

  20. Constructive Conjugate Codes for Quantum Error Correction and Cryptography

    Hamada, Mitsuru


    A conjugate code pair is defined as a pair of linear codes either of which contains the dual of the other. A conjugate code pair represents the essential structure of the corresponding Calderbank-Shor-Steane (CSS) quantum error-correcting code. It is known that conjugate code pairs are applicable to quantum cryptography. In this work, a polynomial construction of conjugate code pairs is presented. The constructed pairs achieve the highest known achievable rate on additive channels, and are de...

  1. Error Detection Processes in Problem Solving.

    Allwood, Carl Martin


    Describes a study which analyzed problem solvers' error detection processes by instructing subjects to think aloud when solving statistical problems. Effects of evaluative episodes on error detection, detection of different error types, error detection processes per se, and relationship of error detection behavior to problem-solving proficiency…

  2. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Qi Wang


    Full Text Available Abstract: This paper presents a new vehicle classification approach and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is to measure the dynamic strain caused by vehicles crossing the pavement to obtain the corresponding vehicle parameters – wheelbase and number of axles – and then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. According to the special arrangement of the sensors and the different times at which a vehicle arrives at them, one can estimate the vehicle's speed accurately and, from it, the vehicle's wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is considerable overlap between vehicle wheelbase patterns, so directly setting a fixed threshold for vehicle classification often leads to low-accuracy results. Machine learning pattern recognition methods are believed to be among the most effective tools for this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors and automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments are introduced in this paper. Two SVM classification algorithms (one-against-all, one-against-one) are used to classify single-sensor data and combined multiple-sensor data. Comparison of the results of the two classification methods shows that the classification accuracy is very close using single or multiple data. Our results indicate that using multiclass SVM-based fusion of multiple sensor data significantly improves
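
    The one-against-one multiclass scheme mentioned above can be illustrated independently of the SVM machinery: one binary classifier is built per class pair, and the pairwise winners vote. The sketch below substitutes a trivial midpoint rule on wheelbase for a trained SVM; the TRAIN data, class names, and thresholds are hypothetical, not taken from the paper.

```python
from itertools import combinations

# Hypothetical training data: (wheelbase_m, num_axles) per vehicle class
TRAIN = {
    "car":   [(2.6, 2), (2.8, 2)],
    "van":   [(3.4, 2), (3.6, 2)],
    "truck": [(5.5, 3), (6.0, 4)],
}

def midpoint_classifier(a, b):
    """Stand-in binary classifier separating classes a and b at the
    midpoint of their mean wheelbases (a real system would train an SVM)."""
    mean = lambda c: sum(x[0] for x in TRAIN[c]) / len(TRAIN[c])
    cut = (mean(a) + mean(b)) / 2.0
    return lambda x: a if x[0] < cut else b

def one_vs_one_predict(x, classes):
    """One-against-one: every class pair votes; the majority wins."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[midpoint_classifier(a, b)(x)] += 1
    return max(votes, key=votes.get)

print(one_vs_one_predict((2.7, 2), list(TRAIN)))  # → car
print(one_vs_one_predict((5.8, 3), list(TRAIN)))  # → truck
```

    One-against-all instead trains one classifier per class against the rest and picks the class with the strongest positive response; with k classes it needs k classifiers rather than k(k-1)/2.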

  3. Shark Teeth Classification

    Brown, Tom; Creel, Sally; Lee, Velda


    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  4. Classification system: Netherlands

    Hartemink, A.E.


    Although people have always classified soils, it is only since the mid 19th century that soil classification emerged as an important topic within soil science. It forced soil scientists to think systematically about soils and their genesis, and it developed to facilitate communication between soil scienti

  5. Text document classification

    Novovičová, Jana

    č. 62 (2005), s. 53-54. ISSN 0926-4981 R&D Projects: GA AV ČR IAA2075302; GA AV ČR KSK1019101; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : document representation * categorization * classification Subject RIV: BD - Theory of Information

  6. Automated Stellar Spectral Classification

    Bailer-Jones, Coryn; Irwin, Mike; von Hippel, Ted


    Stellar classification has long been a useful tool for probing important astrophysical phenomena. Beyond simply categorizing stars it yields fundamental stellar parameters, acts as a probe of galactic abundance distributions and gives a first foothold on the cosmological distance ladder. The MK system in particular has survived on account of its robustness to changes in the calibrations of the physical parameters. Nonetheless, if stellar classification is to continue as a useful tool in stellar surveys, then it must adapt to keep pace with the large amounts of data which will be acquired as magnitude limits are pushed ever deeper. We are working on a project to automate the multi-parameter classification of visual stellar spectra, using artificial neural networks and other techniques. Our techniques have been developed with 10,000 spectra (B Analysis as a front-end compression of the data. Our continuing work also looks at the application of synthetic spectra to the direct classification of spectra in terms of the physical parameters of Teff, log g, and [Fe/H].

  7. Classification of waste packages

    Mueller, H.P.; Sauer, M.; Rojahn, T. [Versuchsatomkraftwerk GmbH, Kahl am Main (Germany)


    A barrel gamma scanning unit has been in use at the VAK for the classification of radioactive waste materials since 1998. The unit provides the facility operator with the data required for the classification of waste barrels. Once these data have been entered into the AVK data processing system, the radiological status of raw waste as well as pre-treated and processed waste can be tracked from the point of origin to the point at which the waste is delivered to final storage. Since the barrel gamma scanning unit was commissioned in 1998, approximately 900 barrels have been measured and the relevant data required for classification collected and analyzed. Based on the positive results of experience in the use of the mobile barrel gamma scanning unit, the VAK now offers the classification of barrels as a service to external users. Depending upon waste quantity accumulation, this measurement unit offers facility operators a reliable, time-saving, and cost-effective means of identifying and documenting the radioactivity inventory of barrels scheduled for final storage. (orig.)

  8. The Classification Conundrum.

    Granger, Charles R.


    Argues against the five-kingdom scheme of classification as using inconsistent criteria, ending up with divisions that are forced, not natural. Advocates an approach using cell type/complexity and modification of the metabolic machinery, recommending the five-kingdom scheme as starting point for class discussion on taxonomy and its conceptual…

  9. Improving Student Question Classification

    Heiner, Cecily; Zachary, Joseph L.


    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  10. Classifications in popular music

    A. van Venrooij; V. Schmutz


    The categorical system of popular music, such as genre categories, is a highly differentiated and dynamic classification system. In this article we present work that studies different aspects of these categorical systems in popular music. Following the work of Paul DiMaggio, we focus on four questio

  11. Dynamic Latent Classification Model

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre;

    possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics in...

  12. Classification of myocardial infarction

    Saaby, Lotte; Poulsen, Tina Svenstrup; Hosbond, Susanne Elisabeth;


    The classification of myocardial infarction into 5 types was introduced in 2007 as an important component of the universal definition. In contrast to the plaque rupture-related type 1 myocardial infarction, type 2 myocardial infarction is considered to be caused by an imbalance between demand and...

  13. Error detection process - Model, design, and its impact on computer performance

    Shin, K. G.; Lee, Y.-H.


    An analytical model is developed for computer error detection processes and applied to estimate their influence on system performance. Faults in the hardware, not in the design, are assumed to be the potential cause of transition to erroneous states during normal operations. The classification properties and associated recovery methods of error detection are discussed. The probability of obtaining an unreliable result is evaluated, along with the resulting computational loss. Error detection during design is considered and a feasible design space is outlined. Extension of the methods to account for the effects of extant multiple faults is indicated.

  14. Hemisphericity and student achievement.

    Yeap, L L


    Hemispheric preference, the newest element of learning style, refers to the tendency of a person to use one side of the brain to perceive and function more than the other. The objective of the study was to investigate the psychological domain of learning styles in terms of the hemispheric patterns of Singapore Secondary Two students in the three achievement levels, namely Normal (low achievers), Express (average achievers), and Special (high achievers). Using the Cognitive Laterality Battery (Gordon, 1986) to measure the students' hemispheric dominance, the study found that it is in the psychological domain of the students' learning styles, in terms of their hemispheric dominance that the Secondary Two students in the three achievement levels are distinctly different. PMID:2583937

  15. [Classification of primary bone tumors].

    Dominok, G W; Frege, J


    An expanded classification for bone tumors is presented based on the well known international classification as well as earlier systems. The current status and future trends in this area are discussed. PMID:3461626

  16. Touch-Trigger Probe Error Compensation in a Machining Center

    Kinematic contact trigger probes are widely used for feature inspection and measurement on coordinate measuring machines (CMMs) and computer numerically controlled (CNC) machine tools. Recently, probing accuracy has become one of the most important factors in the improvement of product quality, as the accuracy of such machining centers and measuring machines increases. Although high-accuracy probes using strain gauges can meet this requirement, in this paper we study the widely used, economical kinematic contact probe to examine its probing mechanism and errors, and to make the best use of its performance. Stylus-ball-radius and center-alignment errors are analyzed, and the probing error mechanism in the 3D measuring coordinate system is described using numerical expressions. Macro algorithms are developed for the compensation of these errors, and actual tests and verifications are performed with a kinematic contact trigger probe and a reference sphere on a CNC machine tool
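
    The stylus-ball-radius compensation described above amounts to shifting the recorded ball-centre coordinate by the stylus radius along the surface normal at the contact point. A minimal sketch, assuming the surface normal is already known (in practice it must be estimated from the nominal surface geometry):

```python
import math

def compensate(probed_point, surface_normal, stylus_radius):
    """Shift the recorded ball-centre point by the stylus radius along
    the (unit-normalised) surface normal to recover the contact point."""
    n = math.sqrt(sum(c * c for c in surface_normal))
    return tuple(p - stylus_radius * c / n
                 for p, c in zip(probed_point, surface_normal))

# Probing a plane whose outward normal is +Z with a 1 mm radius stylus:
# the recorded ball centre sits 1 mm above the true surface point.
centre = (10.0, 5.0, 1.0)
print(compensate(centre, (0.0, 0.0, 1.0), 1.0))  # → (10.0, 5.0, 0.0)
```

    Centre-alignment error would add a constant offset vector to `probed_point` before this step; the two corrections compose.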

  17. Quadratic Dynamical Decoupling with Non-Uniform Error Suppression

    Quiroz, G


    We analyze numerically the performance of the near-optimal quadratic dynamical decoupling (QDD) method for suppressing single-qubit decoherence errors [J. West et al., Phys. Rev. Lett. 104, 130501 (2010)]. The QDD sequence is formed by nesting two optimal Uhrig dynamical decoupling sequences for two orthogonal axes, comprising N1 and N2 pulses, respectively. Varying these numbers, we study the decoherence suppression properties of QDD directly by isolating the errors associated with each system basis operator present in the system-bath interaction Hamiltonian. Each individual error scales with the lowest order of the Dyson series, therefore immediately yielding the order of decoherence suppression. We show that the error suppression properties of QDD depend upon the parities of N1 and N2, and that near-optimal performance is achieved for general single-qubit interactions when N1=N2.
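
    The pulse timings underlying QDD follow the standard Uhrig formula t_j = T sin^2(jπ/(2N+2)), with an inner N2-pulse sequence nested inside each interval of the outer N1-pulse sequence. The sketch below computes only the timings (it does not track the two orthogonal pulse axes), as an illustration of the nesting:

```python
import math

def udd_times(n, total=1.0):
    """Uhrig dynamical decoupling: pulse j at T*sin^2(j*pi/(2n+2))."""
    return [total * math.sin(j * math.pi / (2 * n + 2)) ** 2
            for j in range(1, n + 1)]

def qdd_times(n1, n2, total=1.0):
    """QDD timing sketch: an inner n2-pulse UDD sequence rescaled into
    each interval of an outer n1-pulse UDD sequence."""
    outer = [0.0] + udd_times(n1, total) + [total]
    inner = []
    for a, b in zip(outer, outer[1:]):
        inner.extend(a + t for t in udd_times(n2, b - a))
    return inner

print([round(t, 4) for t in udd_times(3)])  # → [0.1464, 0.5, 0.8536]
```

    The full QDD sequence applies pulses about one axis at the outer times and about the orthogonal axis at the inner times, which is what makes the suppression order sensitive to the parities of N1 and N2.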

  18. Efficient Fingercode Classification

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
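
    The fast-search idea borrowed from vector quantization can be illustrated with partial-distance elimination: while accumulating a squared distance dimension by dimension, a candidate is abandoned as soon as the partial sum exceeds the current k-th best. A minimal sketch with invented 2-D "fingercode" vectors and class labels (real fingercodes are high-dimensional, and the paper's pyramid search adds further pruning levels):

```python
def knn_classify(query, samples, k=3):
    """k-nearest-neighbour vote with partial-distance elimination:
    abandon a squared-distance sum once it exceeds the current k-th best."""
    best = []  # sorted list of (squared_distance, label), at most k entries
    for vec, label in samples:
        bound = best[-1][0] if len(best) == k else float("inf")
        d, pruned = 0.0, False
        for q, v in zip(query, vec):
            d += (q - v) ** 2
            if d > bound:        # cannot enter the top k: stop early
                pruned = True
                break
        if not pruned:
            best = sorted(best + [(d, label)])[:k]
    votes = {}
    for _, label in best:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

samples = [((0, 0), "whorl"), ((0, 1), "whorl"),
           ((5, 5), "arch"), ((6, 5), "arch"), ((1, 0), "whorl")]
print(knn_classify((0.5, 0.5), samples))  # → whorl
```

    The pruning never changes the result, only the work done per candidate, which is why such VQ-style searches match full search in accuracy.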

  19. A Conceptual Framework to use Remediation of Errors Based on Multiple External Remediation Applied to Learning Objects

    Maici Duarte Leite


    Full Text Available This paper applies concepts from Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses remediation of errors with Multiple External Representations (MERs) in Learning Objects (LOs). To demonstrate it, an LO for teaching the Pythagorean Theorem was developed using this framework. The study explored the error remediation process through a classification of mathematical errors, providing support for the use of MERs in error remediation. The main objective of the proposed framework is to assist the individual learner in recovering from a mistake made during interaction with the LO, whether through carelessness or lack of knowledge. We first present a compilation of the classification of mathematical errors and its relationship with MERs, and then the concepts involved in the proposed conceptual framework. Finally, an experiment with an LO built with an authoring tool called FARMA, using the conceptual framework to teach the Pythagorean Theorem, is presented.

  20. Academic Achievement Among Juvenile Detainees.

    Grigorenko, Elena L; Macomber, Donna; Hart, Lesley; Naples, Adam; Chapman, John; Geib, Catherine F; Chart, Hilary; Tan, Mei; Wolhendler, Baruch; Wagner, Richard


    The literature has long pointed to heightened frequencies of learning disabilities (LD) within the population of law offenders; however, a systematic appraisal of these observations, careful estimation of these frequencies, and investigation of their correlates and causes have been lacking. Here we present data collected from all youth (1,337 unique admissions, mean age 14.81, 20.3% females) placed in detention in Connecticut (January 1, 2010-July 1, 2011). All youth completed a computerized educational screener designed to test a range of performance in reading (word and text levels) and mathematics. A subsample (n = 410) received the Wide Range Achievement Test, in addition to the educational screener. Quantitative (scale-based) and qualitative (grade-equivalence-based) indicators were then analyzed for both assessments. Results established the range of LD in this sample from 13% to 40%, averaging 24.9%. This work provides a systematic exploration of the type and severity of word and text reading and mathematics skill deficiencies among juvenile detainees and builds the foundation for subsequent efforts that may link these deficiencies to both more formal, structured, and variable definitions and classifications of LD, and to other types of disabilities (e.g., intellectual disability) and developmental disorders (e.g., ADHD) that need to be conducted in future research. PMID:24064502

  1. Oral epithelial dysplasia classification systems

    Warnakulasuriya, S; Reibel, J; Bouquot, J;


    report, we review the oral epithelial dysplasia classification systems. The three classification schemes [oral epithelial dysplasia scoring system, squamous intraepithelial neoplasia and Ljubljana classification] were presented and the Working Group recommended epithelial dysplasia grading for routine... Several studies have shown great interexaminer and intraexaminer variability in the assessment of the presence or absence and the grade of oral epithelial dysplasia. The Working Group considered the two-class classification (no/questionable/mild - low risk; moderate or severe - implying high risk) and...

  2. Error-thresholds for qudit-based topological quantum memories

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.


    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.

  3. A-posteriori error estimation for second order mechanical systems

    Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard


    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  4. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.


    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. The conclusions reached are that: a better indication of correct classification can be provided by using a test area containing the various cover types of the study area; classification accuracy should be evaluated considering both the percentage of correct classification and the error of commission; supervised classification approaches are better than K-means clustering; the Gaussian maximum-likelihood classifier is better than the single-cell and multi-cell signature acquisition options of the Image-100 system; and, in order to obtain high classification accuracy in a large and heterogeneous crop area using the Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
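
    The Gaussian maximum-likelihood classifier compared above assigns each observation to the class whose fitted Gaussian gives the highest likelihood. A minimal 1-D sketch (the hypothetical reflectance values stand in for multispectral pixel vectors; the real classifier uses multivariate Gaussians with covariance matrices per class):

```python
import math

def fit_gaussians(train):
    """Per-class mean and variance (1-D features for simplicity)."""
    params = {}
    for label, xs in train.items():
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / len(xs)
        params[label] = (m, v)
    return params

def ml_classify(x, params):
    """Assign x to the class maximising the Gaussian log-likelihood."""
    def loglik(m, v):
        return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
    return max(params, key=lambda c: loglik(*params[c]))

# Hypothetical per-pixel reflectance values for two cover types
train = {"wheat": [0.42, 0.45, 0.43, 0.44],
         "soil":  [0.20, 0.22, 0.18, 0.21]}
params = fit_gaussians(train)
print(ml_classify(0.41, params))  # → wheat
```

    Deriving the training statistics from homogeneous spectral subclasses, as the abstract recommends, amounts to fitting one such Gaussian per subclass instead of per crop.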

  5. Negligence, genuine error, and litigation

    Sohn DH


    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no-fault compensation, alternative dispute resolution, system errors

  6. Large errors and severe conditions

    Smith, D L; Van Wormer, L A


    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  7. Synthetic aperture interferometry: error analysis

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  8. Towards automatic classification of all WISE sources

    Kurcz, A.; Bilicki, M.; Solarz, A.; Krupa, M.; Pollo, A.; Małek, K.


    Context. The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky. Classifying them reliably is, however, a challenging task owing to degeneracies in WISE multicolour space and low levels of detection in its two longest-wavelength bandpasses. Simple colour cuts are often not sufficient; for satisfactory levels of completeness and purity, more sophisticated classification methods are needed. Aims: Here we aim to obtain comprehensive and reliable star, galaxy, and quasar catalogues based on automatic source classification in full-sky WISE data. This means that the final classification will employ only parameters available from WISE itself, in particular those which are reliably measured for the majority of sources. Methods: For the automatic classification we applied a supervised machine learning algorithm, support vector machines (SVM). It requires a training sample with relevant classes already identified, and we chose to use the SDSS spectroscopic dataset (DR10) for that purpose. We tested the performance of two kernels used by the classifier, and determined the minimum number of sources in the training set required to achieve stable classification, as well as the minimum dimension of the parameter space. We also tested SVM classification accuracy as a function of extinction and apparent magnitude. Thus, the calibrated classifier was finally applied to all-sky WISE data, flux-limited to 16 mag (Vega) in the 3.4 μm channel. Results: By calibrating on the test data drawn from SDSS, we first established that a polynomial kernel is preferred over a radial one for this particular dataset. Next, using three classification parameters (W1 magnitude, W1-W2 colour, and a differential aperture magnitude) we obtained very good classification efficiency in all the tests. At the bright end, the completeness for stars and galaxies reaches ~95%, deteriorating to ~80% at W1 = 16 mag, while for quasars it stays at a level of

  9. The paradox of atheoretical classification

    Hjørland, Birger


    sometimes termed “descriptive” classifications). Paradoxically atheoretical classifications may be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. On the...

  10. Etiologic Classification in Ischemic Stroke

    Hakan Ay


    Ischemic stroke is an etiologically heterogeneous disorder. Classification of ischemic stroke etiology into categories with discrete phenotypic, therapeutic, and prognostic features is indispensable to generate consistent information from stroke research. In addition, a functional classification of stroke etiology is critical to ensure unity among physicians and comparability among studies. There are two major approaches to etiologic classification in stroke. Phenotypic systems define subtypes...

  11. Hospital prescribing errors : epidemiological assessment of predictors

    Fijn, R; Van den Bemt, PMLA; Chow, M; De Blaey, CJ; De Jong-Van den Berg, LTW; Brouwers, JRBJ


    Aims: To demonstrate an epidemiological method to assess predictors of prescribing errors. Methods: A retrospective case-control study comparing prescriptions with and without errors. Results: Only prescriber and drug characteristics were associated with errors. Prescriber characteristics were medic

  12. Analysis of Medication Error Reports

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.


    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  13. Chronology of prescribing error during the hospital stay and prediction of pharmacist's alerts overriding: a prospective analysis

    Bruni Vanida


    Full Text Available Abstract Background Drug prescribing errors are frequent in the hospital setting and pharmacists play an important role in the detection of these errors. The objectives of this study are (1) to describe the drug prescribing error rate during the patient's stay, and (2) to find which characteristics of a prescribing error are the most predictive of its reproduction the next day despite a pharmacist's alert (i.e., overriding of the alert). Methods We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to find which characteristics of alerts were predictive of their overriding (i.e., the prescribing error being repeated). Results 12,533 order lines were reviewed; 117 errors (error rate 0.9%) were observed, and 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e., the error was left uncorrected by prescribers) on the following day. Drug omissions were the errors most frequently taken into account by prescribers. The classification and regression tree analysis showed that overriding pharmacist's alerts is first related to the ward of the prescriber and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Conclusions Since 51% of prescribing errors occurred on the first day of stay, pharmacists should concentrate their analysis of drug prescriptions on this day. The difference in overriding behavior between wards and according to drug Anatomical Therapeutic Chemical class or type of error could also guide validation tasks and the programming of electronic alerts.
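    As a rough illustration of the classification and regression tree technique the study applies, the sketch below fits a CART model to synthetic alert data; the feature names (ward, ATC class, error type) mirror the abstract, but the data and the decision rule are invented for illustration.

    ```python
    import numpy as np
    from sklearn.preprocessing import OrdinalEncoder
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 200
    ward = rng.choice(["cardiology", "geriatrics", "oncology"], size=n)
    atc_class = rng.choice(["C", "J", "N"], size=n)            # ATC code letter
    error_type = rng.choice(["omission", "dose", "frequency"], size=n)
    # Invented rule: one ward overrides alerts far more often than the others.
    overridden = ((ward == "geriatrics") | (rng.random(n) < 0.2)).astype(int)

    X = OrdinalEncoder().fit_transform(np.column_stack([ward, atc_class, error_type]))
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, overridden)
    print(f"training accuracy: {tree.score(X, overridden):.2f}")
    ```

    In such a tree the first split would fall on the ward feature, matching the study's finding that the prescriber's ward is the strongest predictor of overriding.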

  14. Human Error and Organizational Management

    Alecxandrina DEACONU


    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that, even if they seem distant from this world, are aware of, interested in, or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate human error nowadays, consciously or not. Moreover, the extremely tense Romanian context, where failure is rather the rule than the exception, made us investigate the phenomenon of human error generation and the ways to diminish its effects.

  15. EWMA Chart and Measurement Error

    Maravelakis, Petros; Panaretos, John; Psarakis, Stelios


    Measurement error is a commonly encountered distortion factor in real-world applications that influences the outcome of a process. In this paper, we examine the effect of measurement error on the ability of the EWMA control chart to detect out-of-control situations. The model used is the one involving linear covariates. We investigate the ability of the EWMA chart in the case of a shift in the mean. The effect of taking multiple measurements on each sampled unit and the case of linearly increasing varianc...
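    The EWMA statistic itself is simple to compute; the sketch below adds Gaussian measurement noise to a process with a mean shift and checks when the chart signals. The linear-covariate parameters and the asymptotic control limit are illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def ewma(x, lam=0.2):
        """EWMA statistic z_t = lam*x_t + (1-lam)*z_{t-1}, started at the first sample."""
        z = np.empty(len(x))
        z[0] = x[0]
        for t in range(1, len(x)):
            z[t] = lam * x[t] + (1 - lam) * z[t - 1]
        return z

    rng = np.random.default_rng(1)
    lam, sigma = 0.2, 0.5
    true_mean = np.concatenate([np.zeros(50), np.full(50, 1.0)])  # mean shift at t = 50
    measured = true_mean + rng.normal(0.0, sigma, 100)            # X = A + B*Y + eps with A=0, B=1
    z = ewma(measured, lam)
    ucl = 3 * sigma * np.sqrt(lam / (2 - lam))                    # asymptotic upper control limit
    print("first sample above UCL:", int(np.argmax(z > ucl)))
    ```

    Increasing the noise variance `sigma` in this toy model delays the first out-of-control signal, which is the degradation the paper quantifies.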

  16. Most Used Rock Mass Classifications for Underground Opening

    Al-Jbori A’ssim


    Full Text Available Problem statement: Rock mass characterization is an integral part of rock engineering practice. The empirical design methods based on rock mass classification systems provide quick assessments of the support requirements for underground excavations at any stage of a project, even if the available geotechnical data are limited. The underground excavation industry tends to lean on empirical approaches such as rock mass classification methods, which provide a rapid means of assessing rock mass quality and support requirements. Approach: There are several classification systems used in underground construction design. This study reviewed and summarized the most used classification methods in mining and tunneling. Results: This research collected the underground excavation classification methods, with the parameter calculation procedures for each, in an attempt to find the simplest, least costly and most efficient method. Conclusion: The study concluded with reference to errors that may arise in particular conditions, and noted that the choice of rock mass classification depends on the sensitivity of the project, its costs and its efficiency.

  17. On the Smoothed Minimum Error Entropy Criterion

    Badong Chen; Principe, Jose C.


    Recent studies suggest that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations. In practice, however, one has to estimate the error entropy from the samples since in general the analytical evaluation of error entropy is not possible. By the Parzen windowing approach, the estimated error entropy converges asymptotically to the entropy of the error plus an indepe...
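    The Parzen-window estimate mentioned above can be written down directly for the quadratic Renyi entropy of the errors; the following sketch uses a Gaussian kernel with an arbitrary width h and synthetic error samples, purely as an illustration of the estimator.

    ```python
    import numpy as np

    def quadratic_error_entropy(errors, h=0.5):
        """Parzen-window estimate of the quadratic Renyi entropy of the errors."""
        e = np.asarray(errors, float)
        diff = e[:, None] - e[None, :]
        s2 = 2.0 * h * h                      # variance of the convolved Gaussian kernels
        info_potential = np.mean(np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2))
        return -np.log(info_potential)        # entropy = -log(information potential)

    rng = np.random.default_rng(0)
    tight = rng.normal(0.0, 0.1, 500)         # concentrated errors -> low entropy
    spread = rng.normal(0.0, 1.0, 500)        # dispersed errors -> high entropy
    print(quadratic_error_entropy(tight) < quadratic_error_entropy(spread))  # → True
    ```

    Minimizing this estimate over model parameters concentrates the error distribution, which is the goal of the MEE criterion.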

  18. A Synthetic Error Analysis of Positioning Equation for Airborne Three-Dimensional Laser Imaging Sensor

    Jiang, Yuesong; Chen, Ruiqiang; Wang, Yanling


    This paper presents an exact error analysis of the point positioning equation used for an airborne three-dimensional (3D) imaging sensor. Using differential calculus and the principles of precision analysis, a mathematical formula relating the point position error to the relevant factors is derived to show how each error source affects both the vertical and horizontal coordinates. A comprehensive analysis of the related error sources and their achievable accuracy is provided. Finally, two example figures are drawn to compare the position accuracy of the elliptical-trace scan and the line-trace scan under the same error sources, and some corresponding conclusions are given.

  19. College Achievement and Earnings

    Gemus, Jonathan


    I study the size and sources of the monetary return to college achievement as measured by cumulative Grade Point Average (GPA). I first present evidence that the return to achievement is large and statistically significant. I find, however, that this masks variation in the return across different groups of people. In particular, there is no relationship between GPA and earnings for graduate degree holders but a large and positive relationship for people without a graduate degree. To reconcile...

  20. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks

    Julius,; T., Sumana; Adityakrishna, C S


    Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.

  1. Class-specific Error Bounds for Ensemble Classifiers

    Prenger, R; Lemmond, T; Varshney, K; Chen, B; Hanley, W


    The generalization error, or probability of misclassification, of ensemble classifiers has been shown to be bounded above by a function of the mean correlation between the constituent (i.e., base) classifiers and their average strength. This bound suggests that increasing the strength and/or decreasing the correlation of an ensemble's base classifiers may yield improved performance under the assumption of equal error costs. However, this and other existing bounds do not directly address application spaces in which error costs are inherently unequal. For applications involving binary classification, Receiver Operating Characteristic (ROC) curves, performance curves that explicitly trade off false alarms and missed detections, are often utilized to support decision making. To address performance optimization in this context, we have developed a lower bound for the entire ROC curve that can be expressed in terms of the class-specific strength and correlation of the base classifiers. We present empirical analyses demonstrating the efficacy of these bounds in predicting relative classifier performance. In addition, we specify performance regions of the ROC curve that are naturally delineated by the class-specific strengths of the base classifiers and show that each of these regions can be associated with a unique set of guidelines for performance optimization of binary classifiers within unequal error cost regimes.
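    For binary classification, the ROC curve discussed above can be traced by sweeping a threshold over an ensemble's vote fraction; the sketch below does this with scikit-learn's random forest on synthetic data (the data set and forest size are illustrative, not from the paper).

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import auc, roc_curve

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:400], y[:400])
    # The fraction of base trees voting "positive" serves as the ensemble score;
    # sweeping a threshold over it trades off false alarms against missed detections.
    scores = forest.predict_proba(X[400:])[:, 1]
    fpr, tpr, _ = roc_curve(y[400:], scores)
    roc_auc = auc(fpr, tpr)
    print(f"hold-out AUC: {roc_auc:.2f}")
    ```

    The paper's contribution is a lower bound on this entire curve in terms of the base classifiers' class-specific strength and correlation, rather than the empirical curve computed here.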

  2. Reader error during CT colonography: causes and implications for training

    This study investigated the variability in baseline computed tomography colonography (CTC) performance using untrained readers by documenting sources of error to guide future training requirements. Twenty CTC endoscopically validated data sets containing 32 polyps were consensus read by three unblinded radiologists experienced in CTC, creating a reference standard. Six readers without prior CTC training [four residents and two board-certified subspecialty gastrointestinal (GI) radiologists] read the 20 cases. Readers drew a region of interest (ROI) around every area they considered a potential colonic lesion, even if subsequently dismissed, before creating a final report. Using this final report, reader ROIs were classified as true positive detections, true negatives correctly dismissed, true detections incorrectly dismissed (i.e., classification error), or perceptual errors. Detection of polyps 1-5 mm, 6-9 mm, and ≥10 mm ranged from 7.1% to 28.6%, 16.7% to 41.7%, and 16.7% to 83.3%, respectively. There was no significant difference between polyp detection or false positives for the GI radiologists compared with residents (p=0.67, p=0.4 respectively). Most missed polyps were due to failure of detection rather than characterization (range 82-95%). Untrained reader performance is variable but generally poor. Most missed polyps are due to perceptual error rather than characterization error, suggesting basic training should focus heavily on lesion detection. (orig.)

  3. Generalization performance of graph-based semisupervised classification


    Semi-supervised learning has been of growing interest over the past few years and many methods have been proposed. Although various algorithms are provided to implement semi-supervised learning, there are still gaps in our understanding of the dependence of generalization error on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relations between the generalization performance and the structural invariants of the data graph.

  4. AdaBoost classification for model-based segmentation of the outer wall of the common carotid artery in CTA

    Vukadinovic, D.; van Walsum, T.; Manniesing, R.; van der Lugt, A.; de Weert, T. T.; Niessen, W. J.


    A novel 2D slice-based fully automatic method for model-based segmentation of the outer vessel wall of the common carotid artery in CTA data sets is introduced. The method utilizes a lumen segmentation and AdaBoost, a fast and robust machine learning algorithm, to initially classify (mark) regions outside and inside the vessel wall using the distance from the lumen and intensity profiles sampled radially from the center of gravity of the lumen. A similar method using the distance from the lumen and the image intensity as features is used to classify calcium regions. Subsequently, an ellipse-shaped deformable model is fitted to the classification result. The method achieves a smaller detection error than the inter-observer variability, and it is robust against variation of the training data sets.

  5. What Is a Reading Error?

    Labov, William; Baker, Bettina


    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  6. Learner Corpora without Error Tagging

    Rastelli, Stefano


    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights into systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because, especially in basic varieties, forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. Differently, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some of the potential of SLA-oriented (non-error-based) tagging is made clearer.

  7. Theory of Test Translation Error

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel


    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  8. Error processing in Huntington's disease.

    Christian Beste

    Full Text Available BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, accompanied inter alia by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error-related negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such, the Ne might be a measure of the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  9. Cascade Error Projection Learning Algorithm

    Duong, T. A.; Stubberud, A. R.; Daud, T.


    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  10. Quantum Convolutional Error Correction Codes

    Chau, H. F.


    I report two general methods to construct quantum convolutional codes for quantum registers with internal $N$ states. Using one of these methods, I construct a quantum convolutional code of rate 1/4 which is able to correct one general quantum error for every eight consecutive quantum registers.

  11. Measurement error in geometric morphometrics.

    Fruciano, Carmelo


    Geometric morphometrics, a set of methods for the statistical analysis of shape once hailed as a revolutionary advance in the analysis of morphology, is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e., variation due to bias is incorporated in the analysis and treated as biologically meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
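    One standard way to quantify random measurement error, in the spirit of the repeatability analyses reviewed here, is a one-way ANOVA over repeated measurements of the same individuals; the sketch below uses simulated data, not the paper's worked example.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_ind, n_rep = 30, 3
    biological = rng.normal(0.0, 1.0, n_ind)                           # among-individual variation
    meas = biological[:, None] + rng.normal(0.0, 0.3, (n_ind, n_rep))  # + random measurement error

    ms_among = n_rep * meas.mean(axis=1).var(ddof=1)                   # among-individual mean square
    ms_within = ((meas - meas.mean(axis=1, keepdims=True)) ** 2).sum() / (n_ind * (n_rep - 1))
    s2_among = (ms_among - ms_within) / n_rep                          # among-individual variance component
    repeatability = s2_among / (s2_among + ms_within)                  # share of variance that is "real"
    print(f"repeatability: {repeatability:.2f}")
    ```

    A repeatability close to 1 means measurement error is negligible relative to biological variation; values much below 1 signal the loss of statistical power the abstract warns about.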

  12. Input/output error analyzer

    Vaughan, E. T.


    Program aids in equipment assessment. An independent assembly-language utility program is designed to operate under level 27 or 31 of the EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  13. Sound classification of dwellings

    Rasmussen, Birgit


    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity. Harmonization is therefore needed, and a European COST Action TU0901, "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality classes.

  14. Soil Classification Using GATree

    Bhargavi, P


    This paper details the application of a genetic programming framework to decision-tree classification of soil data in order to classify soil texture. The database contains measurements of soil profile data. We have applied GATree to generate a classification decision tree. GATree is a decision tree builder based on Genetic Algorithms (GAs). The idea behind it is rather simple but powerful. Instead of using statistical metrics that are biased towards specific trees, we use a more flexible, global metric of tree quality that tries to optimize accuracy and size. GATree offers some unique features not found in any other tree inducers, while at the same time it can produce better results for many difficult problems. Experimental results are presented which illustrate its performance in generating the best decision tree for classifying soil texture on the soil data set.

  15. QuorUM: An Error Corrector for Illumina Reads.

    Guillaume Marçais

    Full Text Available Illumina sequencing data can provide high coverage of a genome by relatively short (most often 100 bp to 150 bp) reads at a low cost. Even with a low (advertised) 1% error rate, 100× coverage Illumina data on average has an error in some read at every base in the genome. These errors make handling the data more complicated because they result in a large number of low-count erroneous k-mers in the reads. However, there is enough information in the reads to correct most of the sequencing errors, thus making subsequent use of the data (e.g., for mapping or assembly) easier. Here we use the term "error correction" to denote the reduction in errors due to both changes in individual bases and trimming of unusable sequence. We developed an error correction software tool called QuorUM. QuorUM is mainly aimed at error-correcting Illumina reads for subsequent assembly. It is designed around the novel idea of minimizing the number of distinct erroneous k-mers in the output reads while preserving the most true k-mers, and we introduce a composite statistic π that measures how successful we are at achieving this dual goal. We evaluate the performance of QuorUM by correcting actual Illumina reads from genomes for which a reference assembly is available. We produce trimmed and error-corrected reads that result in assemblies with longer contigs and fewer errors. We compared QuorUM against several published error correctors and found that it is the best performer in most metrics we use. QuorUM is efficiently implemented, making use of current multi-core computing architectures, and is suitable for large data sets (1 billion bases checked and corrected per day per core). We also demonstrate that a third-party assembler (SOAPdenovo) benefits significantly from using QuorUM error-corrected reads: they result in a factor of 1.1 to 4 improvement in N50 contig size compared to using the original reads with SOAPdenovo for the data sets investigated.
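    The connection between sequencing errors and low-count k-mers can be seen in a toy example: a single substitution in one read creates up to k k-mers that occur only once. The reads and count threshold below are fabricated for illustration and are unrelated to QuorUM's actual implementation.

    ```python
    from collections import Counter

    def kmers(read, k):
        return [read[i:i + k] for i in range(len(read) - k + 1)]

    k = 4
    good = "ACGTACGTACGT"
    bad = "ACGTACCTACGT"                      # one substitution error (G -> C)
    counts = Counter()
    for read in [good, good, good, bad]:      # three correct reads + one erroneous read
        counts.update(kmers(read, k))

    # k-mers seen only once are almost certainly artifacts of the error
    erroneous = sorted(km for km, c in counts.items() if c == 1)
    print(erroneous)  # → ['ACCT', 'CCTA', 'CTAC', 'TACC']
    ```

    An error corrector exploits exactly this signal: edit or trim reads so that their k-mers fall back onto the high-count set.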

  16. Bayes Classification for the Fingerprint Retrieval



    Full Text Available The fingerprint is the most commonly used biometric property in security, commerce, industrial, civilian and forensic applications. The goal is to raise the recognition rate of the fingerprint retrieval system. In this work, the Bayes classifier is adopted assuming Gaussian statistics. The set of training samples is expanded by a spatial modeling technique, and a variant of Fisher's Linear Discriminant Analysis (FLDA) is implemented for dimension reduction together with Quadratic Discriminant Analysis (QDA) for lowering estimation errors. Finally, calculating the probabilistic features for Gabor and minutiae helps to reduce the error rate by about 75%, outperforming the K-NN classifier, whose error rate was about 30-60%. The accuracy and speed are evaluated using the FVC2004 database, and satisfactory retrieval performance is achieved. Thus an efficient and accurate fingerprint retrieval system is built.
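    A minimal sketch of the dimension-reduce-then-classify pipeline described above, using scikit-learn's LDA and QDA as stand-ins for the paper's FLDA/QDA variants; the synthetic features are placeholders for Gabor and minutiae descriptors.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                               QuadraticDiscriminantAnalysis)
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-ins for Gabor/minutiae feature vectors, four classes
    X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                               n_classes=4, random_state=0)
    clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3),   # Fisher projection
                        QuadraticDiscriminantAnalysis(reg_param=0.1)) # Gaussian Bayes rule
    clf.fit(X[:350], y[:350])
    print(f"hold-out accuracy: {clf.score(X[350:], y[350:]):.2f}")
    ```

    QDA is exactly a Bayes classifier under class-conditional Gaussian assumptions with per-class covariances, which is why it pairs naturally with the Fisher projection.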

  17. Sea ice classification using fast learning neural networks

    Dawson, M. S.; Fung, A. K.; Manry, M. T.


    A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.
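    The four-class simulation setup can be sketched as follows; scikit-learn's MLP stands in for the networks compared in the paper, and the Gaussian class statistics are invented rather than derived from real scattering models.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    # Four classes (open water, thick lossy saline ice, thin saline ice, multiyear ice)
    # simulated as Gaussian clusters of backscatter-like features; values invented.
    centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
    X = np.vstack([rng.normal(c, 0.5, (100, 2)) for c in centers])
    y = np.repeat(np.arange(4), 100)

    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mlp.fit(X, y)
    print(f"training accuracy: {mlp.score(X, y):.2f}")
    ```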

  18. Short Text Classification: A Survey

    Ge Song


    Full Text Available With the recent explosive growth of e-commerce and online communication, a new genre of text, short text, has been extensively applied in many areas, and much research has focused on short text mining. Classifying short text is a challenge owing to its natural characteristics, such as sparseness, large scale, immediacy, and non-standardization. Traditional methods find it difficult to deal with short text classification mainly because the limited words in a short text cannot represent the feature space and the relationship between words and documents. Several studies and reviews on text classification have appeared in recent times; however, only a few focus on short text classification. This paper discusses the characteristics of short text and the difficulty of short text classification. We then introduce the existing popular work on short text classifiers and models, including short text classification using semantic analysis, semi-supervised short text classification, ensemble short text classification, and real-time classification. The evaluation of short text classification is also analyzed. Finally, we summarize the existing classification technology and discuss development trends in short text classification.

  19. Estuary Classification Revisited

    Guha, Anirban; Lawrence, Gregory A.


    This paper presents the governing equations of a tidally-averaged, width-averaged, rectangular estuary in completely nondimensionalized forms. Subsequently, we discover that the dynamics of an estuary is entirely controlled by only two variables: (i) the Estuarine Froude number, and (ii) a nondimensional number related to the Estuarine Aspect ratio and the Tidal Froude number. Motivated by this new observation, the problem of estuary classification is re-investigated. Our analysis shows that ...

  20. Classification of Arabic Documents

    Elbery, Ahmed


    Arabic language is a very rich language with complex morphology, so it has a very different and difficult structure than other languages. So it is important to build an Arabic Text Classifier (ATC) to deal with this complex language. The importance of text or document classification comes from its wide variety of application domains such as text indexing, document sorting, text filtering, and Web page categorization. Due to the immense amount of Arabic documents as well as the number of inter...

  1. Qatar content classification

    Handosa, Mohamed


    Short title: Qatar content classification. Long title: Develop methods and software for classifying Arabic texts into a taxonomy using machine learning. Contact person and their contact information: Tarek Kanan, . Project description: Starting 4/1/2012, and running through 12/31/2015, is a project to advance digital libraries in the country of Qatar. This is led by VT, but also involves Penn State, Texas A&M, and Qatar University. Tarek is a GRA on this effort. His di...

  2. Classification of Meteorological Drought

    Zhang Qiang; Zou Xukai; Xiao Fengjin; Lu Houquan; Liu Haibo; Zhu Changhan; An Shunqing


    Background The national standard of the Classification of Meteorological Drought (GB/T 20481-2006) was developed by the National Climate Center in cooperation with the Chinese Academy of Meteorological Sciences, the National Meteorological Centre, and the Department of Forecasting and Disaster Mitigation under the China Meteorological Administration (CMA), and was formally released and implemented in November 2006. In 2008, this Standard won the second prize of the China Standard Innovation and Contribution Awards issued by SAC. Developed through independent innovation, it is the first national standard published to monitor meteorological drought disaster and the first standard in China and around the world specifying the classification of drought. Since its release in 2006, the national standard of Classification of Meteorological Drought has been used by CMA as the operational index to monitor and assess drought, has gradually been adopted by provincial meteorological bureaus, and has been applied to the drought early warning release standard in the Methods of Release and Propagation of Meteorological Disaster Early Warning Signal.

  3. Histologic classification of gliomas.

    Perry, Arie; Wesseling, Pieter


    Gliomas form a heterogeneous group of tumors of the central nervous system (CNS) and are traditionally classified based on histologic type and malignancy grade. Most gliomas, the diffuse gliomas, show extensive infiltration in the CNS parenchyma. Diffuse gliomas can be further typed as astrocytic, oligodendroglial, or rare mixed oligodendroglial-astrocytic tumors of World Health Organization (WHO) grade II (low grade), III (anaplastic), or IV (glioblastoma). Other gliomas generally have a more circumscribed growth pattern, with pilocytic astrocytomas (WHO grade I) and ependymal tumors (WHO grade I, II, or III) as the most frequent representatives. This chapter provides an overview of the histology of all glial neoplasms listed in the WHO 2016 classification, including the less frequent "nondiffuse" gliomas and mixed neuronal-glial tumors. For multiple decades the histologic diagnosis of these tumors formed a useful basis for assessment of prognosis and therapeutic management. However, it is now fully clear that information on the molecular underpinnings often allows for a more robust classification of (glial) neoplasms. Indeed, in the WHO 2016 classification, histologic and molecular findings are integrated in the definition of several gliomas. As such, this chapter and Chapter 6 are highly interrelated and neither should be considered in isolation. PMID:26948349

  4. Automated spectral classification using template matching

    Fu-Qing Duan; Rong Liu; Ping Guo; Ming-Quan Zhou; Fu-Chao Wu


    An automated spectral classification technique for large sky surveys is proposed. We first perform spectral line matching to determine redshift candidates for an observed spectrum, and then estimate the spectral class by measuring the similarity between the observed spectrum and the shifted templates for each redshift candidate. As a byproduct of this approach, the spectral redshift can also be obtained with high accuracy. Compared with approaches based on computerized learning methods in the literature, the proposed approach needs no training, which is time-consuming and sensitive to the selection of the training set. Both simulated data and observed spectra are used to test the approach; the results show that the proposed method is efficient, and it can achieve a correct classification rate as high as 92.9%, 97.9% and 98.8% for stars, galaxies and quasars, respectively.
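    The shift-and-compare idea can be illustrated with a toy spectrum: slide a template over a grid of trial redshifts and keep the one maximizing a normalized correlation with the observation. The Gaussian "emission line" spectra below are fabricated and stand in for real survey templates.

    ```python
    import numpy as np

    wave = np.linspace(4000.0, 8000.0, 2000)            # wavelength grid (Angstrom)

    def line_spectrum(center):
        """A single Gaussian 'emission line' as a stand-in spectrum."""
        return np.exp(-0.5 * ((wave - center) / 20.0) ** 2)

    true_z = 0.1
    rng = np.random.default_rng(0)
    observed = line_spectrum(5000.0 * (1 + true_z)) + rng.normal(0.0, 0.05, wave.size)

    def similarity(z):
        shifted = line_spectrum(5000.0 * (1 + z))       # template shifted to trial redshift
        return observed @ shifted / (np.linalg.norm(observed) * np.linalg.norm(shifted))

    zgrid = np.linspace(0.0, 0.3, 301)
    best_z = zgrid[np.argmax([similarity(z) for z in zgrid])]
    print(f"estimated redshift: {best_z:.3f}")
    ```

    Because the similarity is evaluated per template class, the class whose best-shifted template scores highest also gives the classification, mirroring the two-step procedure in the abstract.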

  5. A Classification-based Review Recommender

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid as to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.

  6. Robust, Error-Tolerant Photometric Projector Compensation.

    Grundhöfer, Anselm; Iwai, Daisuke


    We propose a novel error-tolerant optimization approach to generate a high-quality photometrically compensated projection. The applied non-linear color mapping function does not require radiometric pre-calibration of cameras or projectors. This improves the compensation quality compared with related linear methods when the approach is used with devices that apply complex color processing, such as single-chip digital light processing projectors. Our approach consists of a sparse sampling of the projector's color gamut and non-linear scattered data interpolation to generate the per-pixel mapping from projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image's luminance is automatically adjusted locally in an optional offline optimization step that maximizes the achievable contrast while preserving smooth input gradients without significant clipping errors. To minimize the appearance of color artifacts at high-frequency reflectance changes of the surface due to usually unavoidable slight projector vibrations and movement (drift), we show that a drift measurement and analysis step, when combined with per-pixel compensation image optimization, significantly decreases the visibility of such artifacts. PMID:26390454




    Full Text Available The accuracy of a numerical solution for an electromagnetic problem is greatly influenced by the convergence of the solution obtained. In order to quantify the correctness of the numerical solution, the errors produced in solving the partial differential equations must be analyzed. Mesh quality is another parameter that affects convergence. The various quality metrics depend on the type of solver used for numerical simulation. The paper focuses on comparing the performance of iterative solvers used in the COMSOL Multiphysics software. A coaxial coupled waveguide applicator operating at 485 MHz has been modelled for local hyperthermia applications using an adaptive finite element method. The 3D heat distribution within the muscle phantom, depicting a spherical lesion and a localized heating pattern, confirms the proper selection of the solver. Convergence plots are obtained during simulation of the problem using GMRES (generalized minimal residual) and geometric multigrid linear iterative solvers. The best error convergence is achieved by using the multigrid solver and further introducing adaptivity in the nonlinear solver.

  8. Classification system for reporting events involving human malfunctions

    The report describes a set of categories for reporting industrial incidents and events involving human malfunction. The classification system aims at ensuring information adequate for improving human work situations and man-machine interface systems and for attempts to quantify ''human error'' rates. The classification system has a multifaceted, non-hierarchical structure, and its compatibility with Ispra's ERDS classification is described. The collection of the information, both in general and for quantification purposes, is discussed. 24 categories, 12 of which are human-factors oriented, are listed with their respective subcategories, and comments are given. Underlying models of human data processes and their typical malfunctions and of a human decision sequence are described. (author)

  9. High dimensional multiclass classification with applications to cancer diagnosis

    Vincent, Martin

    Probabilistic classifiers are introduced, and it is shown that the only regular linear probabilistic classifier with convex risk is multinomial regression. Penalized empirical risk minimization is introduced and used to construct supervised learning methods for probabilistic classifiers. A sparse group lasso penalized approach to high dimensional multinomial classification is presented. On different real data examples it is found that this approach clearly outperforms multinomial lasso in terms of error rate and features included in the model. An efficient coordinate descent algorithm is...

  10. Classification of forest growth stage using Landsat TM data

    Fujisaki, Ikuko; Gerard, Patrick D.; Evans, David L.


    This study examined the utility of polytomous logistic regression in pixel classification of remotely sensed images by the growth stage of forests. For a population of grouped continuous categories, the assumption of normally distributed independent variables, which is often required in multivariate classification methods, may not be appropriate. Two types of polytomous logistic regression, multinomial and cumulative logistic regression, were used to classify Landsat TM data by growth stage (regeneration-immature, intermediate, and mature) of loblolly pine (Pinus taeda L.) forest in east central Mississippi. Multinomial logistic regression is typically used for analysis of unordered categorical data. Cumulative logistic regression is one of the most commonly used methods of ordinal logistic regression and is generally preferred for ordered categorical data, although it imposes restrictions on the data. Three hundred sample points were located randomly throughout the study site, and vectors of pixel values from four bands of Landsat TM data were used to predict growth stage at each sample location. The results were compared to those of parametric and nonparametric discriminant analysis and the k-nearest neighbor method. The non-normal distribution of the independent variables indicated a violation of the assumptions of parametric discriminant analysis. Classification with cumulative logistic regression using all four bands was performed first; however, the model assumption was not met, so the classification was also performed using only band 4, which appeared to meet the assumption. The error rate of cumulative logistic regression was 39.12% with all four bands and 37.70% with band 4 alone. Although cumulative logistic regression with band 4 alone gave the lowest error rate, the improvement over the other methods was marginal. The error rate of the k-nearest neighbor method varied from 38.68 to 48.06%, depending on the choice of k.
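    A multinomial logistic model assigns each pixel to the growth stage with the highest softmax probability computed from its band values. The sketch below shows only the mechanics of the prediction step; the coefficients are made up for illustration, since the paper's fitted model is not given here.

```python
import math

def softmax(scores):
    """Numerically stable softmax."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_stage(bands, weights, biases, classes):
    """One linear score per class from the four TM band values, then softmax."""
    scores = [b + sum(w * v for w, v in zip(ws, bands))
              for ws, b in zip(weights, biases)]
    probs = softmax(scores)
    return classes[probs.index(max(probs))], probs

# Hypothetical coefficients for the three growth stages (illustration only).
classes = ["regeneration-immature", "intermediate", "mature"]
weights = [[0.02, 0.01, 0.03, 0.05],
           [0.01, 0.02, 0.02, 0.01],
           [0.03, 0.01, 0.01, -0.02]]
biases = [-4.0, -1.0, 1.0]
stage, probs = predict_stage([60, 25, 30, 80], weights, biases, classes)
```

    Unlike discriminant analysis, this prediction rule makes no normality assumption about the band values, which is the property the study exploits.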

  11. Optimized features selection for gender classification using optimization algorithms

    Khan, Sajid Ali; Nazir, Muhammad; Riaz, Naveed


    Optimized feature selection is an important task in gender classification. The optimized features not only reduce the dimensions, but also reduce the error rate. In this paper, we have proposed a technique for the extraction of facial features using both appearance-based and geometric-based feature extraction methods. The extracted features are then optimized using particle swarm optimization (PSO) and the bee algorithm. The geometric-based features are optimized by PSO with ensem...

  12. Research and practice on NPP safety DCS application software V and V defect classification system

    One of the most significant aims of Verification and Validation (V and V) is to find software errors and risks, especially for DCS application software designed for a nuclear power plant (NPP). By classifying and analyzing errors, the data obtained can be used to estimate the current status and potential risks of software development and to improve the quality of the project. A method of error classification is proposed, applied across the whole V and V life cycle, using a MW pressurized reactor project as an example. The purpose is to analyze errors discovered by V and V activities, thereby improving safety critical DCS application software. (authors)

  13. Medication Distribution in Hospital: Errors Observed X Errors Perceived

    De Silva, G. N.; Rissato, M. A. R.; Romano-Lieber, N. S.


    Abstract: The aim of the present study was to compare errors committed in the distribution of medications at a hospital pharmacy with those perceived by staff members involved in the distribution process. Medications distributed to the medical and surgical wards were analyzed. The drugs were dispensed in individualized doses per patient, separated by administration time in boxes or plastic bags for 24 hours of care and using the carbon copy of the prescription. Nineteen staff members involved in t...

  14. Performance Comparison of Musical Instrument Family Classification Using Soft Set

    Saima Anwar Lashari


    Full Text Available Nowadays, it appears essential to design automatic and efficacious classification algorithms for musical instruments. Automatic classification of musical instruments is performed by extracting relevant features from the audio samples; a classification algorithm then uses these extracted features to identify which of a set of classes the sound sample is most likely to fit. The aim of this paper is to demonstrate the viability of soft set for audio signal classification. A dataset of 104 single monophonic notes of traditional Pakistani musical instruments was designed. Feature extraction is done using two feature sets, namely perception-based features and mel-frequency cepstral coefficients (MFCCs). Two different classification techniques are then applied to the classification task: soft set (comparison table) and fuzzy soft set (similarity measurement). Experimental results show that both classifiers can perform well on numerical data. However, soft set achieved accuracy up to 94.26% with the best generated dataset. Consequently, these promising results provide new possibilities for soft set in classifying musical instrument sounds. Based on the analysis of the results, this study offers a new view on automatic instrument classification.

  15. Methods for data classification

    Garrity, George; Lilburn, Timothy G.


    The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

  16. An automated, real time classification system for biological and anthropogenic sounds from fixed ocean observatories

    Zaugg, Serge Alain; Schaar, Mike van der; Houegnigan, Ludwig; André, Michel


    The automated, real time classification of acoustic events in the marine environment is an important tool for studying anthropogenic sound pollution and marine mammals, and for mitigating human activities that are potentially harmful. We present a real time classification system targeted at many important groups of acoustic events (clicks, buzzes, calls and whistles from several cetacean species, tonal and impulsive shipping noise, and explosions). The achieved classification performance ...

  17. Assessment of optimized Markov models in protein fold classification.

    Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I


    Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041

  18. Rotation Invariant Texture Classification Using Binary Filter Response Pattern (BFRP)

    Guo, Zhenhua; Zhang, Lei; Zhang, David

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (MR8) method, which extracts an 8-dimensional feature set from 38 filters, is one of the state-of-the-art rotation invariant texture classification methods. However, this method has two limitations. First, it requires a training stage to build a texton library, so the accuracy depends on the training samples; second, during classification, each 8-dimensional feature is assigned to a texton by searching for the nearest texton in the library, which is time-consuming, especially when the library is large. In this paper, we propose a novel texton feature, namely the Binary Filter Response Pattern (BFRP). It addresses the above two issues by encoding the filter response directly into a binary representation. The experimental results on the CUReT database show that the proposed BFRP method achieves better classification results than MR8, especially when the training dataset is limited and less comprehensive.
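    The core idea, encoding a filter response vector directly as a binary pattern so no texton library or nearest-neighbor search is needed, can be sketched as follows. The sign-based binarization and the toy responses are simplifying assumptions for illustration; the paper's actual thresholding scheme may differ.

```python
def bfrp_code(responses):
    """Encode a filter response vector as a binary pattern: bit i is set
    when response i is positive (sign binarization; a simplified stand-in
    for the thresholding used in the paper)."""
    code = 0
    for i, r in enumerate(responses):
        if r > 0:
            code |= 1 << i
    return code

def bfrp_histogram(pixel_responses, n_filters=8):
    """Image-level texture feature: normalized histogram of per-pixel codes."""
    hist = [0] * (1 << n_filters)
    for resp in pixel_responses:
        hist[bfrp_code(resp)] += 1
    n = len(pixel_responses)
    return [h / n for h in hist]

# Three 'pixels' with 8 filter responses each (toy values).
pixels = [[0.5, -0.2, 0.1, -0.9, 0.0, 0.3, -0.1, 0.7],
          [0.5, -0.2, 0.1, -0.9, 0.0, 0.3, -0.1, 0.7],
          [-0.4, 0.6, -0.3, 0.2, 0.1, -0.5, 0.8, -0.6]]
hist = bfrp_histogram(pixels)
```

    Because the code is computed directly from the responses, no training stage is needed and per-pixel encoding is constant time, which is exactly the advantage claimed over MR8.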

  19. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    Nowakowski, Artur


    Boosting is a classification method which has proven useful in non-satellite image processing but is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm to a new land cover classification scenario based on very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively from the data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in short computational time and an overall accuracy of 94.4%, compared to 94.0% obtained by AdaBoost.M1 with trees and 93.8% achieved by Random Forest. The influence of manipulating the final threshold of the strong classifier on the classification results is reported.
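    The boosting scheme can be sketched with threshold ("decision stump") weak classifiers. This is a generic binary AdaBoost sketch on toy data, not the authors' AdaBoost.M1 configuration; thresholds here are searched over observed feature values rather than derived adaptively from data statistics.

```python
import math

def stump_predict(x, feat, thr, sign):
    """Threshold weak classifier: returns +sign above thr, -sign otherwise."""
    return sign if x[feat] > thr else -sign

def best_stump(X, y, w):
    """Pick the (feature, threshold, polarity) stump minimizing weighted error."""
    best = (0, 0.0, 1, float("inf"))
    for feat in range(len(X[0])):
        for thr in sorted({x[feat] for x in X}):
            for sign in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, feat, thr, sign) != yi)
                if err < best[3]:
                    best = (feat, thr, sign, err)
    return best

def adaboost(X, y, rounds=3):
    """Binary AdaBoost with stump weak learners (labels in {-1, +1})."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        feat, thr, sign, err = best_stump(X, y, w)
        err = max(err, 1e-12)                    # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)  # vote weight of this stump
        ensemble.append((alpha, feat, thr, sign))
        # Re-weight samples: misclassified ones gain weight.
        w = [wi * math.exp(-alpha * yi * stump_predict(xi, feat, thr, sign))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(x, f, t, s) for a, f, t, s in ensemble)
    return 1 if score > 0 else -1

# Toy two-band 'pixels': label +1 (artificial area) iff both bands are high.
X = [[1, 1], [2, 1], [1, 2], [5, 5], [6, 4], [4, 6]]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
preds = [predict(model, x) for x in X]
```

    Shifting the final decision threshold of the weighted-vote `score` away from zero is the "manipulation of the final threshold of the strong classifier" whose influence the paper reports.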

  20. Ensemble polarimetric SAR image classification based on contextual sparse representation

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun


    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective image classification technique is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is not perfect in all respects. Ensemble learning is therefore introduced to address this issue: it trains a number of different learners and combines their individual outputs to obtain more accurate and robust results. Accordingly, this paper presents a polarimetric SAR image classification method based on ensemble learning with sparse representation to achieve optimal classification.

  1. Correction of errors in power measurements

    Pedersen, Knud Ole Helgesen


    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.
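    As a hedged numerical illustration of such a correction factor (symbols and the simple ratio-error model below are assumptions, not taken from the report):

```python
# With small ratio errors eps_u (voltage transformer) and eps_i (current
# transformer), the wattmeter reads
#     P_meas = P_true * (1 + eps_u) * (1 + eps_i)
# so the correction factor is the reciprocal,
#     k = 1 / ((1 + eps_u) * (1 + eps_i)) ~= 1 - eps_u - eps_i.

eps_u = 0.004    # +0.4 % voltage ratio error
eps_i = -0.002   # -0.2 % current ratio error

p_true = 1000.0                              # true power, W
p_meas = p_true * (1 + eps_u) * (1 + eps_i)  # indicated power
k = 1.0 / ((1 + eps_u) * (1 + eps_i))        # exact correction factor
k_approx = 1 - eps_u - eps_i                 # first-order approximation
p_corrected = p_meas * k                     # recovers the true power
```

    For ratio errors of a few tenths of a percent, the first-order approximation agrees with the exact factor to within a few parts per million.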

  2. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Zeng Bing


    Full Text Available This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  3. THERP and HEART integrated methodology for human error assessment

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio


    An integrated THERP and HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The approach has been modified on the basis of fuzzy set concepts with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of human errors, which is necessary to achieve a better understanding of health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.




    We propose statistical models of uncertainty and error in numerical solutions. To represent errors efficiently in shock physics simulations we propose a composition law. The law allows us to estimate errors in the solutions of composite problems in terms of the errors from simpler ones as discussed in a previous paper. In this paper, we conduct a detailed analysis of the errors. One of our goals is to understand the relative magnitude of the input uncertainty vs. the errors created within the numerical solution. In more detail, we wish to understand the contribution of each wave interaction to the errors observed at the end of the simulation.

  5. Unsupervised amplitude and texture based classification of SAR images with multinomial latent model

    Kayabol, Koray; Zerubia, Josiane


    We combine both amplitude and texture statistics of Synthetic Aperture Radar (SAR) images for classification purposes. We use the Nakagami density to model the class amplitudes and a non-Gaussian Markov Random Field (MRF) texture model with t-distributed regression error to model the textures of the classes. A non-stationary Multinomial Logistic (MnL) latent class label model is used as a mixture density to obtain spatially smooth class segments. The Classification Expectation-Maximization (CE...

  6. Unsupervised amplitude and texture classification of SAR images with multinomial latent model

    Kayabol, Koray; Zerubia, Josiane


    We combine both amplitude and texture statistics of Synthetic Aperture Radar (SAR) images for model-based classification purposes. In a finite mixture model, we bring together Nakagami densities to model the class amplitudes and a 2D auto-regressive texture model with t-distributed regression error to model the textures of the classes. A non-stationary Multinomial Logistic (MnL) latent class label model is used as a mixture density to obtain spatially smooth class segments. The Classific...

  7. Analysis of Classification and Clustering Algorithms using Weka For Banking Data

    G. Roch Libia Rani; K. Vanitha


    In this paper, we investigate the performance of different classification and clustering algorithms using the Weka software. The J48, Naive Bayes and Simple CART classification algorithms are evaluated based on accuracy, time efficiency and error rates. The K-means, DBSCAN and EM clustering algorithms are evaluated based on clustering accuracy. We run these algorithms on large and small data sets to evaluate how well they work.

  8. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    Valverde-Albacete, Francisco J.; Carmen Peláez-Moreno


    The most widely used measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis in which every possible contingency matrix of 2-, 3- and 4-class classifiers is dep...
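    The paradox is easy to reproduce from two contingency matrices: a majority-class predictor can beat an informative classifier on accuracy while transferring no information at all. The sketch below compares accuracy with the mutual information between true and predicted labels (used here as a simpler proxy for the normalized information transfer factor discussed in the paper; the matrices are invented):

```python
import math

def accuracy(cm):
    """Fraction of counts on the diagonal of a confusion matrix."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def mutual_information(cm):
    """Mutual information (bits) between true class (rows) and predicted
    class (columns) of a contingency matrix of counts."""
    total = sum(sum(row) for row in cm)
    p_true = [sum(row) / total for row in cm]
    p_pred = [sum(cm[i][j] for i in range(len(cm))) / total
              for j in range(len(cm[0]))]
    mi = 0.0
    for i in range(len(cm)):
        for j in range(len(cm[0])):
            p = cm[i][j] / total
            if p > 0:
                mi += p * math.log2(p / (p_true[i] * p_pred[j]))
    return mi

# 90% accuracy, zero information: always predicts the majority class.
majority = [[90, 0],
            [10, 0]]
# Only 75% accuracy, but the predictions actually carry information.
informative = [[70, 20],
               [5, 5]]

acc_m, mi_m = accuracy(majority), mutual_information(majority)
acc_i, mi_i = accuracy(informative), mutual_information(informative)
```

    Here the majority-class predictor wins on accuracy (0.90 vs. 0.75) yet transfers zero bits, which is the behavior the paper's combinatorial analysis maps out systematically.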

  9. Robot learning and error correction

    Friedman, L.


    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  10. Manson’s triple error

    Delaporte F.


    Full Text Available The author discusses the significance, implications and limitations of Manson’s work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  11. Performance analysis of fuzzy rule based classification system for transient identification in nuclear power plant

    Highlights: • An interpretable fuzzy system with acceptable accuracy can be used in a nuclear power plant. • This system is worthy of being used as a redundant system for transient identification. • Deaerator level gives a quicker response to the fuzzy system for classifying transients in the steam-water system. • An increase in the number of input variables does not necessarily increase the efficiency of a fuzzy system. • Helps in operator guidance by reducing information overloading. - Abstract: Although fuzzy rule based classification systems (FRBCS) have been useful in event identification, they face a strong trade-off between interpretability and an adequate level of accuracy. For classification in a nuclear power plant (NPP), which receives data within a cycle time of a few milliseconds, either the accuracy or the interpretability of the FRBCS would be jeopardized. Online identification of any abnormality or transient using a FRBCS becomes really critical for a plant with such a short cycle time. In such cases, the output from a FRBCS may not classify the event correctly in every cycle. Thus, it is necessary to monitor the output of the classification system over a number of cycles until it becomes static. This gives a high level of confidence that the classifier output is accurate. A FRBCS can produce this level of confidence by choosing the best input features with high interpretability and acceptable accuracy. Selecting the best features from a large set of input variables and preparing the rule base is again a very critical and challenging task in a FRBCS. There is always a dilemma in judiciously choosing the number of input features to achieve an optimally interpretable and accurate fuzzy system. It is advisable to select the smallest number of features with a proper output error margin. Adding extra features, along with some rules, as input to the system certainly increases the accuracy
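    A minimal fuzzy rule based classifier can be sketched with triangular membership functions and winner-take-all rule activation. The input variable, term shapes and class labels below are hypothetical, chosen only to illustrate how a single well-chosen feature (such as deaerator level deviation) can drive an interpretable FRBCS:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):    return tri(x, -2.0, -1.0, 0.0)
def normal(x): return tri(x, -1.0, 0.0, 1.0)
def high(x):   return tri(x, 0.0, 1.0, 2.0)

# Hypothetical rule base on one input (deaerator level deviation, in
# arbitrary units); labels and term shapes are illustrative only.
rules = [("level-drop transient", low),
         ("steady state", normal),
         ("level-rise transient", high)]

def classify(x):
    """Winner-take-all over rule activation strengths."""
    return max(((label, mf(x)) for label, mf in rules),
               key=lambda pair: pair[1])

label, strength = classify(-0.9)
```

    With so few rules the classifier stays fully interpretable, and the activation strength gives the operator a direct confidence cue; accuracy then rests on how discriminative the chosen feature is, which is the feature selection dilemma the abstract describes.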

  12. Setting and Achieving Objectives.

    Knoop, Robert


    Provides basic guidelines which school officials and school boards may find helpful in negotiating, establishing, and managing objectives. Discusses characteristics of good objectives, specific and directional objectives, multiple objectives, participation in setting objectives, feedback on goal process and achievement, and managing a school…

  13. Intelligence and Educational Achievement

    Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres


    This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g"from CAT2E) and a latent trait of educational…

  14. Explorations in achievement motivation

    Helmreich, Robert L.


    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  15. Usability of ERP Error Messages

    Sadiq, Mazhar; Pirhonen, Antti


    Usability of complex information systems like enterprise resource planning (ERP) systems is still a challenging area, and many usability problems have been found in ERP systems. In this article, we highlight 21 usability problems in ERP error messages identified using Nielsen's heuristics and inquiry questionnaire methods. Nielsen's heuristics are well suited to finding a large number of unique usability problems in different areas. The inquiry questionnaire me...

  16. Urban Tree Classification Using Full-Waveform Airborne Laser Scanning

    Koma, Zs.; Koenig, K.; Höfle, B.


    Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which allows single trees to be classified into different taxa. Until now, research dealing with tree classification has focused on forest environments. This study investigates the object-based classification of urban trees at the taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars, which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. Using radiometric features only, a reliable classification with an accuracy of 86.3% can still be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification while at the same time showing their limitations due to anthropogenic influences.

  17. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu


    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
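    The one-gene decision-rule idea can be sketched as a threshold rule evaluated by leave-one-out cross-validation. The expression values and the ALL/AML labels below are toy numbers, not data from the paper, and the threshold rule stands in for the rough-set-based rule induction:

```python
def train_rule(samples):
    """Choose the threshold on a single (hypothetical) marker gene that
    minimizes training error of the rule: 'AML' if value > thr else 'ALL'."""
    values = sorted(v for v, _ in samples)
    best_thr, best_err = None, len(samples) + 1
    for thr in [(a + b) / 2 for a, b in zip(values, values[1:])]:
        err = sum(1 for v, label in samples
                  if ("AML" if v > thr else "ALL") != label)
        if err < best_err:
            best_thr, best_err = thr, err
    return best_thr

def loocv_error(samples):
    """Leave-one-out cross-validation of the one-gene threshold rule."""
    wrong = 0
    for i, (v, label) in enumerate(samples):
        thr = train_rule(samples[:i] + samples[i + 1:])
        if ("AML" if v > thr else "ALL") != label:
            wrong += 1
    return wrong / len(samples)

# Toy expression values for one marker gene (illustrative, not real data).
data = [(1.0, "ALL"), (1.2, "ALL"), (1.5, "ALL"),
        (3.0, "AML"), (3.2, "AML"), (3.5, "AML")]
err = loocv_error(data)
```

    When one gene separates the classes this cleanly, the induced rule is a single human-readable threshold, which is the interpretability advantage the paper argues for.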

  18. A Path Select Algorithm with Error Control Schemes and Energy Efficient Wireless Sensor Networks

    Sandeep Dahiya; Amit Banga; Brahm Prakash Dahiya; Neha Kumari


    A wireless sensor network consists of a large number of sensor nodes that are spread densely to observe a phenomenon. The lifetime of the whole network relies on the lifetime of each sensor node; if one node dies, it could lead to a partition of the sensor network. In addition, the multi-hop structure and broadcast channel of wireless sensor networks necessitate error control schemes to achieve reliable data transmission. Automatic repeat request (ARQ) and forward error correction (FEC) are the key error control ...

  19. Real-Time Compensation for Thermal Errors of the Milling Machine

    Tsung-Chia Chen; Chia-Jung Chang; Jui-Pin Hung; Rong-Mao Lee; Cheng-Chi Wang


    This paper is focused on developing a compensation module for reducing the thermal errors of a computer numerical control (CNC) milling machine. Thermally induced displacement variations of machine tools are a vital problem, accounting for more than 65% of positioning errors. To achieve high accuracy in machine tools, it is important to find effective methods for reducing thermal errors. To this end, this study first used 14 temperature sensors to examine the real temperature field...

  20. Error Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification

    Zayed Ramadan


    Problem statement: This study introduced a variable step-size Least Mean-Square (LMS) algorithm in which the step-size is dependent on the Euclidean vector norm of the system output error. The error vector includes the last L values of the error, where L is a parameter to be chosen properly, together with the other parameters in the proposed algorithm, to achieve a trade-off between speed of convergence and misadjustment. Approach: The performance of the algorithm was analyzed ...
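A minimal sketch of the idea, under illustrative assumptions: a 4-tap FIR system-identification setup and a saturating step-size rule driven by the squared norm of the last L errors. The exact update rule and constants in the paper may differ; everything below is an assumed toy configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative system-identification setup (all names/constants assumed):
# identify a 4-tap FIR system from its noisy input/output.
h_true = np.array([0.7, -0.3, 0.2, 0.1])
n, taps, L = 4000, 4, 8             # samples, filter length, error-memory length
x = rng.normal(size=n)
d = np.convolve(x, h_true)[:n] + 0.01 * rng.normal(size=n)

w = np.zeros(taps)                  # adaptive weights
err_hist = np.zeros(L)              # last L output errors
mu_max = 0.05                       # step-size cap (tuning knob)

for k in range(taps, n):
    u = x[k - taps + 1:k + 1][::-1]  # current input vector [x_k, ..., x_{k-3}]
    e = d[k] - w @ u                 # a priori output error
    err_hist = np.roll(err_hist, 1)
    err_hist[0] = e
    # Step size grows with the squared Euclidean norm of the recent error
    # vector: large recent errors -> fast adaptation; small errors -> small
    # steps and hence low misadjustment near convergence.
    ns = err_hist @ err_hist
    mu = mu_max * ns / (1.0 + ns)
    w += mu * e * u

print("identified taps:", np.round(w, 3))
```

The trade-off the abstract describes is visible in the rule: early on the error norm is large, so mu saturates near mu_max for fast convergence; once the filter matches the system, mu collapses and the weights stop rattling around the solution.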

  1. Large errors and severe conditions

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probability distributions. Fourth, there are pragmatic reasons for seeking convenient analytical formulas to approximate the 'true' probability distributions of derived parameters generated by Monte Carlo simulation. This paper discusses each of these issues and illustrates the main concepts with realistic examples involving radioactive decay and nuclear astrophysics.
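A minimal Monte Carlo sketch of two of these points: a Gaussian model of an inherently positive parameter produces unphysical negative samples, while a moment-matched lognormal does not, and a non-linear derived quantity has a larger relative spread than the primary one. The specific numbers and the quadratic dependence are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical inherently positive parameter x with 40% relative uncertainty.
x_mean, x_rel = 10.0, 0.40

# A Gaussian with these moments produces negative (unphysical) samples ...
frac_neg = np.mean(rng.normal(x_mean, x_rel * x_mean, 200_000) <= 0)

# ... whereas a lognormal with matched mean and relative error stays positive.
sigma = np.sqrt(np.log(1 + x_rel**2))
mu = np.log(x_mean) - sigma**2 / 2
x = rng.lognormal(mu, sigma, 200_000)

# Non-linear derived parameter (illustrative quadratic dependence): its
# relative spread is roughly double that of x -- "error amplification".
y = x**2
rel_x = x.std() / x.mean()
rel_y = y.std() / y.mean()

print(f"negative Gaussian samples: {frac_neg:.3%}")
print(f"relative error: x {rel_x:.2f} -> y = x^2 {rel_y:.2f}")
```

For a lognormal, the relative error of x² is sqrt(exp(4σ²) − 1) versus sqrt(exp(σ²) − 1) for x, so with 40% input uncertainty the derived quantity's relative error is about 90%, matching what the simulation reports.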

  2. On the Classification of Psychology in General Library Classification Schemes.

    Soudek, Miluse


    Holds that traditional library classification systems are inadequate to handle psychological literature, and advocates the establishment of new theoretical approaches to bibliographic organization. (FM)

  3. Personality and error monitoring: an update

    Sven Hoffmann


    People differ considerably with respect to their ability to initiate and maintain cognitive control. A core control function is the processing and evaluation of errors from which we learn to prevent maladaptive behavior. People strongly differ in the degree of error processing, and how errors are interpreted and appraised. In the present study it was investigated whether a correlate of error monitoring, the error negativity (Ne or ERN), is related to personality factors. Therefore the EEG was...

  4. Personality and error monitoring: an update

    Hoffmann, Sven; Wascher, Edmund; Falkenstein, Michael


    People differ considerably with respect to their ability to initiate and maintain cognitive control. A core control function is the processing and evaluation of errors from which we learn to prevent maladaptive behavior. People differ strongly in the degree of error processing, and how errors are interpreted and appraised. In the present study it was investigated whether a correlate of error monitoring, the error negativity (Ne or ERN), is related to personality factors. Therefore, the EEG wa...

  5. Detection and classification of parasite eggs for use in helminthic therapy

    Bruun, Johan Musaeus; Kapel, Christian M. O.; Carstensen, Jens Michael

    The detection is based on matched filters and the classification is done using linear and quadratic discriminant analysis on a set of biologically inspired features, including the autocorrelation-based longitudinal anisotropy and the mean scattering intensity under dark field illumination. Despite the presence of ... impurities and overlapping eggs, the proposed method achieves cross-validated classification rates around 93%.

  6. Robust tissue classification for reproducible wound assessment in telemedicine environments

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves


    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping onto the medical reference developed from image labeling by a college of experts.

  7. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Taha H. Rassem


    Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than the LBP, the LBP's weaknesses can appear in the LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. Experimental results on four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared to the CLBP and CLBC descriptors.
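The ternary coding the CLTP builds on can be illustrated with a small sketch. This implements only the basic LTP split into "upper" and "lower" LBP-style binary codes over an 8-neighbourhood, not the full completed-modeling scheme; the threshold t, the inclusive comparisons, and the bit ordering are illustrative choices.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern over the 8-neighbourhood (illustrative sketch).
    Each neighbour is coded +1 if >= center+t, -1 if <= center-t, else 0;
    the ternary code is then split into 'upper' and 'lower' binary codes."""
    H, W = img.shape
    # 8 neighbours, clockwise from top-left; enumeration order fixes the bits
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((H - 2, W - 2), dtype=int)
    lower = np.zeros((H - 2, W - 2), dtype=int)
    c = img[1:-1, 1:-1].astype(int)          # interior pixels (centers)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx].astype(int)
        upper |= (nb >= c + t).astype(int) << bit   # +1 entries of the ternary code
        lower |= (nb <= c - t).astype(int) << bit   # -1 entries of the ternary code
    return upper, lower

img = np.array([[10, 10, 10, 10],
                [10, 20, 30, 10],
                [10, 10, 10, 10]], dtype=np.uint8)
u, l = ltp_codes(img, t=5)
print(u, l)
```

Because neighbours within ±t of the center fall into the "0" state of neither code, small noise around the center intensity no longer flips bits, which is exactly the robustness the abstract attributes to the LTP over the LBP.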

  8. Joint Low-Rank and Sparse Principal Feature Coding for Enhanced Robust Representation and Visual Classification.

    Zhang, Zhao; Li, Fanzhang; Zhao, Mingbo; Zhang, Li; Yan, Shuicheng


    Recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed. Technically, we first propose a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part that encodes low-rank sparse principal features and a noise-fitting error part. To handle out-of-sample data, we then present an inductive LSPFC (I-LSPFC). I-LSPFC incorporates embedded low-rank and sparse principal features by a projection into one problem for direct minimization, so that the projection can effectively map both in-sample and out-of-sample data into the underlying subspaces to learn more powerful and informative features for representation. To ensure that the features learned by I-LSPFC are optimal for classification, we further combine the classification error with the feature coding error to form a unified model, discriminative LSPFC (D-LSPFC), to boost performance. The D-LSPFC model seamlessly integrates feature coding and discriminative classification, so the representation and classification powers can both be enhanced. The proposed approaches are more general, and several existing low-rank or sparse coding algorithms can be embedded into our problems as special cases. Visual and numerical results demonstrate the effectiveness of our methods for representation and classification. PMID:27046875
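The low-rank-plus-sparse decomposition underlying formulations like this can be sketched with the classic robust PCA building blocks: singular value thresholding for the low-rank part and soft thresholding for the sparse part, combined in a generic inexact augmented Lagrangian loop. This is an illustration of those proximal steps with conventional default constants, not the paper's LSPFC model.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Soft thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_sparse(X, iters=60):
    """Split X into low-rank L and sparse S via inexact ALM (generic RPCA)."""
    m, n = X.shape
    lam = 1.0 / np.sqrt(max(m, n))       # standard RPCA sparsity weight
    mu = 1.25 / np.linalg.norm(X, 2)     # standard initial penalty
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                 # scaled dual variable
    for _ in range(iters):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)
        mu = min(mu * 1.5, 1e7)          # grow penalty to enforce X = L + S
    return L, S

rng = np.random.default_rng(3)
low = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # rank-3 ground truth
sparse = np.zeros((50, 40))
mask = rng.random((50, 40)) < 0.05
sparse[mask] = rng.normal(scale=10.0, size=mask.sum())     # gross sparse outliers
L, S = low_rank_sparse(low + sparse)
rel_err = np.linalg.norm(L - low) / np.linalg.norm(low)
print(f"relative error of recovered low-rank part: {rel_err:.4f}")
```

The noise-fitting error part in the LSPFC formulation plays a role analogous to S here: gross corruptions are absorbed by the sparse term so the principal features stay clean.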


    I. P. Prokopenko


    Correctly organized nutritional and pharmacological support is an important component of an athlete's preparation for competition, maintenance of optimal shape, and fast recovery and rehabilitation after injuries and fatigue. Special products of enhanced biological value (BAS) are used for this purpose in athletes' nutrition. They supply the athlete's organism with readily usable energy sources, building materials, and biologically active substances that regulate and activate metabolic reactions which proceed with difficulty during certain kinds of physical training. The article presents a classification of sport supplements that can be used before warm-up and training, after training, and during breaks in competition.

  10. Classification of Emergency Scenarios

    Muench, Mathieu


    In most of today's emergency scenarios information plays a crucial role. Therefore, information has to be constantly collected and shared among all rescue team members and this requires new innovative technologies. In this paper a classification of emergency scenarios is presented, describing their special characteristics and common strategies employed by rescue units to handle them. Based on interviews with professional firefighters, requirements for new systems are listed. The goal of this article is to support developers designing new systems by providing them a deeper look into the work of first responders.

  11. Classification of hand eczema

    Agner, T; Aalto-Korte, K; Andersen, K E;


    recruited from nine different tertiary referral centres. All patients underwent examination by specialists in dermatology and were checked using relevant allergy testing. Patients were classified into one of the six diagnostic subgroups of HE: allergic contact dermatitis, irritant contact dermatitis, atopic ... %) could not be classified. 38% had one additional diagnosis and 26% had two or more additional diagnoses. Eczema on feet was found in 30% of the patients, statistically significantly more frequently associated with hyperkeratotic and vesicular endogenous eczema. CONCLUSION: We find that the classification ...

  12. Classification of smooth Fano polytopes

    Øbro, Mikkel

    A simplicial lattice polytope containing the origin in its interior is called a smooth Fano polytope if the vertices of every facet form a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth Fano polytopes up to isomorphism. A smooth Fano -polytope can have at most vertices. In the case of vertices an explicit classification is known. The thesis contains the classification in the case of vertices. Classifications of smooth Fano -polytopes for fixed exist only for . In the thesis an algorithm for the classification of smooth Fano -polytopes for any given is presented. The algorithm has been implemented and used to obtain the complete classification for .

  13. Achieving excellence on shift through teamwork

    Anyone familiar with the nuclear industry realizes the importance of operators. Operators can achieve error-free plant operations, i.e., excellence on shift through teamwork. As a shift supervisor (senior reactor operator/shift technical advisor) the author went through the plant's first cycle of operations with no scrams and no equipment damaged by operator error, having since changed roles (and companies) to one of assessing plant operations. This change has provided the opportunity to see objectively the importance of operators working together and of the team building and teamwork that contribute to the shift's success. This paper uses examples to show the effectiveness of working together and outlines steps for building a group of operators into a team

  14. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    Bryant, C. M.


    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  15. Active Learning for Text Classification

    Hu, Rong


    Text classification approaches are used extensively to solve real-world challenges. The success or failure of text classification systems hangs on the datasets used to train them; without a good dataset it is impossible to build a quality system. This thesis examines the applicability of active learning in text classification for the rapid and economical creation of labelled training data. Four main contributions are made in this thesis. First, we present two novel selection strategies to cho...

  16. Random Forests for Poverty Classification

    Ruben Thoplan


    This paper applies a relatively novel data mining method to the issue of poverty classification in Mauritius. The random forests algorithm is applied to census data with a view to improving classification accuracy for poverty status. The analysis shows that the number of hours worked, age, education and sex are the most important variables in the classification of an individual's poverty status. In addition, a clear poverty-gender gap is identified, as women have a higher chance...

  17. The Revised Classification of Eukaryotes

    Adl, Sina M; Simpson, Alastair G.B.; Lane, Christopher E.; Lukeš, Julius; Bass, David; Bowser, Samuel S.; Brown, Matthew W.; Burki, Fabien; Dunthorn, Micah; Hampl, Vladimir; Heiss, Aaron; Hoppenrath, Mona; Lara, Enrique; Le Gall, Line; Lynn, Denis H.


    This revision of the classification of eukaryotes, which updates that of Adl et al. [J. Eukaryot. Microbiol. 52 (2005) 399], retains an emphasis on the protists and incorporates changes since 2005 that have resolved nodes and branches in phylogenetic trees. Whereas the previous revision was successful in re-introducing name stability to the classification, this revision provides a classification for lineages that were then still unresolved. The supergroups have withstood phylogenetic hypothes...

  18. DCC Briefing Paper: Genre classification

    Abbott, Daisy; Kim, Yunhyong


    Genre classification is the process of grouping objects together based on defined similarities such as subject, format, style, or purpose. Genre classification as a means of managing information is already established in music (e.g. folk, blues, jazz) and text and is used, alongside topic classification, to organise materials in the commercial sector (the children's section of a bookshop) and intellectually (for example, in the Usenet newsgroup directory hierarchy). However, in the case o...

  19. Classification and Labelling for Biocides

    Rubbiani, Maristella


    CLP and biocides The EU Regulation (EC) No 1272/2008 on Classification, Labelling and Packaging of Substances and Mixtures, the CLP-Regulation, entered into force on 20th January, 2009. Since 1st December, 2010 the classification, labelling and packaging of substances has to comply with this Regulation. For mixtures, the rules of this Regulation are mandatory from 1st June, 2015; this means that until this date classification, labelling and packaging could either be carried out according to D...

  20. Medication administration errors for older people in long-term residential care

    Szczepura Ala


    Abstract Background Older people in long-term residential care are at increased risk of medication prescribing and administration errors. The main aim of this study was to measure the incidence of medication administration errors in nursing and residential homes using a barcode medication administration (BCMA) system. Methods A prospective study was conducted in 13 care homes (9 residential and 4 nursing). Data on all medication administrations for a cohort of 345 older residents were recorded in real-time using a disguised observation technique. Every attempt by social care and nursing staff to administer medication over a 3-month observation period was analysed using BCMA records to determine the incidence and types of potential medication administration errors (MAEs) and whether errors were averted. Error classifications included attempts to administer medication at the wrong time, to the wrong person or discontinued medication. Further analysis compared data for residential and nursing homes. In addition, staff were surveyed prior to BCMA system implementation to assess their awareness of administration errors. Results A total of 188,249 medication administration attempts were analysed using BCMA data. Typically each resident was receiving nine different drugs and was exposed to 206 medication administration episodes every month. During the observation period, 2,289 potential MAEs were recorded for the 345 residents; 90% of residents were exposed to at least one error. The most common error (n = 1,021; 45% of errors) was attempting to give medication at the wrong time. Over the 3-month observation period, half (52%) of residents were exposed to a serious error such as attempting to give medication to the wrong resident. Error incidence rates were 1.43 times as high (95% CI 1.32-1.56) ... Conclusions The incidence of medication administration errors is high in long-term residential care. A barcode medication administration system can capture medication