Sample records for achieved classification error

  1. A classification of prescription errors.

    Neville, R. G.; Robertson, F.; Livingstone, S.; Crombie, I. K.


    Three independent methods of study of prescription errors led to the development of a classification of errors based on the potential effects and inconvenience to patients, pharmacists and doctors. Four types of error are described: type A (potentially serious to patient); type B (major nuisance - pharmacist/doctor contact required); type C (minor nuisance - pharmacist must use professional judgement); and type D (trivial). The types and frequency of errors are detailed for a group of eight pr...

  2. Human error classification and data collection

    Analysis of human error data requires human error classification. As the human factors/reliability subject has developed, so too has the topic of human error classification. The classifications vary considerably depending on whether they have been developed from a theoretical psychological approach to understanding human behavior or error, or from an empirical, practical approach. The latter approach is often adopted by nuclear power plants, which need to make practical improvements as soon as possible. This document reviews aspects of human error classification and data collection in order to show where potential improvements could be made. It attempts to show why there are problems with human error classification and data collection schemes, and that these problems will not be easy to resolve. The Annex of this document contains the papers presented at the meeting. A separate abstract was prepared for each of these 12 papers. Refs, figs and tabs

  3. Analysis of thematic map classification error matrices.

    Rosenfield, G.H.


    The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method. (from Author)
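    A minimal sketch of the summary statistics such an error matrix supports, using a small invented two-category example: overall agreement and Cohen's kappa computed from the matrix counts (the paper's linear-model analysis goes well beyond this).

```python
# Sketch: summary statistics from a classification error (confusion) matrix.
# Rows = classified category, columns = verification category. The matrix
# values are invented; the paper's linear-model method is more involved.

def agreement_stats(matrix):
    n = sum(sum(row) for row in matrix)
    diag = sum(matrix[i][i] for i in range(len(matrix)))
    p_observed = diag / n
    # Chance agreement from row/column marginals (Cohen's kappa).
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(col) for col in zip(*matrix)]
    p_chance = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa

m = [[35, 5], [10, 50]]
acc, kappa = agreement_stats(m)
```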

  4. Classification error of the thresholded independence rule

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored to a...
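    A sketch of the thresholded independence rule under simplifying assumptions (known group means and variances, an arbitrary screening threshold); the paper concerns the error bound, not this particular implementation.

```python
# Sketch of the thresholded independence rule for two groups ("large p,
# small n"): the independence (naive Bayes, diagonal covariance) rule
# restricted to variables whose standardized mean difference exceeds a
# threshold. Means, variances, and the threshold are assumed known here.

def thresholded_independence_rule(x, mu1, mu2, var, threshold):
    score = 0.0
    for j in range(len(x)):
        diff = mu1[j] - mu2[j]
        if abs(diff) / var[j] ** 0.5 < threshold:
            continue  # variable screened out by the threshold
        score += (x[j] - 0.5 * (mu1[j] + mu2[j])) * diff / var[j]
    return 1 if score > 0 else 2

mu1, mu2 = [1.0, 0.0, 0.0], [-1.0, 0.0, 0.1]
var = [1.0, 1.0, 1.0]
label = thresholded_independence_rule([0.9, 5.0, 0.0], mu1, mu2, var, 0.5)
```

    Only the first variable survives the threshold, so the noisy second coordinate cannot sway the decision.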

  5. Correcting Classification Error in Income Mobility

    Jesús Pérez Mayo; M.A. Fajardo Caldera


    The mobility of a categorical variable can be a mix of two different parts: true movement and measurement or classification error. For instance, observed transitions can hide real immobility, with the apparent changes caused by measurement error. The Latent Mixed Markov Model is proposed in this paper to solve this problem. Income mobility is a well-known example of categorical-variable mobility in Economics. So, the authors think that the Latent Mixed Markov Model is a good op...

  6. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Sun Yanni


    Background: Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results: We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME to Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions: HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at and at

  7. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik


    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  8. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Muhsin Hassan


    The aim of this paper is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive white Gaussian noise was added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM, and a hybrid of Kalman Filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combination in a single system produced higher accuracy than the other two techniques.
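    A hedged sketch of the filtering stage of such a hybrid: a scalar Kalman filter with a random-walk state model smooths noisy measurements before classification. The noise variances q and r are invented, and a simple threshold rule stands in for the SVM.

```python
# Sketch of the Kalman Filter + classifier idea: denoise measurements with
# a scalar Kalman filter (random-walk state model) before classifying.
# q (process noise) and r (measurement noise) are assumed values; a
# threshold rule stands in for the SVM classifier.

def kalman_smooth(measurements, q=1e-3, r=0.5):
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = []
    for z in measurements:
        p += q                    # predict (random-walk model)
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out

noisy = [4.9, 5.3, 4.7, 5.1, 5.2, 4.8]   # e.g. noisy depth readings (mm)
smoothed = kalman_smooth(noisy)
depth_class = "deep" if smoothed[-1] > 2.5 else "shallow"
```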

  9. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Zhigao Zeng


    This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.
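    The final step can be sketched as a plain nearest-centroid assignment (the paper uses only an idea "similar to" this, after its feature extraction and dimension reduction; the class names and centroid values below are invented).

```python
# Sketch of nearest-centroid classification: assign a feature vector to the
# class whose centroid is closest in Euclidean distance. The halftone-kernel
# class names and 2-D centroids are invented for illustration.

def nearest_centroid(x, centroids):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

centroids = {"floyd_steinberg": [0.2, 0.8], "jarvis": [0.6, 0.4]}
label = nearest_centroid([0.25, 0.7], centroids)
```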

  10. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando


    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talk, or a combination of these. Accurate detection and identification are of extreme importance for further analysis because, in some cases and for some applications, certain measurement artifacts can be corrected, minimized, or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex-spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  11. A non-linear learning & classification algorithm that achieves full training accuracy with stellar classification accuracy

    Khogali, Rashid


    A fast, non-linear, non-iterative learning and classification algorithm is synthesized and validated. This algorithm, named the "Reverse Ripple Effect" (R.R.E), achieves 100% learning accuracy but is computationally expensive at classification time. The R.R.E is a deterministic algorithm that superimposes Gaussian weighted functions on training points. In this work, the R.R.E algorithm is compared against known learning and classification techniques/algorithms such as the Perceptron Criterio...


    CHEN Jie; GONG Zi-tong; CHEN Zhi-cheng; TAN Man-zhi


    International concern about the effects of global change on permafrost-affected soils, and about the responses of permafrost terrestrial landscapes to such change, has been increasing in the last two decades. To achieve a variety of goals, including determining soil carbon stocks and dynamics in the Northern Hemisphere, understanding soil degradation, and finding the best ways to protect the fragile ecosystems in permafrost environments, further development of Cryosol classification is in great demand. In this paper the existing Cryosol classifications contained in three representative soil taxonomies are introduced, and the problems in the practical application of the defining criteria used for category differentiation in these taxonomic systems are discussed. Meanwhile, the resumption and reconstruction of Chinese Cryosol classification within a taxonomic framework is proposed. In dealing with Cryosol classification, the advantages that Chinese pedologists have and the challenges that they face are analyzed. Finally, several suggestions on the further development of a taxonomic framework for Cryosol classification are put forward.

  13. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad


    This study addresses some long-standing questions in the field of writing instruction about the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  14. Artificial intelligence environment for the analysis and classification of errors in discrete sequential processes

    Ahuja, S.B.


    The study evolved over two phases. First, an existing artificial intelligence technique, heuristic state space search, was used to successfully address and resolve significant issues that have prevented automated error classification in the past. A general method was devised for constructing heuristic functions to guide the search process, which successfully avoided the combinatorial explosion normally associated with search paradigms. A prototype error classifier, SLIPS/I, was tested and evaluated using both real-world data from a databank of speech errors and artificially generated random errors. It showed that heuristic state space search is a viable paradigm for conducting domain-independent error classification within practical limits of memory space and processing time. The second phase considered sequential error classification as a diagnostic process in which a set of disorders (elementary errors) is said to be a classification of an observed set of manifestations (local differences between an intended sequence and the errorful sequence) if it provides a regular cover for them. Using a model of abductive logic based on the set covering theory, this new perspective of error classification as a diagnostic process models human diagnostic reasoning in classifying complex errors. A high level, non-procedural error specification language (ESL) was also designed.
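    The set-covering view can be sketched with a greedy cover standing in for the abductive reasoning: choose elementary errors until every observed manifestation is explained. The error and manifestation names below are invented for illustration.

```python
# Sketch of error classification as set covering: find a small set of
# elementary errors (disorders) whose combined manifestations cover all
# observed differences. A greedy cover stands in for abductive logic;
# all names are invented.

def greedy_cover(observed, causes):
    remaining, chosen = set(observed), []
    while remaining:
        best = max(causes, key=lambda c: len(causes[c] & remaining))
        if not causes[best] & remaining:
            break  # no cause explains the remaining manifestations
        chosen.append(best)
        remaining -= causes[best]
    return chosen

causes = {
    "substitution": {"wrong_symbol"},
    "transposition": {"order_swap", "wrong_symbol"},
    "omission": {"missing_symbol"},
}
explanation = greedy_cover({"order_swap", "wrong_symbol", "missing_symbol"}, causes)
```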

  15. A proposal for the detection and classification of discourse errors

    Mestre-Mestre, Eva M.; Carrió Pastor, Mª Luisa


    Our interest lies in error from the point of view of language in context; therefore, we focus on errors produced at the discourse level. The main objective of this paper is to detect discourse competence errors and their implications through the analysis of a corpus of English written texts produced by Higher Education students with a B1 level (following the Common European Framework of Reference for Languages). Further objectives are to propose categories which could help us to c...

  16. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki


    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  17. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Steven Kelly


    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  18. Modified Minimum Classification Error Learning and Its Application to Neural Networks

    Shimodaira, Hiroshi; Rokui, Jun; Nakai, Mitsuru


    A novel method to improve the generalization performance of Minimum Classification Error (MCE) / Generalized Probabilistic Descent (GPD) learning is proposed. MCE/GPD learning, proposed by Juang and Katagiri in 1992, yields better recognition performance than maximum-likelihood (ML) based learning in various areas of pattern recognition. Despite its superiority in recognition performance, it, like other learning algorithms, still suffers from the problem of "over-fitting...

  19. Standard Errors of Proportions Used in Reporting Changes in School Performance with Achievement Levels.

    Arce-Ferrer, Alvaro; Frisbie, David A.; Kolen, Michael J.


    Studies of the achievement test results for about 490 school districts at grade 4 and about 420 districts at grade 5 show that the error variance of estimates of change at the school level is large enough to interfere with interpretations of annual change estimates. (SLD)

  20. Classification of error in anatomic pathology: a proposal for an evidence-based standard.

    Foucar, Elliott


    Error in anatomic pathology (EAP) is an appropriate problem to consider using the disease model with which all pathologists are familiar. In analogy to medical diseases, diagnostic errors represent a complex constellation of often-baffling deviations from the "normal" condition. Ideally, one would wish to approach such "diseases of diagnosis" with effective treatments or preventative measures, but interventions in the absence of a clear understanding of pathogenesis are often ineffective or even harmful. Medical therapy has its history of "bleeding and purging," and error-prevention has a history of "blaming and shaming." The urge to take action in dealing with either medical illnesses or diagnostic failings is, of course, admirable. However, the principle of primum non nocere should guide one's action in both circumstances. The first step in using the disease model to address EAP is the development of a valid taxonomy to allow for grouping together of abnormalities that have a similar pathogenesis. It is apparent that disease categories such as "tumor" are not valuable until they are further refined by precise and accurate classification. Likewise, "error" is an impossibly broad concept that must be parsed into meaningful subcategories before it can be understood with sufficient clarity to be prevented. One important EAP subtype that has been particularly difficult to understand and classify is knowledge-based interpretative (KBI) error. Not only is the latter sometimes confused with distinctly different error types such as human lapses, but there is danger of mistaking system-wide problems (eg, imprecise or inaccurate diagnostic criteria) for the KBI errors of individual pathologists. This paper presents a theoretically-sound taxonomic system for classification of error that can be used for evidence-based categorization of individual cases. Any taxonomy of error in medicine must distinguish between the various factors that may produce mistakes, and importantly...

  1. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Korn, E L


    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
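    The effect of misclassification on a contingency table can be sketched as applying row and column misclassification matrices to the true table. The probabilities below are invented, and the sketch ignores the sampling-scheme distinctions the thesis treats carefully.

```python
# Sketch: expected observed I x J table under misclassification, computed as
# sum_{i,j} M_row[i'][i] * T[i][j] * M_col[j'][j], where M_row[i'][i] is the
# probability that true row category i is recorded as i' (likewise columns).
# Misclassification probabilities and the true table are invented.

def misclassify(table, m_row, m_col):
    I, J = len(m_row), len(m_col)
    return [[sum(m_row[i2][i] * table[i][j] * m_col[j2][j]
                 for i in range(len(table))
                 for j in range(len(table[0])))
             for j2 in range(J)]
            for i2 in range(I)]

true = [[40.0, 10.0], [10.0, 40.0]]
m = [[0.9, 0.1], [0.1, 0.9]]      # 10% misclassification both ways
observed = misclassify(true, m, m)
```

    The total count is preserved, but the observed association is attenuated (here the diagonal shrinks from 40 to 34.6 per cell), which is one way to see why test power drops under misclassification.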

  2. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Cohen, Aaron M


    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here. PMID:17947623
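    The error-correcting output codes component can be sketched as codeword decoding: each binary classifier predicts one bit, and the class whose codeword is nearest in Hamming distance wins. The codebook below is invented, not the one used in the submission.

```python
# Sketch of error-correcting output codes (ECOC) for a multi-class problem
# such as five-way smoking status: each class gets a codeword, binary
# classifiers predict one bit each, and decoding picks the class with the
# nearest codeword. Codewords here are invented for illustration.

def ecoc_decode(bits, codebook):
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda label: hamming(bits, codebook[label]))

codebook = {
    "current":    [1, 1, 1, 0, 0, 0],
    "past":       [0, 0, 1, 1, 1, 0],
    "smoker":     [1, 0, 0, 0, 1, 1],
    "non-smoker": [0, 1, 0, 1, 0, 1],
    "unknown":    [1, 1, 0, 1, 1, 0],
}
bits = [0, 1, 1, 1, 1, 0]   # "past" codeword with one bit flipped
label = ecoc_decode(bits, codebook)
```

    Because the codewords are spread apart, a single wrong bit from one binary classifier is corrected at decoding time.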

  3. Software platform for managing the classification of error- related potentials of observers

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.


    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of actors who commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbours. The classifier can then be used to classify any EP curve that has been entered into the database.

  4. Evaluating the Type II error rate in a sediment toxicity classification using the Reference Condition Approach.

    Rodriguez, Pilar; Maestre, Zuriñe; Martinez-Madrid, Maite; Reynoldson, Trefor B


    Sediments from 71 river sites in Northern Spain were tested using the oligochaete Tubifex tubifex (Annelida, Clitellata) chronic bioassay. 47 sediments were identified as reference primarily from macroinvertebrate community characteristics. The data for the toxicological endpoints were examined using non-metric MDS. Probability ellipses were constructed around the reference sites in multidimensional space to establish a classification for assessing test-sediments into one of three categories (Non Toxic, Potentially Toxic, and Toxic). The construction of such probability ellipses sets the Type I error rate. However, we also wished to include in the decision process for identifying pass-fail boundaries the degree of disturbance required to be detected, and the likelihood of being wrong in detecting that disturbance (i.e. the Type II error). Setting the ellipse size to use based on Type I error does not include any consideration of the probability of Type II error. To do this, the toxicological response observed in the reference sediments was manipulated by simulating different degrees of disturbance (simpacted sediments), and measuring the Type II error rate for each set of the simpacted sediments. From this procedure, the frequency at each probability ellipse of identifying impairment using sediments with known level of disturbance is quantified. Thirteen levels of disturbance and seven probability ellipses were tested. Based on the results the decision boundary for Non Toxic and Potentially Toxic was set at the 80% probability ellipse, and the boundary for Potentially Toxic and Toxic at the 95% probability ellipse. Using this approach, 9 test sediments were classified as Toxic, 2 as Potentially Toxic, and 13 as Non Toxic. PMID:20980065
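    The Type I / Type II trade-off can be sketched in one dimension (instead of the MDS ordination used above): set a pass-fail boundary from the reference sites' endpoint distribution, simulate a fixed level of disturbance, and measure how often impacted responses still pass. All numbers below are invented.

```python
# Sketch of measuring the Type II error rate for a boundary chosen from
# reference data. The endpoint, distribution parameters, and disturbance
# level are invented; the paper works in multivariate MDS space with
# probability ellipses rather than a single percentile boundary.
import random

random.seed(1)
reference = [random.gauss(100, 10) for _ in range(47)]    # e.g. % survival
boundary = sorted(reference)[int(0.05 * len(reference))]  # ~5% Type I rate

disturbance = 0.8                                         # simulate a 20% reduction
impacted = [x * disturbance for x in reference]
type_ii = sum(x >= boundary for x in impacted) / len(impacted)
```

    Repeating this over a grid of disturbance levels and boundary choices gives the frequency table from which pass-fail boundaries can be selected with both error rates in view.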

  5. Block-Based Motion Estimation Using the Pixelwise Classification of the Motion Compensation Error

    Jun-Yong Kim


    In this paper, we propose block-based motion estimation (ME) algorithms based on the pixelwise classification of two different motion compensation (MC) errors: (1) the displaced frame difference (DFD) and (2) the brightness constancy constraint term (BCCT). Block-based ME has drawbacks such as unreliable motion vectors (MVs) and blocking artifacts, especially at object boundaries. The proposed block matching algorithm (BMA)-based methods attempt to reduce artifacts in object-boundary blocks caused by the incorrect assumption of a single rigid (translational) motion. They yield more appropriate MVs in boundary blocks under the assumption that there exist up to three nonoverlapping regions with different motions. The proposed algorithms also reduce the blocking artifacts of the conventional BMA, in which overlapped-block motion compensation (OBMC) is applied especially to the selected regions to prevent the degradation of details. Experimental results with several test sequences show the effectiveness of the proposed algorithms.
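    The block-matching baseline the paper builds on can be sketched as an exhaustive search minimizing the sum of absolute differences (SAD) of the displaced frame difference over a small search window; the proposed pixelwise-classification refinements are not shown.

```python
# Sketch of full-search block matching: for one block of the current frame,
# find the displacement into the reference (previous) frame that minimizes
# the sum of absolute differences (SAD). Frame contents below are synthetic.

def best_motion_vector(cur, ref, bx, by, bsize, radius):
    def sad(dx, dy):
        return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
                   for y in range(bsize) for x in range(bsize))
    candidates = [(dx, dy) for dx in range(-radius, radius + 1)
                  for dy in range(-radius, radius + 1)
                  if 0 <= bx + dx and bx + dx + bsize <= len(ref[0])
                  and 0 <= by + dy and by + dy + bsize <= len(ref)]
    return min(candidates, key=lambda v: sad(*v))

# Synthetic frames: the current frame is the reference shifted by (1, 1),
# so the block at (2, 2) best matches the reference at (1, 1).
ref = [[10 * y + x for x in range(6)] for y in range(6)]
cur = [[10 * y + x - 11 for x in range(6)] for y in range(6)]
mv = best_motion_vector(cur, ref, 2, 2, 2, 1)
```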

  6. Errors

    Data indicate that about one half of all errors are skill based. Yet most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed in performing a routine and familiar task: workers went to the wrong unit or component, or otherwise did the wrong thing. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training; they need to know when they are vulnerable, and they need to know how to think. Self-checking can prevent errors, but only if it is practiced intellectually and with commitment. Skill-based errors are usually the result of using habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury, too, is usually an error. Such injuries are sometimes called accidents, but most accidents are the result of inappropriate actions. Whether we can explain it or not, cause and effect were there. A proper attitude toward risk and a proper attitude toward danger are requisite to avoiding injury. Many personal injuries can be avoided just by attitude. Errors, based on personal experience and interviews, examines the reasons for 'mental lapse' errors and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)

  7. Inborn errors of metabolism with 3-methylglutaconic aciduria as discriminative feature: proper classification and nomenclature.

    Wortmann, Saskia B; Duran, Marinus; Anikster, Yair; Barth, Peter G; Sperl, Wolfgang; Zschocke, Johannes; Morava, Eva; Wevers, Ron A


    Increased urinary 3-methylglutaconic acid excretion is a relatively common finding in metabolic disorders, especially in mitochondrial disorders. In most cases 3-methylglutaconic acid is only slightly elevated and accompanied by other (disease specific) metabolites. There is, however, a group of disorders with significantly and consistently increased 3-methylglutaconic acid excretion, where the 3-methylglutaconic aciduria is a hallmark of the phenotype and the key to diagnosis. Until now these disorders were labelled with Roman numerals (I-V) in the order of discovery, regardless of pathomechanism. In particular, the so-called "unspecified" 3-methylglutaconic aciduria type IV has grown ever larger, leading to biochemical and clinical diagnostic confusion. Therefore, we propose the following pathomechanism-based classification and a simplified diagnostic flow chart for these "inborn errors of metabolism with 3-methylglutaconic aciduria as discriminative feature". One should distinguish between "primary 3-methylglutaconic aciduria", formerly known as type I (3-methylglutaconyl-CoA hydratase deficiency, AUH defect), due to defective leucine catabolism, and the three currently known groups of "secondary 3-methylglutaconic aciduria". The latter should be further classified and named by their defective protein or the historical name as follows: (i) defective phospholipid remodelling (TAZ defect or Barth syndrome, SERAC1 defect or MEGDEL syndrome) and (ii) mitochondrial membrane associated disorders (OPA3 defect or Costeff syndrome, DNAJC19 defect or DCMA syndrome, TMEM70 defect). The remaining patients with significant and consistent 3-methylglutaconic aciduria, in whom the above-mentioned syndromes have been excluded, should be referred to as "not otherwise specified (NOS) 3-MGA-uria" until elucidation of the underlying pathomechanism enables proper (possibly extended) classification. PMID:23296368

  8. Further results on fault-tolerant distributed classification using error-correcting codes

    Wang, Tsang-Yi; Han, Yunghsiang S.; Varshney, Pramod K.


    In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing a binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effect of fading channels. The DCFECC-SD approach extends the DCFECC approach by using soft decision decoding to combat channel fading. However, the performance of a system employing the binary code matrix can be degraded if the distance between different hypotheses cannot be kept large. This situation can occur when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing a D-ary code matrix, where D>2. Simulation results show that the performance of the DCFECC-SD approach employing the D-ary code matrix is better than that of the DCFECC-SD approach employing the binary code matrix. Performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided when the total channel energy output from each sensor node is fixed.

  9. Recurrent network of perceptrons with three state synapses achieves competitive classification on real inputs

    Amit, Yali; Walker, Jacob


    We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feedforward connections. These are discrete three-state synapses and are updated based on a simple field-dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until convergence to a stable state. Classification is indicated by the subpopulation that is persistently activated. The contribution of this paper is two-fold. First, this is the first example of competitive classification rates on real data being achieved through recurrent dynamics in the attractor layer, which is only stable if recurrent inhibition is introduced. Second, we demonstrate that employing three-state synapses with feedforward inhibition is essential for achieving the competitive classification rates, due to the ability to effectively employ both positive and negative informative features. PMID:22737121

  10. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H


    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors was autocorrelation in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
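
The PCA-regularized LDA idea (project onto the leading principal components, then apply a pooled-covariance linear discriminant) can be sketched in a few lines of numpy. The synthetic "volumes" and all function names are our illustration, not the study's fMRI pipeline:

```python
import numpy as np

def pca_lda_classifier(X_train, y_train, n_components):
    """Fit PCA-regularized LDA: project onto the top principal components,
    then apply linear discriminant analysis in the reduced space."""
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA via SVD
    W = Vt[:n_components].T                              # projection matrix
    Z = Xc @ W
    classes = np.unique(y_train)
    means = np.array([Z[y_train == c].mean(axis=0) for c in classes])
    # pooled within-class covariance, slightly ridged for numerical stability
    cov = sum(np.cov(Z[y_train == c].T) for c in classes) / len(classes)
    cov += 1e-6 * np.eye(n_components)
    icov = np.linalg.inv(cov)

    def predict(X):
        Zt = (X - mean) @ W
        # linear discriminant score per class
        scores = np.stack(
            [Zt @ icov @ m - 0.5 * m @ icov @ m for m in means], axis=1)
        return classes[np.argmax(scores, axis=1)]
    return predict

# toy check on well-separated synthetic "volumes"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 100)),
               rng.normal(1.5, 1.0, (50, 100))])
y = np.array([0] * 50 + [1] * 50)
predict = pca_lda_classifier(X, y, n_components=5)
acc = (predict(X) == y).mean()
```

Projecting onto a handful of components before inverting the covariance is what keeps the discriminant well-conditioned when features far outnumber samples, the regime the abstract describes.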

  11. New classification of operators' human errors at overseas nuclear power plants and preparation of easy-to-use case sheets

    At nuclear power plants, plant operators examine human error cases, including those that occurred at other plants, so that they can learn from such experiences and avoid making similar errors again. Although there is little data available on errors made at domestic plants, nuclear operators in foreign countries report even minor irregularities and signs of faults, so a large amount of data on human errors at overseas plants could be collected and examined. However, these overseas data have not been used effectively because most of them are poorly organized or improperly classified and are often hard to understand. Accordingly, we carried out a study on cases of human error at overseas power plants in order to help plant personnel clearly understand overseas experiences and avoid repeating similar errors. The study produced the following results, which were put to use at nuclear power plants and other facilities. (1) ''One-Point-Advice'' refers to a practice whereby a leader gives pieces of advice to his team of operators in order to prevent human errors before starting work. Based on this practice and those used in the aviation industry, we developed a new method of classifying human errors that consists of four basic actions and three applied actions. (2) We used this new classification method to classify human errors made by operators at overseas nuclear power plants. The results show that the most frequent errors were caused not by the operators themselves but by insufficient team monitoring, for which superiors and/or colleagues were responsible. We therefore analyzed and classified possible factors contributing to insufficient team monitoring, and demonstrated that the frequent errors have also occurred at domestic power plants. (3) Using the new classification method, we prepared human error case sheets that are easy for plant personnel to understand. The sheets are designed to make the data more understandable and easier to remember

  12. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    Cohen, Aaron M.


    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...

  13. Stochastic analysis of multiple-passband spectral classification systems affected by observation errors

    Tsokos, C. P.


    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  14. Medication errors in outpatient setting of a tertiary care hospital: classification and root cause analysis

    Sunil Basukala


    Conclusions: Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Hence, a focus on easy-to-use and inexpensive techniques for medication error reduction should be adopted to have the greatest impact. [Int J Basic Clin Pharmacol 2015; 4(6): 1235-1240]

  15. Time Series Analysis of Temporal Data by Classification using Mean Absolute Error

    Swati Soni


    Full Text Available There has been a lot of research on the application of data mining and knowledge discovery technologies in the financial market prediction area. However, most of the existing research focused on mining structured or numeric data such as financial reports, historical quotes, etc. Another kind of data source, unstructured data such as financial news articles or comments on financial markets by experts, which is usually of much higher availability, seems to be neglected due to the inconvenience of representing it as numeric feature vectors for further applying data mining algorithms. A new hybrid system has been developed for this purpose. It retrieves financial news articles from the internet periodically and uses classification mining techniques to categorize those articles into different categories according to their expected effects on market behaviors; the results are then compared with real market data. This classification, with 10-fold cross-validation and a combination of algorithms, can be applied to financial market prediction in the future
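
The evaluation protocol mentioned, 10-fold cross-validation of a classifier, can be sketched generically. The nearest-centroid stand-in classifier and the synthetic feature vectors are our illustration, not the system's actual article categorizer:

```python
import numpy as np

def kfold_accuracy(X, y, fit, predict, k=10, seed=0):
    """Estimate classification accuracy with k-fold cross-validation:
    split the data into k folds, train on k-1 and test on the held-out fold."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        accs.append((predict(model, X[test]) == y[test]).mean())
    return float(np.mean(accs))

# nearest-centroid classifier as a simple stand-in for the article categorizer
def fit_centroid(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict_centroid(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 20)), rng.normal(2, 1, (60, 20))])
y = np.repeat([0, 1], 60)
acc = kfold_accuracy(X, y, fit_centroid, predict_centroid, k=10)
```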

  16. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Spinnato, J.; Roubaud, M.-C.; Burle, B.; Torrésani, B.


    Objective. The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.
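
As a small illustration of why wavelet coefficients make good localized discriminant features, a one-level Haar transform concentrates a transient's energy into a few detail coefficients. This is a simplified stand-in for the paper's discrete wavelet transform, spatial filtering and mixed-model machinery; the step signal is our own toy example:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # local averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # local differences
    return a, d

# a step-like transient produces a single large detail coefficient
# exactly at the step location: a localized, discriminant feature
signal = np.concatenate([np.zeros(7), np.ones(9)])
a, d = haar_dwt(signal)
```

Here the step falls inside the fourth sample pair, so only `d[3]` is nonzero; a classifier restricted to a few such coefficients needs far fewer parameters than one fed the raw signal.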

  17. Noise in remote-sensing systems - The effect on classification error

    Landgrebe, D. A.; Malaret, E.


    Several types of noise in remote-sensing systems are treated. The purpose is to provide enhanced understanding of the relationship of noise sources to both analysis results and sensor design. The context of optical sensors and spectral pattern recognition analysis methods is used to enable tractability for quantitative results. First, the concept of multispectral classification is reviewed. Next, stochastic models are discussed for both signals and noise, including thermal, shot and quantization noise along with atmospheric effects. A model enabling the study of the combined effect of these sources is presented, and a system performance index is defined. Theoretical results showing the interrelated effects of the noise sources on system performance are given. Results of simulations using the system model are presented for several values of system parameters, using some noise parameters of the Thematic Mapper scanner as an illustration. Results show the relative importance of each of the noise sources on system performance, including how sensor noise interacts with atmospheric effects to degrade accuracy.
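
The qualitative effect the abstract quantifies, noise degrading multispectral classification accuracy, can be reproduced with a toy minimum-distance classifier on two spectral classes under growing additive Gaussian noise. The class means, band count and noise levels are our own illustration, not the Thematic Mapper parameters used in the paper:

```python
import numpy as np

def accuracy_vs_noise(mu0, mu1, sigmas, n=2000, seed=0):
    """Minimum-distance classification accuracy for two spectral classes
    as the additive sensor-noise standard deviation grows."""
    rng = np.random.default_rng(seed)
    accs = []
    for s in sigmas:
        x0 = mu0 + rng.normal(0, s, (n, len(mu0)))
        x1 = mu1 + rng.normal(0, s, (n, len(mu1)))
        X = np.vstack([x0, x1])
        y = np.repeat([0, 1], n)
        d0 = ((X - mu0) ** 2).sum(axis=1)   # distance to class-0 mean
        d1 = ((X - mu1) ** 2).sum(axis=1)   # distance to class-1 mean
        accs.append((((d1 < d0).astype(int)) == y).mean())
    return accs

mu0 = np.zeros(4)
mu1 = np.full(4, 1.0)     # two class means one unit apart in each of 4 bands
accs = accuracy_vs_noise(mu0, mu1, sigmas=[0.2, 1.0, 5.0])
```

Accuracy falls monotonically from near 1.0 toward chance (0.5) as the noise level passes the between-class separation, which is the trade-off the paper's performance index makes precise.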

  18. Classification

    Clary, Renee; Wandersee, James


    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  19. Classification and Analysis of Human Errors Involved in Test and Maintenance-Related Unplanned Reactor Trip Events

    Test and maintenance (T and M) human errors involved in unplanned reactor trip events in Korean nuclear power plants were analyzed according to James Reason's basic error types, and the characteristics of the T and M human errors by error type were delineated by the distinctive nature of major contributing factors, error modes, and the predictivity of possible errors. Human errors due to a planning failure where a work procedure is provided are dominated by activities during low-power states or startup operations, and human errors due to a planning failure where a work procedure does not exist are dominated by corrective maintenance activities during full-power states. Human errors during execution of a planned work sequence show conspicuous error patterns; four error modes, 'wrong object', 'omission', 'too little', and 'wrong action', appeared to be dominant. In view of human error predictivity, human errors due to a planning failure are deemed very difficult to identify in advance, while human errors during execution are sufficiently predictable by using human error prediction or human reliability analysis methods with adequate resources

  20. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.


    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  1. Use of Total Precipitable Water Classification of A Priori Error and Quality Control in Atmospheric Temperature and Water Vapor Sounding Retrieval

    Eun-Han KWON; Jun LI; Jinlong LI; B. J. SOHN; Elisabeth WEISZ


    This study investigates the use of dynamic a priori error information according to atmospheric moistness and the use of quality controls in temperature and water vapor profile retrievals from hyperspectral infrared (IR) sounders. Temperature and water vapor profiles are retrieved from Atmospheric InfraRed Sounder (AIRS) radiance measurements by applying a physical iterative method using regression retrieval as the first guess. Based on the dependency of first-guess errors on the degree of atmospheric moistness, the a priori first-guess errors classified by total precipitable water (TPW) are applied in the AIRS physical retrieval procedure. Compared to the retrieval results from a fixed a priori error, boundary layer moisture retrievals appear to be improved via TPW classification of a priori first-guess errors. Six quality control (QC) tests, which check non-converged or bad retrievals, large residuals, high terrain and desert areas, and large temperature and moisture deviations from the first-guess regression retrieval, are also applied in the AIRS physical retrievals. Significantly large errors are found for the retrievals rejected by these six QCs, and the retrieval errors are substantially reduced via QC over land, which suggests the usefulness and high impact of the QCs, especially over land. In conclusion, the use of dynamic a priori error information according to atmospheric moistness, and the use of appropriate QCs dealing with geographical information and the deviation from the first guess as well as conventional inverse performance, are suggested to improve temperature and moisture retrievals and their applications.

  2. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan


    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
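
The regularizer's central quantity, the mutual information between classification responses and true labels, can be estimated from the empirical joint distribution. This minimal numpy sketch is our own, not the paper's differentiable entropy-estimation scheme:

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Empirical mutual information (in nats) between true and predicted
    labels, computed from the joint confusion distribution."""
    labels_t, labels_p = np.unique(y_true), np.unique(y_pred)
    joint = np.zeros((len(labels_t), len(labels_p)))
    for i, t in enumerate(labels_t):
        for j, p in enumerate(labels_p):
            joint[i, j] = np.mean((y_true == t) & (y_pred == p))
    pt = joint.sum(axis=1, keepdims=True)   # marginal of true labels
    pp = joint.sum(axis=0, keepdims=True)   # marginal of predictions
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pt @ pp)[nz])).sum())

y = np.repeat([0, 1], 50)
mi_perfect = mutual_information(y, y)            # responses fully informative
mi_none = mutual_information(y, np.zeros(100))   # responses carry no information
```

A perfect classifier attains the label entropy (here log 2 for balanced binary labels), while a constant classifier attains zero, which is exactly why maximizing this quantity pushes responses to be informative about the class.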

  3. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi `SWARS'

    Kumar, Somesh; Pratap Singh, Manu; Goel, Rajkumar; Lavania, Rajesh


    In this work, the performance of a feedforward neural network with a descent gradient of distributed error and the genetic algorithm (GA) is evaluated for the recognition of handwritten 'SWARS' of Hindi curve script. The performance index for the feedforward multilayer neural networks is considered here with distributed instantaneous unknown error, i.e. different error for different layers. The objective of the GA is to make the search process more efficient in determining the optimal weight vectors from the population. The GA is applied with the distributed error. The fitness function of the GA is considered as the mean square of the distributed error, which is different for each layer. Hence convergence is obtained only when the minimum of the different errors is determined. The analysis shows that the proposed method of a descent gradient of distributed error with the GA, known as the hybrid distributed evolutionary technique, performs better for the multilayer feedforward neural network in terms of accuracy, epochs and the number of optimal solutions for the given training and test pattern sets of the pattern recognition problem.
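
The GA half of such a hybrid can be sketched as a genetic search over network weight vectors with a squared-error fitness. This toy version evolves a single-layer perceptron on synthetic 2-D data (mutation only, no crossover or gradient step); all names, the population size and the task are our assumptions, not the paper's multilayer setup:

```python
import numpy as np

def ga_train(X, y, pop=40, gens=60, sigma=0.5, seed=0):
    """Minimal genetic search over perceptron weight vectors: fitness is
    the negated mean squared error; the best half survives each generation
    and produces mutated offspring."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 1, (pop, X.shape[1]))          # initial population
    for _ in range(gens):
        out = 1 / (1 + np.exp(-X @ P.T))             # sigmoid output per member
        mse = ((out - y[:, None]) ** 2).mean(axis=0)
        elite = P[np.argsort(mse)[: pop // 2]]       # keep the fittest half
        children = elite + rng.normal(0, sigma, elite.shape)  # mutate
        P = np.vstack([elite, children])
    out = 1 / (1 + np.exp(-X @ P.T))
    return P[np.argmin(((out - y[:, None]) ** 2).mean(axis=0))]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (40, 2)), rng.normal(2, 1, (40, 2))])
X = np.hstack([X, np.ones((80, 1))])                 # bias column
y = np.repeat([0.0, 1.0], 40)
w = ga_train(X, y)
acc = (((1 / (1 + np.exp(-X @ w))) > 0.5) == (y > 0.5)).mean()
```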

  4. The Effects of Motor Coordination Error Duration on Reaction Time and Motivational Achievement Tasks among Young Romanian Psychology Students

    Mihai Aniţei; Mihaela Chraif


    The present study focuses on highlighting the effects of motor coordination error duration on reaction time to multiple stimuli, on motivation from competition, and on motivation from personal goals among young psychology students. Method: the participants were 65 undergraduate students, aged between 19 and 24 years (m = 21.65; S.D. = 1.49), 32 male and 33 female, all from the Faculty of Psychology and Educational Sciences, University of Bucharest, Romania. Instruments were the Determination...

  5. Comparison of maintenance workers' human error events that occurred at United States and domestic nuclear power plants. Proposal of a classification method for insufficient knowledge and experience and the results of its application

    Human errors by maintenance workers in U.S. nuclear power plants were compared with those in Japanese nuclear power plants for the same period in order to identify the characteristics of such errors. As for U.S. events, cases which occurred during 2006 were selected from the Nuclear Information Database of the Institute of Nuclear Safety System, while Japanese cases that occurred during the same period were extracted from the Nuclear Information Archives (NUCIA) owned by JANTI. The most common cause of human errors was 'insufficient knowledge or experience', accounting for about 40% of U.S. cases and 50% or more of cases in Japan. To break down 'insufficient knowledge', we classified the contents of knowledge into five categories, 'method', 'nature', 'reason', 'scope' and 'goal', and classified the level of knowledge into four categories: 'known', 'comprehended', 'applied' and 'analytic'. Using this classification, the patterns of combination of each content item and knowledge level were compared. In the U.S. cases, errors due to insufficient knowledge of 'nature' and insufficient knowledge of 'method' were prevalent, while the three other items, 'reason', 'scope' and 'goal', which involve work conditions, rarely occurred. In Japan, errors arising from 'nature' not being comprehended were rather prevalent, while other cases were distributed evenly across all categories including the work conditions. For addressing 'insufficient knowledge or experience', we consider the following approaches valid: according to the knowledge level required for the work, knowledge should be reflected in procedures and education materials, and training, confirmation of the level of understanding, virtual practice and instruction through experience should be implemented. As for knowledge of work conditions, it is necessary to enter the work conditions in the procedures and education materials while conducting training or education. (author)

  6. Collection and classification of human error and human reliability data from Indian nuclear power plants for use in PSA

    Complex systems such as NPPs involve a large number of Human Interactions (HIs) in every phase of plant operations. Human Reliability Analysis (HRA), in the context of a PSA, attempts to model the HIs and evaluate/predict their impact on safety and reliability using human error/human reliability data. A large number of HRA techniques have been developed for modelling and integrating HIs into PSA, but there is a significant lack of HRA data. In the face of insufficient data, human reliability analysts have had to resort to expert judgement methods in order to extend the insufficient data sets. In this situation, the generation of data from plant operating experience assumes importance. The development of an HRA data bank for Indian nuclear power plants was therefore initiated as part of the programme of work on HRA. Later, with the establishment of the coordinated research programme (CRP) on collection of human reliability data and use in PSA by IAEA in 1994-95, the development was carried out under the aegis of IAEA research contract No. 8239/RB. The work described in this report covers the development of a data taxonomy and a human error reporting form (HERF) based on it, data structuring, review and analysis of plant event reports, collection of data on human errors, analysis of the data and calculation of human error probabilities (HEPs). Analysis of plant operating experience does yield a good amount of qualitative data, but obtaining quantitative data on human reliability in the form of HEPs is more difficult. The difficulties have been highlighted and some ways to bring about improvements in the data situation have been discussed. The implementation of a data system for HRA is described and useful features that can be incorporated in future systems are also discussed. (author)

  7. Discriminative Structured Dictionary Learning for Image Classification

    王萍; 兰俊花; 臧玉卫; 宋占杰


    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.
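
The three error terms the abstract integrates can be written down directly. This sketch only evaluates the objective for given matrices; the symbols, the mask encoding of "inhomogeneous representation", and the weights `lam`/`mu` are our notation, not the paper's, and the actual optimization (feature-sign search plus Lagrange dual) is not reproduced:

```python
import numpy as np

def structured_dict_objective(X, D, A, W, Y, mask, lam=1.0, mu=1.0):
    """Three-term objective: reconstruction error ||X - DA||_F^2,
    classification error ||Y - WA||_F^2, and inhomogeneous representation
    error penalizing sparse-code entries outside each sample's own
    class sub-dictionary (selected by `mask`)."""
    rec = np.linalg.norm(X - D @ A) ** 2
    cls = np.linalg.norm(Y - W @ A) ** 2
    inhom = np.linalg.norm(A * (1 - mask)) ** 2
    return rec + lam * cls + mu * inhom

# tiny exact check: when X = DA, Y = WA and the mask keeps every atom,
# all three terms vanish; zeroing the mask charges the full code energy
D = np.eye(2); A = np.eye(2); W = np.eye(2)
obj_zero = structured_dict_objective(D @ A, D, A, W, W @ A, np.ones((2, 2)))
obj_inhom = structured_dict_objective(D @ A, D, A, W, W @ A, np.zeros((2, 2)))
```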

  8. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.


    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery to achieve the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  9. Sampling method for monitoring classification of cultivated land in county area based on Kriging estimation error%基于Kriging估计误差的县域耕地等级监测布样方法

    杨建宇; 汤赛; 郧文聚; 张超; 朱德海; 陈彦清


    China, an agricultural country, has a large population but limited cultivated land. As of 2011, cultivated land per capita was 1.38 mu (0.09 ha), only 40% of the world average, and the situation is worsening with industrialization and urbanization. The next task for the Ministry of Land and Resources is the dynamic monitoring of cultivated land classification, in which a number of counties will be sampled; in each county, a sample-based monitoring network would be established that reflects the distribution of cultivated land classification and its tendency in the county area and provides estimates for non-sampled locations. Due to the correlation among samples, traditional methods such as simple random sampling, stratified sampling, and systematic sampling are insufficient to achieve this goal. Therefore, in this paper we introduce a spatial sampling method based on the Kriging estimation error. For our case, natural classifications of cultivated land identified from the last Land Resource Survey and Cultivated Land Evaluation are regarded as the true values, and classifications of non-sampled cultivated lands are predicted by interpolating the sample data. Finally, the RMSE (root-mean-square error) of the Kriging interpolation is redefined to measure the performance of the network. Specifically, five steps are needed for the monitoring network. First, the optimal sample size is determined by analyzing the variation trend between the number and the accuracy of samples. Then, the basic monitoring network is set up using square grids. A suitable grid size can be chosen by comparing grid sizes and the corresponding RMSEs from the Kriging interpolation of the sample data. Because some centers of grids do not overlap the area of cultivated land, the third step is to add some points near the centers of grids to create the global monitoring network. These points are selected from centroids of cultivated land spots which are closest to the centers and inside the searching circles around the
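
The quantity driving the sampling design, the Kriging estimation error at non-sampled locations, comes out of the ordinary kriging system. A minimal 1-D sketch with an assumed exponential covariance model (the covariance function, sample values and locations are our illustration, not the paper's variogram fit):

```python
import numpy as np

def ordinary_kriging(xs, ys, xq, cov):
    """Ordinary kriging predictor and estimation variance at query points
    xq, given sample locations xs, values ys, and a covariance function
    cov(h) of separation distance h."""
    n = len(xs)
    # kriging system with a Lagrange multiplier for the unbiasedness constraint
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(xs[:, None] - xs[None, :]))
    K[n, n] = 0.0
    preds, variances = [], []
    for x0 in xq:
        k = np.append(cov(np.abs(xs - x0)), 1.0)
        w = np.linalg.solve(K, k)
        preds.append(w[:n] @ ys)              # BLUE prediction
        variances.append(cov(0.0) - w @ k)    # kriging (estimation) variance
    return np.array(preds), np.array(variances)

cov = lambda h: np.exp(-np.asarray(h, dtype=float))   # exponential model
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.0, 1.5, 1.0])
preds, var = ordinary_kriging(xs, ys, np.array([0.0, 1.5]), cov)
```

At a sampled location the predictor is exact and the variance is zero; between samples the variance is positive, and aggregating these variances into an RMSE is what lets the network design trade sample count against estimation accuracy.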

  10. Classification with High-Dimensional Sparse Samples

    Huang, Dayu


    The task of the binary classification problem is to determine which of two distributions has generated a length-$n$ test sequence. The two distributions are unknown; however, two training sequences of length $N$, one from each distribution, are observed. The distributions share an alphabet of size $m$, which is significantly larger than $n$ and $N$. How do $N, n, m$ affect the probability of classification error? We characterize the achievable error rate in a high-dimensional setting in which $N, n, m$ all tend to infinity and $\max\{n,N\}=o(m)$. The results are: * There exists an asymptotically consistent classifier if and only if $m=o(\min\{N^2,Nn\})$. * The best achievable probability of classification error decays as $-\log(P_e)=J \min\{N^2, Nn\}(1+o(1))/m$ with $J>0$ (shown by achievability and converse results). * A weighted coincidence-based classifier has a non-zero generalized error exponent $J$. * The $\ell_2$-norm based classifier has a zero generalized error exponent.
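
The coincidence idea can be illustrated with a deliberately simplified toy: count symbol co-occurrences between the test sequence and each training sequence and pick the larger count. This unweighted version is our illustration only; the paper's classifier uses a specific weighting of coincidence counts to obtain the stated error exponent:

```python
import numpy as np

def coincidence_classifier(train0, train1, test, m):
    """Toy coincidence-based classifier over an alphabet of size m:
    assign the test sequence to the training sequence with which it
    shares more symbol co-occurrences."""
    c0 = np.bincount(train0, minlength=m)
    c1 = np.bincount(train1, minlength=m)
    ct = np.bincount(test, minlength=m)
    s0 = int((c0 * ct).sum())   # coincidences with training sequence 0
    s1 = int((c1 * ct).sum())   # coincidences with training sequence 1
    return 0 if s0 >= s1 else 1

rng = np.random.default_rng(0)
m = 1000                                  # alphabet much larger than N, n
# two sparse distributions supported on disjoint halves of the alphabet
train0 = rng.integers(0, m // 2, 200)
train1 = rng.integers(m // 2, m, 200)
test = rng.integers(0, m // 2, 100)       # drawn from distribution 0
label = coincidence_classifier(train0, train1, test, m)
```

In this large-alphabet regime most symbols are seen at most once, so raw empirical distributions are useless and shared coincidences are essentially the only usable signal, which is the intuition behind the paper's exponents.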

  11. Achieving the "triple aim" for inborn errors of metabolism: a review of challenges to outcomes research and presentation of a new practice-based evidence framework.

    Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania


    Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases. PMID:23222662

  12. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to the preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.

  13. Achievements in mental health outcome measurement in Australia: Reflections on progress made by the Australian Mental Health Outcomes and Classification Network (AMHOCN)

    Burgess Philip


    Full Text Available Abstract Background Australia’s National Mental Health Strategy has emphasised the quality, effectiveness and efficiency of services, and has promoted the collection of outcomes and casemix data as a means of monitoring these. All public sector mental health services across Australia now routinely report outcomes and casemix data. Since late 2003, the Australian Mental Health Outcomes and Classification Network (AMHOCN) has received, processed, analysed and reported on outcome data at a national level, and played a training and service development role. This paper documents the history of AMHOCN’s activities and achievements, with a view to providing lessons for others embarking on similar exercises. Method We conducted a desktop review of relevant documents to summarise the history of AMHOCN. Results AMHOCN has operated within a framework that has provided an overarching structure to guide its activities but has been flexible enough to allow it to respond to changing priorities. With no precedents to draw upon, it has undertaken activities in an iterative fashion with an element of ‘trial and error’. It has taken a multi-pronged approach to ensuring that data are of high quality: developing innovative technical solutions; fostering ‘information literacy’; maximising the clinical utility of data at a local level; and producing reports that are meaningful to a range of audiences. Conclusion AMHOCN’s efforts have contributed to routine outcome measurement gaining a firm foothold in Australia’s public sector mental health services.

  14. Band Selection and Classification of Hyperspectral Images using Mutual Information: An algorithm based on minimizing the error probability using the inequality of Fano

    Sarhrouni, ELkebir; Hammouch, Ahmed; Aboutajdine, Driss


    A hyperspectral image is a stack of more than a hundred images, called bands, of the same region, taken at juxtaposed frequencies. The reference image of the region is called the Ground Truth map (GT). The problem is how to find the bands best suited to classifying the pixels of regions, because the bands can be not only redundant but also a source of confusion, thus decreasing the accuracy of classification. Some methods use Mutual Information (MI) and a threshold to select relevant bands. ...

  15. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi


    To address the low classification accuracy and low utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method, which is based on the Gabor spatial texture features and nonparametric weighted spectral features, and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
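The final step above, assigning a pixel to the class whose dictionary reconstructs it with the smallest error, can be sketched as follows. This is a simplified stand-in: it uses plain least squares per class rather than the l1-regularized sparse coding of true SRC, and the dictionaries are random placeholders rather than learned Gabor/NWSF features.

```python
import numpy as np

def residual_classify(x, class_dicts):
    """Assign x to the class whose dictionary reconstructs it with the
    smallest residual. Real SRC solves an l1-regularized sparse coding
    problem; this sketch uses plain least squares per class."""
    errors = []
    for D in class_dicts:
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        errors.append(np.linalg.norm(x - D @ coef))
    return int(np.argmin(errors))

rng = np.random.default_rng(0)
D0 = rng.normal(size=(20, 5))   # dictionary atoms for class 0
D1 = rng.normal(size=(20, 5))   # dictionary atoms for class 1
x = D1 @ rng.normal(size=5)     # sample generated from class-1 atoms
print(residual_classify(x, [D0, D1]))  # → 1
```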

  16. Análisis y Clasificación de Errores Cometidos por Alumnos de Secundaria en los Procesos de Sustitución Formal, Generalización y Modelización en Álgebra (Secondary Students´ Error Analysis and Classification in Formal Substitution, Generalization and Modelling Process in Algebra

    Raquel M. Ruano


    Full Text Available We present a study with secondary students on three specific processes of algebraic language: formal substitution, generalization, and modelling. From the responses to a questionnaire, we develop a classification of the errors made and analyze their possible origins. Finally, we draw some didactical consequences from these results.

  17. Ovarian Cancer Classification based on Mass Spectrometry Analysis of Sera

    Baolin Wu


    Full Text Available In our previous study [1], we compared the performance of a number of widely used discrimination methods for classifying ovarian cancer using Matrix Assisted Laser Desorption Ionization (MALDI) mass spectrometry data on serum samples obtained in Reflectron mode. Our results demonstrate good performance with a random forest classifier. In this follow-up study, to improve the molecular classification power of the MALDI platform for ovarian cancer disease, we expanded the mass range of the MS data by adding data acquired in Linear mode and evaluated the resultant decrease in classification error. A general statistical framework is proposed to obtain unbiased classification error estimates and to analyze the effects of sample size and number of selected m/z features on classification errors. We also emphasize the importance of combining biological knowledge and statistical analysis to obtain both biologically and statistically sound results. Our study shows improvement in classification accuracy upon expanding the mass range of the analysis. In order to obtain the best classification accuracies possible, we found that a relatively large training sample size is needed to overcome sample variation. For the ovarian MS dataset that is the focus of the current study, our results show that approximately 20-40 m/z features are needed to achieve the best classification accuracy from MALDI-MS analysis of sera. Supplementary information can be found at
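The framework's central point, that feature selection and tuning must happen inside the resampling loop for the error estimate to stay (nearly) unbiased, can be sketched generically. A numpy-only illustration with a toy nearest-centroid classifier; all function names are illustrative, not from the paper.

```python
import numpy as np

def cv_error(X, y, fit, predict, k=5, seed=0):
    """k-fold cross-validated error: any feature selection or tuning must
    happen inside `fit`, on training folds only, to avoid selection bias."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = fit(X[train], y[train])
        errs.append(np.mean(predict(model, X[fold]) != y[fold]))
    return float(np.mean(errs))

# Toy classifier: nearest class centroid.
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[np.argmin(d, axis=0)]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(cv_error(X, y, fit, predict))  # near 0 on well-separated classes
```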

  18. The Sources of Error in Spanish Writing.

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.


    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds the diverse forms of only 20 Spanish words produces 36% of the spelling errors in Spanish; and substitution is the most frequent type of error. (RS)

  19. Modulation classification based on spectrogram


    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated and evaluated by theoretical analysis and extensive simulation studies. The method using moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arriving sequence of symbols.

  20. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U.


    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classif...

  1. An Analysis of Classification of Psychological Verb Errors of Thai Students Learning Chinese%泰国留学生汉语心理动词偏误类型分析



    Based on a large corpus of collected error data, this paper presents the author's qualitative and quantitative research on the classification of errors made by Thai students learning Chinese psychological verbs. The analysis identifies two categories of errors: 1) lexical errors and 2) collocation errors. This paper focuses on the former, i.e. omission, redundancy and wrong substitution of psychological verbs.

  2. Output and error messages

    This document describes the output data and output files that are produced by the SYVAC A/C 1.03 computer program. It also covers the error messages generated by incorrect input data, and the run classification procedure. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  3. A New Classification Approach Based on Multiple Classification Rules

    Zhongmei Zhou


    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  4. Earthquake classification, location, and error analysis in a volcanic environment: implications for the magmatic system of the 1989-1990 eruptions at redoubt volcano, Alaska

    Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.


    Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the HYPOELLIPSE earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km.
After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of

  5. The Usability-Error Ontology

    Elkin, Peter L.; Beuscart-zephir, Marie-Catherine; Pelayo, Sylvia;


    ...ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  6. Error estimation for pattern recognition

    Braga Neto, U


    This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include: the latest results on the accuracy of error estimation; performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches; and highly interactive computer-based exercises and end-of-chapter problems.

  7. Agricultural Land Use classification from Envisat MERIS

    Brodsky, L.; Kodesova, R.


    This study focuses on evaluation of a crop classification from middle-resolution images (Envisat MERIS) at national level. The main goal of such a Land Use product is to provide spatial data for optimisation of monitoring of surface and groundwater pollution in the Czech Republic caused by pesticide use in agriculture. As there is a lack of spatial data on pesticide use and distribution, the localisation can be done according to the crop cover on arable land derived from remote sensing images. Often high resolution data are used for agricultural Land Use classification, but only at regional or local level. Envisat MERIS data, due to the wide satellite swath, can be used also at national level. The high temporal and also spectral resolution of MERIS data is an indisputable advantage for crop classification. A methodology of pixel-based MERIS classification applying an artificial neural network (ANN) technique was proposed and performed at national level for the Czech Republic. Five crop groups were finally selected - winter crops, spring crops, summer crops and other crops to be classified. Classification models included a linear, radial basis function (RBF) and a multi-layer perceptron (MLP) ANN with 50 networks tested in training. The training data set consisted of about 200 samples per class, on which bootstrap resampling was applied. Selection of a subset of independent variables (MERIS spectral channels) was used in the procedure. The best selected ANN model (MLP: 3 in, 13 hidden, 3 out) resulted in very good performance (correct classification rate 0.974, error 0.103) applying the three-crop-type data set. In the next step the data set with five crop types was evaluated. The ANN model (MLP: 5 in, 12 hidden, 5 out) performance was also very good (correct classification rate 0.930, error 0.370). The study showed that, while accuracy of about 80 % was achieved at pixel level when classifying only three crops, accuracy of about 70 % was achieved for five crop
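A pipeline of the general shape described, a small MLP trained on a handful of spectral inputs and scored by cross-validation, can be sketched as below. The synthetic data stand in for MERIS band reflectances and the layer sizes are illustrative (scikit-learn assumed); this is not the authors' model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Stand-in for MERIS band reflectances: 13 features, 3 crop classes,
# roughly 200 samples per class as in the study.
X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           n_classes=3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(13,),  # one hidden layer, 13 units
                    max_iter=2000, random_state=0)
acc = cross_val_score(clf, X, y, cv=3).mean()
print(f"correct classification rate: {acc:.3f}")
```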

  8. Research on Software Error Behavior Classification Based on Software Failure Chain%基于软件失效链的软件错误行为分类研究

    刘义颖; 江建慧


    Software applications are widespread and the requirements on software reliability are higher and higher. It is necessary to study the software defect-error-failure process, to prevent failures in advance and to reduce the losses they cause. Studying the attributes that describe software error behavior helps to describe different error behaviors uniquely, helps developers communicate about this field, and provides support for software fault pattern libraries, software fault prediction and software fault injection. Based on software failure chain theory, this paper analyzes the causal chain formed by software defects, errors and failures, and further analyzes the relationships between the attribute sets that describe the anomalies of each stage. Building on the existing IEEE software anomaly classification standard, an error attribute set is derived from the defect attribute set and the failure attribute set, a classification method for software error behaviors is given together with the attribute sets and reference values, and the rationality of the attributes is verified experimentally using an attribute reduction algorithm based on the criteria of minimal mutual information and maximal dependency.

  9. Sparse group lasso and high dimensional multinomial classification

    Vincent, Martin; Hansen, N.R.


    The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...

  10. Nominal classification

    Senft, G.


    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.

  11. Error analysis in laparoscopic surgery

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.


    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  12. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U


    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft...

  13. Signal Classification for Acoustic Neutrino Detection

    Neff, M; Enzenhöfer, A; Graf, K; Hößl, J; Katz, U; Lahmann, R; Richardt, C


    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosting Trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.
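The pattern described, extracted signal features fed to a strong ensemble classifier and scored on a held-out test set, can be sketched as follows. Synthetic data stand in for the acoustic features, and the parameters are illustrative (scikit-learn assumed); this is not the collaboration's code.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for extracted acoustic features: a synthetic two-class problem
# (neutrino-like signal vs. transient background).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
test_error = 1.0 - clf.score(X_te, y_te)
print(f"test error: {test_error:.3f}")
```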

  14. On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data

    Richards, Joseph W; Butler, Nathaniel R; Bloom, Joshua S; Brewer, John M; Crellin-Quick, Arien; Higgins, Justin; Kennedy, Rachel; Rischard, Maxime


    With the coming data deluge from synoptic surveys, there is a growing need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly-observed variables based on a small number of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics ("features"), detail methods to robustly estimate periodic light-curve features, introduce tree-ensemble methods for accurate variable star classification, and show how to rigorously evaluate the classification results using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% overall classification error using the random forest classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying sam...

  15. Refractive Errors

    ... the eye keeps you from focusing well. The cause could be the length of the eyeball (longer or shorter), changes in the shape of the cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  16. Medication Errors


  17. Robust Transmission of H.264/AVC Video Using Adaptive Slice Grouping and Unequal Error Protection

    Thomos, Nikolaos; Argyropoulos, Savvas; Nikolaos V. Boulgouris; Michael G. Strintzis


    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. The optimal classification of macroblocks into slice groups and the optimal channel rate allocation are achieved by iterating two interdependent steps. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms...

  18. Sparse group lasso and high dimensional multinomial classification

    Vincent, Martin


    We present a coordinate gradient descent algorithm for solving the sparse group lasso optimization problem with a broad class of convex loss functions. Convergence of the algorithm is established, and we use it to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples we find that multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. For the current implementation the time to compute the sparse group lasso solution is of the same order of magnitude as for the multinomial lasso algorithm as implemented in the R-package glmnet, and the implementation scales well with the problem size. One of the examples considered is a 50-class classification problem with 10k features, which amounts to estimating 500k parameters. The implementation is provided as an R package.
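The sparse group lasso itself has no stock scikit-learn implementation; as a hedged stand-in, plain multinomial l1 (lasso) logistic regression, the baseline the paper compares against, illustrates the feature-sparsity behavior being discussed. scikit-learn is assumed and the parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic multiclass problem: only a few of the 50 features are informative.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
# Multinomial lasso: l1 penalty drives most coefficients exactly to zero.
clf = LogisticRegression(penalty='l1', solver='saga', C=0.5,
                         max_iter=5000).fit(X, y)
# A feature is "included" if any class uses it with a nonzero coefficient.
n_used = int(np.sum(np.any(clf.coef_ != 0, axis=0)))
print(f"features used: {n_used} of 50")
```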

  19. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.


    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project set the objectives of quantifying the errors associated with the use of cup anemometers, and of developing a classification system for quantification of systematic errors of cup anemometers. This classification system has now been imple...

  20. Error calculations statistics in radioactive measurements

    Basic approach and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. The concept of error, and its classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions (Bernoulli, Poisson, Gauss), the t-test distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test

  1. Multiple sparse representations classification

    Plenge, Esben; Klein, Stefan; Niessen, Wiro; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small...

  2. Multiple Sparse Representations Classification

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surro...

  3. Bayesian Classification in Medicine: The Transferability Question *

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann


    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  4. Game Design Principles based on Human Error

    Guilherme Zaffari


    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core design, adaptations are needed, since challenge is an important factor for fun and, from the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  5. Errors in practical measurement in surveying, engineering, and technology

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  6. Single-trial classification of gait intent from human EEG



    Full Text Available Neuroimaging studies provide evidence of cortical involvement immediately before and during gait and during gait-related behaviors such as stepping in place or motor imagery of gait. Here we attempt to perform single-trial classification of gait intent from another movement plan (point intent) or from standing in place. Subjects walked naturally from a starting position to a designated ending position, pointed at a designated position from the starting position, or remained standing at the starting position. The 700 ms of recorded EEG before movement onset was used for single-trial classification of trials based on action type and direction (left walk, forward walk, right walk, left point, right point, and stand) as well as action type regardless of direction (stand, walk, point). Classification using regularized LDA was performed on a PCA-reduced feature space composed of levels 1-9 coefficients from a discrete wavelet decomposition using the Daubechies 4 wavelet. We achieved significant classification for all conditions, with errors as low as 17% when averaged across nine subjects. LDA and PCA highly weighted frequency ranges that included MRPs, with smaller contributions from frequency ranges that included mu and beta idle motor rhythms. Additionally, error patterns suggested a spatial structure to the EEG signal. Future applications of the cortical gait intent signal may include an additional dimension of control for prosthetics, preemptive corrective feedback for gait disturbances, or human computer interfaces.
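The classification pipeline described, dimensionality reduction by PCA followed by regularized ("shrinkage") LDA with cross-validation, can be sketched as follows. The digits dataset stands in for the wavelet-coefficient features of pre-movement EEG, and the component count is illustrative (scikit-learn assumed).

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in features: digit images in place of wavelet coefficients.
X, y = load_digits(return_X_y=True)
clf = make_pipeline(
    PCA(n_components=30),                      # reduce the feature space
    LinearDiscriminantAnalysis(solver='lsqr',  # regularized (shrinkage) LDA
                               shrinkage='auto'))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"mean CV error: {1 - acc:.3f}")
```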

  7. Rademacher Complexity in Neyman-Pearson Classification

    Min HAN; Di Rong CHEN; Zhao Xu SUN


    The Neyman-Pearson (NP) criterion is one of the most important approaches in hypothesis testing. It is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.
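For reference, the empirical Rademacher average of a function class $\mathcal{F}$ on a sample $x_1, \dots, x_n$, and the standard textbook bound it yields (the global, non-localized form; the paper's NP-specific localized bounds refine this):

```latex
\hat{\mathcal{R}}_n(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}}
      \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i)\right],
\qquad \sigma_i \in \{-1,+1\} \text{ i.i.d.\ uniform,}
```

and, with probability at least $1-\delta$ over the sample,

```latex
\sup_{f \in \mathcal{F}}
  \left|\, \mathbb{E} f - \frac{1}{n}\sum_{i=1}^{n} f(x_i) \,\right|
\;\le\; 2\,\hat{\mathcal{R}}_n(\mathcal{F})
      + 3\sqrt{\frac{\ln(2/\delta)}{2n}}.
```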

  8. A deep learning approach to the classification of 3D CAD models

    Fei-wei QIN; Lu-ye LI; Shu-ming GAO; Xiao-ling YANG; Xiang CHEN


    Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone. At the same time, automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are selected and extracted from 3D CAD models first, and then pre-processed as high dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.

  9. Error Analysis in Composition of Iranian Lower Intermediate Students

    Taghavi, Mehdi


    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…


  11. Automated valve condition classification of a reciprocating compressor with seeded faults: experimentation and validation of classification strategy

    Lin, Yih-Hwang; Liu, Huai-Sheng; Wu, Chung-Yung


    This paper deals with automatic valve condition classification of a reciprocating compressor with seeded faults. The seeded faults are considered based on observation of valve faults in practice. They include the misplacement of valve and spring plates, incorrect tightness of the bolts for the valve cover or valve seat, softening of the spring plate, and cracked or broken spring plate or valve plate. The seeded faults represent various stages of machine health condition, and it is crucial to be able to correctly classify the conditions so that preventative maintenance can be performed before catastrophic breakdown of the compressor occurs. Considering the non-stationary characteristics of the system, time-frequency analysis techniques are applied to obtain the vibration spectrum as time develops. A data reduction algorithm is subsequently employed to extract the fault features from the formidable amount of time-frequency data, and finally the probabilistic neural network is utilized to automate the classification process without the intervention of human experts. This study shows that the use of modification indices, as opposed to the original indices, greatly reduces the classification error, from about 80% down to about 20% misclassification for the 15 fault cases. Correct condition classification can be further enhanced if the use of similar fault cases is avoided. It is shown that a 6.67% classification error is achievable when using the short-time Fourier transform and the mean variation method for the case of seven seeded faults with 10 training samples used. A stunning 100% correct classification can even be realized when the neural network is well trained, with 30 training samples being used.
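The processing chain in the abstract (short-time spectral features, data reduction, probabilistic neural network) can be illustrated on synthetic signals. The window sizes, test frequencies, and two-class setup below are invented for the sketch and do not reproduce the authors' compressor data:

```python
import numpy as np

def stft_features(sig, win=64, hop=32):
    """Mean magnitude spectrum over short-time windows: a crude STFT-based
    feature vector that also performs data reduction by averaging frames."""
    frames = [sig[i:i + win] for i in range(0, len(sig) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames) * np.hanning(win), axis=1))
    return spec.mean(axis=0)

def pnn_classify(x, class_templates, sigma=5.0):
    """Probabilistic neural network: Parzen density estimate per class
    (Gaussian kernel over stored templates), pick the most probable class."""
    scores = []
    for templates in class_templates:
        d2 = ((templates - x) ** 2).sum(axis=1)
        scores.append(np.exp(-d2 / (2 * sigma ** 2)).mean())
    return int(np.argmax(scores))

t = np.arange(512) / 512.0
healthy = [stft_features(np.sin(2 * np.pi * 5 * t + p)) for p in (0.0, 0.7)]
faulty  = [stft_features(np.sin(2 * np.pi * 40 * t + p)) for p in (0.0, 0.7)]

query = stft_features(np.sin(2 * np.pi * 5 * t + 1.3))   # unseen "healthy" signal
label = pnn_classify(query, [np.array(healthy), np.array(faulty)])
```

The real method also derives the modification indices discussed above; this sketch only shows the STFT-feature plus PNN skeleton.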

  12. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.


    ...the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annexes I and J. The... classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the... the classification process in order to assess the robustness of methods. The results of the analysis are presented as classification indices, which are compared and discussed...

  13. Medication errors: prescribing faults and prescription errors

    Velo, Giampaolo P; Minuz, Pietro


    Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and ...

  14. Network error correction with unequal link capacities

    Kim, Sukwon; Ho, Tracey; Effros, Michelle; Avestimehr, Amir Salman


    We study network error correction with unequal link capacities. Previous results on network error correction assume unit link capacities. We consider network error correction codes that can correct arbitrary errors occurring on up to z links. We find the capacity of a network consisting of parallel links, and a generalized Singleton outer bound for any arbitrary network. We show by example that linear coding is insufficient for achieving capacity in general. In our exampl...

  15. PSG-Based Classification of Sleep Phases

    Králík, M.


    This work is focused on the classification of sleep phases using an artificial neural network. An unconventional approach was used to calculate classification features from the polysomnographic (PSG) data of real patients. This approach makes it possible to increase the time resolution of the analysis and, thus, to achieve more accurate classification results.

  16. Audio Classification from Time-Frequency Texture

    Yu, Guoshen; Slotine, Jean-Jacques


    Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.

  17. Six-port error propagation

    Stelzer, Andreas; Diskus, Christian G.


    In this contribution the various influences on the accuracy of a near range precision radar are described. The front-end is a monostatic design operating at 34 - 36.2 GHz. The hardware configuration enables different modes of operation including FM-CW and interferometric modes. To achieve a highly accurate distance measurement, attention must be paid to various error sources. Due to the use of a six-port it is rather complicated to determine the corresponding error propagation. In the following the results of investigations on how to achieve an exceptional accuracy of +/- 0.1 mm are described.

  18. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  19. Expected energy-based restricted Boltzmann machine for classification.

    Elfwing, S; Uchibe, E; Doya, K


    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy-based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient descent using a mean-squared-error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. PMID:25318375
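The quantity swap the study proposes can be written down directly: the standard RBM negative free energy versus a negative expected energy output. The network sizes and random parameters below are illustrative; only the two formulas follow the FE-RBM/EE-RBM definitions sketched in the abstract:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negative_free_energy(v, b, c, W):
    """-F(v) = b.v + sum_j log(1 + exp(c_j + W_j . v)): the FE-RBM output."""
    pre = c + v @ W
    return v @ b + np.logaddexp(0.0, pre).sum()

def negative_expected_energy(v, b, c, W):
    """EE-RBM-style output: replace softplus(.) by p_j * (.), p_j = sigmoid(.),
    i.e. the expected (negative) energy under p(h|v)."""
    pre = c + v @ W
    return v @ b + (sigmoid(pre) * pre).sum()

rng = np.random.default_rng(1)
v = rng.integers(0, 2, size=12).astype(float)   # input bits plus one-hot class bits
b = rng.normal(scale=0.1, size=12)              # visible biases
c = rng.normal(scale=0.1, size=6)               # hidden biases
W = rng.normal(scale=0.1, size=(12, 6))         # visible-hidden weights

out_fe = negative_free_energy(v, b, c, W)
out_ee = negative_expected_energy(v, b, c, W)
```

Since softplus(x) ≥ x·sigmoid(x) for every real x, the free-energy output always dominates the expected-energy output term by term; the training loop (SGD on a mean-squared-error objective) is omitted here.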

  20. Memory efficient hierarchical error diffusion

    He, Zhen; Fan, Zhigang


    Hierarchical Error Diffusion (HED) developed in [14] yields high-quality color halftones by explicitly designing three critical factors: dot overlapping, positioning, and coloring. However, HED requires a larger error memory buffer than conventional error diffusion algorithms, since the pixel error is diffused in the dot-color domain instead of the colorant domain. This can potentially be an issue for certain low-cost hardware implementations. This paper develops a memory-efficient HED algorithm (MEHED). To achieve this goal, the pixel error in the dot-color domain is converted backward and diffused to future pixels in the input colorant domain, say CMYK for print applications. Since the error-augmented pixel value is no longer bounded within the range [0, 1.0], the dot overlapping control algorithm developed in [14] needs to be generalized to coherently handle pixel densities outside the normal range. The key is to carefully split the modified pixel density into three parts: negative, regular, and surplus densities. The determination of the regular and surplus densities needs to depend on the density of the K channel, in order to maintain local color and avoid halftone texture artifacts. The resulting dot-color densities serve as the input to the hierarchical thresholding and coloring steps to generate the final halftone output. Experimental results demonstrate that MEHED achieves similar image quality compared to HED.
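For readers unfamiliar with the base mechanism, here is a conventional single-channel Floyd-Steinberg error diffusion sketch. HED/MEHED operate in the dot-color domain with the density-splitting logic described above, which this minimal example makes no attempt to reproduce:

```python
import numpy as np

def floyd_steinberg(img):
    """Classic single-channel error diffusion: quantize each pixel to 0/1 and
    push the quantization error onto unprocessed neighbors (7/16, 3/16, 5/16, 1/16)."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.4)       # flat 40% gray patch
halftone = floyd_steinberg(gray)    # binary output whose local mean tracks the input
```

The error buffer here is just the image itself; MEHED's point is to keep the equivalent buffer in the (smaller) colorant domain while retaining HED's dot-color control.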

  1. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine taking into account data mining.

  2. Strategic Classification

    Hardt, Moritz; Megiddo, Nimrod; Papadimitriou, Christos; Wootters, Mary


    Machine learning relies on the assumption that unseen test instances of a classification problem follow the same distribution as observed training data. However, this principle can break down when machine learning is used to make important decisions about the welfare (employment, education, health) of strategic individuals. Knowing information about the classifier, such individuals may manipulate their attributes in order to obtain a better classification outcome. As a result of this behavior...

  3. Robust characterization of leakage errors

    Wallman, Joel J.; Barnhill, Marie; Emerson, Joseph


    Leakage errors arise when the quantum state leaks out of some subspace of interest, for example, the two-level subspace of a multi-level system defining a computational ‘qubit’, the logical code space of a quantum error-correcting code, or a decoherence-free subspace. Leakage errors pose a distinct challenge to quantum control relative to the more well-studied decoherence errors and can be a limiting factor to achieving fault-tolerant quantum computation. Here we present a scalable and robust randomized benchmarking protocol for quickly estimating the leakage rate due to an arbitrary Markovian noise process on a larger system. We illustrate the reliability of the protocol through numerical simulations.

  4. ACCUWIND - Methods for classification of cup anemometers

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.


    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of the methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  5. Automated compound classification using a chemical ontology

    Bobach Claudia


    Background: Classification of chemical compounds into compound classes by using structure-derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and, more recently, ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools, and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results: In the present work we implement principles and methods to construct a chemical ontology of classes that shall support automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions: A proposal for a rule-based definition of chemical classes has been made that makes it possible to define chemical compound classes more precisely than before. The proposed structure-based reasoning ...

  6. A qualitative description of human error

    Human error contributes substantially to the risk of reactor operation. Insight and analytical models are the main components of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influencing factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event centers on the erroneous action and its unfavorable result. With respect to the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action, and an erroneous human action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influencing factors, including personal, organizational, and environmental factors, are also listed.

  7. Does an awareness of differing types of spreadsheet errors aid end-users in identifying spreadsheets errors?

    Purser, Michael


    The research presented in this paper establishes a valid and simplified revision of previous spreadsheet error classifications. This investigation is concerned with the results of a web survey and two web-based, gender- and domain-knowledge-free spreadsheet error identification exercises. The participants of the survey and exercises were a test group of professionals (all of whom regularly use spreadsheets) and a control group of students from the University of Greenwich (UK). The findings show that over 85% of users are also the spreadsheet's developer, supporting the revised spreadsheet error classification. The findings also show that spreadsheet error identification ability is directly affected both by spreadsheet experience and by error-type awareness. In particular, spreadsheet error-type awareness significantly improves the user's ability to identify the more surreptitious, qualitative errors.

  8. COMPARE: classification of morphological patterns using adaptive regional elements.

    Fan, Yong; Shen, Dinggang; Gur, Ruben C; Gur, Raquel E; Davatzikos, Christos


    This paper presents a method for classification of structural brain magnetic resonance (MR) images, by using a combination of deformation-based morphometry and machine learning methods. A morphological representation of the anatomy of interest is first obtained using a high-dimensional mass-preserving template warping method, which results in tissue density maps that constitute local tissue volumetric measurements. Regions that display strong correlations between tissue volume and classification (clinical) variables are extracted using a watershed segmentation algorithm, taking into account the regional smoothness of the correlation map which is estimated by a cross-validation strategy to achieve robustness to outliers. A volume increment algorithm is then applied to these regions to extract regional volumetric features, from which a feature selection technique using support vector machine (SVM)-based criteria is used to select the most discriminative features, according to their effect on the upper bound of the leave-one-out generalization error. Finally, SVM-based classification is applied using the best set of features, and it is tested using a leave-one-out cross-validation strategy. The results on MR brain images of healthy controls and schizophrenia patients demonstrate not only high classification accuracy (91.8% for female subjects and 90.8% for male subjects), but also good stability with respect to the number of features selected and the size of SVM kernel used. PMID:17243588
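The evaluation protocol used above, leave-one-out cross-validation, can be sketched with a simple stand-in classifier (nearest centroid instead of the paper's SVM; the two-class data are synthetic):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out CV: hold out each sample in turn, fit on the rest,
    and predict the held-out sample with a nearest-centroid rule."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]
        centroids = np.array([Xtr[ytr == k].mean(axis=0) for k in np.unique(ytr)])
        pred = np.argmin(((centroids - X[i]) ** 2).sum(axis=1))
        correct += int(pred == y[i])
    return correct / len(y)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, size=(10, 5)),    # e.g. "controls"
               rng.normal(3, 0.3, size=(10, 5))])   # e.g. "patients"
y = np.array([0] * 10 + [1] * 10)
acc = loo_accuracy(X, y)
```

In the paper the same loop wraps feature selection and an SVM; the essential point is that the held-out sample never influences the model that classifies it.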

  9. Vietnamese Document Representation and Classification

    Nguyen, Giang-Son; Gao, Xiaoying; Andreae, Peter

    Vietnamese is very different from English and little research has been done on Vietnamese document classification, or indeed, on any kind of Vietnamese language processing, and only a few small corpora are available for research. We created a large Vietnamese text corpus with about 18000 documents, and manually classified them based on different criteria such as topics and styles, giving several classification tasks of different difficulty levels. This paper introduces a new syllable-based document representation at the morphological level of the language for efficient classification. We tested the representation on our corpus with different classification tasks using six classification algorithms and two feature selection techniques. Our experiments show that the new representation is effective for Vietnamese categorization, and suggest that best performance can be achieved using syllable-pair document representation, an SVM with a polynomial kernel as the learning algorithm, and using Information gain and an external dictionary for feature selection.


    With the coming data deluge from synoptic surveys, there is a need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly observed variables based on small numbers of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics (features), detail methods to robustly estimate periodic features, introduce tree-ensemble methods for accurate variable-star classification, and show how to rigorously evaluate a classifier using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% error rate using the random forest (RF) classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying samples of specific science classes: for pulsational variables used in Milky Way tomography we obtain a discovery efficiency of 98.2% and for eclipsing systems we find an efficiency of 99.1%, both at 95% purity. The RF classifier is superior to other methods in terms of accuracy, speed, and relative immunity to irrelevant features; the RF can also be used to estimate the importance of each feature in classification. Additionally, we present the first astronomical use of hierarchical classification methods to incorporate a known class taxonomy in the classifier, which reduces the catastrophic error rate from 8% to 7.8%. Excluding low-amplitude sources, the overall error rate improves to 14%, with a catastrophic error rate of 3.5%.
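The tree-ensemble idea (bootstrap resampling plus voting) can be sketched with decision stumps standing in for the full trees a random forest grows; the data, sizes, and seeds below are illustrative only:

```python
import numpy as np

def fit_stump(X, y):
    """Best single-feature threshold classifier: the weak learner that a
    random forest extends into full randomized trees."""
    best = (None, None, None, np.inf)   # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = (pol * (X[:, f] - t) > 0).astype(int)
                err = (pred != y).mean()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best[:3]

def stump_predict(stump, X):
    f, t, pol = stump
    return (pol * (X[:, f] - t) > 0).astype(int)

def bagged_predict(X, Xtr, ytr, n_trees=25, seed=3):
    """Bootstrap-aggregated stumps with majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_trees, len(X)))
    for i in range(n_trees):
        idx = rng.integers(0, len(ytr), size=len(ytr))   # bootstrap resample
        votes[i] = stump_predict(fit_stump(Xtr[idx], ytr[idx]), X)
    return (votes.mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(4)
Xtr = np.vstack([rng.normal(-2, 0.5, size=(20, 3)),
                 rng.normal(2, 0.5, size=(20, 3))])
ytr = np.array([0] * 20 + [1] * 20)
preds = bagged_predict(Xtr, Xtr, ytr)
error_rate = (preds != ytr).mean()
```

A real random forest additionally randomizes the feature subset at each split and averages over deep trees, which is what yields the feature-importance estimates mentioned above.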

  11. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul


    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.

  12. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis


    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and of patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as one potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for the record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
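A minimal sketch of the Fisher linear discriminant step on synthetic two-class data (the feature-selection and voting stages of the paper are omitted, and the class names are placeholders, not real mfERG data):

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher's linear discriminant: w = Sw^{-1} (m1 - m0), with the decision
    threshold placed at the midpoint of the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatter)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thresh = w @ (m0 + m1) / 2.0
    return w, thresh

rng = np.random.default_rng(5)
healthy  = rng.normal(0.0, 0.4, size=(15, 4))   # placeholder feature vectors
diabetic = rng.normal(1.5, 0.4, size=(15, 4))

w, thresh = fisher_lda(healthy, diabetic)
preds = (np.vstack([healthy, diabetic]) @ w > thresh).astype(int)
labels = np.array([0] * 15 + [1] * 15)
train_error = (preds != labels).mean()
```

The paper trains one such classifier per selected feature subset and votes over their outputs to reach the final decision.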

  13. Learning from Errors

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine


    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  14. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Lev V. Utkin


    A fuzzy classification model is studied in this paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the model are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  15. Evaluation criteria for software classification inventories, accuracies, and maps

    Jayroe, R. R., Jr.


    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, on agreement of the classification maps with ground truth maps, and reduces back to the original information normally found in a contingency table.
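Two standard summary statistics of a classification error (confusion) matrix, overall accuracy and Cohen's kappa, can be computed directly from the table; the 2x2 matrix below is an invented example, with rows taken as classified categories and columns as reference (ground truth):

```python
import numpy as np

def error_matrix_stats(M):
    """Overall accuracy (observed agreement) and Cohen's kappa
    (agreement corrected for chance) from a confusion matrix M."""
    M = np.asarray(M, dtype=float)
    n = M.sum()
    po = np.trace(M) / n                          # observed agreement
    pe = (M.sum(axis=1) @ M.sum(axis=0)) / n**2   # chance agreement from margins
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# 12 samples: 5 + 4 on the diagonal agree with ground truth, 3 are errors
accuracy, kappa = error_matrix_stats([[5, 1], [2, 4]])
```

For this matrix the observed agreement is 9/12 = 0.75 and the chance agreement from the row/column margins is 0.5, giving kappa = 0.5.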

  16. Transporter Classification Database (TCDB)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  17. Refined Error Bounds for Several Learning Algorithms

    Hanneke, Steve


    This article studies the achievable guarantees on the error rates of certain learning algorithms, with particular focus on refining logarithmic factors. Many of the results are based on a general technique for obtaining bounds on the error rates of sample-consistent classifiers with monotonic error regions, in the realizable case. We prove bounds of this type expressed in terms of either the VC dimension or the sample compression size. This general technique also enables us to derive several ...
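For context, the classical realizable-case VC bound whose logarithmic factors work of this kind refines (C here is an unspecified universal constant; this is not the article's refined statement): with probability at least 1 − δ, a sample-consistent classifier ĥ from a class of VC dimension d satisfies

```latex
\operatorname{er}(\hat{h}) \;\le\; \frac{C \left( d \,\ln(n/d) + \ln(1/\delta) \right)}{n}
```

where n is the sample size; the refinements concern when the ln(n/d) factor can be removed or replaced.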

  18. Classifying Classification

    Novakowski, Janice


    This article describes the experience of a group of first-grade teachers as they tackled the science process of classification, a targeted learning objective for the first grade. While the two-year process was not easy and required teachers to teach in a new, more investigation-oriented way, the benefits were great. The project helped teachers and…

  19. Tissue Classification

    Van Leemput, Koen; Puonti, Oula


    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are no... software packages such as SPM, FSL, and FreeSurfer...

  20. Neuromuscular disease classification system

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen


    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases and 58 structural features that the human eye cannot see, based on the assumption that the biopsy is considered as a graph, where the nodes are represented by each fiber, and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by the human visual inspection improves the categorization of atrophic patterns.

  1. Detection and Classification of Whale Acoustic Signals

    Xian, Yin

    This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale

  2. Robust Model Selection for Classification of Microarrays

    Ikumi Suzuki


    Recently, microarray-based cancer diagnosis systems have been increasingly investigated. However, cost reduction and reliability assurance of such diagnosis systems remain problems in real clinical settings. To reduce the cost, we need a supervised classifier involving the smallest number of genes, as long as the classifier is sufficiently reliable. To achieve a reliable classifier, we should assess candidate classifiers and select the best one. In the selection process of the best classifier, however, the assessment criterion involves large variance because of the limited number of samples and non-negligible observation noise. Therefore, even if a classifier with a very small number of genes exhibited the smallest leave-one-out cross-validation (LOO) error rate, it would not necessarily be reliable, because classifiers based on a small number of genes tend to show large variance. We propose a robust model selection criterion, the min-max criterion, based on a resampling bootstrap simulation to assess the variance of the estimated classification error rates. We applied our assessment framework to four published real gene expression datasets and one synthetic dataset. We found that a state-of-the-art procedure, weighted voting classifiers with the LOO criterion, had a non-negligible risk of selecting extremely poor classifiers and, on the other hand, that the new min-max criterion could eliminate that risk. These findings suggest that our criterion provides a safer procedure for designing a practical cancer diagnosis system.
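
A minimal sketch of the resampling idea (invented data and a simple nearest-centroid classifier standing in for the paper's weighted voting): score each candidate feature set by an upper percentile of its bootstrapped error rate, so that high-variance candidates are penalized rather than rewarded for a lucky point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expression" data: 40 samples, 20 genes, class-shifted means.
n, n_genes = 40, 20
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, n_genes)) + y[:, None] * 0.8

def loo_error(X, y):
    """Leave-one-out error of a nearest-centroid classifier."""
    errors = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        errors += int(pred != y[i])
    return errors / len(y)

def minmax_score(X, y, n_boot=200, q=95):
    """Upper q-th percentile of bootstrap-resampled error rates."""
    rates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        yb = y[idx]
        if min((yb == 0).sum(), (yb == 1).sum()) < 2:
            continue  # need both classes present to fit centroids under LOO
        rates.append(loo_error(X[idx], yb))
    return np.percentile(rates, q)

# Candidate classifiers: 2 genes vs all 20 genes; lower score is safer.
for k in (2, n_genes):
    print(k, round(minmax_score(X[:, :k], y), 3))
```

The percentile (rather than the mean) of the bootstrap distribution is what makes this a min-max style criterion: a classifier is judged by how badly it can plausibly perform.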

  3. Field error lottery

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))


    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  4. Multiple Sparse Representations Classification.

    Plenge, Esben; Klein, Stefan; Klein, Stefan S; Niessen, Wiro J; Meijering, Erik


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images, and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods.
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
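
The conventional SRC decision rule described above can be sketched as follows (toy random dictionaries, not trained ones; a tiny greedy orthogonal matching pursuit stands in for a production sparse coder):

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(D, x, n_nonzero=3):
    """Greedy orthogonal matching pursuit: sparse-code x over dictionary D."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, residual

# One overcomplete dictionary per class (random here; learned in practice).
dim, n_atoms = 16, 40
D0 = rng.normal(size=(dim, n_atoms))
D1 = rng.normal(size=(dim, n_atoms))
D0 /= np.linalg.norm(D0, axis=0)   # unit-norm atoms
D1 /= np.linalg.norm(D1, axis=0)

# A "patch" genuinely generated from two atoms of class 1.
x = 1.2 * D1[:, 5] + 0.9 * D1[:, 12]

# SRC rule: assign to the class whose dictionary leaves the smallest
# residual energy after sparse coding.
residuals = [np.linalg.norm(omp(D, x)[1]) for D in (D0, D1)]
print("assigned class:", int(np.argmin(residuals)))
```

mSRC extends this by drawing several independent sparse representations per dictionary, so the decision statistic is a set of residual energies rather than a single one.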

  5. Inborn errors of metabolism

    ... metabolism. A few of them are: fructose intolerance, galactosemia, maple syrup urine disease (MSUD), and phenylketonuria (PKU). ... Reference: Bodamer OA. Approach to inborn errors of ...

  6. Error And Error Analysis In Language Study

    Zakaria, Teuku Azhari


    Students make mistakes during their language learning course, whether in speaking, writing, listening, or reading comprehension. Making mistakes is inevitable and considered natural in one's interlanguage process. Believed to be part of the learning process, making errors and mistakes is not a bad thing, as everybody experiences the same. Both students and teachers will benefit from the event, as both will learn what has been done well and what needs to be reviewed and rehearsed. Understanding error and th...

  7. The Error in Total Error Reduction

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.


    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons i...

  8. Medical errors in neurosurgery

    John D Rolston


    23.7-27.8% were technical, related to the execution of the surgery itself, highlighting the importance of systems-level approaches to protecting patients and reducing errors. Conclusions: Overall, the magnitude of medical errors in neurosurgery and the lack of focused research emphasize the need for prospective categorization of morbidity with judicious attribution. Ultimately, we must raise awareness of the impact of medical errors in neurosurgery, reduce the occurrence of medical errors, and mitigate their detrimental effects.

  9. Network error correction with unequal link capacities

    Kim, Sukwon; Effros, Michelle; Avestimehr, Amir Salman


    This paper studies the capacity of single-source single-sink noiseless networks under adversarial or arbitrary errors on no more than z edges. Unlike prior papers, which assume equal capacities on all links, arbitrary link capacities are considered. Results include new upper bounds, network error correction coding strategies, and examples of network families where our bounds are tight. An example is provided of a network where the capacity is 50% greater than the best rate that can be achieved with linear coding. While coding at the source and sink suffices in networks with equal link capacities, in networks with unequal link capacities, it is shown that intermediate nodes may have to do coding, nonlinear error detection, or error correction in order to achieve the network error correction capacity.

  10. Programming Errors in APL.

    Kearsley, Greg P.

    This paper discusses and provides some preliminary data on errors in APL programming. Data were obtained by analyzing listings of 148 complete and partial APL sessions collected from student terminal rooms at the University of Alberta. Frequencies of errors for the various error messages are tabulated. The data, however, are limited because they…

  11. Unsupervised classification of operator workload from brain signals

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin


    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSP) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, and cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering, and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
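
The CSP step can be sketched in a few lines of linear algebra (synthetic two-condition data, not EEG; the whitening-based formulation used here is one standard way to compute CSP filters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "low" and "high" workload conditions: a fixed mixing matrix A
# maps sources to channels; one source is stronger in each condition.
n_ch, n_t = 4, 5000
A = rng.normal(size=(n_ch, n_ch))
s_low  = rng.normal(size=(n_ch, n_t)) * np.array([[2.0], [1.0], [1.0], [1.0]])
s_high = rng.normal(size=(n_ch, n_t)) * np.array([[1.0], [1.0], [1.0], [2.0]])
X1, X2 = A @ s_low, A @ s_high

C1 = X1 @ X1.T / n_t
C2 = X2 @ X2.T / n_t

# Whiten the composite covariance, then diagonalize the whitened C1.
d, U = np.linalg.eigh(C1 + C2)
P = U @ np.diag(d ** -0.5) @ U.T          # whitening matrix
lam, V = np.linalg.eigh(P @ C1 @ P.T)     # eigenvalues in (0, 1), ascending
W = V.T @ P                               # CSP spatial filters (rows)

# Filters at either end of the spectrum separate the two conditions:
# the last filter maximizes condition-1 variance, the first minimizes it.
p_hi = np.var(W[-1] @ X1) / np.var(W[-1] @ X2)
p_lo = np.var(W[0] @ X1) / np.var(W[0] @ X2)
print(p_hi > 1.0, p_lo < 1.0)
```

Band-power features of the extreme filters' outputs are what a downstream workload classifier would typically consume.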

  12. Extreme Entropy Machines: Robust information theoretic classification

    Czarnecki, Wojciech Marian; Tabor, Jacek


    Most of the existing classification methods are aimed at minimization of empirical risk (through some simple point-based error measured with loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating applicability of entropy measures as a classification model objective function. We focus on quadratic Renyi's entropy and connected Cauchy-Schwarz Divergence which leads to the construction of Extreme Entropy Machines (EEM). ...

  13. Deep neural networks for spam classification

    Kasmani, Mohamed Khizer


    This project elucidates the development of a spam filtering method using deep neural networks. A classification model employing algorithms such as Error Back Propagation (EBP) and Restricted Boltzmann Machines (RBM) is used to identify spam and non-spam emails. Moreover, a spam classification system employing deep neural network algorithms is developed, which has been tested on Enron email dataset in order to help users manage large volumes of email and, furthermore, their email folders. The ...

  14. Distributed Maintenance Error Information, Investigation and Intervention

    Zolla, George; Boex, Tony; Flanders, Pat; Nelson, Doug; Tufts, Scott; Schmidt, John K.


    This paper describes a safety information management system designed to capture maintenance factors that contribute to aircraft mishaps. The Human Factors Analysis and Classification System-Maintenance Extension taxonomy (HFACS-ME), an effective framework for classifying and analyzing the presence of maintenance errors that lead to mishaps, incidents, and personal injuries, is the theoretical foundation. An existing desktop mishap application is updated, a prototype we...

  15. Error-prone signalling.

    Johnstone, R A; Grafen, A


    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  16. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Richard Zavalas


    Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study with the application of LiDAR and underwater video groundtruth data using automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were largely accurate (>70%), with varying results for the classification of individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful for differentiating the canopy structure of macroalgae communities (i.e., canopy structure classification), such as canopy-forming kelp versus erect fine branching algae. In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.
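
The producer accuracies quoted above come from a classification error matrix. As a sketch (the matrix entries here are invented, not the study's), producer's and user's accuracies fall directly out of the column and row sums:

```python
import numpy as np

# Error (confusion) matrix: rows = mapped class, columns = reference class.
# Classes: mixed brown algae, sediment, erect fine branching algae.
M = np.array([[74,  6,  4],
              [ 5, 93,  7],
              [ 3,  8, 60]])

overall = np.trace(M) / M.sum()            # overall accuracy
producers = np.diag(M) / M.sum(axis=0)     # producer's accuracy (per column)
users = np.diag(M) / M.sum(axis=1)         # user's accuracy (per row)

print(round(overall, 3))      # → 0.873
print(np.round(producers, 3))
```

Producer's accuracy answers "of the reference sites of class j, how many were mapped correctly?"; user's accuracy answers "of the pixels mapped as class i, how many are really that class?".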

  17. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei


    Background: The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by a lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods: First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns as potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of more distant correspondences, which were more likely noise from inter-subject variability. Finally, a Bayes classifier was applied to the distribution parameters for the classification of AD. Results: Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions: We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.

  18. 28 CFR 524.73 - Classification procedures.


    ... of Prisons from state or territorial jurisdictions. All state prisoners while solely in service of... classification may be made at any level to achieve the immediate effect of requiring prior clearance for...

  19. Soft Classification of Diffractive Interactions at the LHC

    Multivariate machine learning techniques provide an alternative to the rapidity gap method for event-by-event identification and classification of diffraction in hadron-hadron collisions. Traditionally, such methods assign each event exclusively to a single class, producing classification errors in overlap regions of the data space. As an alternative to this so-called hard classification approach, we propose estimating posterior probabilities for each diffractive class and using these estimates to weight event contributions to physical observables. It is shown with a Monte Carlo study that such a soft classification scheme is able to reproduce observables such as multiplicity distributions and relative event rates with much higher accuracy than hard classification.
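
The contrast between hard and soft classification can be illustrated with synthetic posteriors (invented numbers, not the Monte Carlo study): when posteriors are calibrated, weighting events by posterior probability recovers the true class rate, while hard argmax assignment biases it.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 10_000
# Calibrated posteriors: draw each event's true class from its own
# posterior probability of being diffractive.
p_diff = rng.beta(2, 5, size=n)          # P(diffractive | event features)
is_diff = rng.random(n) < p_diff
true_rate = is_diff.mean()

hard_rate = np.mean(p_diff > 0.5)        # exclusive (hard) assignment
soft_rate = p_diff.mean()                # posterior-weighted contribution

print(f"true {true_rate:.3f}  hard {hard_rate:.3f}  soft {soft_rate:.3f}")
```

Because most posteriors here sit below 0.5, hard assignment drops genuinely diffractive events in the overlap region, while the posterior-weighted sum remains unbiased.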

  20. Development of a classification system for cup anemometers - CLASSCUP

    Friis Pedersen, Troels


    objectives to quantify the errors associated with the use of cup anemometers, to determine the requirements for an optimum design of a cup anemometer, and to develop a classification system for quantification of systematic errors of cup anemometers. The present report describes this proposed...... classification system. A classification method for cup anemometers has been developed, which proposes general external operational ranges to be used. A normal category range connected to ideal sites of the IEC power performance standard was made, and another extended category range for complex terrain was...... proposed. General classification indices were proposed for all types of cup anemometers. As a result of the classification, the cup anemometer will be assigned to a certain class: 0.5, 1, 2, 3 or 5, with corresponding intrinsic errors (%) as a vector instrument (3D) or as a horizontal instrument (2D). The...

  1. Multi-borders classification

    Mills, Peter


    The number of possible methods of generalizing binary classification to multi-class classification increases exponentially with the number of class labels. Often, the best method of doing so will be highly problem dependent. Here we present classification software in which the partitioning of multi-class classification problems into binary classification problems is specified using a recursive control language.

  2. Classification in Australia.

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  3. Classification in context

    Mai, Jens Erik


    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focus on contextual information as the guide for the design and construction of classification schemes....

  4. A gender-based analysis of Iranian EFL learners' types of written errors

    Faezeh Boroomand


    Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male and 50 female), presents different classifications and subdivisions of errors and carries out an analysis of these errors. Detecting the most frequently committed errors in each classification, the findings reveal significant differences between the error frequencies of the male and female groups, with higher error frequency in the female written productions.

  5. Uncorrected refractive errors

    Naidoo, Kovin S; Jyoti Jaggernath


    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error S...

  6. Achieving Standardization

    Henningsson, Stefan


    International e-Customs is going through a standardization process. Driven by the need to increase control in the trade process to address security challenges stemming from threats of terrorists, diseases, and counterfeit products, and to lower the administrative burdens on traders to stay...... competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...... to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems....

  8. Confident Predictability: Identifying reliable gene expression patterns for individualized tumor classification using a local minimax kernel algorithm

    Berry Damon


    Full Text Available Abstract Background Molecular classification of tumors can be achieved by global gene expression profiling. Most machine learning classification algorithms furnish global error rates for the entire population. A few algorithms provide an estimate of probability of malignancy for each queried patient but the degree of accuracy of these estimates is unknown. On the other hand local minimax learning provides such probability estimates with best finite sample bounds on expected mean squared error on an individual basis for each queried patient. This allows a significant percentage of the patients to be identified as confidently predictable, a condition that ensures that the machine learning algorithm possesses an error rate below the tolerable level when applied to the confidently predictable patients. Results We devise a new learning method that implements: (i feature selection using the k-TSP algorithm and (ii classifier construction by local minimax kernel learning. We test our method on three publicly available gene expression datasets and achieve significantly lower error rate for a substantial identifiable subset of patients. Our final classifiers are simple to interpret and they can make prediction on an individual basis with an individualized confidence level. Conclusions Patients that were predicted confidently by the classifiers as cancer can receive immediate and appropriate treatment whilst patients that were predicted confidently as healthy will be spared from unnecessary treatment. We believe that our method can be a useful tool to translate the gene expression signatures into clinical practice for personalized medicine.

  9. Error coding simulations

    Noble, Viveca K.


    There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
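
As a concrete example of the error-detection piece, the 16-bit CRC used in CCSDS links (the CRC-16/CCITT-FALSE variant: polynomial 0x1021, initial value 0xFFFF) can be computed bitwise:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE, as used for frame error detection."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))  # standard check value: 0x29b1
```

The receiver recomputes the CRC over the received frame; any mismatch flags the frame as corrupted, complementing the RS/convolutional correction stages.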

  10. Correction for quadrature errors

    Netterstrøm, A.; Christensen, Erik Lintz


    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals......, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal, it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...
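
One common digital compensation approach (a sketch with invented imbalance values, not necessarily either method from the paper) estimates the gain and quadrature phase imbalance from the I/Q statistics and orthogonalizes the channels:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 200_000
phase = rng.uniform(0, 2 * np.pi, n)       # ideal baseband signal phase
i_ideal, q_ideal = np.cos(phase), np.sin(phase)

g, eps = 1.2, 0.1                          # gain imbalance and quadrature
                                           # phase error (rad), assumed
i_meas = i_ideal
q_meas = g * (np.sin(eps) * i_ideal + np.cos(eps) * q_ideal)

# Gram-Schmidt correction: orthogonalize Q against I, then rescale so both
# channels carry equal power.
alpha = np.mean(i_meas * q_meas) / np.mean(i_meas ** 2)
q_orth = q_meas - alpha * i_meas
q_corr = q_orth * np.sqrt(np.mean(i_meas ** 2) / np.mean(q_orth ** 2))

err_before = np.sqrt(np.mean((q_meas - q_ideal) ** 2))
err_after = np.sqrt(np.mean((q_corr - q_ideal) ** 2))
print(err_after < 0.1 * err_before)
```

The cross-correlation term estimates the phase leakage of I into Q, and the power ratio fixes the channel gain; after correction the residual quadrature error is limited only by estimation noise.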



  13. Construction of a Calibrated Probabilistic Classification Catalog: Application to 50k Variable Sources in the All-Sky Automated Survey

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; Brink, Henrik; Crellin-Quick, Arien


    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
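
    The calibration idea in this abstract can be illustrated with a minimal, self-contained sketch (this is not the MACC pipeline): histogram-binning calibration maps a classifier's raw scores to the empirical accuracy observed in each score bin, and the expected calibration error (ECE) quantifies the improvement. All data and the miscalibration model below are synthetic assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary problem: true positive probability is p_true, but the
    # "classifier" emits an overconfident raw score.
    n = 20000
    p_true = rng.uniform(0.05, 0.95, size=n)
    labels = (rng.uniform(size=n) < p_true).astype(int)
    raw_score = np.clip(p_true + 0.25 * (p_true - 0.5), 0.0, 1.0)  # miscalibrated

    # Histogram-binning calibration: map each score bin to the empirical
    # positive rate observed in that bin (a crude stand-in for isotonic
    # regression or Platt scaling).
    bins = np.linspace(0.0, 1.0, 11)
    idx = np.clip(np.digitize(raw_score, bins) - 1, 0, 9)
    bin_rate = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.5
                         for b in range(10)])
    calibrated = bin_rate[idx]

    def ece(scores, labels, idx, n_bins=10):
        """Expected calibration error: bin-weighted gap between the mean
        score and the empirical accuracy in each bin."""
        total = 0.0
        for b in range(n_bins):
            m = idx == b
            if m.any():
                total += m.mean() * abs(scores[m].mean() - labels[m].mean())
        return total

    print("ECE raw:        %.3f" % ece(raw_score, labels, idx))
    print("ECE calibrated: %.3f" % ece(calibrated, labels, idx))
    ```

    In practice isotonic regression or Platt scaling would replace the crude binning; the point is only that calibrated posterior probabilities track empirical frequencies, which is what "reasonably calibrated" means above.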

  14. Evaluation of drug administration errors in a teaching hospital

    Berdot Sarah


    Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten occurring simultaneously with another type of error, resulting in an error rate of 7.5% (113/1501) when wrong time errors were excluded. The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions.

  15. We need to talk about error: causes and types of error in veterinary practice.

    Oxtoby, C; Ferguson, E; White, K; Mossop, L


    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group=8) with vets, nurses and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery being the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error. PMID:26489997

  16. Hazard classification methodology

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  17. Remote Sensing Information Classification

    Rickman, Douglas L.


    This viewgraph presentation reviews the classification of Remote Sensing data in relation to epidemiology. Classification is a way to reduce the dimensionality and precision to something a human can understand. Classification changes SCALAR data into NOMINAL data.

  18. Classification and knowledge

    Kurtz, Michael J.


    Automated procedures to classify objects are discussed. The classification problem is reviewed, and the relation of epistemology and classification is considered. The classification of stellar spectra and of resolved images of galaxies is addressed.


    Li Jun; Zhang Shunyi; Lu Yanqing; Yan Junrong


    Accurate and real-time classification of network traffic is significant to network operation and management tasks such as QoS differentiation, traffic shaping and security surveillance. However, with many newly emerged P2P applications using dynamic port numbers, masquerading techniques, and payload encryption to avoid detection, traditional classification approaches turn out to be ineffective. In this paper, we present a layered hybrid system to classify current Internet traffic, motivated by the variety of network activities and their differing requirements for traffic classification. The proposed method achieves fast and accurate traffic classification with low overhead and is robust enough to accommodate both known and unknown/encrypted applications. Furthermore, it is feasible for use in real-time traffic classification. Our experimental results show the distinct advantages of the proposed classification system, compared with the one-step Machine Learning (ML) approach.
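
    The layered idea can be sketched in a few lines (an illustrative toy, not the paper's system): a cheap well-known-port lookup handles the easy flows, and a statistical fallback rule on flow features catches applications that use dynamic ports or encrypted payloads. The port table, feature names and thresholds below are assumptions for illustration only.

    ```python
    # Layer 1: well-known-port table (fast path).
    # Layer 2: crude statistical fallback on flow features.
    WELL_KNOWN_PORTS = {80: "http", 443: "https", 53: "dns", 25: "smtp"}

    def classify_flow(dst_port, mean_pkt_size, bytes_up, bytes_down):
        # Fast path: port-based lookup.
        label = WELL_KNOWN_PORTS.get(dst_port)
        if label is not None:
            return label
        # Fallback for dynamic ports / encrypted payloads: use the
        # upload/download byte ratio and mean packet size as features.
        ratio = bytes_up / max(bytes_down, 1)
        if mean_pkt_size > 1000 and ratio < 0.1:
            return "bulk-download"
        if 0.5 < ratio < 2.0 and mean_pkt_size > 600:
            return "p2p-like"  # roughly symmetric, large packets
        return "unknown"

    print(classify_flow(443, 900, 5_000, 80_000))            # port table hit
    print(classify_flow(51413, 1200, 9_000_000, 8_500_000))  # fallback path
    ```

    A real system would replace layer 2 with a trained ML model; the layering itself is what keeps the common case fast.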

  20. AR-based Method for ECG Classification and Patient Recognition

    Branislav Vuksanovic


    The electrocardiogram (ECG) is the recording of heart activity obtained by measuring the signals from electrical contacts placed on the skin of the patient. By analyzing the ECG, it is possible to detect the rate and consistency of heartbeats and identify possible irregularities in heart operation. This paper describes a set of techniques employed to pre-process ECG signals and extract a set of features – autoregressive (AR) signal parameters – used to characterise the ECG signal. The extracted parameters are used to accomplish two tasks. Firstly, the AR features belonging to each ECG signal are classified in groups corresponding to three different heart conditions – normal, arrhythmia and ventricular arrhythmia. The obtained classification results indicate accurate, zero-error classification of patients according to their heart condition using the proposed method. Sets of extracted AR coefficients are then extended by adding an additional parameter – the power of the AR modelling error – and the suitability of the developed technique for individual patient identification is investigated. Individual feature sets for each group of detected QRS sections are classified in p clusters, where p represents the number of patients in each group. The developed system has been tested using ECG signals available in the MIT/BIH and Politecnico di Milano VCG/ECG databases. The achieved recognition rates indicate that patient identification using ECG signals could be considered a possible approach in some applications using the system developed in this work. Pre-processing stages, the applied parameter extraction techniques and some intermediate and final classification results are described and presented in this paper.
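
    The core trick — using fitted AR coefficients as a compact feature vector for classification — can be sketched on synthetic signals (this is a minimal stand-in, not the paper's pre-processing or classifier; the two coefficient sets below are hypothetical, not heart-condition models).

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def ar_features(x, order=4):
        """Least-squares AR(order) fit: x[t] ~ sum_k a_k * x[t-1-k].
        The coefficient vector serves as the feature vector."""
        N = len(x)
        y = x[order:]
        X = np.column_stack([x[order - k - 1 : N - k - 1] for k in range(order)])
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        return a

    def simulate_ar(coeffs, n=2000):
        """Generate a signal from known AR coefficients plus white noise."""
        x = np.zeros(n)
        for t in range(len(coeffs), n):
            x[t] = sum(c * x[t - 1 - k] for k, c in enumerate(coeffs))
            x[t] += rng.normal(scale=0.1)
        return x

    # Two synthetic "conditions" with distinct (stable) AR dynamics.
    class_a = [1.2, -0.5]
    class_b = [0.3, 0.4]

    # Nearest-centroid classification in AR-coefficient space.
    train_a = [ar_features(simulate_ar(class_a), order=2) for _ in range(5)]
    train_b = [ar_features(simulate_ar(class_b), order=2) for _ in range(5)]
    centroids = {"A": np.mean(train_a, axis=0), "B": np.mean(train_b, axis=0)}

    def classify(x):
        f = ar_features(x, order=2)
        return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

    print(classify(simulate_ar(class_a)))
    print(classify(simulate_ar(class_b)))
    ```

    Because AR coefficients summarize the signal's spectral dynamics, signals generated by different dynamics separate cleanly in this low-dimensional feature space.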


    Frederique Robert-Inacio


    Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic, and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application to video-surveillance of a seaport entrance is presented, and more particularly, the different steps enabling classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.

  2. Medical error and disclosure.

    White, Andrew A; Gallagher, Thomas H


    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  3. Adaptive Error Resilience for Video Streaming

    Lakshmi R. Siruvuri


    Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system composed of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
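
    The feedback loop can be sketched with a toy parity-sizing rule (an illustrative assumption, not the paper's adaptation policy): the receiver reports an observed symbol-loss rate, and the sender resizes the parity budget of an (n, k) Reed-Solomon-style block so that the expected number of erasures, with some headroom, stays recoverable. The headroom factor and block size are made up for illustration.

    ```python
    import math

    def parity_symbols(loss_rate, k=20, headroom=1.5, n_max=255):
        """Choose the parity count r so that headroom * expected losses are
        recoverable; for erasure decoding, r parity symbols recover up to
        r lost symbols in the block."""
        expected_losses = loss_rate * k / max(1.0 - loss_rate, 1e-9)
        r = math.ceil(headroom * expected_losses)
        return min(r, n_max - k)  # RS over GF(256) caps the block length

    # As the reported channel conditions worsen, the parity budget grows.
    for reported_loss in (0.01, 0.05, 0.20):
        print(reported_loss, parity_symbols(reported_loss))
    ```

    The bandwidth saving comes from the good-channel case: when the client reports low loss, almost no parity overhead is transmitted.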

  4. A predictive cognitive error analysis technique for emergency tasks

    This paper introduces an analysis framework and procedure to support the cognitive error analysis of emergency tasks in nuclear power plants. The framework provides a new perspective on the use of error factors in error prediction, and can be characterized by two features. First, the error factors that affect the occurrence of human error are classified into three groups – 'task characteristics factors (TCF)', 'situation factors (SF)', and 'performance assisting factors (PAF)' – and are utilized in error prediction. This classification supports error prediction from the viewpoint of assessing the adequacy of the PAF under the given TCF and SF. Second, error factors are assessed from the perspective of the performance of each cognitive function. Through this, the assessment of error factors is made in an integrative way, not independently. Furthermore, it enables analysts to identify vulnerable cognitive functions and error factors, and to obtain specific error reduction strategies. Finally, the framework and procedure were applied to the error analysis of the 'bleed and feed operation' of emergency tasks

  5. KMRR thermal power measurement error estimation

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by other methods, including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if commercial RTDs are used to measure the coolant temperatures of the secondary cooling system, and that the error can be reduced below the requirement if the commercial RTDs are replaced by precision RTDs. The possible range of thermal power control operation has been identified to be from 100% to 20% of full power
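
    The statistical Monte Carlo approach can be sketched generically (all numeric values below are illustrative assumptions, not KMRR design data): sample the RTD temperature errors, propagate them through the thermal power balance Q = m_dot * cp * (T_out - T_in), and read off the spread of the result.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical secondary-loop operating point (not KMRR data).
    m_dot, cp = 300.0, 4186.0          # kg/s, J/(kg K)
    t_in, t_out = 30.0, 38.0           # deg C, assumed true values
    n = 100_000                        # Monte Carlo samples

    def power_error(sigma_T):
        """Relative standard deviation of Q when each RTD has an
        independent Gaussian error of sigma_T kelvin."""
        ti = t_in + rng.normal(scale=sigma_T, size=n)
        to = t_out + rng.normal(scale=sigma_T, size=n)
        q = m_dot * cp * (to - ti)
        return q.std() / (m_dot * cp * (t_out - t_in))

    # A coarse RTD inflates the relative power error; a precision RTD
    # pulls it back under a 5%-style requirement.
    for sigma in (0.5, 0.1):
        print(sigma, power_error(sigma))
    ```

    Because Q depends on the small difference T_out - T_in, even modest RTD errors translate into a large relative power error, which is why the sensor precision dominates the result.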

  6. Correlated errors can lead to better performance of quantum codes

    A formulation for evaluating the performance of quantum error correcting codes for a general error model is presented. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. We classify correlated errors using the system-bath interaction: local versus nonlocal and two-body versus many-body interactions. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors depending on the timing of the error recovery. We also find this timing to be an important factor in the design of a coding system for achieving higher fidelities

  7. Document region classification using low-resolution images: a human visual perception approach

    Chacon Murguia, Mario I.; Jordan, Jay B.


    This paper describes the design of a document region classifier. The regions of a document are classified as large text regions (LTR) and non-LTR. The foundations of the classifier are derived from theories of human visual perception. The theories analyzed are texture discrimination based on textons, and perceptual grouping. Based on these theories, the classification task is stated as a texture discrimination problem and is implemented as a preattentive process. Once the foundations of the classifier are defined, engineering techniques are developed to extract features for deciding the class of information contained in the regions. The feature derived from the human visual perception theories is a measurement of the periodicity of the blobs in text regions. This feature is used to design a statistical classifier based on the minimum-probability-of-error criterion to perform the classification of LTR and non-LTR. The method is tested on free-format, low-resolution document images, achieving 93% correct recognition.
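
    A minimum-probability-of-error classifier on a single scalar feature reduces to the Bayes decision rule: decide LTR when the prior-weighted class-conditional density of the observed feature is larger for LTR than for non-LTR. The Gaussian densities, means, variances and prior below are made-up stand-ins for the paper's periodicity-feature statistics.

    ```python
    import math

    def gaussian_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def classify(x, prior_ltr=0.6):
        # Bayes rule: decide LTR iff P(LTR) p(x|LTR) > P(non-LTR) p(x|non-LTR),
        # which minimizes the probability of error.
        p_ltr = prior_ltr * gaussian_pdf(x, mu=0.8, sigma=0.1)        # text: high periodicity
        p_non = (1 - prior_ltr) * gaussian_pdf(x, mu=0.3, sigma=0.2)  # graphics etc.
        return "LTR" if p_ltr > p_non else "non-LTR"

    print(classify(0.75))
    print(classify(0.2))
    ```

    With estimated class-conditional densities in hand, the whole classifier is just this comparison per region.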

  8. Uncorrected refractive errors

    Kovin S Naidoo


    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars for addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  9. Errors in imaging patients in the emergency setting.

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca


    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955

  10. The analysis of human errors in nuclear power plant operation

    There are basically three different methods known for approaching human factors in NPP operation: probabilistic error analysis; analysis of human errors in real plant incidents; and job task analysis. Analysis of human errors occurring during operation, together with job analysis, can be readily converted into operational improvements. From the analysis of human errors and their causes on the one hand, and from the analysis of possible problems on the other, it is possible to derive requirements either for modifications of existing working systems or for the design of a new nuclear power plant. It is of great importance to have an established classification system for the error analysis, in such a way that requirements can be derived from a set of elements of a matrix. (authors)

  11. Errors and violations

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  12. Minimax Optimal Rates of Convergence for Multicategory Classifications

    Di Rong CHEN; Xu YOU


    In the problem of classification (or pattern recognition), given a set of n samples, we attempt to construct a classifier g_n with a small misclassification error. It is important to study the convergence rates of the misclassification error as n tends to infinity. It is known that such a rate cannot exist for the set of all distributions. In this paper we obtain the optimal convergence rates for a class of distributions D(λ,ω) in multicategory classification and nonstandard binary classification.

  13. Classification of the web

    Mai, Jens Erik


    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  14. Pronominal Case-Errors

    Kaper, Willem


    Contradicts a previous assertion by C. Tanz that children commit substitution errors usually using objective pronoun forms for nominative ones. Examples from Dutch and German provide evidence that substitutions are made in both directions. (CHK)

  15. Error mode prediction.

    Hollnagel, E; Kaarstad, M; Lee, H C


    The study of accidents ('human errors') has been dominated by efforts to develop 'error' taxonomies and 'error' models that enable the retrospective identification of likely causes. In the field of Human Reliability Analysis (HRA) there is, however, a significant practical need for methods that can predict the occurrence of erroneous actions--qualitatively and quantitatively. The present experiment tested an approach for qualitative performance prediction based on the Cognitive Reliability and Error Analysis Method (CREAM). Predictions of possible erroneous actions were made for operators using different types of alarm systems. The data were collected as part of a large-scale experiment using professional nuclear power plant operators in a full scope simulator. The analysis showed that the predictions were correct in more than 70% of the cases, and also that the coverage of the predictions depended critically on the comprehensiveness of the preceding task analysis. PMID:10582035

  16. Errors in energy bills

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some large, errors were discovered in the bills of 42 businesses

  17. Detecting Errors in Spreadsheets

    Ayalew, Yirsaw; Clermont, Markus; Mittermeir, Roland T.


    The paper presents two complementary strategies for identifying errors in spreadsheet programs. The strategies presented are grounded on the assumption that spreadsheets are software, albeit of a different nature than conventional procedural software. Correspondingly, strategies for identifying errors have to take into account the inherent properties of spreadsheets as much as they have to recognize that the conceptual models of 'spreadsheet programmers' differ from the conceptual models of c...

  18. Smoothing error pitfalls

    T. von Clarmann


    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
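
    For reference, the term under discussion can be written in standard optimal-estimation (Rodgers-style) notation, with averaging kernel A, a priori state x_a and a priori covariance S_a; this notation is conventional and not taken verbatim from the abstract:

    ```latex
    % Smoothing (a priori) contribution to the retrieval error and its
    % conventional covariance:
    \hat{x} - x = (\mathbf{A} - \mathbf{I})\,(x - x_a) + \text{noise terms},
    \qquad
    \mathbf{S}_s = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_a\,(\mathbf{A} - \mathbf{I})^{\mathsf{T}} .
    ```

    The paper's argument is that S_a, and hence S_s, is only meaningful when evaluated on the comparison grid rather than obtained by interpolation.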

  19. Thermodynamics of Error Correction

    Sartori, Pablo; Pigolotti, Simone


    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  20. Neural Correlates of Reach Errors

    Diedrichsen, Jörn; Hashambhoy, Yasmin; Rane, Tushar; Shadmehr, Reza


    Reach errors may be broadly classified into errors arising from unpredictable changes in target location, called target errors, and errors arising from miscalibration of internal models, called execution errors. Execution errors may be caused by miscalibration of dynamics (e.g.. when a force field alters limb dynamics) or by miscalibration of kinematics (e.g., when prisms alter visual feedback). While all types of errors lead to similar online corrections, we found that the motor system showe...

  1. Multinomial mixture model with heterogeneous classification probabilities

    Holland, M.D.; Gray, B.R.


    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial parameters and correct classification probabilities when the classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
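
    The bias mechanism can be seen in a small simulation sketch (illustrative numbers, not the paper's model or data): when the per-unit probability of correct classification varies on the logit scale, the observed category proportions shift relative to the constant-probability case, so a model assuming a single common classification probability misestimates the multinomial parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def simulate(pi=0.3, mu_logit=1.5, sd_logit=0.0, n_sites=2000, n_per_site=50):
        """Observed proportion classified into category 1, pooled over sites.
        theta (probability of correct classification) is logit-normal."""
        theta = 1 / (1 + np.exp(-rng.normal(mu_logit, sd_logit, n_sites)))
        truth = rng.binomial(n_per_site, pi, n_sites)   # true category-1 counts
        # Misclassification in both directions with rate (1 - theta).
        obs = (rng.binomial(truth, theta)
               + rng.binomial(n_per_site - truth, 1 - theta))
        return obs.sum() / (n_sites * n_per_site)

    p_homog = simulate(sd_logit=0.0)   # constant theta across sampling units
    p_heter = simulate(sd_logit=1.5)   # logit-normal heterogeneous theta
    print(p_homog, p_heter)
    ```

    Because the sigmoid is nonlinear, E[theta] under the logit-normal differs from the sigmoid of the mean logit, so the pooled observed proportions differ between the two scenarios even though the true multinomial parameter pi is identical.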

  2. Motion error compensation of multi-legged walking robots

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei


    Owing to errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot will diverge from the ideal motion requirements during movement. Since existing error compensation is usually applied to the control of manipulator arms, error compensation for multi-legged robots has seldom been explored. In order to reduce the kinematic error of robots, a feedforward-based motion error compensation method for multi-legged mobile robots is proposed to improve the motion precision of a mobile robot. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is obtained from an error calculation model in terms of the locus error of the robot body. The error values are used to compensate the driven joint variables and modify the control model of the robot, which then drives the robot according to the modified control model. A model of the relation between the robot's locus errors and its kinematic variable errors is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables of the robot is discussed. Moreover, an equation set is obtained which expresses the relation among the error of the driven joint variables, the structure parameters and the error of the robot's locus. Taking MiniQuad as an example, motion error compensation is studied as the robot moves along a straight-line path. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate values of the robot centroid in the x-direction and z-direction are reduced by more than a factor of two. The kinematic errors of the robot body are reduced effectively by the use of the proposed feedforward-based motion error compensation method.
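
    The feedforward idea — map a measured locus error back through the kinematics to a joint-variable correction, then fold that correction into the command — can be sketched for a single 2-DOF planar leg (hypothetical link lengths and parameter errors; this is not MiniQuad data or the paper's full model).

    ```python
    import numpy as np

    # Nominal model of a 2-link planar leg (assumed lengths, metres).
    L1, L2 = 0.10, 0.12

    def fk(theta, l1=L1, l2=L2):
        """Forward kinematics: joint angles -> foot position."""
        t1, t2 = theta
        return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                         l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

    def jacobian(theta, l1=L1, l2=L2):
        t1, t2 = theta
        return np.array([[-l1 * np.sin(t1) - l2 * np.sin(t1 + t2), -l2 * np.sin(t1 + t2)],
                         [ l1 * np.cos(t1) + l2 * np.cos(t1 + t2),  l2 * np.cos(t1 + t2)]])

    theta_cmd = np.array([0.4, 0.8])
    # The "real" leg has slightly wrong link lengths -> systematic locus error.
    real = lambda th: fk(th, l1=0.103, l2=0.118)

    p_target = fk(theta_cmd)                  # where the nominal model says the foot goes
    err = real(theta_cmd) - p_target          # measured locus error
    dtheta = np.linalg.solve(jacobian(theta_cmd), err)  # joint-variable error
    theta_comp = theta_cmd - dtheta           # feedforward-compensated command

    print(np.linalg.norm(real(theta_cmd) - p_target))    # error before compensation
    print(np.linalg.norm(real(theta_comp) - p_target))   # error after compensation
    ```

    The Jacobian solve is the 2-DOF analogue of the paper's equation set relating joint-variable errors to locus errors; since the correction is first-order, a small residual (second-order in the parameter error) remains.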

  3. Integrating TM and Ancillary Geographical Data with Classification Trees for Land Cover Classification of Marsh Area

    NA Xiaodong; ZHANG Shuqing; ZHANG Huaiqing; LI Xiaofeng; YU Huan; LIU Chunyue


    The main objective of this research is to determine the capacity of land cover classification combining spectral and textural features of Landsat TM imagery with ancillary geographical data in wetlands of the Sanjiang Plain, Heilongjiang Province, China. Semi-variograms and Z-test values were calculated to assess the separability of grey-level co-occurrence texture measures and to maximize the difference between land cover types. The degree of spatial autocorrelation showed that window sizes of 3×3 pixels and 11×11 pixels were most appropriate for Landsat TM image texture calculations. The texture analysis showed that co-occurrence entropy, dissimilarity, and variance texture measures, derived from the Landsat TM spectral bands and vegetation indices, provided the most significant statistical differentiation between land cover types. Subsequently, a Classification and Regression Tree (CART) algorithm was applied to three different combinations of predictors: 1) TM imagery alone (TM-only); 2) TM imagery plus image texture (TM+TXT model); and 3) all predictors including TM imagery, image texture and additional ancillary GIS information (TM+TXT+GIS model). Compared with traditional Maximum Likelihood Classification (MLC) supervised classification, the three classification tree predictive models reduced the overall error rate significantly. Image texture measures and ancillary geographical variables suppressed speckle noise effectively and markedly reduced the classification error rate for marsh. For the classification tree model making use of all available predictors, the omission error rate for marsh was 12.90% and the commission error rate was 10.99%. The developed method is portable, relatively easy to implement and should be applicable in other settings and over larger extents.
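
    Why adding an ancillary predictor can cut the error rate is easy to show with a decision-stump toy (a single best-threshold split, the simplest building block of a CART tree; the synthetic features below are stand-ins, not Landsat TM or GIS values): two classes that overlap in a "spectral" feature separate well in an ancillary "elevation" feature.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic two-class data: the spectral feature is noisy, the
    # ancillary feature is informative.
    n = 1000
    y = rng.integers(0, 2, n)
    spectral = rng.normal(loc=np.where(y == 1, 0.6, 0.4), scale=0.3)
    elevation = rng.normal(loc=np.where(y == 1, 12.0, 2.0), scale=2.0)

    def stump_error(x, y):
        """Misclassification rate of the best single-threshold split on x,
        predicting the majority class on each side (a one-node 'tree')."""
        best = 1.0
        for t in np.quantile(x, np.linspace(0.05, 0.95, 50)):
            left, right = y[x <= t], y[x > t]
            err = (min(left.mean(), 1 - left.mean()) * len(left)
                   + min(right.mean(), 1 - right.mean()) * len(right)) / len(y)
            best = min(best, err)
        return best

    err_spectral = stump_error(spectral, y)                       # spectral only
    err_combined = min(err_spectral, stump_error(elevation, y))   # + ancillary
    print(err_spectral, err_combined)
    ```

    A full CART implementation recurses on such splits and chooses among all candidate predictors at each node, which is exactly how the ancillary GIS variables enter the TM+TXT+GIS model.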

  4. Error monitoring in musicians

    Clemens Maidhof


    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances such as the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  5. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc


    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue.…

  6. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María


    Nowadays, scientific writers require not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified in order to propose a classification of the categories they contain. This study…

  7. Effect of dose ascertainment errors on observed risk

    Inaccuracies in dose assignments can lead to misclassification in epidemiological studies. The extent of this misclassification is examined for different error functions, classification intervals, and actual dose distributions. The error function model is one which results in a truncated lognormal distribution of the assigned dose for each actual dose. The error function may vary as the actual dose changes. The effect of misclassification on the conclusions about dose effect relationships is examined for the linear and quadratic dose effect models. 10 references, 9 figures, 8 tables

  8. Automatic web services classification based on rough set theory

    陈立; 张英; 宋自林; 苗壮


    With the development of web services technology, the number of services on the internet is growing day by day. To achieve automatic and accurate services classification, which can benefit service-related tasks, a rough set theory based method for services classification was proposed. First, the service descriptions were preprocessed and represented as vectors. Inspired by the discernibility-matrix-based attribute reduction of rough set theory, and taking into account the characteristics of the decision table for services classification, a method based on continuous discernibility matrices was proposed for dimensionality reduction. Finally, services were classified automatically. In the experiment, the proposed method achieved satisfactory classification results in all five test categories, showing that it is accurate and usable in practical web services classification.
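
The discernibility-matrix attribute reduction that the method builds on can be sketched as follows. The decision table, attributes and greedy hitting-set search below are invented toy material, not the paper's continuous-matrix variant:

```python
# Discernibility-matrix attribute reduction (rough set theory), toy sketch.
# Each row of the decision table: (conditional attribute values, decision).
table = [
    ((1, 0, 1), "A"),
    ((1, 1, 1), "A"),
    ((0, 0, 1), "B"),
    ((1, 0, 0), "B"),
]
n_attrs = 3

def discernibility_entries(table):
    """Attribute sets distinguishing each pair of objects with different decisions."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            (x, dx), (y, dy) = table[i], table[j]
            if dx != dy:
                diff = frozenset(k for k in range(n_attrs) if x[k] != y[k])
                if diff:
                    entries.append(diff)
    return entries

def hits_all(attrs, entries):
    """True if `attrs` intersects every discernibility entry."""
    return all(attrs & e for e in entries)

# Greedy hitting set over the entries = approximate reduct.
entries = discernibility_entries(table)
reduct = set()
while not hits_all(reduct, entries):
    # add the attribute covering the most still-unhit entries
    best = max(range(n_attrs),
               key=lambda a: sum(1 for e in entries
                                 if not (reduct & e) and a in e))
    reduct.add(best)
print("approximate reduct:", sorted(reduct))
```

Here attribute 1 never changes the decision, so the reduct drops it and keeps attributes 0 and 2.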

  9. Feature extraction and classification in automatic weld seam radioscopy

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. In this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system: an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, with improved detail recognizability and sufficient contrast difference. (orig./MM)

  10. Texture Classification Based on Texton Features

    U Ravi Babu


    Texture analysis plays an important role in the interpretation, understanding and recognition of terrain, biomedical or microscopic images. Each texture analysis method depends upon how well the selected texture features characterize the image; whenever a new texture feature is derived, it must be tested for whether it precisely classifies the textures. Not only the texture features themselves but also the way in which they are applied is significant for precise and accurate texture classification and analysis. To achieve high classification accuracy, the present paper proposes a new method based on textons for efficient rotationally invariant texture classification. The proposed Texton Features (TF) evaluate the relationship between the values of neighboring pixels. The proposed classification algorithm evaluates histogram-based techniques on TF for a precise classification. The experimental results on various stone textures indicate the efficacy of the proposed method when compared to other methods.
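
The general texton recipe (encode local neighbourhoods as codes, histogram the codes, classify by nearest histogram) can be sketched as below. The 2×2 code and chi-square distance are illustrative stand-ins, not the paper's TF definition:

```python
# Histogram-based texture classification sketch, texton-flavoured.
# Each 2x2 neighbourhood is mapped to a 4-bit code; an image is described
# by the normalized code histogram, and a query takes the label of the
# nearest training histogram under the chi-square distance.

def texton_codes(img):
    """Map each 2x2 neighbourhood to a 4-bit code (pixel > block mean)."""
    codes = []
    for r in range(len(img) - 1):
        for c in range(len(img[0]) - 1):
            block = [img[r][c], img[r][c+1], img[r+1][c], img[r+1][c+1]]
            mean = sum(block) / 4
            codes.append(sum(1 << k for k, v in enumerate(block) if v > mean))
    return codes

def histogram(codes, bins=16):
    h = [0] * bins
    for code in codes:
        h[code] += 1
    total = sum(h) or 1
    return [v / total for v in h]

def chi2(h1, h2):
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

train = {
    "stripes": [[0, 9, 0, 9]] * 4,
    "flat":    [[5, 5, 5, 5]] * 4,
}
model = {label: histogram(texton_codes(img)) for label, img in train.items()}

query = [[1, 8, 1, 8]] * 4      # striped pattern at a different contrast
qh = histogram(texton_codes(query))
label = min(model, key=lambda name: chi2(model[name], qh))
print("predicted:", label)
```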

  11. Error Correction in Classroom

    Dr. Grace Zhang


    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  12. Tanks for liquids: calibration and errors assessment

    After a brief reference to some of the problems raised by tank calibration, two methods, theoretical and experimental, are presented for achieving it while taking measurement errors into account. The method is applied to the transfer of liquid from one tank to another. Further, a practical example is developed. (author)

  13. Decomposing model systematic error

    Keenlyside, Noel; Shen, Mao-Lin


    Seasonal forecasts made with a single model are generally overconfident. The standard approach to improve forecast reliability is to account for structural uncertainties through a multi-model ensemble (i.e., an ensemble of opportunity). Here we analyse a multi-model set of seasonal forecasts available through the ENSEMBLES and DEMETER EU projects. We partition forecast uncertainties into initial-value and structural uncertainties, as a function of lead time and region. Statistical analysis is used to investigate sources of initial condition uncertainty, and which regions and variables lead to the largest forecast error. Similar analysis is then performed to identify common elements of model error. Results of this analysis will be used to discuss possibilities to reduce forecast uncertainty and improve models. In particular, better understanding of error growth will be useful for the design of interactive multi-model ensembles.

  14. Random errors revisited

    Jacobsen, Finn


    It is well known that the random errors of sound intensity estimates can be much larger than the theoretical minimum value determined by the BT-product, in particular under reverberant conditions and when there are several sources present. More than ten years ago it was shown that one can predict the random errors of estimates of the sound intensity in, say, one-third octave bands from the power and cross power spectra of the signals from an intensity probe determined with a dual channel FFT analyser. This is not very practical, though. In this paper it is demonstrated that one can predict the random errors from the power and cross power spectra determined with the same spectral resolution as the sound intensity itself.

  15. Synthesis of approximation errors

    Bareiss, E.H.; Michel, P.


    A method is developed for the synthesis of the error in approximations in the large of regular and irregular functions. The synthesis uses a small class of dimensionless elementary error functions which are weighted by the coefficients of the expansion of the regular part of the function. The question is answered whether a computer can determine the analytical nature of a solution by numerical methods. It is shown that continuous least-squares approximations of irregular functions can be replaced by discrete least-squares approximation, and how to select the discrete points. The elementary error functions are used to show how the classical convergence criteria can be markedly improved. Eight numerical examples, 30 figures and 74 tables are included.

  16. Errors in Neonatology

    Antonio Boldrini


    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. Learning Interpretable SVMs for Biological Sequence Classification

    Sonnenburg Sören; Rätsch Gunnar; Schäfer Christin


    Background: Support Vector Machines (SVMs), using a variety of string kernels, have been successfully applied to biological sequence classification problems. While SVMs achieve high classification accuracy, they lack interpretability. In many applications, it does not suffice that an algorithm just detects a biological signal in the sequence; it should also provide means to interpret its solution in order to gain biological insight. Results: We propose novel and efficient algorith...
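
A common string kernel of the kind referred to is the spectrum kernel, which counts shared k-mers between two sequences. A minimal evaluation of the kernel itself (without the SVM, and with invented sequences):

```python
# Spectrum kernel for sequence classification: the inner product of the
# k-mer count vectors of two sequences. One of the standard string kernels
# used with SVMs on biological sequences; shown as a bare kernel call.

from collections import Counter

def kmer_counts(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Number of shared k-mer occurrences, weighted by multiplicity."""
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    return sum(c1[m] * c2[m] for m in c1 if m in c2)

a = "GATTACAGATTACA"
b = "GATTACA"
c = "CCCCGGGG"
print(spectrum_kernel(a, b), spectrum_kernel(a, c))  # related vs unrelated
```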

  18. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Huang Kai


    Background: Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results: We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors.
Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average
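
The majority-voting ensemble step described above can be sketched in a few lines; the class labels and votes are invented, and ties fall to whichever label Counter encounters first:

```python
# Majority-voting ensemble: each base classifier votes a label and the
# ensemble outputs the most common vote.

from collections import Counter

def majority_vote(predictions):
    """predictions: list of labels, one per base classifier."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical votes from eight base classifiers on one cell image:
votes = ["nucleus", "golgi", "nucleus", "nucleus",
         "mitochondria", "nucleus", "golgi", "nucleus"]
print(majority_vote(votes))
```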

  19. Realizing Low-Energy Classification Systems by Implementing Matrix Multiplication Directly Within an ADC.

    Wang, Zhuo; Zhang, Jintao; Verma, Naveen


    In wearable and implantable medical-sensor applications, low-energy classification systems are of importance for deriving high-quality inferences locally within the device. Given that sensor instrumentation is typically followed by A-D conversion, this paper presents a system implementation wherein the majority of the computations required for classification are implemented within the ADC. To achieve this, first an algorithmic formulation is presented that combines linear feature extraction and classification into a single matrix transformation. Second, a matrix-multiplying ADC (MMADC) is presented that enables multiplication between an analog input sample and a digital multiplier, with negligible additional energy beyond that required for A-D conversion. Two systems mapped to the MMADC are demonstrated: (1) an ECG-based cardiac arrhythmia detector; and (2) an image-pixel-based facial gender detector. The RMS error over all multiplications performed, normalized to the RMS of the ideal multiplication results, is 0.018. Further, compared to idealized versions of conventional systems, the energy savings obtained are estimated to be 13× and 29×, respectively, while achieving a similar level of performance. PMID:26849205
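
The key algorithmic step, folding a linear feature extractor and a linear classifier into a single matrix so the converter performs one multiply-accumulate pass, can be verified with toy numbers (the matrices below are invented, not from the paper):

```python
# Folding linear feature extraction and linear classification into one
# matrix transformation, the formulation behind the MMADC system: with
# features F*x and score w.(F*x), the combined vector m = F^T w gives the
# same score as a single inner product m.x over the raw samples.

def matvec(M, x):
    return [sum(r * v for r, v in zip(row, x)) for row in M]

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

F = [[1, 0, -1, 0],        # toy linear feature extractor: 2 features
     [0, 1, 0, -1]]        # from 4 input samples
w = [2.0, -1.0]            # toy linear classifier weights
x = [3.0, 1.0, 2.0, 5.0]   # input samples

# Two-step pipeline: extract features, then apply the classifier.
score_two_step = dot(w, matvec(F, x))

# Folded: m[j] = sum_i w[i] * F[i][j]; one pass over the raw samples.
m = [sum(w[i] * F[i][j] for i in range(len(w))) for j in range(len(x))]
score_folded = dot(m, x)

print(score_two_step, score_folded)
```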

  20. Achieving empowerment through information.

    Parmalee, J C; Scholomiti, T O; Whitman, P; Sees, M; Smith, D; Gardner, E; Bastian, C


    Despite the problems we encountered, which are not uncommon with the development and implementation of any data system, we are confident that our success in achieving our goals is due to the following: establishing a reliable information database connecting several related departments; interfacing with registration and billing systems to avoid duplication of data and chance for error; appointing a qualified Systems Manager devoted to the project; developing superusers to include intensive training in the operating system (UNIX), parameters of the information system, and the report writer. We achieved what we set out to accomplish: the development of a reliable database and reports on which to base a variety of hospital decisions; improved hospital utilization; reliable clinical data for reimbursement, quality management, and credentialing; enhanced communication and collaboration among departments; and an increased profile of the departments and staff. Data quality specialists, Utilization Management and Quality Management coordinators, and the Medical Staff Credentialing Supervisor and their managers are relied upon by physicians and administrators to provide timely information. The staff are recognized for their knowledge and expertise in their department-specific information. The most significant reward is the potential for innovation. Users are no longer restricted to narrow information corridors. UNIX programming encourages creativity without demanding a degree in computer science. The capability to reach and use diverse hospital database information is no longer a dream. PMID:10139109

  1. Introduction to precision machine design and error assessment

    Mekid, Samir


    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and Control: The book begins with a brief overview of precision engineering and applications before introdu

  2. Note on Bessaga-Klee classification

    Cúth, Marek; Kalenda, Ondřej F. K.


    We collect several variants of the proof of the third case of the Bessaga-Klee relative classification of closed convex bodies in topological vector spaces. We were motivated by the fact that we have not found anywhere in the literature a complete correct proof. In particular, we point out an error in the proof given in the book of C. Bessaga and A. Pełczyński (1975). We further provide a simplified version of T. Dobrowolski's proof of the smooth classification of smooth convex bodies in ...

  3. Classification systems for natural resource management

    Kleckner, Richard L.


    Resource managers employ various types of resource classification systems in their management activities, such as inventory, mapping, and data analysis. Classification is the ordering or arranging of objects into groups or sets on the basis of their relationships, and as such it provides resource managers with a structure for organizing their needed information. In addition to conforming to certain logical principles, resource classifications should be flexible, widely applicable to a variety of environmental conditions, and usable with minimal training. The process of classification may be approached from the bottom up (aggregation) or the top down (subdivision), or a combination of both, depending on the purpose of the classification. Most resource classification systems in use today focus on a single resource and are used for a single, limited purpose. However, resource managers now must employ the concept of multiple use in their management activities. What they need is an integrated, ecologically based approach to resource classification which would fulfill multiple-use mandates. In an effort to achieve resource-data compatibility and data sharing among Federal agencies, an interagency agreement has been signed by five Federal agencies to coordinate and cooperate in the area of resource classification and inventory.

  4. Facts about Refractive Errors

    ... the cornea, or aging of the lens can cause refractive errors. What is refraction? Refraction is the bending of ... for objects at any distance, near or far. Astigmatism is a condition in ... This can cause images to appear blurry and stretched out. Presbyopia ...

  5. Team errors: definition and taxonomy

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  6. Hand eczema classification

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M;


    the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system...... classification system for hand eczema is proposed. Conclusions It is suggested that this classification be used in clinical work and in clinical trials....

  7. Classification techniques based on AI application to defect classification in cast aluminum

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael


    This paper describes the Artificial Intelligence techniques applied to the interpretation of images from cast aluminum surfaces presenting different defects. The whole process includes on-line defect detection, feature extraction and defect classification; these topics are discussed in depth throughout the paper. The data preprocessing, segmentation and feature extraction stages are described, and the algorithms employed along with the descriptors used are shown. A syntactic filter has been developed to model the information and to generate the input vector to the classification system. Classification of defects is achieved by means of rule-based systems, fuzzy models and neural nets, with different classification subsystems working together on the resolution of a pattern recognition problem (hybrid systems). Firstly, syntactic methods are used to obtain the filter that reduces the dimension of the input vector to the classification process. Rule-based classification is achieved by associating a grammar with each defect type; the knowledge base is formed by the information derived from the syntactic filter along with the inferred rules. The fuzzy classification sub-system uses production rules with fuzzy antecedents whose consequents are membership degrees for each defect type. Different architectures of neural nets have been implemented, with different results, as shown throughout the paper. At the higher classification level, the information given by the heterogeneous systems as well as the history of the process is supplied to an Expert System in order to drive the casting process.
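
The fuzzy sub-system idea, production rules with fuzzy antecedents on defect descriptors whose consequents grade membership in each defect type, can be sketched as below. The descriptors, membership functions and rules are invented for illustration:

```python
# Toy fuzzy rule-based defect classifier: fuzzy antecedents on two
# invented descriptors (blob area, elongation), min for AND, and the
# defect type with the highest membership degree wins.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

rules = {
    # porosity: small, roundish blobs
    "porosity": lambda area, el: min(tri(area, 0, 20, 60), tri(el, 0, 1, 3)),
    # crack: strongly elongated blobs, any area
    "crack": lambda area, el: tri(el, 2, 6, 12),
}

def classify_defect(area, elongation):
    degrees = {name: rule(area, elongation) for name, rule in rules.items()}
    return max(degrees, key=degrees.get), degrees

label, degrees = classify_defect(area=25, elongation=7)
print(label, degrees)
```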

  8. Classification of articulators.

    Rihani, A


    A simple classification in familiar terms with definite, clear characteristics can be adopted. This classification system is based on the number of records used and the adjustments necessary for the articulator to accept these records. The classification divides the articulators into nonadjustable, semiadjustable, and fully adjustable articulators (Table I). PMID:6928204

  9. Automated classification of patients with chronic lymphocytic leukemia and immunocytoma from flow cytometric three-color immunophenotypes.

    Valet, G K; Höffkes, H G


    The goal of this study was the discrimination between chronic lymphocytic leukemia (B-CLL), clinically more aggressive lymphoplasmocytoid immunocytoma (LP-IC) and other low-grade non-Hodgkin's lymphomas (NHL) of the B-cell type by automated analysis of flow cytometric immunophenotypes CD45/14/20, CD4/8/3, kappa/CD19/5, lambda/CD19/5 and CD10/23/19 from peripheral blood and bone marrow aspirate leukocytes using the multiparameter classification program CLASSIF1. The immunophenotype list mode files were exhaustively evaluated by combined lymphocyte, monocyte, and granulocyte (LMG) analysis. The results were introduced into databases and automatically classified in a standardized way. The resulting triple matrix classifiers are laboratory and instrument independent, error tolerant, and robust in the classification of unknown test samples. Practically 100% correct individual patient classification was achievable, and most manually unclassifiable patients were unambiguously classified. It is of interest that the single lambda/CD19/5 antibody triplet provided practically the same information as the full set of the five antibody triplets. This demonstrates that standardized classification can be used to optimize immunophenotype panels. On-line classification of test samples is accessible on the Internet. Immunophenotype panels are usually devised for the detection of the frequency of abnormal cell populations. As shown by computer classification, however, most of the highly discriminant information is not contained in percentage frequency values of cell populations, but rather in total antibody binding, antibody binding ratios, and relative antibody surface density parameters of various lymphocyte, monocyte, and granulocyte cell populations. PMID:9440819

  10. Improving Accuracy of Image Classification Using GIS

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal which reaches the sensor on board the satellite is the complex aggregation of signals (in an agricultural field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content, etc.), plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to find, say, the characteristics of the vegetation. If the sensor on board the satellite makes measurements in n bands (vector n, of dimension n×1) and the number of classes in an image is c (vector f, of dimension c×1), then under linear mixture modeling the pixel classification problem can be written as n = m·f + ε, where m is the transformation matrix of dimension n×c and ε represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this places a limit on the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most often applied in such situations, but it suffers from the handicaps of an assumed Gaussian distribution and assumed randomness of pixels (in fact there is high auto-correlation among the pixels of a specific class, and still higher auto-correlation among the pixels of sub-classes, where homogeneity is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for the different classes could be assigned to the ML classifier in more realistic terms, and (ii) the purity of training sets for the different thematic classes could be better ascertained. To what extent this could improve the accuracy of classification in the ML classifier
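
The linear mixture model n = m·f + ε described in the abstract can be inverted by ordinary least squares for a toy noise-free example; the class signatures and fractions below are invented:

```python
# Inverting the linear mixture model n = M f + e by ordinary least
# squares (normal equations M^T M f = M^T n), solved in closed form for
# a 2-class, 3-band toy example with invented signatures.

M = [[0.1, 0.8],   # band 1 signatures of class 1 and class 2
     [0.2, 0.6],   # band 2
     [0.9, 0.1]]   # band 3
f_true = [0.3, 0.7]

# Observed pixel: a mixture of the two class signatures (noise-free).
n = [sum(M[i][j] * f_true[j] for j in range(2)) for i in range(3)]

# Normal equations for the 2-parameter fit, solved by Cramer's rule.
a = sum(M[i][0] * M[i][0] for i in range(3))
b = sum(M[i][0] * M[i][1] for i in range(3))
c = sum(M[i][1] * M[i][1] for i in range(3))
p = sum(M[i][0] * n[i] for i in range(3))
q = sum(M[i][1] * n[i] for i in range(3))
det = a * c - b * b
f_hat = [(c * p - b * q) / det, (a * q - b * p) / det]
print("estimated fractions:", [round(v, 6) for v in f_hat])
```

With noise-free data the estimate recovers the true fractions; with noise or correlated signatures the problem becomes the ill-posed inversion the abstract describes.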

  11. Normalization Benefits Microarray-Based Classification

    Chen Yidong


    When using cDNA microarrays, normalization to correct labeling bias is a common preliminary step before further data analysis is applied, its objective being to reduce the variation between arrays. To date, assessment of the effectiveness of normalization has mainly been confined to the ability to detect differentially expressed genes. Since a major use of microarrays is the expression-based phenotype classification, it is important to evaluate microarray normalization procedures relative to classification. Using a model-based approach, we model the systemic-error process to generate synthetic gene-expression values with known ground truth. These synthetic expression values are subjected to typical normalization methods and passed through a set of classification rules, the objective being to carry out a systematic study of the effect of normalization on classification. Three normalization methods are considered: offset, linear regression, and Lowess regression. Seven classification rules are considered: 3-nearest neighbor, linear support vector machine, linear discriminant analysis, regular histogram, Gaussian kernel, perceptron, and multiple perceptron with majority voting. The results of the first three are presented in the paper, with the full results being given on a complementary website. The conclusion from the different experiment models considered in the study is that normalization can have a significant benefit for classification under difficult experimental conditions, with linear and Lowess regression slightly outperforming the offset method.
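
The offset and linear-regression normalizations named in the abstract reduce to subtracting a constant or a fitted linear trend from the per-array log-ratios; Lowess replaces the global line with a locally weighted fit. A sketch on synthetic, perfectly linear data (so the regression residuals vanish):

```python
# Two simple microarray normalization schemes on synthetic log-ratios:
# offset (subtract the mean) and linear regression (subtract the fitted
# trend of log-ratio on intensity). Data are invented and exactly linear.

intensities = [2.0, 4.0, 6.0, 8.0, 10.0]
log_ratios = [0.9, 1.1, 1.3, 1.5, 1.7]  # intensity-dependent bias

# Offset normalization: remove the mean log-ratio.
mean_lr = sum(log_ratios) / len(log_ratios)
offset_norm = [y - mean_lr for y in log_ratios]

# Linear-regression normalization: remove the fitted trend a + b*x.
mx = sum(intensities) / len(intensities)
b = sum((x - mx) * (y - mean_lr) for x, y in zip(intensities, log_ratios)) \
    / sum((x - mx) ** 2 for x in intensities)
a = mean_lr - b * mx
regress_norm = [y - (a + b * x) for x, y in zip(intensities, log_ratios)]

print([round(v, 3) for v in offset_norm])
print([round(v, 3) for v in regress_norm])
```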

  12. Normalization Benefits Microarray-Based Classification

    Edward R. Dougherty


    When using cDNA microarrays, normalization to correct labeling bias is a common preliminary step before further data analysis is applied, its objective being to reduce the variation between arrays. To date, assessment of the effectiveness of normalization has mainly been confined to the ability to detect differentially expressed genes. Since a major use of microarrays is the expression-based phenotype classification, it is important to evaluate microarray normalization procedures relative to classification. Using a model-based approach, we model the systemic-error process to generate synthetic gene-expression values with known ground truth. These synthetic expression values are subjected to typical normalization methods and passed through a set of classification rules, the objective being to carry out a systematic study of the effect of normalization on classification. Three normalization methods are considered: offset, linear regression, and Lowess regression. Seven classification rules are considered: 3-nearest neighbor, linear support vector machine, linear discriminant analysis, regular histogram, Gaussian kernel, perceptron, and multiple perceptron with majority voting. The results of the first three are presented in the paper, with the full results being given on a complementary website. The conclusion from the different experiment models considered in the study is that normalization can have a significant benefit for classification under difficult experimental conditions, with linear and Lowess regression slightly outperforming the offset method.

  13. Classification Accuracy and Consistency under Item Response Theory Models Using the Package classify

    Chris Wheadon


    The R package classify presents a number of useful functions which can be used to estimate the classification accuracy and consistency of assessments. Classification accuracy refers to the probability that an examinee's achieved grade classification on an assessment reflects their true grade. Classification consistency refers to the probability that an examinee will be classified into the same grade classification under repeated administrations of an assessment. Understanding the classificatio...
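
The two quantities can be illustrated with a small simulation (a sketch in Python rather than R, using a simple Rasch model with a hypothetical cut score and item parameters): accuracy compares an observed grade against the grade implied by the examinee's true expected score, while consistency compares two independently simulated administrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_examinees = 40, 5000
theta = rng.normal(0.0, 1.0, n_examinees)   # examinee abilities
b = rng.normal(0.0, 1.0, n_items)           # item difficulties (Rasch model)
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def administer():
    # One simulated administration: responses drawn from the Rasch probabilities
    return (rng.random((n_examinees, n_items)) < prob).sum(axis=1)

cut = 20                                    # hypothetical pass/fail cut score
true_grade = prob.sum(axis=1) >= cut        # grade implied by true expected score
grade1, grade2 = administer() >= cut, administer() >= cut

accuracy = np.mean(grade1 == true_grade)    # observed grade matches true grade
consistency = np.mean(grade1 == grade2)     # same grade on two administrations
print(f"classification accuracy ~ {accuracy:.3f}, consistency ~ {consistency:.3f}")
```

Both numbers fall below 1 only because of examinees near the cut score, which is exactly what the package's estimators quantify.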

  14. Control by model error estimation

    Likins, P. W.; Skelton, R. E.


    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  15. Error analysis and data reduction for interferometric surface measurements

    Zhou, Ping

    High-precision optical systems are generally tested using interferometry, since it often is the only way to achieve the desired measurement precision and accuracy. Interferometers can generally measure a surface to an accuracy of one hundredth of a wave. In order to achieve an accuracy to the next order of magnitude, one thousandth of a wave, each error source in the measurement must be characterized and calibrated. Errors in interferometric measurements are classified into random errors and systematic errors. An approach to estimate random errors in the measurement is provided, based on the variation in the data. Systematic errors, such as retrace error, imaging distortion, and error due to diffraction effects, are also studied in this dissertation. Methods to estimate the first order geometric error and errors due to diffraction effects are presented. Interferometer phase modulation transfer function (MTF) is another intrinsic error. The phase MTF of an infrared interferometer is measured with a phase Siemens star, and a Wiener filter is designed to recover the middle spatial frequency information. Map registration is required when there are two maps tested in different systems and one of these two maps needs to be subtracted from the other. Incorrect mapping causes wavefront errors. A smoothing filter method is presented which can reduce the sensitivity to registration error and improve the overall measurement accuracy. Interferometric optical testing with computer-generated holograms (CGH) is widely used for measuring aspheric surfaces. The accuracy of the drawn pattern on a hologram decides the accuracy of the measurement. Uncertainties in the CGH manufacturing process introduce errors in holograms and then the generated wavefront. An optimal design of the CGH is provided which can reduce the sensitivity to fabrication errors and give good diffraction efficiency for both chrome-on-glass and phase etched CGHs.
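
The random-error estimate from variation in the data can be sketched as follows (a minimal Python illustration on synthetic repeated measurements; the surface map and noise level are hypothetical): the pixelwise standard deviation across repeated maps estimates the single-map random error, which averaging then reduces by the square root of the number of maps.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical repeated surface maps (units: waves): fixed surface + random noise
n_trials, shape = 20, (64, 64)
surface = np.outer(np.hanning(64), np.hanning(64))
noise_rms = 0.002                                  # assumed per-map random error
maps = surface + rng.normal(0.0, noise_rms, (n_trials, *shape))

pixel_std = maps.std(axis=0, ddof=1)               # variation across repeats
random_error = pixel_std.mean()                    # single-map random error estimate
avg_error = random_error / np.sqrt(n_trials)       # residual error in the mean map
print(f"single-map random error ~ {random_error:.4f} waves; "
      f"after averaging {n_trials} maps ~ {avg_error:.5f} waves")
```

Systematic errors, by contrast, survive averaging, which is why the dissertation treats them with separate calibrations.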

  16. Stellar classification from single-band imaging using machine learning

    Kuntzer, T.; Tewes, M.; Courbin, F.


    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is indeed possible.
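
A minimal sketch of the PCA-plus-network pipeline (synthetic one-dimensional stand-ins for diffraction profiles, and a single-layer softmax network in place of the paper's ANNs; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_per, dim = 5, 200, 100
# Hypothetical "diffraction profiles": class-dependent width + noise
t = np.linspace(0.0, 1.0, dim)
templates = np.array([np.exp(-(t - 0.5) ** 2 / (0.01 + 0.01 * k))
                      for k in range(n_classes)])
X = np.vstack([templates[k] + 0.05 * rng.normal(size=(n_per, dim))
               for k in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# PCA for dimensionality reduction (keep 10 components)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:10].T

# Single-layer softmax network trained by gradient descent
W, b = np.zeros((10, n_classes)), np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(1000):
    logits = Z @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(y)
    W -= Z.T @ grad
    b -= grad.sum(axis=0)

acc = np.mean((Z @ W + b).argmax(axis=1) == y)
print(f"training accuracy over {n_classes} synthetic classes: {acc:.3f}")
```

The paper's setting is far harder (2-D PSFs, unknown spatial PSF variation), but the division of labour is the same: PCA compresses the pattern, the network maps components to a spectral class.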

  17. Photon level chemical classification using digital compressive detection

    Highlights: ► A new digital compressive detection strategy is developed. ► Chemical classification demonstrated using as few as ∼10 photons. ► Binary filters are optimal when taking few measurements. - Abstract: A key bottleneck to high-speed chemical analysis, including hyperspectral imaging and monitoring of dynamic chemical processes, is the time required to collect and analyze hyperspectral data. Here we describe, both theoretically and experimentally, a means of greatly speeding up the collection of such data using a new digital compressive detection strategy. Our results demonstrate that detecting as few as ∼10 Raman scattered photons (in as little time as ∼30 μs) can be sufficient to positively distinguish chemical species. This is achieved by measuring the Raman scattered light intensity transmitted through programmable binary optical filters designed to minimize the error in the chemical classification (or concentration) variables of interest. The theoretical results are implemented and validated using a digital compressive detection instrument that incorporates a 785 nm diode excitation laser, digital micromirror spatial light modulator, and photon counting photodiode detector. Samples consisting of pairs of liquids with different degrees of spectral overlap (including benzene/acetone and n-heptane/n-octane) are used to illustrate how the accuracy of the present digital compressive detection method depends on the correlation coefficients of the corresponding spectra. Comparisons of measured and predicted chemical classification score plots, as well as linear and non-linear discriminant analyses, demonstrate that this digital compressive detection strategy is Poisson photon noise limited and outperforms total least squares-based compressive detection with analog filters.
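
The binary-filter idea can be sketched numerically (hypothetical spectra and filter; photons are drawn per-channel rather than from a real spectrometer): each detected photon either passes the binary filter or not, and the species is chosen by the binomial likelihood of the pass count.

```python
import numpy as np

rng = np.random.default_rng(3)
ch = np.arange(200)

def spectrum(center):
    # Hypothetical Raman spectrum: one band on a flat background, normalized
    s = np.exp(-((ch - center) ** 2) / 200.0) + 0.1
    return s / s.sum()

sA, sB = spectrum(80), spectrum(120)
filt = sA > sB                           # binary filter passing channels favoring A
pA, pB = sA[filt].sum(), sB[filt].sum()  # per-photon pass probability per species

def classify(true_s, n_photons=10):
    # Count photons transmitted through the filter, then compare likelihoods
    k = filt[rng.choice(ch.size, size=n_photons, p=true_s)].sum()
    llr = k * np.log(pA / pB) + (n_photons - k) * np.log((1 - pA) / (1 - pB))
    return "A" if llr > 0 else "B"

trials = 1000
acc = (sum(classify(sA) == "A" for _ in range(trials))
       + sum(classify(sB) == "B" for _ in range(trials))) / (2 * trials)
print(f"classification accuracy with ~10 photons: {acc:.3f}")
```

With well-separated bands, roughly ten photons already discriminate reliably, mirroring the Poisson-limited behavior reported in the abstract; heavily overlapping spectra would push pA toward pB and degrade the accuracy.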

  18. Forward error correction in optical ethernet communications

    Oliveras Boada, Jordi


    A way of increasing the amount of information sent through an optical fibre is ud-WDM (ultra-dense Wavelength Division Multiplexing). The problem is that the sensitivity of the receiver requires a certain SNR (Signal-to-Noise Ratio) that is only achieved over short distances, so to extend them a coding scheme called FEC (Forward Error Correction) can be used. This should reduce the BER (Bit Error Rate) at the receiver, letting the signal be transmitted over longer distances. Another pro...
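
As an illustration of how FEC trades redundancy for BER, here is a sketch using the classic Hamming(7,4) code over a binary symmetric channel (not the specific code studied in this thesis):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hamming(7,4): G = [I4 | P], H = [P^T | I3]; corrects one bit error per codeword
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def decode(r):
    s = (H @ r) % 2
    if s.any():                                # nonzero syndrome: flip that bit
        idx = np.where((H.T == s).all(axis=1))[0][0]
        r = r.copy()
        r[idx] ^= 1
    return r[:4]                               # systematic code: data bits first

p = 0.02                                       # channel bit-flip probability
msgs = rng.integers(0, 2, (5000, 4))
received = ((msgs @ G) % 2) ^ (rng.random((5000, 7)) < p)
decoded = np.array([decode(r) for r in received])

raw = rng.integers(0, 2, (5000, 4))            # same channel, no coding
raw_rx = raw ^ (rng.random((5000, 4)) < p)
ber_uncoded = np.mean(raw != raw_rx)
ber_coded = np.mean(decoded != msgs)
print(f"uncoded BER ~ {ber_uncoded:.4f}, Hamming(7,4) BER ~ {ber_coded:.4f}")
```

The coded BER drops roughly to the probability of two or more flips per codeword, which is the mechanism by which FEC extends the usable transmission distance at a fixed SNR.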

  19. Manson's triple error.

    F, Delaporte


    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  20. Minimum Error Tree Decomposition

    Liu, L; Ma, Y.; Wilkins, D.; Bian, Z.; Ying, X


    This paper describes a generalization of previous methods for constructing tree-structured belief networks with hidden variables. The major new feature of the described method is the ability to produce a tree decomposition even when there are errors in the correlation data among the input variables. This is an important extension of existing methods, since the correlation coefficients usually cannot be measured with precision. The technique involves using a greedy search algorithm that locall...

  1. Semiparametric Bernstein–von Mises for the error standard deviation

    Jonge, de, B.; Zanten, van, M.


    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  2. Semiparametric Bernstein-von Mises for the error standard deviation

    Jonge, de, B.; Zanten, van, M.


    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  3. Progression in nuclear classification

    This book summarizes the author's achievements of the last 30 years in classifying nuclei by a new method, through which a new fundamental law of nuclear shell structure in the material world is proposed. It is explained with the hypothesis that a nucleus is made up of two kinds of nucleon clusters, deuterons and tritons. The concrete content is as follows. A new method is advanced which analyzes data on nuclei of natural abundance using relationships between the numbers of protons and neutrons; the relationships for each nucleus increase to four sets: S+H=Z, H+Z=N, Z+N=A, and S-H=K. The similarity between proton and neutron is expanded to a similarity among p, n, deuteron, triton, and He-5 clusters. According to the distribution law of nuclei of the same kind, it is obtained that the upper limits of the stable region should both be '44s'. The new fundamental law of the nuclear system is 1, 2, 4, 8, 16, 8, 4, 2, 1. To explain the new law, the hypothesis that a nucleus is made up of deuterons and tritons is developed, and a nuclear field of whole numbers is built up. It is further related to the unity of matter in motion: the most fundamental form of the atomic nuclear system is similar to the most fundamental form of human chromosome numbers. These achievements challenge the foundations of traditional nuclear science, including the notion that magic numbers are its basis, supply new tasks for developing nuclear theory, and open up a new field of foundational research. The book offers new knowledge for researchers, teachers, and students in universities and polytechnic schools, and for scientific workers engaged in research and technical development; it can be stocked by society and university libraries and laboratories, and is also readable for workers in science and technology and amateurs of natural science.

  4. Error Analysis and Its Implication



    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  5. Characterization of the error budget of Alba-NOM

    The Alba-NOM instrument is a high accuracy scanning machine capable of measuring the slope profile of long mirrors with resolution below the nanometer scale and for a wide range of curvatures. We present the characterization of different sources of errors that limit the uncertainty of the instrument. We have investigated three main contributions to the uncertainty of the measurements: errors introduced by the scanning system and the pentaprism, errors due to environmental conditions, and optical errors of the autocollimator. These sources of error have been investigated by measuring the corresponding motion errors with a high accuracy differential interferometer and by simulating their impact on the measurements by means of ray-tracing. Optical error contributions have been extracted from the analysis of redundant measurements of test surfaces. The methods and results are presented, as well as an example of application that has benefited from the achieved accuracy

  6. On the Foundations of Adversarial Single-Class Classification

    El-Yaniv, Ran


    Motivated by authentication, intrusion and spam detection applications we consider single-class classification (SCC) as a two-person game between the learner and an adversary. In this game the learner has a sample from a target distribution and the goal is to construct a classifier capable of distinguishing observations from the target distribution from observations emitted from an unknown other distribution. The ideal SCC classifier must guarantee a given tolerance for the false-positive error (false alarm rate) while minimizing the false negative error (intruder pass rate). Viewing SCC as a two-person zero-sum game we identify both deterministic and randomized optimal classification strategies for different game variants. We demonstrate that randomized classification can provide a significant advantage. In the deterministic setting we show how to reduce SCC to two-class classification where in the two-class problem the other class is a synthetically generated distribution. We provide an efficient and practi...
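
The reduction of SCC to two-class classification with a synthetic "other" class can be sketched as follows (a toy 2-D target distribution, quadratic-feature logistic regression, and a threshold calibrated on held-out target data to meet the false-alarm tolerance; all details are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical target distribution (e.g., legitimate users): a 2-D Gaussian
target = rng.normal([2.0, -1.0], [1.0, 0.5], size=(2000, 2))
train, holdout = target[:1000], target[1000:]

# Synthetic "other" class: uniform over a box around the target sample
lo, hi = train.min(axis=0) - 1, train.max(axis=0) + 1
other = rng.uniform(lo, hi, size=(1000, 2))

def feat(P):
    # Quadratic features let a linear classifier carve out an ellipse
    return np.column_stack([P, P ** 2, P[:, 0] * P[:, 1]])

Xf = np.vstack([feat(train), feat(other)])
lab = np.r_[np.ones(len(train)), np.zeros(len(other))]
mu, sd = Xf.mean(axis=0), Xf.std(axis=0)
Xs = (Xf - mu) / sd

w, b0 = np.zeros(Xs.shape[1]), 0.0        # logistic regression, gradient descent
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b0)))
    g = (p - lab) / len(lab)
    w -= 0.5 * (Xs.T @ g)
    b0 -= 0.5 * g.sum()

def score(P):
    return ((feat(P) - mu) / sd) @ w + b0

alpha = 0.05                              # false-alarm tolerance
thr = np.quantile(score(holdout), alpha)  # calibrate on held-out target data

fresh = rng.normal([2.0, -1.0], [1.0, 0.5], size=(2000, 2))
intruders = rng.uniform(lo - 2, hi + 2, size=(2000, 2))
false_alarm = np.mean(score(fresh) < thr)     # target wrongly rejected
pass_rate = np.mean(score(intruders) >= thr)  # intruders wrongly accepted
print(f"false alarm ~ {false_alarm:.3f} (tolerance {alpha}), "
      f"intruder pass rate ~ {pass_rate:.3f}")
```

Calibrating the threshold on target data controls the false-alarm rate by construction, independently of how well the synthetic "other" class matches the true intruder distribution; the quality of that synthetic class only affects the pass rate.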

  7. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu; Wang Xiaosheng


    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...
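
A minimal sketch of the feature-selection step (synthetic expression data; a t-statistic ranking and a one-gene threshold rule stand in for the paper's method):

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical expression matrix: 1000 genes x 40 samples, two classes,
# with only the first 5 genes truly informative
n_genes, n_per = 1000, 20
X = rng.normal(0.0, 1.0, (n_genes, 2 * n_per))
X[:5, n_per:] += 2.0                      # informative genes shifted in class 2
labels = np.r_[np.zeros(n_per), np.ones(n_per)]

# Feature selection: two-sample t-statistic per gene, keep the top 2
m1, m2 = X[:, :n_per].mean(axis=1), X[:, n_per:].mean(axis=1)
se = np.sqrt(X[:, :n_per].var(axis=1, ddof=1) / n_per
             + X[:, n_per:].var(axis=1, ddof=1) / n_per)
tstat = np.abs(m1 - m2) / se
top = np.argsort(tstat)[::-1][:2]

# Simple rule on one selected gene: midpoint threshold between class means
g = X[top[0]]
thr = (g[:n_per].mean() + g[n_per:].mean()) / 2
acc = np.mean((g > thr).astype(float) == labels)
print(f"selected genes: {sorted(top.tolist())}, single-gene rule accuracy: {acc:.2f}")
```

The point of the abstract is exactly this: a handful of well-chosen genes with a transparent rule can match classifiers that consume thousands of noisy features.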

  8. Sparse Partial Least Squares Classification for High Dimensional Data*

    Chung, Dongjun; Keles, Sunduz


    Partial least squares (PLS) is a well known dimension reduction method which has been recently adapted for high dimensional classification problems in genome biology. We develop sparse versions of the recently proposed two PLS-based classification methods using sparse partial least squares (SPLS). These sparse versions aim to achieve variable selection and dimension reduction simultaneously. We consider both binary and multicategory classification. We provide analytical and simulation-based i...

  9. Classification of ASKAP Vast Radio Light Curves

    Rebbapragada, Umaa; Lo, Kitty; Wagstaff, Kiri L.; Reed, Colorado; Murphy, Tara; Thompson, David R.


    The VAST survey is a wide-field survey that observes with unprecedented instrument sensitivity (0.5 mJy or lower) and repeat cadence (a goal of 5 seconds) that will enable novel scientific discoveries related to known and unknown classes of radio transients and variables. Given the unprecedented observing characteristics of VAST, it is important to estimate source classification performance, and determine best practices prior to the launch of ASKAP's BETA in 2012. The goal of this study is to identify light curve characterization and classification algorithms that are best suited for archival VAST light curve classification. We perform our experiments on light curve simulations of eight source types and achieve best case performance of approximately 90% accuracy. We note that classification performance is most influenced by light curve characterization rather than classifier algorithm.

  10. Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    Gnanasivam, P


    A novel method of gender classification from fingerprints is proposed based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The classification is achieved by extracting the energy computed from all the sub-bands of the DWT combined with the spatial features of non-zero singular values obtained from the SVD of fingerprint images. K-nearest neighbor (KNN) is used as the classifier. The method is evaluated on an internal database of 3570 fingerprints, of which 1980 were male fingerprints and 1590 were female fingerprints. Finger-wise gender classification is achieved: 94.32% for the left-hand little fingers of female persons and 95.46% for the left-hand index fingers of male persons. Gender classification over any finger is attained as 91.67% for male persons and 84.69% for female persons, respectively. An overall classification rate of 88.28% has been achieved.
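
The DWT-energy/SVD/KNN pipeline can be sketched on synthetic data (a one-level Haar DWT and oriented sinusoids as stand-ins for fingerprint ridges; the real method uses actual fingerprint images and deeper wavelet decompositions):

```python
import numpy as np

rng = np.random.default_rng(6)

def haar2d(img):
    # One-level 2-D Haar DWT: returns LL, LH, HL, HH sub-bands
    a = (img[0::2] + img[1::2]) / 2
    d = (img[0::2] - img[1::2]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def features(img, k=8):
    energy = [np.mean(b * b) for b in haar2d(img)]   # sub-band energies
    sv = np.linalg.svd(img, compute_uv=False)[:k]    # leading singular values
    return np.r_[energy, sv]

def make_print(freq):
    # Hypothetical fingerprint stand-in: oriented ridge sinusoid + noise
    y, x = np.mgrid[0:64, 0:64]
    return (np.sin(2 * np.pi * freq * (x + 0.3 * y) / 64)
            + 0.3 * rng.normal(size=(64, 64)))

X = np.array([features(make_print(f)) for f in [5] * 100 + [9] * 100])
labels = np.r_[np.zeros(100), np.ones(100)]          # two ridge-frequency classes
tr, te = np.r_[0:80, 100:180], np.r_[80:100, 180:200]

def knn(x, K=3):
    d = np.linalg.norm(X[tr] - x, axis=1)
    return np.round(labels[tr][np.argsort(d)[:K]].mean())

acc = np.mean([knn(x) == t for x, t in zip(X[te], labels[te])])
print(f"KNN accuracy on synthetic prints: {acc:.2f}")
```

Ridge frequency shifts energy between the low- and high-frequency sub-bands and reshapes the singular-value profile, which is the same intuition behind using these features for gender differences in real prints.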

  11. Medical error and systems of signaling: conceptual and linguistic definition.

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco


    In recent years the issue of patient safety has been the subject of detailed investigations, particularly as a result of the increasing attention from the patients and the public on the problem of medical error. The purpose of this work is firstly to define the classification of medical errors, which are distinguished between two perspectives: those that are personal, and those that are caused by the system. Furthermore we will briefly review some of the main methods used by healthcare organizations to identify and analyze errors. During this discussion it has been determined that, in order to constitute a practical, coordinated and shared action to counteract the error, it is necessary to promote an analysis that considers all elements (human, technological and organizational) that contribute to the occurrence of a critical event. Therefore, it is essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to error. In conclusion we have thus underlined that in health it is essential to affirm a system discussion that considers the error as a learning source, and as a result of the interaction between the individual and the organization. In this way, one should encourage a non-guilt bearing discussion on evident errors and on those which are not immediately identifiable, in order to create the conditions that recognize and corrects the error even before it produces negative consequences. PMID:25034521

  12. Security classification of information

    Quist, A.S.


    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  13. Recursive heuristic classification

    Wilkins, David C.


    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  14. Aprender de los errores

    Pacheco, José Miguel


    It is interesting to note that errors, always regrettable, often teach us more than the repetition of successes. We discuss an example, with the idea of offering a reflection to mathematics teachers in general. The analysis of students' exercises revealed the lack of a critical view of teaching at certain key points, as will be set out in this article.

  15. Classification and data acquisition with incomplete data

    Williams, David P.

    In remote-sensing applications, incomplete data can result when only a subset of sensors (e.g., radar, infrared, acoustic) are deployed at certain regions. The limitations of single sensor systems have spurred interest in employing multiple sensor modalities simultaneously. For example, in land mine detection tasks, different sensor modalities are better-suited to capture different aspects of the underlying physics of the mines. Synthetic aperture radar sensors may be better at detecting surface mines, while infrared sensors may be better at detecting buried mines. By employing multiple sensor modalities to address the detection task, the strengths of the disparate sensors can be exploited in a synergistic manner to improve performance beyond that which would be achievable with either single sensor alone. When multi-sensor approaches are employed, however, incomplete data can be manifested. If each sensor is located on a separate platform ( e.g., aircraft), each sensor may interrogate---and hence collect data over---only partially overlapping areas of land. As a result, some data points may be characterized by data (i.e., features) from only a subset of the possible sensors employed in the task. Equivalently, this scenario implies that some data points will be missing features. Increasing focus in the future on using---and fusing data from---multiple sensors will make such incomplete-data problems commonplace. In many applications involving incomplete data, it is possible to acquire the missing data at a cost. In multi-sensor remote-sensing applications, data is acquired by deploying sensors to data points. Acquiring data is usually an expensive, time-consuming task, a fact that necessitates an intelligent data acquisition process. Incomplete data is not limited to remote-sensing applications, but rather, can arise in virtually any data set. In this dissertation, we address the general problem of classification when faced with incomplete data. 
We also address the

  16. Graded Achievement, Tested Achievement, and Validity

    Brookhart, Susan M.


    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  17. Cost-sensitive classification for rare events: an application to the credit rating model validation for SMEs

    Raffaella Calabrese


    The Receiver Operating Characteristic (ROC) curve is used to assess the discriminatory power of credit rating models. To identify the optimal threshold on the ROC curve, iso-performance lines are used. The ROC curve and the iso-performance line assume equal classification error costs and that the two classification groups are relatively balanced. These assumptions are unrealistic in the application to credit risk. In order to remove these hypotheses, the curve of Classification Error Costs is...
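
The effect of unequal costs and class imbalance on the optimal threshold can be sketched as follows (synthetic credit scores and illustrative costs; the iso-performance slope follows the standard formula c_FP * P(neg) / (c_FN * P(pos))):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical credit scores: defaults are rare and tend to score higher
n_good, n_bad = 9500, 500
scores = np.r_[rng.normal(0.0, 1.0, n_good), rng.normal(1.5, 1.0, n_bad)]
ylab = np.r_[np.zeros(n_good), np.ones(n_bad)]
c_fn, c_fp = 10.0, 1.0              # missing a default costs 10x a false alarm

order = np.argsort(scores)
ys = ylab[order]
# Cut before sorted index i: items [i:] are flagged as defaults
fn = np.r_[0.0, np.cumsum(ys)[:-1]]               # defaults below the cut
fp = n_good - np.r_[0.0, np.cumsum(1 - ys)[:-1]]  # non-defaults above the cut
cost = c_fn * fn + c_fp * fp
best_thr = scores[order][np.argmin(cost)]

# ROC view: the optimum is where an iso-performance line of this slope
# touches the ROC curve; unequal costs and imbalance tilt the line away from 1
slope = (c_fp * n_good) / (c_fn * n_bad)
print(f"cost-minimizing threshold ~ {best_thr:.2f}, "
      f"iso-performance slope = {slope:.2f}")
```

With equal costs and balanced classes the slope would be 1; here the 10x miss cost partially offsets the 19:1 imbalance, pulling the threshold well below the point that would minimize the raw error count.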

  18. Emotion Classification from Noisy Speech - A Deep Learning Approach

    Rana, Rajib


    This paper investigates the performance of Deep Learning for speech emotion classification when the speech is compounded with noise. It reports on the classification accuracy and concludes with the future directions for achieving greater robustness for emotion recognition from noisy speech.

  19. Classification of titanium dioxide

    In this work the X-ray diffraction (XRD), scanning electron microscopy (SEM) and X-ray dispersive energy spectroscopy techniques are used with the purpose of achieving a complete identification of phases and mixtures of phases of a crystalline material such as titanium dioxide. The problem to be solved consists of being able to distinguish a sample of titanium dioxide from a titanium dioxide pigment. A standard sample of titanium dioxide with a NIST certificate is used, which indicates a purity of 99.74% for the TiO2. The following procedure is recommended: a) to analyze both the titanium dioxide pigment sample and the titanium dioxide standard by X-ray diffraction, where no differences are expected to be found; b) to make a chemical analysis by X-ray dispersive energy spectroscopy in a microscope, taking advantage of the high vacuum since it is oxygen which is analyzed; if aluminium oxide appears in a proportion greater than 1%, it is established that the sample is a titanium dioxide pigment, but if it is lower, then it is only titanium dioxide. This type of analysis is an application of nuclear techniques useful for the tariff classification of merchandise considered difficult to recognize. (Author)

  20. Carotid and Jugular Classification in ARTSENS.

    Sahani, Ashish Kumar; Shah, Malay Ilesh; Joseph, Jayaraj; Sivaprakasam, Mohanasankar


    Over the past few years our group has been working on the development of a low-cost device, ARTSENS, for measurement of local arterial stiffness (AS) of the common carotid artery (CCA). It uses a single-element ultrasound transducer to obtain A-mode frames from the CCA. It is designed to be fully automatic in its operation, such that a general medical practitioner can use the device without any prior knowledge of the ultrasound modality. Placement of the probe over the CCA and identification of echo positions corresponding to its two walls are critical steps in the process of measurement of AS. We had reported an algorithm to locate the CCA walls based on their characteristic motion. Unfortunately, in the supine position, the internal jugular vein (IJV) expands in the carotid triangle and pulsates in a manner that confounds the existing algorithm and leads to wrong measurements of the AS. The jugular venous pulse (JVP), in its own right, is a very important physiological signal for diagnosis of morbidities of the right side of the heart, and there is a lack of noninvasive methods for its accurate estimation. We integrated an ECG device into the existing hardware of ARTSENS and developed a method based on the physiology of the vessels, which now enables us to segregate the CCA pulse (CCP) and the JVP. The false identification rate is less than 4%. To retain the capability of ARTSENS to operate without ECG, we designed another method by which the classification can be achieved without an ECG, albeit with somewhat higher errors. These improvements enable ARTSENS to perform automatic measurement of AS even in the supine position and make it a unique and handy tool for JVP analysis. PMID:25700474

  1. Predicting sample size required for classification performance

    Figueroa Rosa L


    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
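
The fitting idea can be sketched as follows (a synthetic learning curve; for simplicity the plateau is assumed known and the fit is a weighted linear least squares in log space, whereas the paper fits the full inverse power law by nonlinear weighted least squares):

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical learning curve: classifier error follows an inverse power law
n = np.array([50, 100, 200, 400, 800])
true_a, true_b, plateau = 2.0, 0.5, 0.05
err = plateau + true_a * n ** (-true_b) + rng.normal(0, 0.003, n.size)

# Weighted least squares on log(err - plateau) = log(a) - b*log(n),
# weighting the larger (more reliable) sample sizes more heavily
w = np.sqrt(n.astype(float))
A = np.column_stack([np.ones(n.size), np.log(n)])
coef, *_ = np.linalg.lstsq(A * w[:, None], np.log(err - plateau) * w, rcond=None)
a_hat, b_hat = np.exp(coef[0]), -coef[1]

# Extrapolate the fitted curve to a larger annotation budget
pred_err = plateau + a_hat * 5000 ** (-b_hat)
print(f"a ~ {a_hat:.2f}, b ~ {b_hat:.2f}, "
      f"predicted error at n=5000 ~ {pred_err:.3f}")
```

In practice one inverts this relation: given a target error, solve the fitted curve for n to estimate how many annotated samples are still needed.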

  2. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Fangyu Pan


    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and calibration are presented in this paper. The error model is realized by the theory of multi-body systems and characteristic matrices, which can establish the relationship between the cutting tool and the workpiece in theory. The accuracy calibration is difficult to achieve, but with a laser approach (laser interferometer and laser tracker) the errors can be measured accurately, which is of benefit for later compensation.
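
The multi-body idea of propagating small per-axis error matrices through a chain of nominal homogeneous transforms can be sketched as follows (illustrative axes and error magnitudes, not the paper's actual machine geometry):

```python
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def error_matrix(dx, dy, dz, ax, ay, az):
    # First-order characteristic error matrix: small translations + rotations
    E = np.eye(4)
    E[:3, :3] += np.array([[0, -az, ay], [az, 0, -ax], [-ay, ax, 0]])
    E[:3, 3] = [dx, dy, dz]
    return E

# Illustrative chain: bed -> rotary table -> X slide -> spindle (mm, rad)
nominal = [rot_z(0.3), trans(200.0, 0.0, 0.0), trans(0.0, 0.0, 100.0)]
errors = [error_matrix(0.002, 0.0, 0.0, 0.0, 0.0, 5e-5),
          error_matrix(0.0, 0.003, 0.0, 2e-5, 0.0, 0.0),
          error_matrix(0.001, 0.001, 0.0, 0.0, 0.0, 0.0)]

ideal, actual = np.eye(4), np.eye(4)
for T, E in zip(nominal, errors):
    ideal = ideal @ T            # error-free kinematics
    actual = actual @ T @ E      # each body contributes its error matrix

tip = np.array([0.0, 0.0, 0.0, 1.0])
dev = (actual @ tip - ideal @ tip)[:3]
print(f"tool-tip deviation (mm): {dev}")
```

Calibration supplies the per-axis error parameters (here made up), and compensation amounts to commanding the axes so that the actual chain reproduces the ideal tool-workpiece relationship.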

  3. Classifications for Proliferative Vitreoretinopathy (PVR): An Analysis of Their Use in Publications over the Last 15 Years

    Salvatore Di Lauro


    Full Text Available Purpose. To evaluate how current proliferative vitreoretinopathy (PVR) classifications are used, and whether they are used appropriately, in clinical publications related to treatment. Methods. A PubMed search was undertaken using the term “proliferative vitreoretinopathy therapy”. Outcome parameters were the reported PVR classification and PVR grades. The way the classifications were used was compared with their original descriptions, and classification errors were recorded. It was also noted whether classifications were used for comparison before and after pharmacological or surgical treatment. Results. 138 papers were included. 35 of them (25.4%) gave no classification reference or did not use any. 103 publications (74.6%) used a standardized classification. The updated Retina Society Classification, the first Retina Society Classification, and the Silicone Study Classification were cited in 56.3%, 33.9%, and 3.8% of papers, respectively. Furthermore, 3 authors (2.9%) used modified or customized classifications, and 4 classification errors (3.8%) were identified. When the updated Retina Society Classification was used, only 10.4% of authors used a full grade C description. Finally, only 2 authors reported PVR grade before and after treatment. Conclusions. Our findings suggest that current classifications are of limited value in clinical practice due to their inconsistent and limited use, and that producing a revised classification may be of benefit.

  4. New methodology in biomedical science: methodological errors in classical science.

    Skurvydas, Albertas


    The following methodological errors are observed in the biomedical sciences: paradigmatic ones; those of an exaggerated search for certainty; the dehumanisation of science; those of determinism and linearity; those of drawing conclusions; errors of reductionism, of quality decomposition, and of exaggerated enlargement; errors connected with discarding odd, unexpected, or awkward facts; those of exaggerated mathematization; the isolation of science; the error of "common sense"; the error of the ceteris paribus ("other things being equal") law; "youth" and common sense; inflexibility of the criteria of truth; errors of restricting the sources of truth and the ways of searching for it; the error connected with wisdom gained post factum; errors of misinterpreting the research mission; "laziness" to repeat the experiment; and errors of the coordination of errors. One of the basic aims for present-day scholars of biomedicine is, therefore, to master the new non-linear, holistic, complex way of thinking that will, undoubtedly, enable one to make fewer errors in research. The aim of "scientific travelling" will be achieved with greater probability if the "travelling" itself is performed with great probability. PMID:15687745

  5. Payment Error Rate Measurement (PERM)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  6. Skylab water balance error analysis

    Leonard, J. I.


    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
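
    The propagation-of-error bookkeeping the abstract describes can be sketched for a balance of the form B = intake - output - Δmass: component variances add, and cross-term covariances enter with signs set by the balance equation. The standard deviations and covariance below are illustrative values chosen to mimic the paper's finding, not Skylab data.

    ```python
    # Sketch: variance of a water balance B = I - O - M from component errors.
    import numpy as np

    sd_intake = 0.05    # kg, assumed SD of measured water intake (I)
    sd_output = 0.04    # kg, assumed SD of measured water output (O)
    sd_mass = 0.20      # kg, assumed SD of body-mass change (M), the dominant term
    cov_io = 0.0004     # assumed covariance between intake and output errors

    # Var(B) = Var(I) + Var(O) + Var(M) - 2*Cov(I, O); the covariance sign
    # follows from I and O entering B with opposite signs.
    var_total = sd_intake**2 + sd_output**2 + sd_mass**2 - 2 * cov_io
    sd_total = np.sqrt(var_total)

    share_mass = sd_mass**2 / var_total  # fraction of variance from body mass
    print(f"total balance SD: {sd_total:.3f} kg; body-mass share: {share_mass:.1%}")
    ```

    With these assumed values the body-mass term contributes over 90% of the total variance and the covariance under 1%, the same qualitative pattern the analysis reports.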

  7. Efficient Pairwise Multilabel Classification

    Loza Mencía, Eneldo


    Multilabel classification learning is the task of learning a mapping between objects and sets of possibly overlapping classes and has gained increasing attention in recent times. A prototypical application scenario for multilabel classification is the assignment of a set of keywords to a document, a frequently encountered problem in the text classification domain. With upcoming Web 2.0 technologies, this domain is extended by a wide range of tag suggestion tasks and the trend definitely...
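
    The pairwise decomposition underlying this line of work can be sketched as one binary classifier per label pair, trained only on examples where exactly one of the two labels is relevant, with votes producing a label ranking at prediction time. The toy data and the choice of logistic regression below are illustrative assumptions, not the author's method.

    ```python
    # Sketch: pairwise (one-vs-one style) multilabel decomposition with voting.
    from itertools import combinations
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 0.], [0., 2.]])
    # Binary label matrix: columns are labels A, B, C (1 = relevant).
    Y = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0], [1, 1, 1], [1, 0, 1], [0, 1, 1]])

    n_labels = Y.shape[1]
    pairwise = {}
    for i, j in combinations(range(n_labels), 2):
        mask = Y[:, i] != Y[:, j]  # keep only examples that separate the pair
        pairwise[(i, j)] = LogisticRegression().fit(X[mask], Y[mask, i])

    def rank_labels(x):
        """Each pairwise classifier casts one vote; labels are ranked by votes."""
        votes = np.zeros(n_labels)
        for (i, j), clf in pairwise.items():
            winner = i if clf.predict([x])[0] == 1 else j
            votes[winner] += 1
        return np.argsort(-votes), votes

    order, votes = rank_labels(np.array([1.0, 0.5]))
    print("label ranking:", order, "votes:", votes)
    ```

    Turning the ranking into a label set requires a cutoff, e.g. the calibrated-label-ranking trick of inserting an artificial label that splits relevant from irrelevant labels.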

  8. Classiology and soil classification

    Rozhkov, V. A.


    Classiology can be defined as a science studying the principles and rules of classification of objects of any nature. The development of the theory of classification and the particular methods for classifying objects are the main challenges of classiology; to a certain extent, they are close to the challenges of pattern recognition. The methodology of classiology integrates a wide range of methods and approaches: from expert judgment to formal logic, multivariate statistics, and informatics. Soil classification assumes generalization of available data and practical experience, formalization of our notions about soils, and their representation in the form of an information system. As an information system, soil classification is designed to predict the maximum number of a soil's properties from the position of this soil in the classification space. The existing soil classification systems do not completely satisfy the principles of classiology. Violations of the logical basis, poor structuring, low integrity, and an inadequate level of formalization make these systems verbal schemes rather than classification systems sensu stricto. The concept of classification as listing (enumeration) of objects makes it possible to introduce the notion of the information base of classification. For soil objects, this is the database of soil indices (properties) that might be applied to generate target-oriented soil classification systems. Mathematical methods enlarge the prognostic capacity of classification systems; they can be applied to assess the quality of these systems and to recognize new soil objects to be included in the existing systems. The application of particular principles and rules of classiology for soil classification purposes is discussed in this paper.

  9. Classifier in Age classification

    B. Santhi; R.Seethalakshmi


    The face is an important feature of human beings, and various properties of a person can be derived by analyzing it. The objective of the study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment, and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection, and classification. The classification employs two class labels, namely Child and Old. This st...
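
    The four-phase pipeline the abstract outlines (preprocessing, feature extraction, feature selection, two-class classification) can be sketched with scikit-learn. Everything below is illustrative: synthetic feature vectors stand in for facial images, and the specific components (scaling, PCA, univariate selection, an RBF SVM) are assumptions, not the paper's algorithm.

    ```python
    # Sketch: a four-stage classification pipeline on synthetic "face features".
    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=60)          # 0 = Child, 1 = Old (synthetic)
    X = rng.normal(size=(60, 16))            # stand-in for extracted features
    X[:, :3] += 2.0 * y[:, None]             # make the first 3 dims informative

    pipe = Pipeline([
        ("scale", StandardScaler()),              # preprocessing
        ("extract", PCA(n_components=8)),         # feature extraction
        ("select", SelectKBest(f_classif, k=4)),  # feature selection
        ("clf", SVC(kernel="rbf")),               # two-class classification
    ])
    pipe.fit(X, y)
    print("training accuracy:", pipe.score(X, y))
    ```

    A real age classifier would replace the synthetic matrix with features computed from face images (e.g. wrinkle or geometry descriptors) and evaluate on a held-out set rather than training accuracy.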

  10. Efficient multivariate sequence classification

    Kuksa, Pavel P.


    Kernel-based approaches for sequence classification have been successfully applied to a variety of domains, including text categorization, image classification, speech analysis, biological sequence analysis, time series, and music classification, where they show some of the most accurate results. Typical kernel functions for sequences in these domains (e.g., bag-of-words, mismatch, or subsequence kernels) are restricted to discrete univariate (i.e., one-dimensional) string data, such ...
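
    A simple member of the string-kernel family the abstract refers to is the k-mer spectrum kernel: represent each sequence by its counts of length-k substrings and take the inner product of the two count vectors. The sketch below illustrates that idea on toy DNA strings; it is not the multivariate method the paper proposes.

    ```python
    # Sketch: k-mer spectrum kernel for discrete univariate string data.
    from collections import Counter

    def spectrum_features(seq, k=3):
        """Count all length-k substrings (k-mers) of a sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def spectrum_kernel(s1, s2, k=3):
        """Inner product of the two k-mer count vectors."""
        f1, f2 = spectrum_features(s1, k), spectrum_features(s2, k)
        return sum(count * f2[kmer] for kmer, count in f1.items())

    a = "ACGTACGTAC"
    b = "ACGTTTACGT"
    print(spectrum_kernel(a, b), spectrum_kernel(a, a))  # prints: 10 16
    ```

    Kernels like this plug directly into any kernel machine (e.g. an SVM); the multivariate extension motivating the paper is needed when each sequence position carries a vector of real-valued measurements rather than a single discrete symbol.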