WorldWideScience

Sample records for achieved classification error

  1. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  2. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  3. Assessing the Statistical Significance of the Achieved Classification Error of Classifiers Constructed using Serum Peptide Profiles, and a Prescription for Random Sampling Repeated Studies for Massive High-Throughput Genomic and Proteomic Studies

    Directory of Open Access Journals (Sweden)

    William L Bigbee

    2005-01-01

    Full Text Available source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
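    The permutation logic behind PACE can be illustrated with a short sketch: estimate the achieved classification error with cross-validation on the true labels, repeat the estimate on many label-shuffled copies of the data, and report how often random labelling does at least as well. The sketch below is a hedged illustration in Python with scikit-learn, not the authors' implementation; the synthetic data, classifier, fold count, and permutation count are arbitrary choices.

```python
# Minimal PACE-style permutation test (illustrative sketch, not the paper's code).
# Significance of the achieved classification error (ACE) is assessed by comparing
# it with errors obtained after randomly reassigning the class labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=80, n_features=200, n_informative=10,
                           random_state=0)   # stand-in for peptide profiles

def achieved_error(X, y):
    # ACE estimated here by 5-fold cross-validation error.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    return 1.0 - acc.mean()

ace = achieved_error(X, y)

n_perm = 200
perm_errors = np.array([achieved_error(X, rng.permutation(y))
                        for _ in range(n_perm)])

# Permutation p-value: how often does random labelling achieve an error this low?
p_value = (np.sum(perm_errors <= ace) + 1) / (n_perm + 1)
print(f"ACE = {ace:.3f}, permutation p-value = {p_value:.3f}")
```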

  4. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni

    2011-05-01

    Full Text Available Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.

  5. Variability and errors when applying the BIRADS mammography classification.

    Science.gov (United States)

    Boyer, Bruno; Canale, Sandra; Arfi-Rouche, Julia; Monzani, Quentin; Khaled, Wassef; Balleyguier, Corinne

    2013-03-01

    To standardize mammographic reporting, the American College of Radiology developed the Breast Imaging Reporting and Data System (BIRADS) lexicon. However, wide variability is observed in practice in the application of the BIRADS terminology, and this leads to classification errors. This review analyses the reasons for variability in applying the BIRADS mammography classification, describes the types of errors made by readers with illustrated examples, and details BIRADS category 3, which is the most difficult category to use in practice.

  6. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  7. Small-Sample Error Estimation for Bagged Classification Rules

    Science.gov (United States)

    Vu, T. T.; Braga-Neto, U. M.

    2010-12-01

    Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample settings prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, which was subsequently applied by other authors to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied on both synthetic and real patient data, corresponding to the use of common error estimators such as resubstitution, leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the numerical experiments indicated that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.
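    The out-of-bag idea described above can be sketched directly: each sample is predicted only by those bootstrap replicates that did not contain it, and the error of these held-out votes estimates the bagged classifier's error. The following Python sketch is schematic and simplifies the error-count normalization discussed in the paper; the data and base learner are arbitrary choices.

```python
# Schematic out-of-bag (OOB) error estimate for a bagged classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=60, n_features=20, random_state=1)
n, n_trees = len(y), 100

votes = np.zeros((n, 2))                  # per-sample class vote counts
for _ in range(n_trees):
    boot = rng.integers(0, n, size=n)     # bootstrap sample (with replacement)
    oob = np.setdiff1d(np.arange(n), boot)
    clf = DecisionTreeClassifier(random_state=0).fit(X[boot], y[boot])
    votes[oob, clf.predict(X[oob])] += 1  # votes only where the sample was left out

covered = votes.sum(axis=1) > 0           # samples that were out of bag at least once
oob_pred = votes[covered].argmax(axis=1)
oob_error = np.mean(oob_pred != y[covered])
print(f"OOB error estimate: {oob_error:.3f} (over {covered.sum()} covered samples)")
```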

  8. Small-Sample Error Estimation for Bagged Classification Rules

    Directory of Open Access Journals (Sweden)

    Vu TT

    2010-01-01

    Full Text Available Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample settings prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, which was subsequently applied by other authors to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied on both synthetic and real patient data, corresponding to the use of common error estimators such as resubstitution, leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the numerical experiments indicated that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.

  9. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  10. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Directory of Open Access Journals (Sweden)

    Muhsin Hassan

    2013-08-01

    Full Text Available The aim of this study is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive White Gaussian Noise has been added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM, and a hybrid of Kalman Filter + SVM. Experiment results have been compared to find the most promising techniques among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combination in a single system produced higher accuracy compared to the other two techniques.
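    The hybrid idea can be sketched roughly as follows: corrupt a clean, signal-like dataset with white Gaussian noise, smooth each noisy feature vector with a simple scalar Kalman filter (random-walk state model), and compare SVM accuracy with and without the filter. The synthetic data, noise level, and filter parameters below are illustrative assumptions, not the paper's pipeline corrosion dataset or settings.

```python
# Illustrative sketch: denoise feature vectors with a scalar Kalman filter
# (random-walk model) before SVM classification; compare against raw noisy data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

def kalman_smooth(z, q=1e-3, r=0.5):
    # 1-D Kalman filter run along the samples of one feature vector.
    x, p, out = z[0], 1.0, np.empty_like(z)
    for i, meas in enumerate(z):
        p = p + q                      # predict (random-walk state)
        k = p / (p + r)                # Kalman gain
        x = x + k * (meas - x)         # update with the measurement
        p = (1 - k) * p
        out[i] = x
    return out

# Two synthetic "signal" classes: noisy sine waves with different frequencies.
n, length = 200, 128
t = np.linspace(0, 1, length)
y = rng.integers(0, 2, n)
clean = np.array([np.sin(2 * np.pi * (3 + 2 * c) * t) for c in y])
noisy = clean + rng.normal(0, 1.0, clean.shape)          # additive white Gaussian noise
denoised = np.array([kalman_smooth(row) for row in noisy])

for name, data in [("noisy SVM", noisy), ("Kalman + SVM", denoised)]:
    Xtr, Xte, ytr, yte = train_test_split(data, y, test_size=0.3, random_state=0)
    acc = accuracy_score(yte, SVC().fit(Xtr, ytr).predict(Xte))
    print(f"{name}: accuracy = {acc:.3f}")
```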

  11. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.

  12. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  13. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Directory of Open Access Journals (Sweden)

    David Ayllón

    Full Text Available Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  14. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Code (ECOC) has emerged as one of the promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, this method decomposes the classes into subsets by embedding a structure of binary trees. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method can achieve better classification performance than the common ECOC design methods.

  15. EEG error potentials detection and classification using time-frequency features for robot reinforcement learning.

    Science.gov (United States)

    Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali

    2015-08-01

    In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.
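    The general shape of such a pipeline (not the authors' exact feature set) can be sketched as follows: compute a spectrogram per EEG segment, derive a few simple time-frequency descriptors such as sub-band energies and spectral entropy, and feed them to a 2-class SVM. The simulated signals, band edges, and classifier settings below are illustrative assumptions only.

```python
# Toy time-frequency feature pipeline for 2-class EEG segment classification.
# Features: sub-band energies and spectral entropy computed from a spectrogram.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs, n_seg, length = 256, 100, 512

def make_segment(has_errp):
    x = rng.normal(0, 1, length)
    if has_errp:                       # crude stand-in for an error potential
        t = np.arange(length) / fs
        x += 2.0 * np.exp(-((t - 1.0) ** 2) / 0.01) * np.sin(2 * np.pi * 6 * t)
    return x

y = rng.integers(0, 2, n_seg)
segments = np.array([make_segment(c) for c in y])

def tf_features(x):
    f, t, S = spectrogram(x, fs=fs, nperseg=128)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]           # delta..beta bands (Hz)
    energies = [S[(f >= lo) & (f < hi)].sum() for lo, hi in bands]
    p = S.sum(axis=1) / S.sum()                           # spectral distribution
    entropy = -np.sum(p * np.log(p + 1e-12))
    return np.array(energies + [entropy])

X = np.array([tf_features(x) for x in segments])
acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```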

  16. Measuring the achievable error of query sets under differential privacy

    CERN Document Server

    Li, Chao

    2012-01-01

    A common goal of privacy research is to release synthetic data that satisfies a formal privacy guarantee and can be used by an analyst in place of the original data. To achieve reasonable accuracy, a synthetic data set must be tuned to support a specified set of queries accurately, sacrificing fidelity for other queries. This work considers methods for producing synthetic data under differential privacy and investigates what makes a set of queries "easy" or "hard" to answer. We consider answering sets of linear counting queries using the matrix mechanism, a recent differentially-private mechanism that can reduce error by adding complex correlated noise adapted to a specified workload. Our main result is a novel lower bound on the minimum total error required to simultaneously release answers to a set of workload queries. The bound reveals that the hardness of a query workload is related to the spectral properties of the workload when it is represented in matrix form. The bound is tight and, because it satisfi...
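    For orientation, the kind of error being bounded can be made concrete with the usual matrix-mechanism formula, recalled here from the matrix-mechanism literature rather than quoted from the abstract: a workload of linear counting queries W over a data vector x is answered through a strategy matrix A with Laplace noise, and the total expected squared error depends on the sensitivity of A and on how well A supports W.

```latex
% Total expected squared error of the matrix mechanism with Laplace noise
% (standard form recalled from the matrix-mechanism literature; notation assumed).
% W: workload matrix, A: strategy matrix with pseudoinverse A^{+},
% \Delta_A = \max_j \lVert a_j \rVert_1 : L1 sensitivity (largest column norm of A).
\hat{x} = A^{+}\bigl(Ax + \mathrm{Lap}(\Delta_A/\epsilon)^{m}\bigr),
\qquad
\mathrm{ERROR}(W,A) \;=\; \frac{2}{\epsilon^{2}}\,\Delta_A^{2}\,
  \bigl\lVert W A^{+} \bigr\rVert_F^{2}.
```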

  17. CLASSIFICATION OF CRYOSOLS: SIGNIFICANCE, ACHIEVEMENTS AND CHALLENGES

    Institute of Scientific and Technical Information of China (English)

    CHEN Jie; GONG Zi-tong; CHEN Zhi-cheng; TAN Man-zhi

    2003-01-01

    International concerns about the effects of global change on permafrost-affected soils, and about the responses of permafrost terrestrial landscapes to such change, have been increasing in the last two decades. To achieve a variety of goals, including determining soil carbon stocks and dynamics in the Northern Hemisphere, understanding soil degradation, and finding the best ways to protect fragile ecosystems in permafrost environments, further development of Cryosol classification is in great demand. In this paper the existing Cryosol classifications contained in three representative soil taxonomies are introduced, and the problems in the practical application of the defining criteria used for category differentiation in these taxonomic systems are discussed. Meanwhile, the resumption and reconstruction of a Chinese Cryosol classification within a taxonomic frame is proposed. The advantages that Chinese pedologists have in dealing with Cryosol classification, and the challenges that they have to face, are analyzed. Finally, several suggestions for further development of the taxonomic framework of Cryosol classification are put forward.

  18. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    Science.gov (United States)

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad

    2010-01-01

    This study tries to answer some long-standing questions in the field of writing instruction regarding the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  19. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable the factors to be targeted to improve the level of patient safety management.

  20. A proposal for the detection and classification of discourse errors

    OpenAIRE

    Mestre, Eva M. Mestre; Carrió-Pastor, María Luisa

    2013-01-01

    Our interest lies in error from the point of view of language in context; therefore, we will focus on errors produced at the discourse level. The main objective of this paper is to detect discourse competence errors and their implications through the analysis of a corpus of English written texts produced by Higher Education students with a B1 level (following the Common European Framework of Reference for Languages). Further objectives are to propose categories which could help us to c...

  1. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Science.gov (United States)

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  2. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishap incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  3. Temporal Classification Error Compensation of Convolutional Neural Network for Traffic Sign Recognition

    Science.gov (United States)

    Yoon, Seungjong; Kim, Eungtae

    2017-02-01

    In this paper, we propose a method that classifies traffic signs by using a Convolutional Neural Network (CNN) and compensates for the error rate of the CNN using the temporal correlation between adjacent successive frames. Instead of applying a conventional CNN architecture with more layers, Temporal Classification Error Compensation (TCEC) is proposed to improve the error rate in an architecture which has fewer nodes and layers than a conventional CNN. Experimental results show that the complexity of the proposed method could be reduced by 50% compared with that of a conventional CNN with the same layers, and the error rate could be improved by about 3%.
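    The temporal part of the idea can be sketched independently of the CNN: keep each frame's per-class scores and replace the per-frame decision with one accumulated over a short window of adjacent frames, so that isolated misclassifications are outvoted by temporally correlated neighbours. The window length and the simple averaging rule below are illustrative choices, not the paper's TCEC formulation.

```python
# Sketch of temporal compensation of per-frame classification scores:
# average class scores over a sliding window of adjacent frames before argmax.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_classes, true_class = 60, 5, 2

# Simulated per-frame class scores from a frame classifier (e.g. a CNN),
# with occasional frames where a wrong class wins.
scores = rng.normal(0, 1, (n_frames, n_classes))
scores[:, true_class] += 1.5

per_frame = scores.argmax(axis=1)

window = 5                                   # odd window length, illustrative
pad = window // 2
padded = np.pad(scores, ((pad, pad), (0, 0)), mode="edge")
smoothed = np.array([padded[i:i + window].mean(axis=0) for i in range(n_frames)])
compensated = smoothed.argmax(axis=1)

print("per-frame error rate:   ", np.mean(per_frame != true_class))
print("compensated error rate: ", np.mean(compensated != true_class))
```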

  4. Insight into error hiding: exploration of nursing students' achievement goal orientations.

    Science.gov (United States)

    Dunn, Karee E

    2014-02-01

    An estimated 50% of medication errors go unreported, and error hiding is costly to hospitals and patients. This study explored one issue that may facilitate error hiding. Descriptive statistics were used to examine nursing students' achievement goal orientations in a high-fidelity simulation course. Results indicated that although this sample of nursing students held high mastery goal orientations, they also held moderate levels of performance-approach and performance-avoidance goal orientations. These goal orientations indicate that this sample is at high risk for error hiding, which places the benefits that are typically gleaned from a strong mastery orientation at risk. Understanding variables, such as goal orientation, that can be addressed in nursing education to reduce error hiding is an area of research that needs to be further explored. This article discusses the study results and evidence-based instructional practices for this sample's achievement goal orientation profile.

  5. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    Full Text Available In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
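    The two ingredients can be sketched compactly: augment the training set with horizontally mirrored images, then apply a minimum squared error classifier, realised here as regularised least-squares regression onto one-hot class indicators. The synthetic "faces" and the ridge parameter are stand-ins; this is a hedged sketch, not the authors' implementation.

```python
# Sketch: augment training faces with mirror images, then classify with a
# minimum-squared-error (regularised least-squares) classifier.
import numpy as np

rng = np.random.default_rng(5)
n_per_class, n_classes, h, w = 10, 3, 16, 16

# Synthetic stand-ins for face images: one noisy template per class.
templates = rng.normal(0, 1, (n_classes, h, w))
def sample(c):
    return templates[c] + 0.5 * rng.normal(0, 1, (h, w))

train = [(sample(c), c) for c in range(n_classes) for _ in range(n_per_class)]
test = [(sample(c), c) for c in range(n_classes) for _ in range(n_per_class)]

# Mirror augmentation: add the horizontally flipped image as a virtual sample.
augmented = train + [(np.fliplr(img), c) for img, c in train]

def mse_classifier(samples, lam=1e-2):
    X = np.array([img.ravel() for img, _ in samples])
    Y = np.eye(n_classes)[[c for _, c in samples]]        # one-hot class targets
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def accuracy(W, samples):
    X = np.array([img.ravel() for img, _ in samples])
    y = np.array([c for _, c in samples])
    return np.mean((X @ W).argmax(axis=1) == y)

for name, data in [("original only", train), ("original + mirror", augmented)]:
    print(f"{name}: test accuracy = {accuracy(mse_classifier(data), test):.3f}")
```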

  6. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.

  7. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  8. Early math and reading achievement are associated with the error positivity.

    Science.gov (United States)

    Kim, Matthew H; Grammer, Jennie K; Marulis, Loren M; Carrasco, Melisa; Morrison, Frederick J; Gehring, William J

    2016-12-01

    Executive functioning (EF) and motivation are associated with academic achievement and error-related ERPs. The present study explores whether early academic skills predict variability in the error-related negativity (ERN) and error positivity (Pe). Data from 113 three- to seven-year-old children in a Go/No-Go task revealed that stronger early reading and math skills predicted a larger Pe. Closer examination revealed that this relation was quadratic and significant for children performing at or near grade level, but not significant for above-average achievers. Early academics did not predict the ERN. These findings suggest that the Pe - which reflects individual differences in motivational processes as well as attention - may be associated with early academic achievement. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Early math and reading achievement are associated with the error positivity

    Directory of Open Access Journals (Sweden)

    Matthew H. Kim

    2016-12-01

    Full Text Available Executive functioning (EF) and motivation are associated with academic achievement and error-related ERPs. The present study explores whether early academic skills predict variability in the error-related negativity (ERN) and error positivity (Pe). Data from 113 three- to seven-year-old children in a Go/No-Go task revealed that stronger early reading and math skills predicted a larger Pe. Closer examination revealed that this relation was quadratic and significant for children performing at or near grade level, but not significant for above-average achievers. Early academics did not predict the ERN. These findings suggest that the Pe – which reflects individual differences in motivational processes as well as attention – may be associated with early academic achievement.

  10. An Optimization Approach of Deriving Bounds between Entropy and Error from Joint Distribution: Case Study for Binary Classifications

    Directory of Open Access Journals (Sweden)

    Bao-Gang Hu

    2016-02-01

    Full Text Available In this work, we propose a new approach of deriving the bounds between entropy and error from a joint distribution through an optimization means. The specific case study is given on binary classifications. Two basic types of classification errors are investigated, namely, the Bayesian and non-Bayesian errors. The consideration of non-Bayesian errors is due to the fact that most classifiers result in non-Bayesian solutions. For both types of errors, we derive the closed-form relations between each bound and the error components. When Fano’s lower bound in a diagram of “Error Probability vs. Conditional Entropy” is realized based on the approach, its interpretations are enlarged by including non-Bayesian errors and the two situations along with independence properties of the variables. A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij’s upper bound.
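    For context, the classical bounds that this optimization approach refines relate the error probability of predicting Y to a conditional entropy; in the binary case Fano's inequality gives the lower bound and a Kovalevskij-type inequality is commonly stated as the upper bound. The forms below are the standard textbook versions with base-2 logarithms, recalled as background; the paper's tightened bounds differ.

```latex
% Standard entropy-error bounds for binary classification (logs base 2).
% h(p) = -p \log_2 p - (1-p) \log_2(1-p) is the binary entropy function;
% the inverse h^{-1} is taken on the branch [0, 1/2].
\text{Fano (lower bound):}\quad
h(P_e) \;\ge\; H(Y \mid \hat{Y})
\;\;\Longrightarrow\;\;
P_e \;\ge\; h^{-1}\!\bigl(H(Y \mid \hat{Y})\bigr),
\qquad
\text{Kovalevskij-type (upper bound):}\quad
P_e \;\le\; \tfrac{1}{2}\, H(Y \mid \hat{Y}).
```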

  11. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    Science.gov (United States)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities of the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The appropriate glasses or contact lenses differ from one person to another; the choice is influenced by patient age, the amount of tear production, vision prescription, and astigmatism. Because the eye is a vital organ for sight, accuracy in determining which glasses or contact lenses will be used is required. This research aims to develop a decision support system that can produce the right contact-lens recommendation for refractive error patients with a value of 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values for attributes that include the sample data code, the age of the patient, astigmatism, the rate of tear production, and the vision prescription, as well as the classes that determine the outcome of the decision tree. The eye specialist test on the training data obtained an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix obtained an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, an accuracy rate of 100% and an error rate of 0 were obtained.
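    The core ID3 computation mentioned above, the entropy of the class distribution and the information gain of an attribute, can be written compactly. The toy tear-production attribute below is a hypothetical illustration in the spirit of the contact-lens problem, not the paper's dataset.

```python
# Core ID3 quantities: class entropy and information gain of an attribute.
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(attribute_values, labels):
    total, n, remainder = entropy(labels), len(labels), 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Hypothetical toy data: tear production rate vs. recommended lens class.
tear_rate = ["reduced", "normal", "reduced", "normal", "normal", "reduced"]
lens      = ["none",    "soft",   "none",    "hard",   "soft",   "none"]

print(f"H(class)                = {entropy(lens):.3f} bits")
print(f"Gain(class | tear_rate) = {information_gain(tear_rate, lens):.3f} bits")
```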

  12. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images

    Directory of Open Access Journals (Sweden)

    Henry Joutsijoki

    2016-01-01

    Full Text Available The purpose of this paper is to examine how well human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good), which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with the ternary complete ECOC coding design and the k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. The ECOC methods examined are promising techniques for solving this challenging problem.
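    The ECOC-plus-k-NN combination can be approximated with scikit-learn's generic output-code wrapper, as sketched below. Note the assumptions: the coding design here is random rather than the ternary complete design reported in the paper, and the features are synthetic stand-ins for the SIFT-based descriptors.

```python
# Sketch of ECOC with a k-NN base learner (random coding design via scikit-learn;
# the ternary complete design used in the paper is not built into sklearn).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Three classes standing in for bad / semigood / good colony images.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

base = KNeighborsClassifier(n_neighbors=5, weights="distance")  # inverse-distance weighting
ecoc = OutputCodeClassifier(estimator=base, code_size=4, random_state=0)

acc = cross_val_score(ecoc, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```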

  13. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images.

    Science.gov (United States)

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti

    2016-01-01

    The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. The ECOC methods examined are promising techniques for solving this challenging problem.

  14. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Energy Technology Data Exchange (ETDEWEB)

    Korn, E L

    1978-08-01

    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests of whether a sampled table belongs to that model will not have the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power, and some specific cases will be examined.

  15. Reversible watermarking based on invariant image classification and dynamical error histogram shifting.

    Science.gov (United States)

    Pan, W; Coatrieux, G; Cuppens, N; Cuppens, F; Roux, Ch

    2011-01-01

    In this article, we present a novel reversible watermarking scheme. Its originality lies in identifying parts of the image that can be watermarked additively with the most adapted lossless modulation between Pixel Histogram Shifting (PHS) and Dynamical Error Histogram Shifting (DEHS). This classification process makes use of a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark addition. In that way, the watermark embedder and reader remain synchronized through this reference image. DEHS is also an original contribution of this work. It shifts prediction errors between the image and its reference image, taking care of the local specificities of the image, thus dynamically. Conducted experiments, on different medical image test sets issued from different modalities and some natural images, show that our method can insert more data with lower distortion than the most recent and efficient methods of the literature.
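    A minimal one-dimensional illustration of error histogram shifting (the DEHS ingredient, greatly simplified): each sample is predicted from its already-watermarked left neighbour, prediction errors equal to zero carry one payload bit, and positive errors are shifted up by one to make room; extraction reverses the mapping and restores the signal exactly. Overflow handling, the invariant reference image, and the PHS/DEHS classification step of the paper are deliberately omitted from this sketch.

```python
# Minimal 1-D prediction-error histogram shifting (reversible embedding sketch).
# Prediction: previous, already watermarked sample. Errors equal to 0 carry one
# payload bit; positive errors are shifted by +1. Overflow handling is omitted.
import numpy as np

def embed(signal, bits):
    w = signal.astype(np.int64).copy()
    it = iter(bits)
    for i in range(1, len(w)):
        e = int(signal[i]) - int(w[i - 1])
        if e == 0:
            e = next(it, 0)            # embed one payload bit (0 or 1)
        elif e > 0:
            e += 1                     # shift positive errors to make room
        w[i] = w[i - 1] + e
    return w

def extract(w):
    bits, rec = [], w.astype(np.int64).copy()
    for i in range(1, len(w)):
        e = int(w[i]) - int(w[i - 1])
        if e in (0, 1):
            bits.append(e)             # recovered payload bit
            e = 0
        elif e > 1:
            e -= 1                     # undo the shift
        rec[i] = w[i - 1] + e          # restore the original sample
    return rec, bits

rng = np.random.default_rng(6)
x = (np.cumsum(rng.integers(-1, 2, size=64)) + 128).astype(np.int64)  # smooth test signal
payload = [1, 0, 1, 1, 0]
watermarked = embed(x, payload)
restored, recovered = extract(watermarked)

print("exactly reversible:", np.array_equal(restored, x))
print("payload sent      :", payload)
print("payload recovered :", recovered[:len(payload)])
```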

  16. Software platform for managing the classification of error- related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.

  17. Factors that affect large subunit ribosomal DNA amplicon sequencing studies of fungal communities: classification method, primer choice, and error.

    Directory of Open Access Journals (Sweden)

    Teresita M Porter

    Full Text Available Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: (1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); (2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and (3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50-100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys.

  18. Systematic classification of unseeded batch crystallization systems for achievable shape and size analysis

    Science.gov (United States)

    Acevedo, David; Nagy, Zoltan K.

    2014-05-01

    The purpose of the current work is to develop a systematic classification scheme for crystallization systems considering simultaneous size and shape variations, and to study the effect of temperature profiles on the achievable final shape of crystals for various crystallization systems. A classification method is proposed based on the simultaneous consideration of the effect of temperature profiles on the nucleation and growth rates of two different characteristic crystal dimensions. Hence the approach provides a direct indication of the extent to which crystal shape may be controlled for a particular system class by manipulating the supersaturation. A multidimensional population balance model (PBM) was implemented for unseeded crystallization processes of four different compounds. The interplay between the nucleation and growth mechanisms and its effect on the final aspect ratio (AR) was investigated, and it was shown that for nucleation-dominated systems the AR is independent of the supersaturation profile. The simulation results, also confirmed experimentally, show that most crystallization systems tend to achieve an equilibrium shape; hence the variation in the aspect ratio that can be achieved by manipulating the supersaturation is limited, in particular when nucleation is also taken into account as a competing phenomenon.

  19. Human Error Classification for the Permit to Work System by SHERPA in a Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    Arash Ghasemi

    2015-12-01

    Full Text Available Background & objective: Occupational accidents may occur in any type of activity. Daily activities such as repair and maintenance are among the work phases with the highest risk. Despite the issuance of work permits or work license systems for controlling the risks of non-routine activities, the high rate of accidents during such activities indicates the inadequacy of these systems. A major portion of this shortcoming is attributed to human errors. It is therefore necessary to identify and control probable human errors during permit issuance. Methods: In the present study, the probable errors for four categories of work permits were identified using the SHERPA method. Then, an expert team analyzed 25500 permits issued during a period of approximately one year. The most frequent human errors and their types were determined. Results: The “Excavation” and “Entry to confined space” permits contained the most errors. Approximately 28.5 percent of all errors were related to excavation permits. Implementation errors were the most frequent error type in the error taxonomy. For every category of permits, about 40% of all errors were attributed to implementation errors. Conclusion: The results may indicate weak points in the practical training associated with the permit system. Human error identification methods can be used to predict and decrease human errors.

  20. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    Science.gov (United States)

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  2. Classification of English language learner writing errors using a parallel corpus with SVM

    OpenAIRE

    Flanagan, Brendan; Yin, Chengjiu; Suzuki, Takahiko; Hirokawa, Sachio

    2014-01-01

    In order to overcome mistakes, learners need feedback to prompt reflection on their errors. This is a particularly important issue in education systems as the system effectiveness in finding errors or mistakes could have an impact on learning. Finding errors is essential to providing appropriate guidance in order for learners to overcome their flaws. Traditionally the task of finding errors in writing takes time and effort. The authors of this paper have a long-term research goal of creating ...

  3. The Concurrent Validity of the Diagnostic Analysis of Reading Errors as a Predictor of the English Achievement of Lebanese Students.

    Science.gov (United States)

    Saigh, Philip A.; Khairallah, Shereen

    1983-01-01

    The concurrent validity of the Diagnostic Analysis of Reading Errors (DARE) subtests was studied, based on the responses of Lebanese secondary and postsecondary students relative to their achievement in an English course or on a standardized test of English proficiency. The results indicate that the DARE is not a viable predictor of English…

  4. Stochastic analysis of multiple-passband spectral classifications systems affected by observation errors

    Science.gov (United States)

    Tsokos, C. P.

    1980-01-01

    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  5. Medication errors in outpatient setting of a tertiary care hospital: classification and root cause analysis

    Directory of Open Access Journals (Sweden)

    Sunil Basukala

    2015-12-01

    Conclusions: Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Hence, a focus on easy-to-use and inexpensive techniques for medication error reduction should be used to have the greatest impact. [Int J Basic Clin Pharmacol 2015; 4(6): 1235-1240]

  6. Error-based Hybrid Classification Algorithm (基于误差模型的混合分类算法)

    Institute of Scientific and Technical Information of China (English)

    丛雪燕

    2014-01-01

    A new error-based hybrid classification approach is presented for classifying data sets with binary target variables; it can increase the accuracy of classification. The paper also uses real data sets to test the proposed approach and compares it with single classification methods. The results show that this method greatly improves performance; in particular, when the two component methods disagree on a high proportion of predictions, the hybrid approach demonstrates an impressive capacity to improve prediction accuracy.

  7. An Integrated Method of Multiradar Quantitative Precipitation Estimation Based on Cloud Classification and Dynamic Error Analysis

    Directory of Open Access Journals (Sweden)

    Yong Huang

    2017-01-01

    Full Text Available Relationships between the radar reflectivity factor and rainfall are different in various precipitation cloud systems. In this study, the cloud systems are first classified into five categories with radar and satellite data to improve the radar quantitative precipitation estimation (QPE) algorithm. Secondly, the errors of multiradar QPE algorithms are assumed to be different in convective and stratiform clouds. The QPE data are then derived with the methods of Z-R, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC), based on error analysis over the Huaihe River Basin. In the case of the flood in early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of estimating precipitation for multiple precipitation types.
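    The building blocks can be illustrated simply: a Z-R power law converts reflectivity to a rain-rate estimate, and two estimates with different error variances can be blended with inverse-variance weights, which is the scalar analogue of the Kalman-filter and optimal-interpolation merging used in the paper. The a and b coefficients and the variances below are illustrative assumptions, not values from the study.

```python
# Illustrative building blocks: Z-R conversion and inverse-variance blending of
# two rainfall estimates (scalar analogue of KF / optimal interpolation merging).
import numpy as np

def zr_rain_rate(dbz, a=300.0, b=1.4):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def blend(r1, var1, r2, var2):
    """Inverse-variance weighted combination of two rainfall estimates."""
    w1 = var2 / (var1 + var2)
    return w1 * r1 + (1.0 - w1) * r2

dbz = np.array([20.0, 35.0, 45.0])          # example reflectivities
radar_qpe = zr_rain_rate(dbz)               # radar-only estimate
gauge_adj = radar_qpe * 1.2                 # stand-in for a gauge-adjusted estimate

# Assume radar errors are larger (e.g. in convective clouds) than gauge errors.
merged = blend(radar_qpe, 4.0, gauge_adj, 1.0)
print("radar QPE (mm/h):", np.round(radar_qpe, 2))
print("merged    (mm/h):", np.round(merged, 2))
```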

  8. No margin for error: determinants of achievement among disadvantaged children of immigrants

    Directory of Open Access Journals (Sweden)

    Alejandro Portes

    2014-11-01

    Full Text Available After reviewing the existing literature on second generation adaptation, the paper presents evidence of the process of segmented assimilation on the basis of the latest data from the Children of Immigrants Longitudinal Study (CILS). This evidence serves as a backdrop for the analysis of determinants of educational and occupational achievement among second generation youths who grow up under conditions of severe disadvantage. Based on interviews with a sample of fifty CILS respondents and their families, the analysis identifies four key causal mechanisms that are common to these «success stories» and that offer the basis for theoretical refinements on how the process of second generation adaptation actually unfolds and for policies to address the needs and aspirations of the most disadvantaged members of this population.

  9. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Science.gov (United States)

    Spinnato, J.; Roubaud, M.-C.; Burle, B.; Torrésani, B.

    2015-06-01

    Objective. The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.

  10. Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands

    CERN Document Server

    Dick, Josef

    2010-01-01

    We study numerical approximations of integrals $\int_{[0,1]^s} f(\boldsymbol{x}) \,\mathrm{d}\boldsymbol{x}$ by averaging the function at some sampling points. Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples). Quasi-Monte Carlo (QMC) sampling on the other hand achieves a convergence of order $N^{-1+\varepsilon}$, for any $\varepsilon >0$. Randomized QMC (RQMC), a combination of MC and QMC, achieves a RMSE of order $N^{-3/2+\varepsilon}$. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\varepsilon}$ (where $s \ge 1$ is the dimension). QMC, RQMC and RQMC with local antithetic sampling require that the integrand has some smoothness (for instance, bounded variation). Stronger smoothness assumptions on the integrand do not improve the convergence of the above algorithms further. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the RMS...
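    The quoted convergence rates can be probed empirically with a scrambled Sobol' sequence; the rough sketch below uses SciPy's QMC module on a smooth test integrand whose exact value is known. The integrand, dimension, and sample sizes are arbitrary choices, and the sketch only compares plain MC with randomized QMC, not the paper's new algorithm.

```python
# Rough empirical comparison of MC and randomized QMC (scrambled Sobol')
# on a smooth integrand over [0,1]^s with known integral equal to 1.
import numpy as np
from scipy.stats import qmc

s = 4
f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)
rng = np.random.default_rng(7)

def rmse(estimates, truth=1.0):
    return np.sqrt(np.mean((np.asarray(estimates) - truth) ** 2))

m, reps = 10, 50                       # 2**10 = 1024 points per replicate
mc, rqmc = [], []
for r in range(reps):
    mc.append(f(rng.random((2 ** m, s))).mean())
    sobol = qmc.Sobol(d=s, scramble=True, seed=r)
    rqmc.append(f(sobol.random_base2(m)).mean())

print(f"MC   RMSE: {rmse(mc):.2e}")
print(f"RQMC RMSE: {rmse(rqmc):.2e}")
```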

  11. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  12. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “Concept/conceptualization,”“categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle...... and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  13. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic......, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s....
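
    A minimal sketch of the harmonic product spectrum (HPS) pitch estimator mentioned above, assuming only NumPy; the probabilistic music/noise/speech classifier built on top of the pitch and pitch-error features is not reproduced here.

```python
# Harmonic product spectrum (HPS) pitch estimate on a synthetic two-tone signal.
import numpy as np

def hps_pitch(signal, fs, n_harmonics=4):
    """Estimate the fundamental frequency via the harmonic product spectrum."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]               # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated
    peak_bin = np.argmax(hps[1:]) + 1           # skip the DC bin
    return peak_bin * fs / len(signal)

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print("estimated pitch: %.1f Hz" % hps_pitch(tone[:4096], fs))   # close to 220 Hz
```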

  14. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Science.gov (United States)

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  16. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
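
    The quantity maximized by the regularizer above, the mutual information between classification responses and true labels, can be computed directly for any fitted classifier. The sketch below does only that evaluation with scikit-learn on synthetic data; it is not the authors' entropy-based training objective or its gradient descent optimization.

```python
# Measuring mutual information between classifier responses and true labels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, mutual_info_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
responses = clf.predict(X_te)

print("accuracy:           %.3f" % accuracy_score(y_te, responses))
print("mutual information: %.3f nats" % mutual_info_score(y_te, responses))
```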

  17. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi `SWARS'

    Science.gov (United States)

    Kumar, Somesh; Pratap Singh, Manu; Goel, Rajkumar; Lavania, Rajesh

    2013-12-01

    In this work, the performance of a feedforward neural network with a descent gradient of distributed error and a genetic algorithm (GA) is evaluated for the recognition of handwritten 'SWARS' of Hindi curve script. The performance index for the feedforward multilayer neural network is considered here with a distributed instantaneous unknown error, i.e. a different error for each layer. The objective of the GA is to make the search process more efficient in determining the optimal weight vectors from the population. The GA is applied with the distributed error, and its fitness function is taken as the mean squared distributed error, which is different for each layer. Hence convergence is obtained only when the minimum of the different errors is determined. The analysis shows that the proposed method, a descent gradient of distributed error combined with the GA (a hybrid distributed evolutionary technique for the multilayer feedforward neural network), performs better in terms of accuracy, epochs and the number of optimal solutions for the given training and test pattern sets of the pattern recognition problem.

  18. Do the Kinds of Achievement Errors Made by Students Diagnosed with ADHD Vary as a Function of Their Reading Ability?

    Science.gov (United States)

    Pagirsky, Matthew S.; Koriakin, Taylor A.; Avitia, Maria; Costa, Michael; Marchis, Lavinia; Maykel, Cheryl; Sassu, Kari; Bray, Melissa A.; Pan, Xingyu

    2017-01-01

    A large body of research has documented the relationship between attention-deficit hyperactivity disorder (ADHD) and reading difficulties in children; however, there have been no studies to date that have examined errors made by students with ADHD and reading difficulties. The present study sought to determine whether the kinds of achievement…

  19. Supervised, Multivariate, Whole-brain Reduction Did Not Help to Achieve High Classification Performance in Schizophrenia Research

    Directory of Open Access Journals (Sweden)

    Eva Janousova

    2016-08-01

    Full Text Available We examined how penalized linear discriminant analysis with resampling, which is a supervised, multivariate, whole-brain reduction technique, can help schizophrenia diagnostics and research. In an experiment with magnetic resonance brain images of 52 first-episode schizophrenia patients and 52 healthy controls, this method allowed us to select brain areas relevant to schizophrenia, such as the left prefrontal cortex, the anterior cingulum, the right anterior insula, the thalamus and the hippocampus. Nevertheless, the classification performance based on such reduced data was not significantly better than the classification of data reduced by mass univariate selection using a t-test or by unsupervised multivariate reduction using principal component analysis. Moreover, we found no important influence of the type of imaging features, namely local deformations or grey matter volumes, or of the classification method, specifically linear discriminant analysis or linear support vector machines, on the classification results. However, we ascertained a significant effect of the cross-validation setting on classification performance: classification results were overestimated even though the resampling was performed during the selection of brain imaging features. Therefore, it is critically important to perform cross-validation in all steps of the analysis (not only during classification) in case there is no external validation set, to avoid optimistically biasing the results of classification studies.
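
    The cross-validation caveat above can be illustrated with a generic scikit-learn pipeline (not the authors' neuroimaging data or feature types): selecting features inside each training fold gives an honest error estimate, while selecting them on the full data set first leaks test information and inflates the apparent accuracy.

```python
# Feature selection inside vs outside the cross-validation loop (synthetic data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=104, n_features=5000, n_informative=20,
                           random_state=0)

# Correct: selection is refit on each training fold only.
pipe = Pipeline([("select", SelectKBest(f_classif, k=100)),
                 ("clf", LinearSVC(C=1.0, dual=True, max_iter=10000))])
unbiased = cross_val_score(pipe, X, y, cv=5).mean()

# Biased: features chosen on the full data set before cross-validation.
X_leaky = SelectKBest(f_classif, k=100).fit_transform(X, y)
biased = cross_val_score(LinearSVC(C=1.0, dual=True, max_iter=10000),
                         X_leaky, y, cv=5).mean()

print("selection inside CV:  %.2f" % unbiased)
print("selection outside CV: %.2f" % biased)    # typically much higher
```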

  20. A multitemporal probabilistic error correction approach to SVM classification of alpine glacier exploiting sentinel-1 images (Conference Presentation)

    Science.gov (United States)

    Callegari, Mattia; Marin, Carlo; Notarnicola, Claudia; Carturan, Luca; Covi, Federico; Galos, Stephan; Seppi, Roberto

    2016-10-01

    In mountain regions and their forelands, glaciers are a key source of melt water during the middle and late ablation season, when most of the winter snow has already melted. Furthermore, alpine glaciers are recognized as sensitive indicators of climatic fluctuations. Monitoring glacier extent changes and glacier surface characteristics (i.e. snow, firn and bare ice coverage) is therefore important for both hydrological applications and climate change studies. Satellite remote sensing data have been widely employed for glacier surface classification. Many approaches exploit optical data, such as from Landsat. Despite the intuitive visual interpretation of optical images and the demonstrated capability to discriminate glacier surfaces thanks to the combination of different bands, one of the main disadvantages of available high-resolution optical sensors is their dependence on cloud conditions and low revisit time frequency. Therefore, operational monitoring strategies relying only on optical data have serious limitations. Since SAR data are insensitive to clouds, they are potentially a valid alternative to optical data for glacier monitoring. Compared to past SAR missions, the new Sentinel-1 mission provides much higher revisit frequency (two acquisitions every 12 days) over the entire European Alps, and this number will be doubled once Sentinel-1B is in orbit (April 2016). In this work we present a method for glacier surface classification by exploiting dual-polarimetric Sentinel-1 data. The method consists of a supervised approach based on Support Vector Machine (SVM). In addition to the VV and VH signals, we tested the contribution of the local incidence angle, extracted from a digital elevation model and orbital information, as an auxiliary input feature in order to account for topographic effects. By exploiting impossible temporal transitions between different classes (e.g. if at a given date one pixel is classified as rock it cannot be classified as

  1. A new design criterion and construction method for space-time trellis codes based on classification of error events

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The known design criteria for Space-Time Trellis Codes (STTC) on slow Rayleigh fading channels are the rank, determinant and trace criteria. These criteria are unsatisfactory both in computational effort and in performance. By classifying the error events of STTC, a new criterion is presented for slow Rayleigh fading channels. Based on this criterion, an effective and straightforward multi-step method is proposed to construct codes with better performance. This method reduces the computational cost of the code search to a very small amount. Simulation results show that the codes found by computer search have the same or even better performance than previously reported codes.

  2. Reducing classification error of grassland overgrowth by combining low-density lidar acquisitions and optical remote sensing data

    Science.gov (United States)

    Pitkänen, T. P.; Käyhkö, N.

    2017-08-01

    Mapping structural changes in vegetation dynamics has, for a long time, been carried out using satellite images, orthophotos and, more recently, airborne lidar acquisitions. Lidar has established its position as providing accurate material for structure-based analyses, but its limited availability, relatively short history and lack of spectral information generally impede the use of lidar data for change detection purposes. A potential solution for detecting both contemporary vegetation structures and their previous trajectories is to combine lidar acquisitions with optical remote sensing data, which can substantially extend the coverage, span and spectral range needed for vegetation mapping. In this study, we tested the simultaneous use of a single low-density lidar data set, a series of Landsat satellite frames and two high-resolution orthophotos to detect vegetation succession related to grassland overgrowth, i.e. encroachment of woody plants into semi-natural grasslands. We built several alternative Random Forest models with different sets of variables and tested the applicability of the respective data sources for change detection purposes, aiming at distinguishing unchanged grassland and woodland areas from overgrown grasslands. Our results show that while lidar alone provides a solid basis for indicating structural differences between grassland and woodland vegetation, and orthophoto-generated variables alone are better at detecting successional changes, their combination works considerably better than either part alone. More specifically, a model combining all the used data sets reduces the total error from 17.0% to 11.0% and the omission error of detecting overgrown grasslands from 56.9% to 31.2%, when compared to a model constructed solely on lidar data. This pinpoints the efficiency of the approach, where lidar-generated structural metrics are combined with optical and multitemporal observations, providing a workable framework to
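
    A schematic of the model comparison described above, assuming scikit-learn; the synthetic "lidar" and "optical" feature tables are placeholders for the real structural metrics and spectral variables, so only the workflow (Random Forest models trained on one or both feature sets and compared by cross-validated error) reflects the record.

```python
# Random Forest on lidar-style features alone vs lidar + optical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 900
y = rng.integers(0, 3, n)             # 0 = grassland, 1 = woodland, 2 = overgrown

lidar = rng.normal(size=(n, 5)) + y[:, None] * 0.4            # structure metrics
optical = rng.normal(size=(n, 8)) + (y == 2)[:, None] * 0.8   # spectral change metrics

rf = RandomForestClassifier(n_estimators=300, random_state=0)
for name, feats in [("lidar only", lidar),
                    ("lidar + optical", np.hstack([lidar, optical]))]:
    acc = cross_val_score(rf, feats, y, cv=5).mean()
    print("%-16s total error: %.1f%%" % (name, 100 * (1 - acc)))
```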

  3. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  4. Discriminative Structured Dictionary Learning for Image Classification

    Institute of Scientific and Technical Information of China (English)

    王萍; 兰俊花; 臧玉卫; 宋占杰

    2016-01-01

    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.

  5. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...

  6. Addressing medical coding and billing part II: a strategy for achieving compliance. A risk management approach for reducing coding and billing errors.

    Science.gov (United States)

    Adams, Diane L; Norman, Helen; Burroughs, Valentine J

    2002-06-01

    Medical practice today, more than ever before, places greater demands on physicians to see more patients, provide more complex medical services and adhere to stricter regulatory rules, leaving little time for coding and billing. Yet, the need to adequately document medical records, appropriately apply billing codes and accurately charge insurers for medical services is essential to the medical practice's financial condition. Many physicians rely on office staff and billing companies to process their medical bills without ever reviewing the bills before they are submitted for payment. Some physicians may not be receiving the payment they deserve when they do not sufficiently oversee the medical practice's coding and billing patterns. This article emphasizes the importance of monitoring and auditing medical record documentation and coding application as a strategy for achieving compliance and reducing billing errors. When medical bills are submitted with missing and incorrect information, they may result in unpaid claims and loss of revenue to physicians. Addressing Medical Audits, Part I--A Strategy for Achieving Compliance--CMS, JCAHO, NCQA, published January 2002 in the Journal of the National Medical Association, stressed the importance of preparing the medical practice for audits. The article highlighted steps the medical practice can take to prepare for audits and presented examples of guidelines used by regulatory agencies to conduct both medical and financial audits. The Medicare Integrity Program was cited as an example of guidelines used by regulators to identify coding errors during an audit and deny payment to providers when improper billing occurs. For each denied claim, payments owed to the medical practice are also denied. Health care is, no doubt, a costly endeavor for health care providers, consumers and insurers. The potential risk to physicians for improper billing may include loss of revenue, fraud investigations, financial sanction

  7. Fabrication of a magnetic-tunnel-junction-based nonvolatile logic-in-memory LSI with content-aware write error masking scheme achieving 92% storage capacity and 79% power reduction

    Science.gov (United States)

    Natsui, Masanori; Tamakoshi, Akira; Endoh, Tetsuo; Ohno, Hideo; Hanyu, Takahiro

    2017-04-01

    A magnetic-tunnel-junction (MTJ)-based video coding hardware with an MTJ-write-error-rate relaxation scheme as well as a nonvolatile storage capacity reduction technique is designed and fabricated in a 90 nm MOS and 75 nm perpendicular MTJ process. The proposed MTJ-oriented dynamic error masking scheme suppresses the effect of write operation errors on the operation result of LSI, which results in the increase in an acceptable MTJ write error rate up to 7.8 times with less than 6% area overhead, while achieving 79% power reduction compared with that of the static-random-access-memory-based one.

  8. Rhythm Analysis by Heartbeat Classification in the Electrocardiogram (Review article of the research achievements of the members of the Centre of Biomedical Engineering, Bulgarian Academy of Sciences

    Directory of Open Access Journals (Sweden)

    Irena Jekova

    2009-08-01

    Full Text Available The morphological and rhythm analysis of the electrocardiogram (ECG) is based on ventricular beat detection, measurement of wave parameters such as amplitudes, widths, polarities, intervals and the relations between them, and a subsequent classification supporting the diagnostic process. A number of algorithms for detection and classification of the QRS complexes have been developed by researchers in the Centre of Biomedical Engineering - Bulgarian Academy of Sciences, and are reviewed in this material. Combined criteria have been introduced dealing with the QRS areas and amplitudes, the waveshapes evaluated by steep slopes and sharp peaks, vectorcardiographic (VCG) loop descriptors, and RR interval irregularities. Algorithms have been designed for application on a single ECG lead, a synthesized lead derived from multichannel synchronous recordings, or simultaneous multilead analysis. Some approaches are based on template matching and cross-correlation, or rely on a continuous updating of adaptive thresholds. Various beat classification methods have been designed involving discriminant analysis, K-nearest neighbors, fuzzy sets, genetic algorithms, neural networks, etc. The efficiency of the developed methods has been assessed using internationally recognized arrhythmia ECG databases with annotated beats and rhythm disturbances. In general, high values for specificity and sensitivity, competitive with those reported in the literature, have been achieved.

  9. Random forest for gene selection and microarray data classification.

    Science.gov (United States)

    Moorthy, Kohbalan; Mohamad, Mohd Saberi

    2011-01-01

    A random forest method has been selected to perform both gene selection and classification of the microarray data. In this embedded method, the selection of the smallest possible set of genes with the lowest error rate is the key factor in achieving the highest classification accuracy. Hence, an improved gene selection method using random forest has been proposed to obtain the smallest subset of genes as well as the biggest subset of genes prior to classification. The option for biggest-subset selection is provided to assist researchers who intend to use the informative genes for further research. The enhanced random forest gene selection has performed better in terms of selecting both the smallest and the biggest subsets of informative genes with the lowest out-of-bag error rates. Furthermore, the classification performed on the selected subset of genes using random forest has led to lower prediction error rates compared to the existing method and other similar available methods.
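
    A minimal sketch of the general idea only (rank genes by forest importance, then track the out-of-bag error of progressively smaller subsets), using scikit-learn on synthetic microarray-shaped data; this is not the enhanced selection procedure of the record.

```python
# Random-forest gene ranking with out-of-bag (OOB) error for shrinking subsets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=120, n_features=2000, n_informative=15,
                           random_state=0)                 # microarray-like shape

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]        # most informative first

for k in (2000, 500, 100, 20, 10, 5):
    rf_k = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    rf_k.fit(X[:, ranking[:k]], y)
    print("top %4d genes -> OOB error %.3f" % (k, 1 - rf_k.oob_score_))
```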

  10. Achieving the "triple aim" for inborn errors of metabolism: a review of challenges to outcomes research and presentation of a new practice-based evidence framework.

    Science.gov (United States)

    Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania

    2013-06-01

    Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases.

  11. Achievements in mental health outcome measurement in Australia: Reflections on progress made by the Australian Mental Health Outcomes and Classification Network (AMHOCN

    Directory of Open Access Journals (Sweden)

    Burgess Philip

    2012-05-01

    Full Text Available Abstract Background Australia’s National Mental Health Strategy has emphasised the quality, effectiveness and efficiency of services, and has promoted the collection of outcomes and casemix data as a means of monitoring these. All public sector mental health services across Australia now routinely report outcomes and casemix data. Since late-2003, the Australian Mental Health Outcomes and Classification Network (AMHOCN has received, processed, analysed and reported on outcome data at a national level, and played a training and service development role. This paper documents the history of AMHOCN’s activities and achievements, with a view to providing lessons for others embarking on similar exercises. Method We conducted a desktop review of relevant documents to summarise the history of AMHOCN. Results AMHOCN has operated within a framework that has provided an overarching structure to guide its activities but has been flexible enough to allow it to respond to changing priorities. With no precedents to draw upon, it has undertaken activities in an iterative fashion with an element of ‘trial and error’. It has taken a multi-pronged approach to ensuring that data are of high quality: developing innovative technical solutions; fostering ‘information literacy’; maximising the clinical utility of data at a local level; and producing reports that are meaningful to a range of audiences. Conclusion AMHOCN’s efforts have contributed to routine outcome measurement gaining a firm foothold in Australia’s public sector mental health services.

  12. Research on Controller Errors Classification and Analysis Model Based on Information Processing Theory

    Institute of Scientific and Technical Information of China (English)

    罗晓利; 秦凤姣; 孟斌; 李海龙

    2015-01-01

    On the basis of comparing and analyzing existing human error identification models, and in the light of controllers' task characteristics, the paper constructs an air traffic controller (ATCo) error classification and analysis system model that integrates the advantages of different human error analysis models and takes cognitive psychology as its basis. The model considers the conditions and factors related to the controller's task execution, and controller errors can be analyzed in terms of "causes and human error type identification", "information processing" and "internal and psychological mechanisms". Finally, an unsafe ATC incident case is investigated using the model. The results show that the model can be used to recognize, analyze and prevent controller errors.

  13. Análisis y Clasificación de Errores Cometidos por Alumnos de Secundaria en los Procesos de Sustitución Formal, Generalización y Modelización en Álgebra (Secondary Students' Error Analysis and Classification in Formal Substitution, Generalization and Modelling Process in Algebra)

    Directory of Open Access Journals (Sweden)

    Raquel M. Ruano

    2008-01-01

    Full Text Available We present a study with secondary school students on three specific processes of algebraic language: formal substitution, generalization and modelling. Based on the answers to a questionnaire, we classify the errors made and analyze their possible origins. Finally, we formulate some didactical consequences that follow from these results.

  14. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Institute of Scientific and Technical Information of China (English)

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi

    2014-01-01

    To address the low classification accuracy and poor utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
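
    The sparse-representation part of the decision rule (classify by the class-wise reconstruction error of an l1-regularized code over the training samples) can be sketched generically as below, assuming NumPy and scikit-learn; the Gabor texture and nonparametric weighted spectral features of the record are not reproduced.

```python
# Generic sparse-representation classification (SRC) by class-wise residual.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import normalize

def src_predict(D, labels, x, alpha=0.01):
    """D: (n_features, n_train) dictionary whose columns are training samples."""
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(D, x).coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, code, 0.0)     # keep class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ coef_c)
    return min(residuals, key=residuals.get)          # smallest reconstruction error

rng = np.random.default_rng(0)
n_feat, n_per_class = 60, 25
centers = {0: rng.normal(size=n_feat), 1: rng.normal(size=n_feat)}
train = np.hstack([centers[c][:, None] + 0.3 * rng.normal(size=(n_feat, n_per_class))
                   for c in (0, 1)])
labels = np.repeat([0, 1], n_per_class)
D = normalize(train, axis=0)                          # unit-norm dictionary atoms

test = centers[1] + 0.3 * rng.normal(size=n_feat)
print("predicted class:", src_predict(D, labels, test))   # expected: 1
```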

  15. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  16. The Research and Application of the Multi-classification Algorithm of Error-Correcting Codes Based on Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    祖文超; 苑津莎; 王峰; 刘磊

    2012-01-01

    In order to enhance the accuracy of transformer fault diagnosis, a multi-class classification algorithm combining error-correcting output codes with SVM is proposed. A mathematical model of transformer fault diagnosis is set up according to support vector machine theory. First, the error-correcting code matrix is used to construct several mutually independent sub-SVM classifiers, so that the accuracy of the classification model can be enhanced. Finally, the dissolved gases in the transformer oil (DGA) are taken as the training and testing samples of the error-correcting-code SVM to realize transformer fault diagnosis, and the algorithm is also checked with UCI data. The multi-class classification algorithm has been verified with VS2008 combined with Libsvm, and the results show that the method achieves high classification accuracy.
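
    A generic analogue of the error-correcting-codes-plus-SVM idea using scikit-learn's OutputCodeClassifier; the synthetic features stand in for the dissolved-gas-analysis (DGA) data, and the code matrix is generated randomly rather than designed as in the record.

```python
# Error-correcting output codes (ECOC) wrapping binary SVMs for multi-class data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=9, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each class gets a binary code word; one SVC is trained per code bit, and a test
# sample is assigned to the class whose code word is closest to the bit outputs.
ecoc = OutputCodeClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("test accuracy: %.3f" % ecoc.score(X_te, y_te))
```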

  17. Modulation classification based on spectrogram

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated or evaluated by theoretical analysis and extensive simulation studies. The method taking moment-like features is robust to frequency offset while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arrival sequence of symbols.
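
    A minimal version of the processing chain described above (spectrogram, then PCA, then a simple classifier), assuming SciPy and scikit-learn; the two synthetic signal classes merely stand in for real modulation types, so the printed accuracy says nothing about the record's results.

```python
# Spectrogram features -> PCA -> classifier on two synthetic signal classes.
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
fs, n_sig, length = 8000, 200, 2048
t = np.arange(length) / fs

signals, labels = [], []
for i in range(n_sig):
    f0 = rng.uniform(500, 1500)
    s = np.sin(2 * np.pi * f0 * t)               # "class 0": single carrier
    if i % 2:                                     # "class 1": add a second tone
        s = s + np.sin(2 * np.pi * 2 * f0 * t)
    signals.append(s + 0.5 * rng.normal(size=length))
    labels.append(i % 2)

def spec_features(s):
    _, _, Sxx = spectrogram(s, fs=fs, nperseg=128)
    return np.log1p(Sxx).ravel()

X = np.array([spec_features(s) for s in signals])
y = np.array(labels)

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
print("CV accuracy: %.2f" % cross_val_score(model, X, y, cv=5).mean())
```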

  18. Discussion on the Achievements and Errors of Locke's Theory of Tabula Rasa

    Institute of Scientific and Technical Information of China (English)

    杜晶; 傅长吉

    2014-01-01

    Locke's theory of tabula rasa has had a far-reaching impact on modern Western philosophy. The theory opposes the doctrine of innate ideas, proposes the initiative of the cognizing subject, and grounds the origin of ideas in nature, and in this it has exerted a positive influence. On closer examination, however, it is not difficult to find that Locke exaggerates or restrains the role of the mind, and understands the origin and destination of cognition, as well as the process of its development, only partially. In spite of this, the theory of tabula rasa still has great significance: it reminds people, in the process of knowing the world, to master correct modes of thinking and methods of cognition, to apply their innate abilities to test and develop their knowledge in nature, and thereby to realize their own value.

  19. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  20. Robust Latent Subspace Learning for Image Classification.

    Science.gov (United States)

    Fang, Xiaozhao; Teng, Shaohua; Lai, Zhihui; He, Zhaoshui; Xie, Shengli; Wong, Wai Keung

    2017-05-10

    This paper proposes a novel method, called robust latent subspace learning (RLSL), for image classification. We formulate the RLSL problem as a joint optimization over both the latent subspace learning and the classification model parameter prediction, which simultaneously minimizes: 1) the regression loss between the learned data representation and the objective outputs and 2) the reconstruction error between the learned data representation and the original inputs. The latent subspace can be used as a bridge that is expected to seamlessly connect the original visual features and their class labels and hence improve the overall prediction performance. RLSL combines feature learning with classification so that the learned data representation in the latent subspace is more discriminative for classification. To learn a robust latent subspace, we use a sparse term to compensate for error, which helps suppress the interference of noise by weakening its response during regression. An efficient optimization algorithm is designed to solve the proposed optimization problem. To validate the effectiveness of the proposed RLSL method, we conduct experiments on diverse databases, and encouraging recognition results are achieved compared with many state-of-the-art methods.

  1. Research on Software Error Behavior Classification Based on Software Failure Chain

    Institute of Scientific and Technical Information of China (English)

    刘义颖; 江建慧

    2015-01-01

    Software is now used everywhere and the requirements on its reliability are higher than ever, so it is necessary to study the software defect-error-failure process in order to prevent failures in advance and reduce the losses they cause. Studying the attributes that describe software error behavior helps to describe different error behaviors uniquely, and provides a basis for building a software fault pattern library and for software fault prediction and fault injection. Based on software failure chain theory, this paper analyzes the causal chain formed by software defects, software errors and software failures and, from the causal relations of the defect-error-failure chain, further analyzes the connections between the attribute sets that describe the anomalies of each stage. On the basis of the existing IEEE software anomaly classification standard, the error attribute set is derived from the defect attribute set and the failure attribute set, a classification method for software error behaviors is given together with the attribute sets and their reference values, and the rationality of the attributes is verified experimentally using an attribute reduction algorithm based on the minimal-correlation and maximal-dependency criteria.

  2. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO...... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our...... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems....

  3. Applying object-based image analysis and knowledge-based classification to ADS-40 digital aerial photographs to facilitate complex forest land cover classification

    Science.gov (United States)

    Hsieh, Yi-Ta; Chen, Chaur-Tzuhn; Chen, Jan-Chang

    2017-01-01

    In general, considerable human and material resources are required for performing a forest inventory survey. Using remote sensing technologies to save forest inventory costs has thus become an important topic in forest inventory-related studies. Leica ADS-40 digital aerial photographs feature advantages such as high spatial resolution, high radiometric resolution, and a wealth of spectral information. As a result, they have been widely used to perform forest inventories. We classified ADS-40 digital aerial photographs according to the complex forest land cover types listed in the Fourth Forest Resource Survey in an effort to establish a classification method for categorizing ADS-40 digital aerial photographs. Subsequently, we classified the images using the knowledge-based classification method in combination with object-based analysis techniques, decision tree classification techniques, classification parameters such as object texture, shape, and spectral characteristics, a class-based classification method, and geographic information system mapping information. Finally, the results were compared with manually interpreted aerial photographs. Images were classified using a hierarchical classification method comprised of four classification levels (levels 1 to 4). The classification overall accuracy (OA) of levels 1 to 4 is within a range of 64.29% to 98.50%. The final result comparisons showed that the proposed classification method achieved an OA of 78.20% and a kappa coefficient of 0.7597. On the basis of the image classification results, classification errors occurred mostly in images of sunlit crowns because the image values for individual trees varied. Such a variance was caused by the crown structure and the incident angle of the sun. These errors lowered image classification accuracy and warrant further studies. This study corroborates the high feasibility for mapping complex forest land cover types using ADS-40 digital aerial photographs.

  4. Reducing errors in emergency surgery.

    Science.gov (United States)

    Watters, David A K; Truskett, Philip G

    2013-06-01

    Errors are to be expected in health care. Adverse events occur in around 10% of surgical patients and may be even more common in emergency surgery. There is little formal teaching on surgical error in surgical education and training programmes despite their frequency. This paper reviews surgical error and provides a classification system, to facilitate learning. The approach and language used to enable teaching about surgical error was developed through a review of key literature and consensus by the founding faculty of the Management of Surgical Emergencies course, currently delivered by General Surgeons Australia. Errors may be classified as being the result of commission, omission or inition. An error of inition is a failure of effort or will and is a failure of professionalism. The risk of error can be minimized by good situational awareness, matching perception to reality, and, during treatment, reassessing the patient, team and plan. It is important to recognize and acknowledge an error when it occurs and then to respond appropriately. The response will involve rectifying the error where possible but also disclosing, reporting and reviewing at a system level all the root causes. This should be done without shaming or blaming. However, the individual surgeon still needs to reflect on their own contribution and performance. A classification of surgical error has been developed that promotes understanding of how the error was generated, and utilizes a language that encourages reflection, reporting and response by surgeons and their teams. © 2013 The Authors. ANZ Journal of Surgery © 2013 Royal Australasian College of Surgeons.

  5. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of $10^{-3}$ to $10^{-4}$—comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.
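
    As a toy, classical illustration of how redundancy suppresses error rates (a majority-vote repetition code, not the coherent measurement-free quantum scheme of the record), the simulation below shows the logical error rate falling roughly as 3p^2 once the physical flip probability p is small.

```python
# Classical 3-bit repetition code with majority-vote correction (toy analogy only).
import numpy as np

rng = np.random.default_rng(0)

def logical_error_rate(p, n_trials=200_000):
    """Probability that majority vote over 3 copies fails at flip probability p."""
    flips = rng.random((n_trials, 3)) < p
    return np.mean(flips.sum(axis=1) >= 2)

for p in (0.2, 0.1, 0.05, 0.01):
    print("physical %.2f -> logical %.5f" % (p, logical_error_rate(p)))
# For small p the logical rate behaves like 3*p**2, i.e. it falls faster than p.
```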

  6. Signal classification for acoustic neutrino detection

    Energy Technology Data Exchange (ETDEWEB)

    Neff, M., E-mail: max.neff@physik.uni-erlangen.de [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Anton, G.; Enzenhoefer, A.; Graf, K.; Hoessl, J.; Katz, U.; Lahmann, R.; Richardt, C. [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany)

    2012-01-11

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which are forming the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal to find a robust and effective way to perform this task. For a well-trained model, a testing error on the level of 1% is achieved for strong classifiers like Random Forest and Boosting Trees using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  7. Signal Classification for Acoustic Neutrino Detection

    CERN Document Server

    Neff, M; Enzenhöfer, A; Graf, K; Hößl, J; Katz, U; Lahmann, R; Richardt, C

    2011-01-01

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which are forming the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal to find a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosting Trees using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  8. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  9. Medication Errors

    Science.gov (United States)

  10. On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data

    CERN Document Server

    Richards, Joseph W; Butler, Nathaniel R; Bloom, Joshua S; Brewer, John M; Crellin-Quick, Arien; Higgins, Justin; Kennedy, Rachel; Rischard, Maxime

    2011-01-01

    With the coming data deluge from synoptic surveys, there is a growing need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly-observed variables based on a small number of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics ("feature"), detail methods to robustly estimate periodic light-curve features, introduce tree-ensemble methods for accurate variable star classification, and show how to rigorously evaluate the classification results using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% overall classification error using the random forest classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying sam...

  11. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  12. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso algorithm is available in the R package msgl. Its performance scales well with the problem size as illustrated by one of the examples considered - a 50 class classification problem with 10 k features, which amounts to estimating 500 k parameters. © 2013 Elsevier Inc. All rights reserved.

  13. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced, confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  14. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the result of the authors' research regarding the incorporation of Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core game design, adaptations are needed, since challenge is an important factor for fun and, from the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to allow the design of a set of principles in order to match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  15. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and the monitoring, consequences, prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  17. A randomised open-label cross-over study of inhaler errors, preference and time to achieve correct inhaler use in patients with COPD or asthma: comparison of ELLIPTA with other inhaler devices.

    Science.gov (United States)

    van der Palen, Job; Thomas, Mike; Chrystyn, Henry; Sharma, Raj K; van der Valk, Paul Dlpm; Goosens, Martijn; Wilkinson, Tom; Stonham, Carol; Chauhan, Anoop J; Imber, Varsha; Zhu, Chang-Qing; Svedsater, Henrik; Barnes, Neil C

    2016-11-24

    Errors in the use of different inhalers were investigated in patients naive to the devices under investigation in a multicentre, single-visit, randomised, open-label, cross-over study. Patients with chronic obstructive pulmonary disease (COPD) or asthma were assigned to ELLIPTA vs DISKUS (Accuhaler), metered-dose inhaler (MDI) or Turbuhaler. Patients with COPD were also assigned to ELLIPTA vs Handihaler or Breezhaler. Patients demonstrated inhaler use after reading the patient information leaflet (PIL). A trained investigator assessed critical errors (i.e., those likely to result in the inhalation of significantly reduced, minimal or no medication). If the patient made errors, the investigator demonstrated the correct use of the inhaler, and the patient demonstrated inhaler use again. Fewer COPD patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS, 9/171 (5%) vs 75/171 (44%); MDI, 10/80 (13%) vs 48/80 (60%); Turbuhaler, 8/100 (8%) vs 44/100 (44%); Handihaler, 17/118 (14%) vs 57/118 (48%); Breezhaler, 13/98 (13%) vs 45/98 (46%; all P<0.001). Most COPD patients used ELLIPTA correctly after reading the PIL and did not require investigator instruction. Instruction was required for DISKUS (65%), MDI (85%), Turbuhaler (71%), Handihaler (62%) and Breezhaler (56%). Fewer asthma patients made critical errors with ELLIPTA after reading the PIL vs: DISKUS (3/70 (4%) vs 9/70 (13%), P=0.221); MDI (2/32 (6%) vs 8/32 (25%), P=0.074) and significantly fewer vs Turbuhaler (3/60 (5%) vs 20/60 (33%), P<0.001). More patients preferred ELLIPTA over the other devices (all P⩽0.002). Significantly fewer COPD patients using ELLIPTA made critical errors after reading the PIL vs other inhalers. More asthma and COPD patients preferred ELLIPTA over comparator inhalers.

  18. A new classification of glaucomas

    Directory of Open Access Journals (Sweden)

    Bordeianu CD

    2014-09-01

    Full Text Available Constantin-Dan Bordeianu Private Practice, Ploiesti, Prahova, Romania Purpose: To suggest a new glaucoma classification that is pathogenic, etiologic, and clinical. Methods: After discussing the logical pathway used in criteria selection, the paper presents the new classification and compares it with the classification currently in use, that is, the one issued by the European Glaucoma Society in 2008. Results: The paper shows that the new classification is clear (being based on a coherent and consistently followed set of criteria), is comprehensive (framing all forms of glaucoma), and aids understanding of the disease (in that it uses a logical framing system). The great advantage is that it facilitates therapeutic decision making, in that it offers direct therapeutic suggestions and avoids errors leading to disasters. Moreover, the scheme remains open to any new development. Conclusion: The suggested classification is a pathogenic, etiologic, and clinical classification that fulfills the conditions of an ideal classification. It is the first classification in which the main criterion is consistently used for the first 5 to 7 crossings until its differentiation capabilities are exhausted. Then, secondary criteria (etiologic and clinical) pick up the relay until each form finds its logical place in the scheme. In order to avoid unclear aspects, the genetic criterion is no longer used, being replaced by age, one of the clinical criteria. The suggested classification brings benefits to all categories of ophthalmologists: beginners will have a tool to better understand the disease and to ease their decision making, whereas experienced doctors will have their practice simplified. For all doctors, errors leading to therapeutic disasters will be less likely to happen. Finally, researchers will have the object of their work gathered in the group of glaucoma with unknown or uncertain pathogenesis, whereas

  19. Rademacher Complexity in Neyman-Pearson Classification

    Institute of Scientific and Technical Information of China (English)

    Min HAN; Di Rong CHEN; Zhao Xu SUN

    2009-01-01

    The Neyman-Pearson (NP) criterion is one of the most important approaches in hypothesis testing. It is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds that are based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.
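
    For readers unfamiliar with the quantity in which the bounds are expressed, the empirical Rademacher average of a function class F over a sample x_1, ..., x_n is conventionally defined as below (a standard textbook definition, not quoted from the paper); the localized averages used in the paper restrict the supremum to a neighbourhood of the best classifier.

```latex
% Empirical Rademacher average of a class \mathcal{F} on a fixed sample
% x_1, ..., x_n; the \sigma_i are i.i.d. with P(\sigma_i = +1) = P(\sigma_i = -1) = 1/2.
\[
  \hat{\mathcal{R}}_n(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[
      \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)
    \right],
  \qquad \sigma_i \in \{-1, +1\}.
\]
```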

  20. Improve mask inspection capacity with Automatic Defect Classification (ADC)

    Science.gov (United States)

    Wang, Crystal; Ho, Steven; Guo, Eric; Wang, Kechang; Lakkapragada, Suresh; Yu, Jiao; Hu, Peter; Tolani, Vikram; Pang, Linyong

    2013-09-01

    As optical lithography continues to extend into the low-k1 regime, the resolution of mask patterns continues to diminish. The adoption of RET techniques like aggressive OPC and sub-resolution assist features, combined with the requirement to detect even smaller defects on masks due to increasing MEEF, poses considerable challenges for mask inspection operators and engineers. Therefore a comprehensive approach is required for handling defects post-inspection by correctly identifying and classifying the real killer defects impacting printability on the wafer, and ignoring nuisance defects and false defects caused by inspection systems. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at the SMIC mask shop for the 40nm technology node. Traditionally, each defect is manually examined and classified by the inspection operator based on a set of predefined rules and human judgment. At the SMIC mask shop, due to the significant total number of detected defects, manual classification is not cost-effective because of increased inspection cycle time, resulting in constrained mask inspection capacity, since the review has to be performed while the mask stays on the inspection system. The Luminescent Technologies Automated Defect Classification (ADC) product offers a complete and systematic approach for defect disposition and classification offline, resulting in improved utilization of the current mask inspection capability. Based on results from implementation of ADC in the SMIC mask production flow, there was around 20% improvement in inspection capacity compared to the traditional flow. This approach of computationally reviewing defects post mask-inspection ensures no yield loss by qualifying reticles without the errors associated with operator mis-classification or human error. The ADC engine retrieves the high resolution inspection images and uses a decision-tree flow to classify a given defect. Some identification mechanisms adopted by ADC to

  1. Improved neural network algorithm for classification of UAV imagery related to Wenchuan earthquake

    Science.gov (United States)

    Lin, Na; Yang, Wunian; Wang, Bin

    2009-06-01

    When the Wenchuan earthquake struck, the terrain of the region changed violently. Unmanned aerial vehicle (UAV) remote sensing is effective in extracting first-hand information, and the resulting high-resolution images are of great importance in disaster management and relief operations. The back-propagation (BP) neural network is an artificial neural network which combines a multi-layer feed-forward network with the error back-propagation algorithm. It has a strong input-output mapping capability and does not require the objects to be identified to obey a particular distribution law. Its strong non-linear modelling and error-tolerant capabilities allow remotely sensed image classification to achieve high accuracy. However, the algorithm also has drawbacks, such as slow convergence and a tendency to become trapped in local minima. In order to solve these problems, we have improved the algorithm by introducing a self-adaptive learning rate and adding a momentum factor. A UAV high-resolution aerial image of the Taoguan District of Wenchuan County is used as the data source. First, we preprocess the UAV aerial images and rectify the geometric distortion in the images. Training samples are then selected and purified. The image is then classified using the improved BP neural network algorithm. Finally, we compare this classification result with the maximum likelihood classification (MLC) result. The numerical comparison shows that the overall accuracy of the maximum likelihood classification is 83.8%, while that of the improved BP neural network classification is 89.7%. The testing results indicate that the latter is better.
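
    As a rough illustration of the two modifications named in the abstract, a self-adaptive learning rate and a momentum term, the sketch below trains a one-hidden-layer back-propagation network. The layer sizes, update rules and hyper-parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal BP sketch with momentum and a self-adaptive learning rate
# (illustrative only; data shapes and hyper-parameters are assumptions).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, Y, n_hidden=16, epochs=200, lr=0.1, momentum=0.9):
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, Y.shape[1]))
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    prev_err = np.inf
    for _ in range(epochs):
        H = sigmoid(X @ W1)                          # forward pass
        O = sigmoid(H @ W2)
        err = np.mean((Y - O) ** 2)
        # self-adaptive learning rate: speed up while the error falls,
        # back off when it rises
        lr = lr * 1.05 if err < prev_err else lr * 0.7
        prev_err = err
        delta_o = (O - Y) * O * (1 - O)              # backward pass (MSE loss)
        delta_h = (delta_o @ W2.T) * H * (1 - H)
        dW2 = momentum * dW2 - lr * H.T @ delta_o    # momentum-smoothed updates
        dW1 = momentum * dW1 - lr * X.T @ delta_h
        W2 += dW2
        W1 += dW1
    return W1, W2

# toy usage: 100 pixels, 8 spectral features, 3 land-cover classes (one-hot)
X = np.random.rand(100, 8)
Y = np.eye(3)[np.random.randint(0, 3, 100)]
train_bp(X, Y)
```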

  2. A Locality-Constrained and Label Embedding Dictionary Learning Algorithm for Image Classification.

    Science.gov (United States)

    Zhengming Li; Zhihui Lai; Yong Xu; Jian Yang; Zhang, David

    2017-02-01

    Locality and label information of training samples play an important role in image classification. However, previous dictionary learning algorithms do not take the locality and label information of atoms into account together in the learning process, and thus their performance is limited. In this paper, a discriminative dictionary learning algorithm, called the locality-constrained and label embedding dictionary learning (LCLE-DL) algorithm, was proposed for image classification. First, the locality information was preserved using the graph Laplacian matrix of the learned dictionary instead of the conventional one derived from the training samples. Then, the label embedding term was constructed using the label information of atoms instead of the classification error term, which contained discriminating information of the learned dictionary. The optimal coding coefficients derived by the locality-based and label-based reconstruction were effective for image classification. Experimental results demonstrated that the LCLE-DL algorithm can achieve better performance than some state-of-the-art algorithms.

  3. A deep learning approach to the classification of 3D CAD models

    Institute of Scientific and Technical Information of China (English)

    Fei-wei QIN; Lu-ye LI; Shu-ming GAO; Xiao-ling YANG; Xiang CHEN

    2014-01-01

    Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone. At the same time, the automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are selected and extracted from 3D CAD models first, and then pre-processed as high dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.

  4. Error Analysis in Composition of Iranian Lower Intermediate Students

    Science.gov (United States)

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in a writing task of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  5. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  6. Error analysis of subaperture processing in 1-D ultrasound arrays.

    Science.gov (United States)

    Zhao, Kang-Qiao; Bjåstad, Tore Gruner; Kristoffersen, Kjell

    2015-04-01

    To simplify the medical ultrasound system and reduce the cost, several techniques have been proposed to reduce the interconnections between the ultrasound probe and the back-end console. Among them, subaperture processing (SAP) is the most straightforward approach and is widely used in commercial products. This paper reviews the most important error sources of SAP, such as static focusing, delay quantization, linear delay profile, and coarse apodization, and the impacts introduced by these errors are shown. We propose to use main lobe coherence loss as a simple classification of the quality of the beam profile for a given design. This figure-of-merit (FoM) is evaluated by simulations with a 1-D ultrasound subaperture array setup. The analytical expressions and the coherence loss can work as a quick guideline in subaperture design by equalizing the merit degradations from different error sources, as well as minimizing the average or maximum loss over ranges. For the evaluated 1-D array example, a good balance between errors and cost was achieved using a subaperture size of 5 elements, focus at 40 mm range, and a delay quantization step corresponding to a phase of π/4.
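
    The figure-of-merit can be illustrated with a toy calculation (our own sketch under simplifying assumptions, not the paper's code): residual per-element delay errors are converted to phases at the centre frequency, and the main-lobe coherence loss is taken as the drop of the coherent sum relative to error-free delays.

```python
# Illustrative main-lobe coherence loss of a subaperture with residual
# per-element delay errors (e.g. from delay quantization). Assumed values,
# not taken from the paper.
import numpy as np

def coherence_loss_db(delay_errors_s, f0_hz):
    """delay_errors_s: residual delay error per element, in seconds."""
    phase = 2.0 * np.pi * f0_hz * np.asarray(delay_errors_s)
    coherent_gain = np.abs(np.exp(1j * phase).sum()) / len(phase)
    return -20.0 * np.log10(coherent_gain)

# example: 5-element subaperture, 2.5 MHz centre frequency, and a delay
# quantization step corresponding to a pi/4 phase (as in the abstract)
f0 = 2.5e6
quant_step = (np.pi / 4) / (2 * np.pi * f0)        # seconds
errors = (np.random.rand(5) - 0.5) * quant_step    # residual quantization error
print(f"coherence loss: {coherence_loss_db(errors, f0):.3f} dB")
```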

  7. Corpus Analysis of the Various Errors Made by the Student

    Institute of Scientific and Technical Information of China (English)

    刘启迪

    2012-01-01

    The software Wordsmith has been commonly used in corpus linguistics. In this paper, the author uses the Concord tool in Wordsmith to analyze various errors made by a student. Five passages written by the student are used. After annotating the errors, the author uses Concord to sort out each error and produce a classification chart of the errors. All the errors are classified into two categories: errors caused by carelessness and errors caused by limited language ability. The analysis shows that there are mainly three kinds of errors in the first category and five kinds of errors in the second category.

  8. Experimental demonstration of topological error correction

    OpenAIRE

    2012-01-01

    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  9. Gross Error Detection and Analysis by Hierarchical Classification of Mountainous LIDAR Data

    Institute of Scientific and Technical Information of China (English)

    李芸; 杨志强; 杨博

    2012-01-01

    Gross error detection is one of the important data processing steps for mountainous LIDAR point cloud data. By analysing the features of the gross error distribution, the original LIDAR point cloud data can be divided into extreme outliers, outlier clusters and isolated points. On this basis, the idea of hierarchical gross error detection for mountainous LIDAR point cloud data is proposed and verified on an example of experimental data. The experimental results show that the method can effectively remove gross errors from original mountainous LIDAR point cloud data and, to a certain extent, improves the effect of point cloud pre-processing.

  10. ACCUWIND - Methods for classification of cup anemometers

    DEFF Research Database (Denmark)

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.

    2006-01-01

    Errors associated with the measurement of wind speed are the major sources of uncertainties in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and not acceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annex I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result...

  11. Classification of the Observed Errors in Writing Skills of 6th Grade Students

    Directory of Open Access Journals (Sweden)

    Rabia Sena AKBABA

    2016-09-01

    At the text level, the most frequently observed error is that students fail to provide a meaningful transition between paragraphs. This research is important in that it is practice-based, in that the errors encountered in the texts are examined under various headings with support from the literature, in that it identifies where students' writing errors occur most, and in that it offers suggestions on how these errors can be eliminated. / Writing is a skill obtained in the primary school period and, like the other language skills, needed and used lifelong. This skill is expected to develop in parallel with class grade. In this sense, students are expected to develop their writing skill by producing texts with fewer errors as their grade level increases. Meeting this expectation varies from student to student. While some students produce error-free texts in direct proportion to their writing education, others write texts below, or far below, the expected level. Students who write texts with errors need to be evaluated individually and their errors reduced to a minimum. Detecting the errors in writing education is essential to understanding how and in what way those errors can be eliminated; therefore, the location of the errors in the texts must first be determined. In this study, the texts written by the students have been evaluated in the light of error topics determined by the researchers. The errors are classified under four headings (word, sentence, paragraph and text), and where the errors appear most is established. Where the writing errors occur most in the examined texts was detected using a data analysis technique. At the end of the analysis, it is observed that the writing errors at the word level consist of misspellings. At the sentence level, punctuation errors, followed by ambiguity, are observed most. At the paragraph level, it is observed that the students have

  12. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    Science.gov (United States)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. Emphasis is placed on activities involving potential security threats, such as holding a gun. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
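
    A minimal sketch of the kind of network described is given below; the layer sizes, the 64x64 input resolution and the six-class output are assumptions for illustration, not the architecture or training setup of the paper.

```python
# Minimal PyTorch CNN operating on single-channel spectrogram patches.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                 # x: (batch, 1, 64, 64) spectrograms
        return self.classifier(self.features(x).flatten(1))

model = SpectrogramCNN()
logits = model(torch.randn(8, 1, 64, 64))   # 8 dummy micro-Doppler spectrograms
print(logits.shape)                          # torch.Size([8, 6])
```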

  13. Audio Classification from Time-Frequency Texture

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.
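
    As a hedged illustration of the idea (our own toy example, not the paper's texture features), the sketch below treats a log-spectrogram as an image, summarizes it with a few simple statistics, and classifies with a nearest-neighbour rule.

```python
# Toy "time-frequency texture" classification: spectrogram statistics plus a
# nearest-neighbour rule. Features and signals are placeholders.
import numpy as np
from scipy.signal import spectrogram

def texture_features(x, fs):
    _, _, Sxx = spectrogram(x, fs=fs, nperseg=256)
    img = np.log(Sxx + 1e-12)
    gx, gy = np.gradient(img)              # crude local texture gradients
    return np.array([img.mean(), img.std(),
                     np.abs(gx).mean(), np.abs(gy).mean()])

def nearest_neighbour(train_feats, train_labels, query_feat):
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_labels[int(np.argmin(d))]

# toy usage with two synthetic "instruments"
fs = 16000
t = np.arange(fs) / fs
train = [np.sin(2 * np.pi * 440 * t), np.sign(np.sin(2 * np.pi * 440 * t))]
labels = np.array(["sine", "square"])
feats = np.array([texture_features(s, fs) for s in train])
print(nearest_neighbour(feats, labels, texture_features(train[1], fs)))
```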

  14. Classification problem in CBIR

    OpenAIRE

    Tatiana Jaworska

    2013-01-01

    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can the fuzzy set theory be used to handle crisp image data. We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  15. Expected energy-based restricted Boltzmann machine for classification.

    Science.gov (United States)

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient-descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
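
    The scoring idea can be sketched as follows (weights are random placeholders, training by stochastic gradient descent is omitted, and the exact formulation should be checked against the paper): an RBM over the concatenation of the input vector and a one-hot class vector is scored per class by the negative free energy (FE-RBM) or, replacing the softplus hidden term with z*sigmoid(z), by the negative expected energy (EE-RBM); the class with the highest score is predicted.

```python
# Hedged sketch of FE-RBM / EE-RBM class scoring (untrained placeholder weights).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def class_scores(x, W, b_vis, b_hid, n_classes, expected_energy=True):
    scores = []
    for c in range(n_classes):
        y = np.zeros(n_classes); y[c] = 1.0
        v = np.concatenate([x, y])                 # joint visible vector
        z = b_hid + W @ v                          # hidden pre-activations
        if expected_energy:                        # EE-RBM: sum of z * sigmoid(z)
            hidden_term = np.sum(z * sigmoid(z))
        else:                                      # FE-RBM: sum of softplus(z)
            hidden_term = np.sum(np.log1p(np.exp(z)))
        scores.append(b_vis @ v + hidden_term)     # negative (free/expected) energy
    return np.array(scores)

# toy usage: 10 input features, 3 classes, 20 hidden units
rng = np.random.default_rng(1)
x = rng.random(10)
W = rng.normal(scale=0.1, size=(20, 13))
b_vis, b_hid = np.zeros(13), np.zeros(20)
print(class_scores(x, W, b_vis, b_hid, n_classes=3).argmax())
```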

  16. Classification of DNA nucleotides with transverse tunneling currents

    DEFF Research Database (Denmark)

    Pedersen, Jonas Nyvold; Boynton, Paul; Ventra, Massimiliano Di

    2016-01-01

    It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates...

  17. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  18. Classification problem in CBIR

    Directory of Open Access Journals (Sweden)

    Tatiana Jaworska

    2013-04-01

    Full Text Available At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can the fuzzy set theory be used to handle crisp image data. We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine taking into account data mining.
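
    A toy sketch of fuzzy rule-based classification of crisp feature values is given below; the features, membership functions and rules are hypothetical and only illustrate the mechanism, not the CBIR system described in the paper.

```python
# Hypothetical fuzzy rule-based classifier for crisp image-object features.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify(area, elongation):
    # fuzzify the two crisp features
    small, large = tri(area, 0.0, 0.2, 0.5), tri(area, 0.4, 0.8, 1.2)
    compact, elongated = tri(elongation, 0.0, 0.2, 0.5), tri(elongation, 0.4, 0.8, 1.2)
    # rule base: firing strength = min of the antecedent memberships
    rules = {
        "window": min(small, compact),
        "door":   min(small, elongated),
        "roof":   min(large, elongated),
        "wall":   min(large, compact),
    }
    return max(rules, key=rules.get)    # winner-takes-all defuzzification

print(classify(area=0.8, elongation=0.7))   # -> "roof"
```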

  19. Experimental demonstration of topological error correction.

    Science.gov (United States)

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  20. High-Performance Neural Networks for Visual Object Classification

    CERN Document Server

    Cireşan, Dan C; Masci, Jonathan; Gambardella, Luca M; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.

  1. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...

  2. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainties in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and not acceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annex I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  3. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production....... Finally, pedagogical implication of CFL is discussed and future research is suggested. Keywords: error analysis, comparative sentences, comparative structure ‘‘bǐ - 比’, Chinese as a foreign language (CFL), written production...

  4. Automated compound classification using a chemical ontology

    Directory of Open Access Journals (Sweden)

    Bobach Claudia

    2012-12-01

    Full Text Available Background: Classification of chemical compounds into compound classes by using structure derived descriptors is a well-established method to aid the evaluation and abstraction of compound properties in chemical compound databases. MeSH and recently ChEBI are examples of chemical ontologies that provide a hierarchical classification of compounds into general compound classes of biological interest based on their structural as well as property or use features. In these ontologies, compounds have been assigned manually to their respective classes. However, with the ever increasing possibilities to extract new compounds from text documents using name-to-structure tools and considering the large number of compounds deposited in databases, automated and comprehensive chemical classification methods are needed to avoid the error-prone and time-consuming manual classification of compounds. Results: In the present work we implement principles and methods to construct a chemical ontology of classes that shall support the automated, high-quality compound classification in chemical databases or text documents. While SMARTS expressions have already been used to define chemical structure class concepts, in the present work we have extended the expressive power of such class definitions by expanding their structure-based reasoning logic. Thus, to achieve the required precision and granularity of chemical class definitions, sets of SMARTS class definitions are connected by OR and NOT logical operators. In addition, AND logic has been implemented to allow the concomitant use of flexible atom lists and stereochemistry definitions. The resulting chemical ontology is a multi-hierarchical taxonomy of concept nodes connected by directed, transitive relationships. Conclusions: A proposal for a rule-based definition of chemical classes has been made that allows chemical compound classes to be defined more precisely than before. The proposed structure-based reasoning

  5. Semiparametric Gaussian copula classification

    OpenAIRE

    Zhao, Yue; Wegkamp, Marten

    2014-01-01

    This paper studies the binary classification of two distributions with the same Gaussian copula in high dimensions. Under this semiparametric Gaussian copula setting, we derive an accurate semiparametric estimator of the log density ratio, which leads to our empirical decision rule and a bound on its associated excess risk. Our estimation procedure takes advantage of the potential sparsity as well as the low noise condition in the problem, which allows us to achieve faster convergence rate of...

  6. Analytic radar micro-Doppler signatures classification

    Science.gov (United States)

    Oh, Beom-Seok; Gu, Zhaoning; Wang, Guan; Toh, Kar-Ann; Lin, Zhiping

    2017-06-01

    Due to its capability of capturing the kinematic properties of a target object, radar micro-Doppler signatures (m-DS) play an important role in radar target classification. This is particularly evident from the remarkable number of research papers published every year on m-DS for various applications. However, most of these works rely on the support vector machine (SVM) for target classification. It is well known that training an SVM is computationally expensive due to the search required to locate the support vectors. In this paper, the classifier learning problem is addressed by a total error rate (TER) minimization for which an analytic solution is available. This largely reduces the search time in the learning phase. The analytically obtained TER solution is globally optimal with respect to the classification total error count rate. Moreover, our empirical results show that TER outperforms SVM in terms of classification accuracy and computational efficiency on a five-category radar classification problem.
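
    The practical point is that the classifier weights admit an analytic (closed-form) solution rather than an iterative search. The sketch below shows a generic closed-form linear classifier (regularized least squares on +/-1 targets) purely to illustrate a single-linear-solve training step; it is a stand-in under our own assumptions, not the exact TER formulation of the paper.

```python
# Generic closed-form linear classifier: one linear solve, no iterative search.
import numpy as np

def fit_closed_form(X, y, reg=1e-3):
    """X: (n, d) feature vectors (e.g. m-DS features), y: (n,) labels in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])            # append a bias column
    A = Xb.T @ Xb + reg * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)                  # closed-form weights

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 8)), rng.normal(+1, 1, (50, 8))])
y = np.r_[-np.ones(50), np.ones(50)]
w = fit_closed_form(X, y)
print("training accuracy:", (predict(X, w) == y).mean())
```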

  7. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now...... well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  8. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as a potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for a record was made by voting over all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
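
    The two ingredients named in the abstract, Fisher's linear discriminant trained on selected feature subsets and a final decision by voting over the classifiers' outputs, can be sketched as follows; the data and the randomly drawn feature subsets are synthetic placeholders used only for illustration.

```python
# Sketch: Fisher's linear discriminant per feature subset, majority vote overall.
import numpy as np

def fisher_fit(X0, X1):
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

def fisher_predict(x, w, thr):
    return int(x @ w > thr)           # 1 = "diabetic", 0 = "healthy"

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(40, 103))    # healthy records, 103 local responses
X1 = rng.normal(0.8, 1.0, size=(40, 103))    # diabetic records
subsets = [rng.choice(103, size=10, replace=False) for _ in range(5)]
classifiers = [fisher_fit(X0[:, s], X1[:, s]) for s in subsets]

x_new = rng.normal(0.8, 1.0, size=103)
votes = [fisher_predict(x_new[s], w, thr) for s, (w, thr) in zip(subsets, classifiers)]
print("diabetic" if sum(votes) > len(votes) / 2 else "healthy")
```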

  9. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul

    2007-01-01

    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.

  10. Improved Algorithm of Pattern Classification and Recognition Applied in a Coal Dust Sensor

    Institute of Scientific and Technical Information of China (English)

    MA Feng-ying; SONG Shu

    2007-01-01

    To resolve the conflicting requirements of measurement precision and real-time performance, an improved algorithm for pattern classification and recognition was developed. The angular distribution of diffracted light varies with particle size. These patterns can be classified into groups with an innovative classification based upon reference dust samples. After such classification, patterns can be recognized easily and rapidly by minimizing the variance between the reference pattern and dust sample eigenvectors. Simulation showed that the maximum recognition speed improves 20-fold. This enables the use of a single-chip, real-time inversion algorithm. An increased number of reference patterns reduced the errors in total and respirable coal dust measurements. Experiments in a coal mine confirm that the sensor achieves 95% accuracy. The results indicate that the improved algorithm effectively enhances the precision and real-time capability of the coal dust sensor.

  11. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  12. Error Analysis: Past, Present, and Future

    Science.gov (United States)

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  14. Automatic cloud classification of whole sky images

    Directory of Open Access Journals (Sweden)

    A. Heinle

    2010-05-01

    Full Text Available The recently increasing development of whole sky imagers enables temporal and spatial high-resolution sky observations. One application already performed in most cases is the estimation of fractional sky cover. A distinction between different cloud types, however, is still in progress. Here, an automatic cloud classification algorithm is presented, based on a set of mainly statistical features describing the color as well as the texture of an image. The k-nearest-neighbour classifier is used due to its high performance in solving complex issues, simplicity of implementation and low computational complexity. Seven different sky conditions are distinguished: high thin clouds (cirrus and cirrostratus), high patched cumuliform clouds (cirrocumulus and altocumulus), stratocumulus clouds, low cumuliform clouds, thick clouds (cumulonimbus and nimbostratus), stratiform clouds and clear sky. Based on the Leave-One-Out Cross-Validation the algorithm achieves an accuracy of about 97%. In addition, a test run of random images is presented, still outperforming previous algorithms by yielding a success rate of about 75%, or up to 88% if only "serious" errors with respect to radiation impact are considered. Reasons for the decrement in accuracy are discussed, and ideas to further improve the classification results, especially in problematic cases, are investigated.
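
    The evaluation procedure described, a k-nearest-neighbour classifier scored by leave-one-out cross-validation, can be sketched as follows; the feature vectors and class labels below are synthetic placeholders rather than the colour and texture features used in the paper.

```python
# k-NN with leave-one-out cross-validation on synthetic feature vectors.
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    return np.bincount(train_y[idx]).argmax()        # majority vote among k nearest

def leave_one_out_accuracy(X, y, k=3):
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i                 # hold out sample i
        hits += knn_predict(X[mask], y[mask], X[i], k) == y[i]
    return hits / len(X)

# toy usage: 60 images, 12 colour/texture features, 7 sky classes (0..6)
rng = np.random.default_rng(0)
y = rng.integers(0, 7, size=60)
X = rng.normal(size=(60, 12)) + y[:, None]            # class-dependent shift
print(f"LOO accuracy: {leave_one_out_accuracy(X, y):.2f}")
```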

  15. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  16. Errors in medicine administration - profile of medicines: knowing and preventing

    OpenAIRE

    Reis, Adriano Max Moreira; Marques, Tatiane Cristina; Opitz, Simone Perufo; Silva, Ana Elisa Bauer de Camargo; Gimenes, Fernanda Raphael Escobar; Teixeira, Thalyta Cardoso Alux; Lima, Rhanna Emanuela Fontenele; Cassiani, Silvia Helena De Bortoli

    2010-01-01

    OBJECTIVES: To describe the pharmacological characteristics of medicines involved in administration errors and determine the frequency of errors with potentially dangerous medicines and low therapeutic index, in clinical units of five teaching hospitals, in Brazil. METHODS: Multicentric study, descriptive and exploratory, using the non-participant observation technique (during the administration of 4958 doses of medicines) and the anatomical therapeutic chemical classification (ATC). RESULTS:...

  17. Xenolog classification.

    Science.gov (United States)

    Darby, Charlotte A; Stolzer, Maureen; Ropp, Patrick J; Barker, Daniel; Durand, Dannie

    2017-03-01

    Orthology analysis is a fundamental tool in comparative genomics. Sophisticated methods have been developed to distinguish between orthologs and paralogs and to classify paralogs into subtypes depending on the duplication mechanism and timing, relative to speciation. However, no comparable framework exists for xenologs: gene pairs whose history, since their divergence, includes a horizontal transfer. Further, the diversity of gene pairs that meet this broad definition calls for classification of xenologs with similar properties into subtypes. We present a xenolog classification that uses phylogenetic reconciliation to assign each pair of genes to a class based on the event responsible for their divergence and the historical association between genes and species. Our classes distinguish between genes related through transfer alone and genes related through duplication and transfer. Further, they separate closely-related genes in distantly-related species from distantly-related genes in closely-related species. We present formal rules that assign gene pairs to specific xenolog classes, given a reconciled gene tree with an arbitrary number of duplications and transfers. These xenology classification rules have been implemented in software and tested on a collection of ∼13 000 prokaryotic gene families. In addition, we present a case study demonstrating the connection between xenolog classification and gene function prediction. The xenolog classification rules have been implemented in Notung 2.9, a freely available phylogenetic reconciliation software package. http://www.cs.cmu.edu/~durand/Notung . Gene trees are available at http://dx.doi.org/10.7488/ds/1503 . durand@cmu.edu. Supplementary data are available at Bioinformatics online.

  18. Optimized Time-Gated Fluorescence Spectroscopy for the Classification and Recycling of Fluorescently Labeled Plastics.

    Science.gov (United States)

    Fomin, Petr; Zhelondz, Dmitry; Kargel, Christian

    2016-08-29

    For the production of high-quality parts from recycled plastics, a very high purity of the plastic waste to be recycled is mandatory. The incorporation of fluorescent tracers ("markers") into plastics during the manufacturing process helps overcome typical problems of non-tracer based optical classification methods. Despite the unique emission spectra of fluorescent markers, the classification becomes difficult when the host plastics exhibit (strong) autofluorescence that spectrally overlaps the marker fluorescence. Increasing the marker concentration is not an option from an economic perspective and might also adversely affect the properties of the plastics. A measurement approach that suppresses the autofluorescence in the acquired signal is time-gated fluorescence spectroscopy (TGFS). Unfortunately, TGFS is associated with a lower signal-to-noise (S/N) ratio, which results in larger classification errors. In order to optimize the S/N ratio we investigate and validate the best TGFS parameters-derived from a model for the fluorescence signal-for plastics labeled with four specifically designed fluorescent markers. In this study we also demonstrate the implementation of TGFS on a measurement and classification prototype system and determine its performance. Mean values for a sensitivity of [Formula: see text] = 99.93% and precision [Formula: see text] = 99.80% were achieved, proving that a highly reliable classification of plastics can be achieved in practice.
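
    The trade-off being optimized can be written down for a generic gating model (a single-exponential illustration under our own assumptions, not necessarily the signal model used in the paper): with marker and autofluorescence lifetimes tau_m and tau_a, gate delay t_g and gate width T, the collected signal is

```latex
% Generic time-gated signal model (illustration only).
\[
  S(t_g, T)
  = \int_{t_g}^{t_g + T} \left( A_m e^{-t/\tau_m} + A_a e^{-t/\tau_a} \right) dt
  = A_m \tau_m \left( e^{-t_g/\tau_m} - e^{-(t_g+T)/\tau_m} \right)
  + A_a \tau_a \left( e^{-t_g/\tau_a} - e^{-(t_g+T)/\tau_a} \right).
\]
```

    If tau_a is much shorter than tau_m, increasing the gate delay t_g suppresses the autofluorescence term exponentially faster than the marker term, but it also discards marker photons and therefore lowers the S/N ratio, which is why the gate parameters have to be optimized rather than simply delayed as far as possible.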

  19. Neuromuscular disease classification system

    Science.gov (United States)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases and 58 structural features that the human eye cannot see, based on the assumption that the biopsy is considered as a graph, where the nodes are represented by each fiber, and two nodes are connected if two fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by the human visual inspection improves the categorization of atrophic patterns.

  20. Evaluation criteria for software classification inventories, accuracies, and maps

    Science.gov (United States)

    Jayroe, R. R., Jr.

    1976-01-01

    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. This classification technique contains information on the spatial complexity of the test site, on the relative location of classification errors, on agreement of the classification maps with ground truth maps, and reduces back to the original information normally found in a contingency table.

  1. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Directory of Open Access Journals (Sweden)

    Lev V. Utkin

    2012-01-01

    Full Text Available A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters is reduced to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  2. Spelling in adolescents with dyslexia: errors and modes of assessment.

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia.

  3. Research on the technology for processing errors of photoelectric theodolite based on error design idea

    Science.gov (United States)

    Guo, Xiaosong; Pu, Pengcheng; Zhou, Zhaofa; Wang, Kunming

    2012-10-01

    The errors existing in photoelectric theodolites were studied according to the error design idea, that is, the correction of theodolite errors is achieved by actively analyzing the effect of the errors instead of passively processing the error-laden data. For the shafting error, the relationship between different errors was analyzed with an error model based on coordinate transformation, and a real-time error compensation method based on the normal-reversed measuring method and automatic levelness detection was proposed. As to the eccentricity error of the dial, the idea of the eccentric residual error was presented and its influence on measuring precision was studied; a dynamic compensation model was then built, so the influence of the dial eccentricity error on measuring precision can be eliminated. For the centering deviation in the process of measuring angles, a compensation method based on the error model was proposed, in which the centering deviation is detected automatically using computer vision. The above methods, based on the error design idea, effectively reduce the influence of errors on the measuring result through software compensation and improve the degree of automation of theodolite azimuth angle measurement, while the precision is not degraded.

  4. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  5. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  6. Achieving professionalism

    Directory of Open Access Journals (Sweden)

    R.A.E. Thompson

    1983-09-01

    Full Text Available In approaching the subject of professionalism the author has chosen to focus on the practical aspects rather than the philosophical issues. In so doing an attempt is made to identify criteria which demonstrate the achievement of the essence of professionalism.

  7. A proposal for the tagging of grammatical and pragmatic errors

    OpenAIRE

    Carrió Pastor, Mª Luisa; Mestre-Mestre, Eva M.

    2013-01-01

    Errors should be viewed as a key feature of language learning and language use. In this paper, we focus on the identification and classification of errors that are related to students' grammar acquisition and pragmatic competence. Our objectives are, first, to propose the tagging of grammatical errors and pragmatic e...

  8. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
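
    A toy example of why dual numbers are suited to first-order error modeling (our own minimal illustration, far simpler than the Weil-algebra machinery of the paper): arithmetic on a + b*eps with eps**2 = 0 propagates linearized error terms automatically.

```python
# Dual numbers a + b*eps with eps**2 = 0: the eps part carries the
# first-order (linearized) error of the computed quantity.
class Dual:
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __add__(self, other):
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    def __repr__(self):
        return f"{self.real} + {self.eps}*eps"

# nominal axis coordinate 10.0 carrying a small error term 0.02, pushed
# through a (trivial) kinematic expression: the eps part of the result is
# the linearized error, d(x^2) = 2*x*dx
x = Dual(10.0, 0.02)
print(x * x)      # 100.0 + 0.4*eps
```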

  9. The application of Aronson's taxonomy to medication errors in nursing.

    Science.gov (United States)

    Johnson, Maree; Young, Helen

    2011-01-01

    Medication administration is a frequent nursing activity that is prone to error. In this study of 318 self-reported medication incidents (including near misses), very few resulted in patient harm: 7% required intervention or prolonged hospitalization or caused temporary harm. Aronson's classification system provided an excellent framework for analysis of the incidents, with a close connection between the type of error and the change strategy to minimize medication incidents. Taking a behavioral approach to medication error classification has provided helpful strategies for nurses, such as nurse-call cards on patient lockers when patients are absent and checking of medication sign-off by outgoing and incoming staff at handover.

  10. Detection and Classification of Whale Acoustic Signals

    Science.gov (United States)

    Xian, Yin

    This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this, one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale

  11. AN ANALYSIS OF GRAMMATICAL ERRORS ON SPEAKING ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Merlyn Simbolon

    2015-09-01

    Full Text Available This study aims to analyze the grammatical errors made on speaking activities using the simple present and present progressive tenses by second-year students of the English Education Department, Palangka Raya University, and to describe those errors. The subjects were 30 students. This qualitative study describes the types, sources and causes of students' errors, taken from an oral essay test consisting of questions using the simple present and present progressive tenses. The errors were identified and classified according to the Linguistic Category Taxonomy and Richard's classification, together with the possible sources and causes of errors. The findings showed that the students' errors fell into six categories: errors in the production of verb groups, errors in the distribution of verb groups, errors in the use of articles, errors in the use of prepositions, errors in the use of questions, and miscellaneous errors. With regard to sources and causes, intra-lingual interference was the major source of errors (82.55%), with overgeneralization the major cause (44.71%). Keywords: grammatical errors, speaking skill, speaking activities

  12. Multilingual documentation and classification.

    Science.gov (United States)

    Donnelly, Kevin

    2008-01-01

    Health care providers around the world have used classification systems for decades as a basis for documentation, communications, statistical reporting, reimbursement and research. In more recent years machine-readable medical terminologies have taken on greater importance with the adoption of electronic health records and the need for greater granularity of data in clinical systems. Use of a clinical terminology harmonised with classifications, implemented within a clinical information system, will enable the delivery of many patient health benefits including electronic clinical decision support, disease screening and enhanced patient safety. To be usable, these systems must be translated into the language of use without losing meaning. It is evident that today no single system can meet all requirements, which calls for collaboration and harmonisation in order to achieve true interoperability on a multilingual basis.

  13. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  14. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such, any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues and the error types present within demand forecasts.

  15. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  16. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    Science.gov (United States)

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
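
    One plausible reading of the proposed estimator, a weighted mean of subsampling errors over tuning values with weights given by how often each value wins the inner tuning step, is sketched below. The SVM classifier, the grid of C values and the split counts are illustrative assumptions, not the paper's exact setup.

```python
# Hedged sketch of a weighted-mean tuning-bias correction (not the authors'
# exact estimator): the corrected error is a weighted mean of the outer
# subsampling errors of each tuning value, weighted by how often that value
# is selected by the inner tuning step.
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit, cross_val_score
from sklearn.svm import SVC

def corrected_error(X, y, Cs=(0.1, 1.0, 10.0), n_iter=50, seed=0):
    outer = StratifiedShuffleSplit(n_splits=n_iter, test_size=0.2, random_state=seed)
    errors = np.zeros((n_iter, len(Cs)))   # outer error per tuning value
    wins = np.zeros(len(Cs))               # how often each value wins inner tuning
    for i, (train, test) in enumerate(outer.split(X, y)):
        inner_scores = []
        for k, C in enumerate(Cs):
            clf = SVC(C=C, kernel="linear").fit(X[train], y[train])
            errors[i, k] = 1.0 - clf.score(X[test], y[test])
            inner_scores.append(cross_val_score(SVC(C=C, kernel="linear"),
                                                X[train], y[train], cv=3).mean())
        wins[int(np.argmax(inner_scores))] += 1
    weights = wins / wins.sum()
    return float(errors.mean(axis=0) @ weights)   # weighted mean over tuning values
```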

  17. Unsupervised classification of operator workload from brain signals

    Science.gov (United States)

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin

    2016-06-01

    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
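
    Of the three spatial filtering methods, CSP is the easiest to sketch. The following minimal implementation, assuming band-pass filtered EEG epochs of shape (trials, channels, samples), computes CSP filters from class-wise covariances and the usual log-variance features; it is a didactic reduction, not the toolchain used in the study.

```python
# Minimal common spatial patterns (CSP) sketch for two workload classes.
import numpy as np

def csp_filters(epochs_a, epochs_b, n_components=4):
    # Average, trace-normalised spatial covariance per class
    def cov(epochs):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in epochs], axis=0)
    Ca, Cb = cov(epochs_a), cov(epochs_b)
    # Whiten the composite covariance, then diagonalise the whitened Ca
    evals, evecs = np.linalg.eigh(Ca + Cb)
    whitening = evecs @ np.diag(evals ** -0.5) @ evecs.T
    d, V = np.linalg.eigh(whitening @ Ca @ whitening.T)
    order = np.argsort(d)[::-1]
    W = V[:, order].T @ whitening          # CSP filters, rows sorted by eigenvalue
    pick = np.r_[0:n_components // 2, -(n_components // 2):0]
    return W[pick]                         # most discriminative filters of each class

def log_var_features(epochs, W):
    # Log band-power of CSP-filtered signals, the usual classification feature
    Z = np.einsum("ck,nkt->nct", W, epochs)
    return np.log(Z.var(axis=2))
```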

  18. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  19. Exploring diversity in ensemble classification: Applications in large area land cover mapping

    Science.gov (United States)

    Mellor, Andrew; Boukir, Samia

    2017-07-01

    Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria, Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area land cover mapping.
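
    The ensemble margin discussed above can be computed directly from the votes of a fitted random forest. The sketch below uses an unsupervised margin (difference between the two largest vote counts); the study's exact margin definition may differ.

```python
# Hedged sketch of an ensemble margin for a fitted scikit-learn random forest.
# Low-margin samples are the "difficult" instances discussed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ensemble_margin(forest: RandomForestClassifier, X):
    """(votes for top class - votes for runner-up) / number of trees, per sample."""
    votes = np.stack([tree.predict(X) for tree in forest.estimators_])   # (n_trees, n_samples)
    classes = np.unique(votes)
    counts = np.stack([(votes == c).sum(axis=0) for c in classes])       # (n_classes, n_samples)
    if counts.shape[0] == 1:                # all trees agree on a single class
        return np.ones(X.shape[0])
    counts = np.sort(counts, axis=0)
    return (counts[-1] - counts[-2]) / votes.shape[0]                    # margin in [0, 1]
```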

  20. Improvement of the classification quality in detection of Hashimoto's disease with a combined classifier approach.

    Science.gov (United States)

    Omiotek, Zbigniew

    2017-08-01

    The purpose of the study was to construct an efficient classifier that, along with a given reduced set of discriminant features, could be used as part of a computer system for automatic identification and classification of ultrasound images of the thyroid gland, aimed at detecting cases affected by Hashimoto's thyroiditis. A total of 10 supervised learning techniques and a majority vote for the combined classifier were used. Two models were proposed as a result of the classifier's construction. The first is based on the K-nearest neighbours method (for K = 7). It uses three discriminant features and affords sensitivity of 88.1%, specificity of 66.7% and a classification error of 21.8%. The second model is a combined classifier, constructed from three component classifiers based on the K-nearest neighbours method (for K = 7), linear discriminant analysis and a boosting algorithm. The combined classifier uses 48 discriminant features and achieves classification sensitivity of 88.1%, specificity of 69.4% and a classification error of 20.5%. The combined classifier thus improves the classification quality compared with the single model. The models, built as part of an automatic computer system, may support the physician, especially in first-contact hospitals, in diagnosing cases that are difficult to recognise from ultrasound images. The high sensitivity of the constructed classification models indicates high detection accuracy for sick cases, which is beneficial to patients from a medical point of view.
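
    A hedged reconstruction of the combined model, majority voting over 7-NN, linear discriminant analysis and a boosting ensemble, is shown below with scikit-learn; the feature matrix, labels and AdaBoost settings are placeholders rather than the paper's 48 discriminant features.

```python
# Hard majority vote over three component classifiers, in the spirit of the
# combined model described above. X and y (texture features, labels) are
# placeholders and must be supplied by the caller.
from sklearn.ensemble import VotingClassifier, AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

combined = VotingClassifier(
    estimators=[
        ("knn7", KNeighborsClassifier(n_neighbors=7)),
        ("lda", LinearDiscriminantAnalysis()),
        ("boost", AdaBoostClassifier(n_estimators=100)),
    ],
    voting="hard",   # majority vote
)
# scores = cross_val_score(combined, X, y, cv=10)
```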

  1. PDF text classification to leverage information extraction from publication reports.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges to the underlying natural language processing algorithm. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% higher than that of the machine learning classifier, on PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Three-Class Mammogram Classification Based on Descriptive CNN Features

    Science.gov (United States)

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of a two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
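
    The CNN-DW preprocessing chain (CLAHE followed by a one-level 2D-DWT) can be sketched as follows, assuming OpenCV and PyWavelets are available; the mammogram patch here is synthetic and the wavelet choice is an assumption.

```python
# Illustrative preprocessing in the spirit of CNN-DW: CLAHE contrast
# enhancement followed by a one-level 2D discrete wavelet transform.
import numpy as np
import cv2
import pywt

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in mammogram patch

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(patch)

# 2D-DWT: approximation plus horizontal, vertical and diagonal detail subbands
cA, (cH, cV, cD) = pywt.dwt2(enhanced.astype(float), "db4")
subbands = [cA, cH, cV, cD]   # each subband would then feed the feature extractor
```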

  3. Providing an Approach to Locating the Semantic Error of Application Using Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Abdollah Rahimi

    2016-12-01

    Full Text Available Regardless of the effort invested in producing a computer program, the program may still contain bugs and defects; larger and more complex programs are more likely to contain errors. The purpose of this paper is to present an approach to detecting erroneous behaviour of an application using a clustering technique. Because the program follows different execution paths for different inputs, it is impossible to discover all errors in the program before the software is delivered. Monitoring all execution paths before delivery is very difficult or even impossible, so many errors remain hidden in the program and are revealed only after delivery. Solutions proposed to achieve this goal compare information from successful and unsuccessful executions of the program (so-called determinants) and report the points suspected of error to the programmer. The main problem is that these analyses examine determinant information in isolation, regardless of the dependencies between predicates, which leaves such methods unable to detect certain types of errors. To address this, a new solution is provided based on analysing the runtime behaviour of executable paths while taking the interactions between determinants into account. A clustering method is used to classify graphs based on their similarities and ultimately to identify the regions of the erroneous code paths suspected of containing the error. Evaluation of the proposed strategy on a collection of real programs shows that it detects errors more accurately than previous approaches.

  4. Algorithms for Hyperspectral Signature Classification in Non-resolved Object Characterization Using Tabular Nearest Neighbor Encoding

    Science.gov (United States)

    Schmalz, M.; Key, G.

    Accurate spectral signature classification is key to the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications, signature classification accuracy depends on accurate spectral endmember determination [1]. However, in selected target recognition (ATR) applications, it is possible to circumvent the endmember detection problem by employing a Bayesian classifier. Previous approaches to Bayesian classification of spectral signatures have been rule-based, or predicated on a priori parameterized information obtained from offline training, as in the case of neural networks [1,2]. Unfortunately, class separation and classifier refinement results in these methods tend to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of inputs. This can lead to potentially significant classification errors in the presence of noise or densely interleaved signatures. In this paper, we present an emerging technology for nonimaging spectral signature classification based on a highly accurate but computationally efficient search engine called Tabular Nearest Neighbor Encoding (TNE) [3]. Based on prior results, TNE can optimize its classifier performance to track input nonergodicities, as well as yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., the neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement algorithms. This allows the TNE programmer or user to determine parameters for classification accuracy, and to mathematically analyze the signatures for which TNE did not obtain classification matches. This dual approach to analysis (i.e., correct vs. incorrect classification) has been shown to

  5. Noise-Tolerant Hyperspectral Signature Classification in Unresolved Object Detection with Adaptive Tabular Nearest Neighbor Encoding

    Science.gov (United States)

    Schmalz, M.; Key, G.

    Accurate spectral signature classification is a crucial step in the nonimaging detection and recognition of spaceborne objects. In classical hyperspectral recognition applications, especially where linear mixing models are employed, signature classification accuracy depends on accurate spectral endmember discrimination. In selected target recognition (ATR) applications, previous non-adaptive techniques for signature classification have yielded class separation and classifier refinement results that tend to be suboptimal. In practice, the number of signatures accurately classified often depends linearly on the number of inputs. This can lead to potentially severe classification errors in the presence of noise or densely interleaved signatures. In this paper, we present an enhancement of an emerging technology for nonimaging spectral signature classification based on a highly accurate, efficient search engine called Tabular Nearest Neighbor Encoding (TNE). Adaptive TNE can optimize its classifier performance to track input nonergodicities and yield measures of confidence or caution for evaluation of classification results. Unlike neural networks, TNE does not have a hidden intermediate data structure (e.g., a neural net weight matrix). Instead, TNE generates and exploits a user-accessible data structure called the agreement map (AM), which can be manipulated by Boolean logic operations to effect accurate classifier refinement through programmable algorithms. The open architecture and programmability of TNE's pattern-space (AM) processing allows a TNE developer to determine the qualitative and quantitative reasons for classification accuracy, as well as characterize in detail the signatures for which TNE does not obtain classification matches, and why such mis-matches occur. In this study AM-based classification has been modified to partially compensate for input statistical changes, in response to performance metrics such as probability of correct classification (Pd

  6. Analysis of the influencing factors of nursing-related medication errors based on the conceptual framework of the international classification for patient safety

    Institute of Scientific and Technical Information of China (English)

    朱晓萍; 田梅梅; 施雁; 孙晓; 龚美芳; 毛雅芬

    2014-01-01

    Objective To identify the influencing factors of nursing-related medication errors and to propose effective prevention and control measures. Methods One thousand three hundred and forty-three medication error cases from 15 tertiary hospitals registered with the Shanghai Nursing Quality Control Center were selected. The influencing factors were analyzed with a research instrument constructed from the levels of influencing factors in the international classification for patient safety (ICPS), using the method of content analysis. Results Medication errors occurred most frequently (62.84%) during the 8:00-16:00 shift, when most therapeutic nursing takes place. Nursing-related medication errors occurred frequently in elderly patients over 70 years old (32.45%), suggesting that the self-care and communication abilities of elderly patients are weak and that these patients carry the highest risk of medication errors. The patient safety outcomes in the ICPS were divided into five levels: none, mild, moderate, severe and death; the corresponding proportions of medication errors among the 1343 cases were 91.88%, 3.35%, 2.76%, 2.01% and 0%. The influencing factors of nursing-related medication errors in the 1343 cases were recorded 3185 times; from most to least frequent they were routine violations, "negligence" and "fault" among technical mistakes, "misapplication of good rules" among rule-based errors, knowledge-based mistakes, communication, and illusion. Conclusions Applying the ICPS influencing-factor framework helps nursing managers discriminate system and process defects from the perspective of human error and can improve the management of patient safety.

  7. EXPLICIT ERROR ESTIMATES FOR MIXED AND NONCONFORMING FINITE ELEMENTS

    Institute of Scientific and Technical Information of China (English)

    Shipeng Mao; Zhong-Ci Shi

    2009-01-01

    In this paper, we study the explicit expressions of the constants in the error estimates of the lowest order mixed and nonconforming finite element methods. We start with an explicit relation between the error constant of the lowest order Raviart-Thomas interpolation error and the geometric characters of the triangle. This gives an explicit error constant of the lowest order mixed finite element method. Furthermore, similar results can be extended to the nonconforming P1 scheme based on its close connection with the lowest order Raviart-Thomas method. Meanwhile, such explicit a priori error estimates can be used as computable error bounds, which are also consistent with the maximal angle condition for the optimal error estimates of mixed and nonconforming finite element methods. Mathematics subject classification: 65N12, 65N15, 65N30, 65N50.

  8. Acetylcholine mediates behavioral and neural post-error control

    NARCIS (Netherlands)

    Danielmeier, C.; Allen, E.A.; Jocham, G.; Onur, O.A.; Eichele, T.; Ullsperger, M.

    2015-01-01

    Humans often commit errors when they are distracted by irrelevant information and no longer focus on what is relevant to the task at hand. Adjustments following errors are essential for optimizing goal achievement. The posterior medial frontal cortex (pMFC), a key area for monitoring errors, has bee

  9. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Directory of Open Access Journals (Sweden)

    Richard Zavalas

    2014-03-01

    Full Text Available Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study with the application of LiDAR and underwater video ground-truth data using automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were generally accurate (>70%), with varying results for the classification of individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful for differentiating the canopy structure of macroalgae communities (i.e., canopy structure classification), such as canopy-forming kelp versus erect fine branching algae. In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.
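
    As a stand-in for the QUEST classifier, which has no widely used Python implementation, the sketch below trains scikit-learn's CART decision tree on hypothetical LiDAR-derived terrain variables; variable names and data are illustrative only.

```python
# Hedged stand-in for the QUEST habitat classifier: a CART decision tree on
# synthetic LiDAR-derived predictors (depth, rugosity, aspect, reflectance).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((500, 4))          # depth, rugosity, aspect, reflectance (synthetic)
y = rng.integers(0, 3, 500)       # habitat class from video ground truth (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```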

  10. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    Institute of Scientific and Technical Information of China (English)

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei

    2013-01-01

    Background The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns for potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, the KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of further away correspondences, which were more likely noise from inter-subject variability. Finally, the Bayes classifier was applied on the distribution parameters for the classification of AD. Results Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.
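
    The distance-weighted density estimation step can be approximated with SciPy's Gaussian KDE and sample weights that decay with correspondence distance; the exponential weighting below is an assumption, not the paper's exact weighting function.

```python
# Hedged sketch of distance-weighted kernel density estimation over the
# correspondences of a local feature: far-away (likely noisy) correspondences
# contribute less to the estimated distribution.
import numpy as np
from scipy.stats import gaussian_kde

def weighted_kde(descriptor_values, distances, bandwidth=None):
    # Assumption: exponential decay of influence with normalised distance
    weights = np.exp(-distances / distances.mean())
    kde = gaussian_kde(descriptor_values, bw_method=bandwidth, weights=weights)
    return kde   # evaluate with kde(grid) to derive distribution parameters
```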

  11. Multi-Level Audio Classification Architecture

    Directory of Open Access Journals (Sweden)

    Jozef Vavrek

    2015-01-01

    Full Text Available A multi-level classification architecture for solving a binary discrimination problem is proposed in this paper. The main idea of the proposed solution derives from the fact that solving one binary discrimination problem multiple times can reduce the overall misclassification error. We aimed our effort towards building a classification architecture employing a combination of multiple binary SVM (Support Vector Machine) classifiers for solving the two-class discrimination problem. We therefore developed a binary discrimination architecture employing the SVM classifier (BDASVM) with the intention of using it for the classification of broadcast news (BN) audio data. The fundamental element of BDASVM is the binary decision (BD) algorithm that performs discrimination between each pair of acoustic classes utilizing a decision function modeled by a separating hyperplane. The overall classification accuracy is conditioned by finding the optimal parameters for the discrimination function, resulting in higher computational complexity. The final form of the proposed BDASVM is created by combining four BDSVM discriminators supplemented by a decision table. Experimental results show that the proposed classification architecture can decrease the overall classification error in comparison with the binary decision tree SVM (BDTSVM) architecture.

  12. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focuses on contextual information as the guide for the design and construction of classification schemes.

  13. Classification in Australia.

    Science.gov (United States)

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  14. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  15. A gender-based analysis of Iranian EFL learners' types of written errors

    Directory of Open Access Journals (Sweden)

    Faezeh Boroomand

    2013-05-01

    Full Text Available Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male and 50 female), presents different classifications and subdivisions of errors and carries out an analysis of these errors. Identifying the most frequently committed errors in each classification, the findings reveal significant differences between the error frequencies of the male and female groups (with a higher error frequency in female written productions).

  16. Evaluation of drug administration errors in a teaching hospital.

    Science.gov (United States)

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

    Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Medication administration errors are frequent. The identification of their determinants helps to undertake designed interventions.

  17. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps to undertake designed interventions.

  18. Ambiguity and Concepts in Real Time Online Internet Traffic Classification

    Directory of Open Access Journals (Sweden)

    Hamza Awad Hamza Ibrahim

    2014-03-01

    Full Text Available Internet traffic classification has gained significant attention in the last few years. Identifying Internet applications in real time is one of the most significant challenges in network traffic classification. Most of the proposed classification methods are limited to offline classification and cannot support online classification. This paper aims to highlight the ambiguity in the definition of online classification. Therefore, some of the previous online classification works are discussed and analyzed to check how far real-time online classification has actually been achieved. The results indicate that most of the previous works consider real Internet traffic but do not perform real-time online classification. In addition, the paper presents a real-time classifier, proposed and used in [1] [2] [3], to show how real-time online classification can be performed.

  19. Maintenance error reduction strategies in nuclear power plants, using root cause analysis.

    Science.gov (United States)

    Wu, T M; Hwang, S L

    1989-06-01

    This study proposes a conceptual model of maintenance tasks to facilitate the identification of root causes of human errors in carrying out such tasks in nuclear power plants. Based on this model, an external/internal classification scheme was developed to discover the root causes of human errors. As a consequence, certain policies pertaining to human error prevention or correction were proposed.

  20. We need to talk about error: causes and types of error in veterinary practice.

    Science.gov (United States)

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence-based system for their classification. Causes of error were identified from a retrospective record review of 678 claims to the profession's leading indemnity insurer, and nine focus groups (average N per group=8) with vets, nurses and support staff were conducted using the critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified, with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error.

  1. Selective ablation of Copper-Indium-Diselenide solar cells monitored by laser-induced breakdown spectroscopy and classification methods

    Energy Technology Data Exchange (ETDEWEB)

    Diego-Vallejo, David [Technische Universität Berlin, Institute of Optics and Atomic Physics, Straße des 17, Juni 135, 10623 Berlin (Germany); Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Ashkenasi, David, E-mail: d.ashkenasi@lmtb.de [Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Lemke, Andreas [Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Eichler, Hans Joachim [Technische Universität Berlin, Institute of Optics and Atomic Physics, Straße des 17, Juni 135, 10623 Berlin (Germany); Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany)

    2013-09-01

    Laser-induced breakdown spectroscopy (LIBS) and two classification methods, i.e. linear correlation and artificial neural networks (ANN), are used to monitor P1, P2 and P3 scribing steps of Copper-Indium-Diselenide (CIS) solar cells. Narrow channels featuring complete removal of desired layers with minimum damage on the underlying film are expected to enhance efficiency of solar cells. The monitoring technique is intended to determine that enough material has been removed to reach the desired layer based on the analysis of plasma emission acquired during multiple pass laser scribing. When successful selective scribing is achieved, a high degree of similarity between test and reference spectra has to be identified by classification methods in order to stop the scribing procedure and avoid damaging the bottom layer. Performance of linear correlation and artificial neural networks is compared and evaluated for two spectral bandwidths. By using experimentally determined combinations of classifier and analyzed spectral band for each step, classification performance achieves errors of 7, 1 and 4% for steps P1, P2 and P3, respectively. The feasibility of using plasma emission for the supervision of processing steps of solar cell manufacturing is demonstrated. This method has the potential to be implemented as an online monitoring procedure assisting the production of solar cells. - Highlights: • LIBS and two classification methods were used to monitor CIS solar cells processing. • Selective ablation of thin-film solar cells was improved with inspection system. • Customized classification method and analyzed spectral band enhanced performance.
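
    The linear-correlation branch of the monitoring scheme reduces to comparing each acquired spectrum with stored reference spectra. A minimal sketch, with placeholder references, labels and threshold, follows.

```python
# Minimal linear-correlation classifier for LIBS spectra: the acquired spectrum
# is assigned to the layer whose reference spectrum it correlates with most.
# References, labels and the decision threshold are placeholders.
import numpy as np

def classify_spectrum(test, references, labels, threshold=0.9):
    corrs = [np.corrcoef(test, ref)[0, 1] for ref in references]
    best = int(np.argmax(corrs))
    if corrs[best] < threshold:
        return "unknown"       # similarity too low; keep scribing
    return labels[best]        # e.g. the label of the layer that has been reached
```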

  2. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
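
    A minimal sketch of the feedback-driven parity adaptation, assuming the third-party reedsolo package and an invented loss-rate-to-parity mapping, is given below; the actual server/proxy/client protocol is not shown.

```python
# Hedged sketch: the sender enlarges or shrinks the Reed-Solomon parity budget
# according to the loss rate reported back by the client.
from reedsolo import RSCodec

def choose_nsym(reported_loss_rate, levels=(4, 8, 16, 32)):
    # Crude, assumed mapping from observed loss to parity symbols per RS block
    if reported_loss_rate < 0.01:
        return levels[0]
    if reported_loss_rate < 0.05:
        return levels[1]
    if reported_loss_rate < 0.10:
        return levels[2]
    return levels[3]

def protect(packet: bytes, reported_loss_rate: float) -> bytes:
    codec = RSCodec(choose_nsym(reported_loss_rate))
    return bytes(codec.encode(packet))   # payload followed by parity symbols
```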

  3. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    CERN Document Server

    Kleiss, R H

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
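
    The error behaviour contrasted above can be reproduced with a small experiment: repeating the integration with independent random point sets versus independently scrambled Sobol point sets and comparing the spread of the estimates. The test integrand and sample sizes below are arbitrary.

```python
# Plain Monte Carlo versus scrambled Sobol quasi-Monte Carlo on a smooth
# integrand over [0,1]^4 whose true integral is 1; the spread over independent
# scramblings plays the role of a stochastic error estimate.
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(1.0 + 0.5 * (x - 0.5), axis=1)
n, d, reps = 2 ** 12, 4, 20
rng = np.random.default_rng(0)

mc_vals = [f(rng.random((n, d))).mean() for _ in range(reps)]
qmc_vals = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean() for s in range(reps)]

print("MC  std of estimates :", np.std(mc_vals))
print("QMC std of estimates :", np.std(qmc_vals))   # typically much smaller
```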

  4. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  5. Implications of Error Analysis Studies for Academic Interventions

    Science.gov (United States)

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  6. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  7. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  8. Systematic error mitigation in multiple field astrometry

    CERN Document Server

    Gai, Mario

    2011-01-01

    Combination of more than two fields provides constraints on the systematic error of simultaneous observations. The concept is investigated in the context of the Gravitation Astrometric Measurement Experiment (GAME), which aims at measurement of the PPN parameter $\gamma$ at the $10^{-7}-10^{-8}$ level. Robust self-calibration and control of systematic error are crucial to the achievement of the precision goal. The present work is focused on the concept investigation and practical implementation strategy of systematic error control over four simultaneously observed fields, implementing a "double differential" measurement technique. Some basic requirements on geometry, observing and calibration strategy are derived, discussing the fundamental characteristics of the proposed concept.

  9. Measurement Error with Different Computer Vision Techniques

    Science.gov (United States)

    Icasio-Hernández, O.; Curiel-Razo, Y. I.; Almaraz-Cabral, C. C.; Rojas-Ramirez, S. R.; González-Barbosa, J. J.

    2017-09-01

    The goal of this work is to offer a comparative of measurement error for different computer vision techniques for 3D reconstruction and allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry to find the measurement error and its uncertainty using different gauges. We measured several dimensional and geometric known standards. We compared the results for the techniques, average errors, standard deviations, and uncertainties obtaining a guide to identify the tolerances that each technique can achieve and choose the best.

  10. MEASUREMENT ERROR WITH DIFFERENT COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    O. Icasio-Hernández

    2017-09-01

    Full Text Available The goal of this work is to offer a comparative of measurement error for different computer vision techniques for 3D reconstruction and allow a metrological discrimination based on our evaluation results. The present work implements four 3D reconstruction techniques: passive stereoscopy, active stereoscopy, shape from contour and fringe profilometry to find the measurement error and its uncertainty using different gauges. We measured several dimensional and geometric known standards. We compared the results for the techniques, average errors, standard deviations, and uncertainties obtaining a guide to identify the tolerances that each technique can achieve and choose the best.

  11. Linear approximation for measurement errors in phase shifting interferometry

    Science.gov (United States)

    van Wingerden, Johannes; Frankena, Hans J.; Smorenburg, Cornelis

    1991-07-01

    This paper shows how measurement errors in phase shifting interferometry (PSI) can be described to a high degree of accuracy in a linear approximation. System error sources considered here are light source instability, imperfect reference phase shifting, mechanical vibrations, nonlinearity of the detector, and quantization of the detector signal. The measurement inaccuracies resulting from these errors are calculated in linear approximation for several formulas commonly used for PSI. The results are presented in tables for easy calculation of the measurement error magnitudes for known system errors. In addition, this paper discusses the measurement error reduction which can be achieved by choosing an appropriate phase calculation formula.

  12. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  13. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application to video surveillance of a seaport entrance is presented, and more particularly the different steps enabling the classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.

  14. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  15. Sleep stage classification with cross frequency coupling.

    Science.gov (United States)

    Sanders, Teresa H; McCurry, Mark; Clements, Mark A

    2014-01-01

    Sleep is a key requirement for an individual's health, though currently the options to study sleep rely largely on manual visual classification methods. In this paper we propose a new scheme for automated offline classification based upon cross-frequency-coupling (CFC) and compare it to the traditional band power estimation and the more recent preferential frequency band information estimation. All three approaches allowed sleep stage classification and provided whole-night visualization of sleep stages. Surprisingly, the simple average power in band classification achieved better overall performance than either the preferential frequency band information estimation or the CFC approach. However, combined classification with both average power and CFC features showed improved classification over either approach used singly.
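
    The baseline band-power features referred to above can be computed per 30-second epoch with Welch's method; the band edges, sampling rate and epoch length below are common conventions, not necessarily those of the study.

```python
# Average EEG power in the classical frequency bands for one 30-second epoch,
# the "average power in band" feature set compared against CFC in the paper.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}

def band_power_features(epoch, fs=100):
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 4)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

# Example: features = band_power_features(eeg_epoch) for each scored epoch,
# then fed to any off-the-shelf classifier.
```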

  16. HYBRID INTERNET TRAFFIC CLASSIFICATION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    Li Jun; Zhang Shunyi; Lu Yanqing; Yan Junrong

    2009-01-01

    Accurate and real-time classification of network traffic is significant to network operation and management tasks such as QoS differentiation, traffic shaping and security surveillance. However, with many newly emerged P2P applications using dynamic port numbers, masquerading techniques, and payload encryption to avoid detection, traditional classification approaches are turning out to be ineffective. In this paper, we present a layered hybrid system to classify current Internet traffic, motivated by the variety of network activities and their requirements for traffic classification. The proposed method achieves fast and accurate traffic classification with low overheads and robustness to accommodate both known and unknown/encrypted applications. Furthermore, it is feasible to use in the context of real-time traffic classification. Our experimental results show the distinct advantages of the proposed classification system compared with the one-step Machine Learning (ML) approach.

  17. Neural network classification - A Bayesian interpretation

    Science.gov (United States)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
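
    The core of that Bayesian interpretation can be stated in one line. For a two-class problem with target t in {0, 1} and network output f(x), a standard decomposition (textbook material, not quoted from the paper) gives

        \[
        E\big[(f(\mathbf{x})-t)^2\big] \;=\; E\big[\big(f(\mathbf{x}) - P(t=1\mid\mathbf{x})\big)^2\big] \;+\; E\big[\operatorname{Var}(t\mid\mathbf{x})\big],
        \]

    so the mean-squared-error minimizer over all functions is f*(x) = P(t = 1 | x); thresholding this posterior at 1/2 yields the Bayes-optimal decision, which is why a sufficiently flexible network trained on squared error approximates the optimal Bayesian classifier.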

  18. Errors and mistakes in breast ultrasound diagnostics.

    Science.gov (United States)

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement or of the time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors are listed. The methods of minimizing the number of errors made are discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  19. Errors and mistakes in breast ultrasound diagnostics

    Science.gov (United States)

    Jakubowski, Wiesław; Migda, Bartosz

    2012-01-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement or of the time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors are listed. The methods of minimizing the number of errors made are discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions. PMID:26675358

  20. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement or of the time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors are listed. The methods of minimizing the number of errors made are discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method are presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  1. Errors generated with the use of rectangular collimation

    Energy Technology Data Exchange (ETDEWEB)

    Parks, E.T. (Department of Allied Health, Western Kentucky University, Bowling Green (USA))

    1991-04-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques.

  2. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  4. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive. A further distinction separates descriptive, interpretative and decision-related errors. Perceptive errors comprise false positives and false negatives (non-identification or erroneous identification), while cognitive errors may be knowledge-based or psychological.

  5. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies are reported to occur in 2-20 % of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected with sufficient promptness; obviously, diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. In order to minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  6. A Literature Review of Research on Error Analysis Abroad

    Institute of Scientific and Technical Information of China (English)

    肖倩

    2014-01-01

    Error constitutes an important part of interlanguage. Error analysis is an approach influenced by behaviorism and based on cognitive theory. The aim of error analysis is to explore the errors made by second language learners and the mental processes of learners' second language acquisition, which is of great importance to both learners and teachers. However, as a research tool, error analysis has its limitations. In order to better understand and make the best use of error analysis, its background, definition, basic assumptions, classification, procedure, explanation and implications, as well as its application, are illustrated. Its limitations are analyzed from the perspectives of its nature, definition and categories. This review of the literature abroad sheds light on implications for second language teaching.

  7. Featureless Classification of Light Curves

    CERN Document Server

    Kügler, Sven Dennis; Polsterer, Kai Lars

    2015-01-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a plain vector that can be fed directly into a classifier. In the literature, various statistical features derived from time series serve as a representation. Typically, the usefulness of the derived features is judged in an empirical fashion according to their predictive power. In this work, an alternative to the feature-based approach is investigated. In this new representation the time series is described by a density model. Similarity between each pair of time series is quantified by the distance between their respective models. The density model captures all the information available, also including measurement errors. Hence, we view this model as a generalisation of the static features which can be derived directly, e.g., as ...

  8. Motion error compensation of multi-legged walking robots

    Science.gov (United States)

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei

    2012-07-01

    Due to errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot diverges from the ideal motion requirements during movement. Since existing error compensation is usually applied to the control of manipulator arms, error compensation for multi-legged robots has seldom been explored. In order to reduce the kinematic error of robots, a motion error compensation method based on feedforward is proposed for multi-legged mobile robots to improve their motion precision. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is obtained from the locus error of the robot body by means of an error calculation model. The error values are used to compensate the driven joint variables and modify the control model of the robot, which then drives the robot according to the modified control model. A model of the relation between the locus errors of the robot and the errors of its kinematic variables is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and an equation set is obtained that expresses the relation among the errors of the driven joint variables, the structure parameters and the locus error of the robot. Taking MiniQuad as an example, motion error compensation is studied for the robot moving along a straight-line path. The actual locus errors of the robot body were measured before and after compensation in the test. According to the test, the variations of the actual coordinate values of the robot centroid in the x-direction and z-direction are reduced more than twofold. The kinematic errors of the robot body are reduced effectively by the use of the proposed feedforward motion error compensation method.
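
    A minimal numerical sketch of the feedforward idea, assuming the measured body locus error can be mapped back to joint corrections through a linearized Jacobian; the function names, the pseudoinverse step and the example numbers are illustrative assumptions, not the paper's exact formulation.

        # Hedged sketch: feedforward compensation of driven joint variables
        # from a measured body locus error (the Jacobian model is an assumption).
        import numpy as np

        def compensate_joints(q_planned, body_locus_error, jacobian):
            """q_planned: (n,) planned joint variables; body_locus_error: (3,)
            measured centroid deviation; jacobian: (3, n) joint-to-body map."""
            dq = np.linalg.pinv(jacobian) @ (-body_locus_error)  # least-squares correction
            return q_planned + dq

        # Example with made-up numbers: six driven joints, 2 mm error along x.
        J = np.random.default_rng(0).normal(size=(3, 6))
        q_corrected = compensate_joints(np.zeros(6), np.array([0.002, 0.0, 0.0]), J)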

  9. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  10. Minimax Optimal Rates of Convergence for Multicategory Classifications

    Institute of Scientific and Technical Information of China (English)

    Di Rong CHEN; Xu YOU

    2007-01-01

    In the problem of classification (or pattern recognition), given a set of n samples, we attempt to construct a classifier g_n with a small misclassification error. It is important to study the convergence rates of the misclassification error as n tends to infinity. It is known that such a rate cannot exist for the set of all distributions. In this paper we obtain the optimal convergence rates for a class of distributions D(λ,ω) in multicategory classification and nonstandard binary classification.

  11. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent prescription errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed in the inpatients’ medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, which involved the healthcare team as a whole. Among the 16 types of errors detected in prescriptions, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions with an indication but not specifying allergy. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analyses before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient’s safety and the success of the prescribed therapy.

  12. Contextualizing Object Detection and Classification.

    Science.gov (United States)

    Chen, Qiang; Song, Zheng; Dong, Jian; Huang, Zhongyang; Hua, Yang; Yan, Shuicheng

    2015-01-01

    We investigate how to iteratively and mutually boost object classification and detection performance by taking the outputs from one task as the context of the other one. While context models have been quite popular, previous works mainly concentrate on co-occurrence relationship within classes and few of them focus on contextualization from a top-down perspective, i.e. high-level task context. In this paper, our system adopts a new method for adaptive context modeling and iterative boosting. First, the contextualized support vector machine (Context-SVM) is proposed, where the context takes the role of dynamically adjusting the classification score based on the sample ambiguity, and thus the context-adaptive classifier is achieved. Then, an iterative training procedure is presented. In each step, Context-SVM, associated with the output context from one task (object classification or detection), is instantiated to boost the performance for the other task, whose augmented outputs are then further used to improve the former task by Context-SVM. The proposed solution is evaluated on the object classification and detection tasks of PASCAL Visual Object Classes Challenge (VOC) 2007, 2010 and SUN09 data sets, and achieves the state-of-the-art performance.

  13. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
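
    One hedged way to write the elaborated model described above (the symbols below are ours, not the authors'): for sampling unit i, true category counts follow a multinomial, observed counts arise through a misclassification step, and the correct-classification probability varies across units on the logit scale,

        \[
        \mathbf{z}_i \sim \operatorname{Multinomial}(n_i,\boldsymbol{\pi}), \qquad \mathbf{y}_i \mid \mathbf{z}_i, p_i \sim \text{misclassification}(p_i), \qquad \operatorname{logit}(p_i) \sim \mathcal{N}(\mu,\sigma^2),
        \]

    with priors on π, μ and σ, and the joint posterior explored by Markov chain Monte Carlo as stated in the abstract.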

  14. AUTOMATIC CLASSIFICATION OF QUESTION TURNS IN SPONTANEOUS SPEECH USING LEXICAL AND PROSODIC EVIDENCE.

    Science.gov (United States)

    Ananthakrishnan, Sankaranarayanan; Ghosh, Prasanta; Narayanan, Shrikanth

    2008-01-01

    The ability to identify speech acts reliably is desirable in any spoken language system that interacts with humans. Minimally, such a system should be capable of distinguishing between question-bearing turns and other types of utterances. However, this is a non-trivial task, since spontaneous speech tends to have incomplete syntactic, and even ungrammatical, structure and is characterized by disfluencies, repairs and other non-linguistic vocalizations that make simple rule based pattern learning difficult. In this paper, we present a system for identifying question-bearing turns in spontaneous multi-party speech (ICSI Meeting Corpus) using lexical and prosodic evidence. On a balanced test set, our system achieves an accuracy of 71.9% for the binary question vs. non-question classification task. Further, we investigate the robustness of our proposed technique to uncertainty in the lexical feature stream (e.g. caused by speech recognition errors). Our experiments indicate that classification accuracy of the proposed method is robust to errors in the text stream, dropping only about 0.8% for every 10% increase in word error rate (WER).

  15. Minimally-sized balanced decomposition schemes for multi-class classification

    NARCIS (Netherlands)

    Smirnov, E.N.; Moed, M.; Nalbantov, G.I.; Sprinkhuizen-Kuyper, I.G.

    2011-01-01

    Error-Correcting Output Coding (ECOC) is a well-known class of decomposition schemes for multi-class classification. It allows representing any multiclass classification problem as a set of binary classification problems. Due to code redundancy, ECOC schemes can significantly improve generalization performance.
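
    For readers unfamiliar with ECOC, the decomposition is easy to try with an off-the-shelf wrapper. The sketch below uses scikit-learn's generic output-code classifier with a random code matrix; it illustrates the ECOC idea only and does not reproduce the minimally-sized balanced schemes of the paper.

        # Hedged sketch: generic ECOC decomposition of a multi-class problem
        # into binary problems (random code matrix, not the paper's scheme).
        from sklearn.datasets import load_iris
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.multiclass import OutputCodeClassifier

        X, y = load_iris(return_X_y=True)
        ecoc = OutputCodeClassifier(
            estimator=LogisticRegression(max_iter=1000),
            code_size=2.0,   # binary problems per class; larger means more redundancy
            random_state=0,
        )
        print("CV accuracy:", cross_val_score(ecoc, X, y, cv=5).mean())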

  16. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  17. Quadratic dynamical decoupling with nonuniform error suppression

    Energy Technology Data Exchange (ETDEWEB)

    Quiroz, Gregory; Lidar, Daniel A. [Department of Physics and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States); Departments of Electrical Engineering, Chemistry, and Physics, and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States)

    2011-10-15

    We analyze numerically the performance of the near-optimal quadratic dynamical decoupling (QDD) single-qubit decoherence error suppression method [J. West et al., Phys. Rev. Lett. 104, 130501 (2010)]. The QDD sequence is formed by nesting two optimal Uhrig dynamical decoupling sequences for two orthogonal axes, comprising N_1 and N_2 pulses, respectively. Varying these numbers, we study the decoherence suppression properties of QDD directly by isolating the errors associated with each system basis operator present in the system-bath interaction Hamiltonian. Each individual error scales with the lowest order of the Dyson series, therefore immediately yielding the order of decoherence suppression. We show that the error suppression properties of QDD are dependent upon the parities of N_1 and N_2, and near-optimal performance is achieved for general single-qubit interactions when N_1 = N_2.

  18. Integrating TM and Ancillary Geographical Data with Classification Trees for Land Cover Classification of Marsh Area

    Institute of Scientific and Technical Information of China (English)

    NA Xiaodong; ZHANG Shuqing; ZHANG Huaiqing; LI Xiaofeng; YU Huan; LIU Chunyue

    2009-01-01

    The main objective of this research is to determine the capability of land cover classification that combines spectral and textural features of Landsat TM imagery with ancillary geographical data in the wetlands of the Sanjiang Plain, Heilongjiang Province, China. Semi-variograms and Z-test values were calculated to assess the separability of grey-level co-occurrence texture measures so as to maximize the difference between land cover types. The degree of spatial autocorrelation showed that window sizes of 3×3 pixels and 11×11 pixels were most appropriate for Landsat TM image texture calculations. The texture analysis showed that co-occurrence entropy, dissimilarity, and variance texture measures, derived from the Landsat TM spectral bands and vegetation indices, provided the most significant statistical differentiation between land cover types. Subsequently, a Classification and Regression Tree (CART) algorithm was applied to three different combinations of predictors: 1) TM imagery alone (TM-only); 2) TM imagery plus image texture (TM+TXT model); and 3) all predictors including TM imagery, image texture and additional ancillary GIS information (TM+TXT+GIS model). Compared with traditional Maximum Likelihood Classification (MLC) supervised classification, the three classification tree predictive models reduced the overall error rate significantly. Image texture measures and ancillary geographical variables effectively suppressed speckle noise and markedly reduced the classification error rate for marsh. For the classification tree model making use of all available predictors, the omission error rate was 12.90% and the commission error rate was 10.99% for marsh. The developed method is portable, relatively easy to implement, and should be applicable in other settings and over larger extents.
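
    The three predictor combinations compared in this study amount to stacking different feature groups before fitting a classification tree. A hedged sketch follows; the array names, the train/test split and the tree settings are placeholders, not the study's actual configuration.

        # Hedged sketch: CART on spectral + texture + ancillary GIS predictors.
        import numpy as np
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        def fit_cart(spectral, texture=None, gis=None, labels=None):
            """Stack whichever per-pixel predictor groups are supplied and fit a tree."""
            groups = [g for g in (spectral, texture, gis) if g is not None]
            X = np.hstack(groups)                       # (n_pixels, n_features)
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, labels, test_size=0.3, random_state=0, stratify=labels)
            tree = DecisionTreeClassifier(min_samples_leaf=50, random_state=0)
            tree.fit(X_tr, y_tr)
            return tree, accuracy_score(y_te, tree.predict(X_te))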

  19. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added assessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  20. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.
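
    The error-propagation step can be illustrated with a short Monte Carlo sketch. The toy power curve, the Weibull wind-speed sample and the 10% relative speed error below are placeholders; the study itself fits 28 real turbine curves with Lagrange interpolation.

        # Hedged sketch: propagate a wind-speed measurement error through a
        # turbine power curve (toy curve and error level, not the paper's data).
        import numpy as np

        def power_curve(v):
            """Toy cubic-with-saturation curve in kW; not a real turbine."""
            return np.clip(0.5 * v**3, 0.0, 2000.0) * ((v >= 3) & (v <= 25))

        def propagated_power_error(wind_speeds, rel_speed_error=0.10, n_draws=2000):
            rng = np.random.default_rng(0)
            p_nominal = power_curve(wind_speeds).mean()
            perturbed_means = []
            for _ in range(n_draws):
                noisy = wind_speeds * (1 + rng.normal(0.0, rel_speed_error, wind_speeds.size))
                perturbed_means.append(power_curve(noisy).mean())
            return p_nominal, np.std(perturbed_means) / p_nominal   # relative power error

        v = np.random.default_rng(1).weibull(2.0, 10_000) * 8.0     # synthetic speeds
        print(propagated_power_error(v))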

  1. Wind Power Error Estimation in Resource Assessments

    Science.gov (United States)

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  2. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, together with 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  3. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    of the Aachen Aphasia Test (AAT). First a coarse classification was achieved by using an assessment of spontaneous speech of the patient. This classifier produced correct results in 87% of the test cases. For a second test, data analysis tools were used to select four features out of the 30 available test...... be done in about half an hour in a free interview. The results of the classifiers were analyzed regarding their accuracy dependent on the diagnosis....

  4. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
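
    For context, the diagnostic under discussion is conventionally written in standard retrieval notation (a Rodgers-style formalism, not quoted from this paper) as

        \[
        \boldsymbol{\epsilon}_{s} \;=\; (\mathbf{A}-\mathbf{I})\,(\mathbf{x}-\mathbf{x}_a), \qquad \mathbf{S}_{s} \;=\; (\mathbf{A}-\mathbf{I})\,\mathbf{S}_a\,(\mathbf{A}-\mathbf{I})^{T},
        \]

    where A is the averaging kernel matrix, x the true state sampled on the retrieval grid, x_a the a priori state, and S_a a covariance matrix describing atmospheric variability. The paper's point is that x here is already a finite-grid representation of the atmosphere, so this term cannot capture the full loss of information relative to the true continuous state.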

  5. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  6. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or direct correction only. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom and may also have wider implications for other languages.

  7. Rates of computational errors for scoring the SIRS primary scales.

    Science.gov (United States)

    Tyner, Elizabeth A; Frederick, Richard I

    2013-12-01

    We entered item scores for the Structured Interview of Reported Symptoms (SIRS; Rogers, Bagby, & Dickens, 1991) into a spreadsheet and compared computed scores with those hand-tallied by examiners. We found that about 35% of the tests had at least 1 scoring error. Of SIRS scale scores tallied by examiners, about 8% were incorrectly summed. When the errors were corrected, only 1 SIRS classification was reclassified in the fourfold scheme used by the SIRS. We note that mistallied scores on psychological tests are common, and we review some strategies for reducing scale score errors on the SIRS. (c) 2013 APA, all rights reserved.

  8. Floating-Point Numbers with Error Estimates (revised)

    CERN Document Server

    Masotti, Glauco

    2012-01-01

    The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors which affect intermediate and final results is proposed and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure collecting value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad-hoc definition of relative error has been used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms, and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated, in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a hardwired processor incorporat...
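
    The data-structure idea is easy to prototype. A minimal sketch follows, assuming simple first-order propagation of absolute error bounds; the original study's estimator and its statistical calibration are more refined than this.

        # Hedged sketch: a float paired with an absolute-error estimate,
        # propagated to first order (illustrative only).
        import math

        class EFloat:
            def __init__(self, value, err=0.0):
                self.value = float(value)
                # every stored value carries at least a half-ulp representation error
                self.err = abs(err) + 0.5 * math.ulp(float(value))

            def __add__(self, other):
                return EFloat(self.value + other.value, self.err + other.err)

            def __sub__(self, other):
                return EFloat(self.value - other.value, self.err + other.err)

            def __mul__(self, other):
                e = abs(self.value) * other.err + abs(other.value) * self.err
                return EFloat(self.value * other.value, e)

            def rel_err(self):
                return self.err / abs(self.value) if self.value else float("inf")

            def __repr__(self):
                return f"{self.value!r} ± {self.err:.2e}"

        # Catastrophic cancellation shows up as a large relative error estimate:
        a, b = EFloat(1.0000001), EFloat(1.0)
        print((a - b).rel_err())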

  9. Investigation of fluorescence spectra disturbances influencing the classification performance of fluorescently labeled plastic flakes

    Science.gov (United States)

    Fomin, Petr; Brunner, Siegfried; Kargel, Christian

    2013-04-01

    The recycling of plastic products becomes increasingly attractive not only from an environmental point of view, but also economically. For recycled (engineering) plastic products with the highest possible quality, plastic sorting technologies must provide clean and virtually mono-fractional compositions from a mixture of many different types of (shredded) plastics. In order to put this high quality sorting into practice, the labeling of virgin plastics with specific fluorescent markers at very low concentrations (ppm level or less) during their manufacturing process is proposed. The emitted fluorescence spectra represent "optical fingerprints" - each being unique for a particular plastic - which we use for plastic identification and classification purposes. In this study we quantify the classification performance using our prototype measurement system and 15 different plastic types when various influence factors most relevant in practice cause disturbances of the fluorescence spectra emitted from the labeled plastics. The results of these investigations help optimize the development and incorporation of appropriate fluorescent markers as well as the classification algorithms and overall measurement system in order to achieve the lowest possible classification error rates.

  10. Risk factors and birth prevalence of birth defects and inborn errors of ...

    African Journals Online (AJOL)

    The principal BD as per the International Classification of Diseases-10 (ICD-10) ... On the other hand, consanguinity and low birth weight were associated with ... Key words: Birth defects, inborn errors of metabolism, newborn, Saudi Arabia ...

  11. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    Science.gov (United States)

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María

    2014-01-01

    Nowadays, scientific writers are required not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified to propose a classification of the categories they contain. This study…

  12. Error Reduction for Visible Watermarking in Still Images

    Institute of Scientific and Technical Information of China (English)

    Ljubisa Radunovic; 王朔中; et al.

    2002-01-01

    Different digital watermarking techniques and their applications are briefly reviewed. A solution to a practical problem with visible image marking is presented, together with experimental results and discussion. The main focus is on reduction of the error caused by mark addition and subtraction. Image classification based on mean gray level and adjustment of out-of-range gray levels are implemented.

  13. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  14. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue.…

  15. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  16. Introduction to precision machine design and error assessment

    CERN Document Server

    Mekid, Samir

    2008-01-01

    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and Control: the book begins with a brief overview of precision engineering and applications before introdu

  17. Metric learning for automatic sleep stage classification.

    Science.gov (United States)

    Phan, Huy; Do, Quan; Do, The-Luan; Vu, Duc-Lung

    2013-01-01

    We introduce in this paper a metric learning approach for automatic sleep stage classification based on single-channel EEG data. We show that by learning a global metric from training data instead of using the default Euclidean metric, the k-nearest neighbor classification rule outperforms state-of-the-art methods on the Sleep-EDF dataset under various classification settings. The overall accuracies for the Awake/Sleep and 4-class classification settings are 98.32% and 94.49%, respectively. Furthermore, the superior accuracy is achieved by performing classification on a low-dimensional feature space derived from the time and frequency domains, without the need for artifact removal as a preprocessing step.
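
    The contrast between the default Euclidean metric and a learned global metric can be made concrete: any positive-definite matrix can be plugged into k-NN as a Mahalanobis distance. The sketch below uses the inverse feature covariance as a stand-in for the learned metric; the paper's actual metric-learning algorithm is not reproduced here.

        # Hedged sketch: k-NN with a Mahalanobis metric instead of the default
        # Euclidean one (covariance whitening stands in for the learned metric).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def knn_with_metric(X_train, y_train, M=None, k=5):
            """M: positive-definite matrix defining the Mahalanobis distance."""
            if M is None:
                M = np.linalg.inv(np.cov(X_train, rowvar=False)
                                  + 1e-6 * np.eye(X_train.shape[1]))
            knn = KNeighborsClassifier(n_neighbors=k, algorithm="brute",
                                       metric="mahalanobis", metric_params={"VI": M})
            return knn.fit(X_train, y_train)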

  18. Hyperspectral image classification using functional data analysis.

    Science.gov (United States)

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
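
    The functional view can be approximated cheaply: each pixel's spectral curve is projected onto a small smooth basis, and a classifier operates on the basis coefficients. The sketch below uses a Chebyshev polynomial basis and a linear SVM purely for illustration; it does not reproduce the functional principal component analysis used in the paper.

        # Hedged sketch: represent each pixel spectrum as a smooth curve via
        # basis coefficients, then classify (basis and classifier are assumptions).
        import numpy as np
        from numpy.polynomial import chebyshev
        from sklearn.svm import LinearSVC

        def curve_coefficients(spectra, degree=10):
            """spectra: (n_pixels, n_bands) -> (n_pixels, degree + 1) coefficients."""
            x = np.linspace(-1.0, 1.0, spectra.shape[1])
            return np.array([chebyshev.chebfit(x, s, degree) for s in spectra])

        def fit_functional_classifier(spectra, labels, degree=10):
            coefs = curve_coefficients(spectra, degree)
            return LinearSVC(C=1.0, max_iter=5000).fit(coefs, labels)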

  19. Automatic web services classification based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    陈立; 张英; 宋自林; 苗壮

    2013-01-01

    With the development of web services technology, the number of services available on the internet is growing day by day. In order to achieve automatic and accurate service classification, which can benefit service-related tasks, a rough set theory based method for service classification was proposed. First, the service descriptions were preprocessed and represented as vectors. Inspired by discernibility-matrix-based attribute reduction in rough set theory, and taking into account the characteristics of the decision table for service classification, a method based on continuous discernibility matrices was proposed for dimensionality reduction. Finally, service classification was carried out automatically. In the experiments, the proposed method achieved satisfactory classification results in all five test categories. The experimental results show that the proposed method is accurate and could be used in practical web service classification.

  20. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining of technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  1. Radar clutter classification

    Science.gov (United States)

    Stehwien, Wolfgang

    1989-11-01

    The problem of classifying radar clutter as found on air traffic control radar systems is studied. An algorithm based on Bayes decision theory and the parametric maximum a posteriori probability classifier is developed to perform this classification automatically. This classifier employs a quadratic discriminant function and is optimum for feature vectors that are distributed according to the multivariate normal density. Separable clutter classes are most likely to arise from the analysis of the Doppler spectrum. Specifically, a feature set based on the complex reflection coefficients of the lattice prediction error filter is proposed. The classifier is tested using data recorded from L-band air traffic control radars. The Doppler spectra of these data are examined; the properties of the feature set computed using these data are studied in terms of both the marginal and multivariate statistics. Several strategies involving different numbers of features, class assignments, and data set pretesting according to Doppler frequency and signal to noise ratio were evaluated before settling on a workable algorithm. Final results are presented in terms of experimental misclassification rates and simulated and classified plane position indicator displays.
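
    The quadratic discriminant function described in this record corresponds to the standard Gaussian maximum-a-posteriori classifier, which is available off the shelf. A hedged sketch follows; the reflection-coefficient feature extraction from the lattice prediction-error filter is assumed to have been done already, and the regularization value is an assumption.

        # Hedged sketch: Gaussian MAP / quadratic discriminant classification of
        # precomputed clutter feature vectors.
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def clutter_classifier(features, labels):
            """features: (n_cells, n_features), e.g. reflection coefficients split
            into real and imaginary parts; labels: clutter class per cell."""
            qda = QuadraticDiscriminantAnalysis(reg_param=1e-3)
            print("CV accuracy:", cross_val_score(qda, features, labels, cv=5).mean())
            return qda.fit(features, labels)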

  2. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  3. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that resort to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of incurring them.

  4. Real-life GOLD 2011 implementation: the management of COPD lacks correct classification and adequate treatment.

    Directory of Open Access Journals (Sweden)

    Vladimir Koblizek

    Full Text Available Chronic obstructive pulmonary disease (COPD) is a serious, yet preventable and treatable, disease. The success of its treatment relies largely on the proper implementation of recommendations, such as the recently released Global Strategy for Diagnosis, Management, and Prevention of COPD (GOLD 2011) of late December 2011. The primary objective of this study was to examine the extent to which GOLD 2011 is being used correctly among Czech respiratory specialists, in particular with regard to the correct classification of patients. The secondary objective was to explore what effect an erroneous classification has on the inadequate use of inhaled corticosteroids (ICS). In order to achieve these goals, a multi-center, cross-sectional study was conducted, consisting of a general questionnaire and patient-specific forms. A subjective classification into the GOLD 2011 categories was examined and then compared with the objectively computed one. Based on 1,355 patient forms, a discrepancy between the subjective and objective classifications was found in 32.8% of cases. The most common reason for incorrect classification was an error in the assessment of symptoms, which resulted in underestimation in 23.9% of cases, and overestimation in 8.9% of the patients' records examined. The specialists seeing more than 120 patients per month were the most likely to misclassify their patients' condition, and were found to have done so in 36.7% of all patients seen. While examining the subjectively driven ICS prescription, it was found that 19.5% of patients received ICS not according to guideline recommendations, while in 12.2% of cases the ICS were omitted, contrary to guideline recommendations. Furthermore, with consideration of the objectively computed classification, it was discovered that 15.4% of patients received ICS unnecessarily, whereas in 15.8% of cases ICS were erroneously omitted. It was therefore concluded that Czech specialists tend either to under-prescribe or overuse

  5. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases th...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  6. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    Full Text Available Abstract Background Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors. Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average

  7. Chaotic genetic algorithm for gene selection and classification problems.

    Science.gov (United States)

    Chuang, Li-Yeh; Yang, Cheng-San; Li, Jung-Chike; Yang, Cheng-Hong

    2009-10-01

    Pattern recognition techniques suffer from a well-known curse, the dimensionality problem. The microarray data classification problem is a classical complex pattern recognition problem. Selecting relevant genes from microarray data poses a formidable challenge to researchers due to the high-dimensionality of features, multiclass categories being involved, and the usually small sample size. The goal of feature (gene) selection is to select those subsets of differentially expressed genes that are potentially relevant for distinguishing the sample classes. In this paper, information gain and a chaotic genetic algorithm are proposed for the selection of relevant genes, and a K-nearest neighbor with the leave-one-out cross-validation method serves as a classifier. The chaotic genetic algorithm is modified by using the chaotic mutation operator to increase the population diversity. The enhanced population diversity expands the GA's search ability. The proposed approach is tested on 10 microarray data sets from the literature. The experimental results show that the proposed method not only effectively reduced the number of gene expression levels, but also achieved lower classification error rates than other methods.
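
    As a rough illustration of the evaluation pipeline described above (gene ranking followed by a K-nearest neighbor classifier scored with leave-one-out cross-validation), the hedged Python sketch below uses scikit-learn; the expression matrix, labels, and number of selected genes are placeholders, mutual information stands in for the information-gain ranking, and the chaotic genetic algorithm search itself is not reproduced.

        # Hedged sketch: information-gain-style gene ranking + KNN scored by
        # leave-one-out cross-validation (the chaotic GA search is not reproduced).
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))     # placeholder microarray: 60 samples x 500 genes
        y = rng.integers(0, 3, size=60)    # placeholder class labels (3 classes)

        pipe = make_pipeline(
            SelectKBest(mutual_info_classif, k=20),   # stand-in for information-gain ranking
            KNeighborsClassifier(n_neighbors=1),
        )
        acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
        print(f"LOOCV error rate: {1.0 - acc:.3f}")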

  8. Classification of DNA nucleotides with transverse tunneling currents

    Science.gov (United States)

    Nyvold Pedersen, Jonas; Boynton, Paul; Di Ventra, Massimiliano; Jauho, Antti-Pekka; Flyvbjerg, Henrik

    2017-01-01

    It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. Moreover, data acquisition equipment introduces noise, and electronic filters create correlations in time-series data. We discuss how these effects must be included in the analysis of, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates is straightforward. The ideas presented here can be extended to other sequencing approaches of current interest.

  9. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Decentralized estimation of sensor systematic error andtarget state vector

    Institute of Scientific and Technical Information of China (English)

    贺明科; 王正明; 朱炬波

    2003-01-01

    An accurate estimation of the sensor systematic error is significant for improving the performance of a target tracking system. The existing methods usually append the bias states directly to the variable states to form augmented state vectors and utilize the conventional Kalman estimator to obtain the state vector estimate. Doing so is expensive in computation, and much work has been devoted to decoupling the variable states from the systematic error. But decentralized estimation of systematic errors, reduction of the amount of computation, and decentralized track fusion are far from being realized. This paper addresses the distributed track fusion problem in a multi-sensor tracking system in the presence of sensor bias. By this method, the variable states and the systematic error are decoupled. Decentralized systematic error estimation and track fusion are achieved. Simulation results verify that this method can obtain accurate estimates of the systematic error and the state vector.
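
    For context, the conventional baseline that the record contrasts with (appending the sensor bias to the state vector and estimating it with a standard Kalman filter on the augmented state) can be sketched as follows; the scalar constant-velocity target, the two-sensor measurement model, and the noise levels are illustrative assumptions made here, not the paper's setup.

        # Hedged sketch of the conventional augmented-state approach: the bias of
        # the second sensor is appended to the kinematic state and estimated
        # jointly while two sensors track the same target.
        import numpy as np

        dt = 1.0
        F = np.array([[1.0, dt, 0.0],    # state = [position, velocity, sensor-2 bias]
                      [0.0, 1.0, 0.0],
                      [0.0, 0.0, 1.0]])
        H = np.array([[1.0, 0.0, 0.0],   # sensor 1 measures position + noise
                      [1.0, 0.0, 1.0]])  # sensor 2 measures position + bias + noise
        Q = np.diag([0.01, 0.01, 0.0])   # process noise (bias modelled as constant)
        R = np.eye(2)                    # measurement noise covariance

        x = np.zeros(3)                  # initial augmented state estimate
        P = np.eye(3) * 10.0             # initial covariance

        rng = np.random.default_rng(1)
        true_pos, true_vel, true_bias = 0.0, 1.0, 5.0
        for _ in range(200):
            true_pos += true_vel * dt
            z = np.array([true_pos + rng.normal(),
                          true_pos + true_bias + rng.normal()])
            x = F @ x                            # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                  # update
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(3) - K @ H) @ P

        print(f"estimated sensor-2 bias: {x[2]:.2f} (true value 5.0)")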

  12. Classification of cultivated plants.

    NARCIS (Netherlands)

    Brandenburg, W.A.

    1986-01-01

    Agricultural practice demands principles for classification, starting from the basal entity in cultivated plants: the cultivar. In establishing biosystematic relationships between wild, weedy and cultivated plants, the species concept needs re-examination. Combining of botanic classification, based

  13. A hardware error estimate for floating-point computations

    Science.gov (United States)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
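
    A software analogue of the scheme described above (carrying with each value an estimate of its error, equal to the propagated error of the operands plus the roundoff generated by the operation) might look like the sketch below; using half an ulp of the result as the generated term and covering only addition and multiplication are simplifications made here, not features of the proposed hardware.

        # Hedged software sketch: each value carries an error estimate that is the
        # propagated error of its operands plus a generated roundoff term (~ulp/2).
        import math
        from dataclasses import dataclass

        def half_ulp(x: float) -> float:
            """Rough magnitude of the roundoff generated when producing x."""
            return 0.0 if x == 0.0 else 0.5 * math.ulp(x)

        @dataclass
        class TrackedValue:
            value: float
            err: float = 0.0   # running estimate of the absolute error of `value`

            def __add__(self, other: "TrackedValue") -> "TrackedValue":
                v = self.value + other.value
                propagated = self.err + other.err          # errors of the operands
                return TrackedValue(v, propagated + half_ulp(v))

            def __mul__(self, other: "TrackedValue") -> "TrackedValue":
                v = self.value * other.value
                # first-order propagation: d(xy) ~ y*dx + x*dy
                propagated = other.value * self.err + self.value * other.err
                return TrackedValue(v, propagated + half_ulp(v))

        a = TrackedValue(1.0 / 3.0, half_ulp(1.0 / 3.0))   # 1/3 is already rounded
        b = TrackedValue(3.0)                              # 3.0 is exact
        c = a * b + TrackedValue(-1.0)                     # ideally exactly zero
        print(c.value, c.err)   # a tiny residual together with its error estimate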

  14. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary ca

  15. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  16. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  17. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
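
    To make the distinction concrete, the short sketch below computes the three quantities most often drawn as error bars (standard deviation, standard error of the mean, and an approximate 95% confidence interval) for one sample; the data values are placeholders and the normal-approximation interval is one common convention among several.

        # Hedged sketch: the three quantities most commonly drawn as error bars.
        import numpy as np

        data = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0, 5.4])  # placeholder measurements
        n = data.size

        sd = data.std(ddof=1)         # standard deviation (spread of the data)
        se = sd / np.sqrt(n)          # standard error of the mean (precision of the mean)
        ci95 = 1.96 * se              # ~95% CI half-width under a normal approximation

        print(f"mean = {data.mean():.3f}")
        print(f"SD   = {sd:.3f}   (error bar: mean ± SD)")
        print(f"SEM  = {se:.3f}   (error bar: mean ± SEM)")
        print(f"95% CI half-width ≈ {ci95:.3f}")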

  18. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  19. Stellar classification from single-band imaging using machine learning

    Science.gov (United States)

    Kuntzer, T.; Tewes, M.; Courbin, F.

    2016-06-01

    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is well possible.
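
    A minimal stand-in for the pipeline described above (PCA for dimensionality reduction of the star images, followed by an artificial neural network that estimates the spectral class) is sketched below with scikit-learn; the random cutouts, 13 classes, and network size are placeholder assumptions rather than the simulation setup of the paper.

        # Hedged sketch: PCA-compressed star cutouts fed to a small neural network
        # that predicts one of 13 spectral subclasses (O5 ... M5).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 24 * 24))   # placeholder: flattened 24x24 star cutouts
        y = rng.integers(0, 13, size=2000)     # placeholder spectral subclass labels

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
        model = make_pipeline(
            PCA(n_components=20),                                   # dimensionality reduction
            MLPClassifier(hidden_layer_sizes=(50,), max_iter=500),  # spectral-type estimator
        )
        model.fit(Xtr, ytr)
        print(f"test accuracy: {model.score(Xte, yte):.3f}")  # near chance on random data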

  20. Raptor Codes for Use in Opportunistic Error Correction

    NARCIS (Netherlands)

    Zijnge, T.; Schiphorst, R.; Shao, X.; Slump, C.H.; Goseling, Jasper; Weber, Jos H.

    2010-01-01

    In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for Coded OFDM systems. Opportunistic error correction [3] tries to recover information when it is available with the least effort. This is achieved by using Fountain codes in a COFDM system, which

  1. VOCAL SEGMENT CLASSIFICATION IN POPULAR MUSIC

    DEFF Research Database (Denmark)

    Feng, Ling; Nielsen, Andreas Brinch; Hansen, Lars Kai

    2008-01-01

    This paper explores the vocal and non-vocal music classification problem within popular songs. A newly built labeled database covering 147 popular songs is announced. It is designed for classifying signals from 1sec time windows. Features are selected for this particular task, in order to capture......-validated training and test setup. The database is divided in two different ways: with/without artist overlap between training and test sets, so as to study the so called ‘artist effect’. The performance and results are analyzed in depth: from error rates to sample-to-sample error correlation. A voting scheme...

  2. Recursive training of neural networks for classification.

    Science.gov (United States)

    Aladjem, M

    2000-01-01

    A method for recursive training of neural networks for classification is proposed. It searches for the discriminant functions corresponding to several small local minima of the error function. The novelty of the proposed method lies in the transformation of the data into new training data with a deflated minimum of the error function and iteration to obtain the next solution. A simulation study and a character recognition application indicate that the proposed method has the potential to escape from local minima and to direct the local optimizer to new solutions.

  3. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove a prediction error.

  5. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is in pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of Error Analysis as both theory and approach, but also carry implications for second language learning.

  6. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  7. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'designation and safety classification of refrigerants'. This classification makes it possible to organize, in an internationally consistent way, all the refrigerants used worldwide, thanks to a codification of refrigerants that reflects their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)

  8. Random Forest Classification of Wetland Landcovers from Multi-Sensor Data in the Arid Region of Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Shaohong Tian

    2016-11-01

    Full Text Available The wetland classification from remotely sensed data is usually difficult due to the extensive seasonal vegetation dynamics and hydrological fluctuation. This study presents a random forest classification approach for the retrieval of wetland landcover in arid regions by fusing Pléiades-1B data with multi-date Landsat-8 data. The segmentation of the Pléiades-1B multispectral image data was performed with an object-oriented approach, and geometric and spectral features were extracted for the segmented image objects. Normalized difference vegetation index (NDVI) series data were also calculated from the multi-date Landsat-8 data, reflecting vegetation phenological changes over the growth cycle. The feature set extracted from the two sensors' data was optimized and employed to create the random forest model for the classification of the wetland landcovers along the Ertix River in northern Xinjiang, China. Comparison with other classification methods such as support vector machine and artificial neural network classifiers indicates that the random forest classifier can achieve accurate classification with an overall accuracy of 93% and a Kappa coefficient of 0.92. The classification accuracy of the farming lands and water bodies that have distinct boundaries with the surrounding land covers was improved by 5%–10% by making use of geometric shape properties. To overcome the difficulty in classification caused by the similar spectral features of the vegetation covers, the phenological difference and the textural information of the co-occurrence gray matrix were incorporated into the classification, and the main wetland vegetation covers in the study area were derived from the two sensors' data. The inclusion of phenological information in the classification enabled the classification errors to be reduced, and the overall accuracy was improved by approximately 10%. The results show that the proposed random forest
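
    As a rough illustration of the fusion strategy described above (per-object geometric and spectral features concatenated with a multi-date NDVI series before random forest classification), the hedged sketch below uses scikit-learn; the feature arrays and class labels are placeholders standing in for the segmented image objects and the Landsat-8 NDVI dates.

        # Hedged sketch: concatenate per-object spectral/geometric features with a
        # multi-date NDVI series, then classify wetland cover with a random forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        n_objects = 1000
        obj_features = rng.normal(size=(n_objects, 12))             # placeholder spectral + shape features
        ndvi_series = rng.uniform(-0.2, 0.9, size=(n_objects, 6))   # placeholder NDVI on 6 dates
        X = np.hstack([obj_features, ndvi_series])                  # feature-level fusion
        y = rng.integers(0, 5, size=n_objects)                      # placeholder landcover classes

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
        print(f"overall accuracy: {rf.score(Xte, yte):.3f}")
        print("feature importances (first 5):", np.round(rf.feature_importances_[:5], 3))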

  9. Texture Image Classification Based on Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    DENG Wei-bing; LI Hai-fei; SHI Ya-li; YANG Xiao-hui

    2014-01-01

    A texture image can be partitioned into disjoint regions of uniform texture by recognizing the class of every pixel of the image. This paper proposes a texture image classification algorithm based on the Gabor wavelet. In this algorithm, the characteristics of an image are obtained from every pixel and its neighborhood, and the algorithm can transfer information between neighborhoods of different sizes. Experiments on the standard Brodatz texture image dataset show that the proposed algorithm achieves good classification rates.
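
    A sketch of the per-pixel idea (Gabor responses computed around each pixel, then used to assign a texture class) is given below; it relies on scikit-image's gabor filter and a simple nearest-class-mean rule, which are stand-ins chosen here for illustration rather than the exact algorithm of the paper.

        # Hedged sketch: per-pixel Gabor-magnitude features followed by a simple
        # nearest-class-mean rule (a stand-in for the paper's classifier).
        import numpy as np
        from skimage.filters import gabor

        rng = np.random.default_rng(0)
        # Placeholder image: left half and right half carry different textures.
        img = np.hstack([rng.normal(0, 1, (64, 32)), rng.normal(0, 3, (64, 32))])

        # Feature vector per pixel: Gabor magnitude at several orientations.
        thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
        feats = []
        for theta in thetas:
            real, imag = gabor(img, frequency=0.3, theta=theta)
            feats.append(np.hypot(real, imag))
        features = np.stack(feats, axis=-1)      # shape (H, W, n_orientations)

        # "Training" regions: small patches known to belong to each texture class.
        mean_a = features[:16, :16].reshape(-1, len(thetas)).mean(axis=0)
        mean_b = features[:16, -16:].reshape(-1, len(thetas)).mean(axis=0)

        # Assign every pixel to the nearest class mean.
        flat = features.reshape(-1, len(thetas))
        labels = (np.linalg.norm(flat - mean_b, axis=1) <
                  np.linalg.norm(flat - mean_a, axis=1)).reshape(img.shape)
        print("fraction labeled as class B on the right half:", labels[:, 32:].mean())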

  10. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  11. Achievable Precision for Optical Ranging Systems

    Science.gov (United States)

    Moision, Bruce; Erkmen, Baris I.

    2012-01-01

    Achievable RMS errors in estimating the phase, frequency, and intensity of a direct-detected intensity-modulated optical pulse train are presented. For each parameter, the Cramer-Rao-Bound (CRB) is derived and the performance of the Maximum Likelihood estimator is illustrated. Approximations to the CRBs are provided, enabling an intuitive understanding of estimator behavior as a function of the signaling parameters. The results are compared to achievable RMS errors in estimating the same parameters from a sinusoidal waveform in additive white Gaussian noise. This establishes a framework for a performance comparison of radio frequency (RF) and optical science. Comparisons are made using parameters for state-of-the-art deep-space RF and optical links. Degradations to the achievable errors due to clock phase noise and detector jitter are illustrated.

  12. Errors Made by Elementary Fourth Grade Students When Modelling Word Problems and the Elimination of Those Errors through Scaffolding

    Science.gov (United States)

    Ulu, Mustafa

    2017-01-01

    This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…

  13. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  14. Efficient Image Transmission Through Analog Error Correction

    CERN Document Server

    Liu, Yang; Li,; Xie, Kai

    2011-01-01

    This paper presents a new paradigm for image transmission through analog error correction codes. Conventional schemes rely on digitizing images through quantization (which inevitably causes significant bandwidth expansion) and transmitting binary bit-streams through digital error correction codes (which do not automatically differentiate the different levels of significance among the bits). To strike a better overall performance in terms of transmission efficiency and quality, we propose to use a single analog error correction code in lieu of digital quantization, digital code and digital modulation. The key is to get analog coding right. We show that this can be achieved by cleverly exploiting an elegant "butterfly" property of chaotic systems. Specifically, we demonstrate a tail-biting triple-branch baker's map code and its maximum-likelihood decoding algorithm. Simulations show that the proposed analog code can actually outperform digital turbo code, one of the best codes known to date. The results and fin...

  15. Distance-based classification of keystroke dynamics

    Science.gov (United States)

    Tran Nguyen, Ngoc

    2016-07-01

    This paper uses keystroke dynamics for user authentication. The relationship between distance metrics and the data template is analyzed, for the first time, and a new distance-based algorithm for keystroke dynamics classification is proposed. The experiments on the CMU keystroke dynamics benchmark dataset yielded an equal error rate of 0.0614. Classifiers using the proposed distance metric outperform existing top-performing keystroke dynamics classifiers that use traditional distance metrics.
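
    For readers unfamiliar with how keystroke-dynamics classifiers are scored, the sketch below computes an equal error rate for a plain distance-to-template detector; the Manhattan distance, the synthetic timing vectors, and the threshold sweep are assumptions made here for illustration, not the distance metric proposed in the paper.

        # Hedged sketch: distance-to-template keystroke verification and its
        # equal error rate (EER), where false-accept and false-reject rates meet.
        import numpy as np

        rng = np.random.default_rng(0)
        n_features = 21                                  # e.g. hold/latency times per keystroke
        template = rng.normal(0.2, 0.05, n_features)     # enrolled user's mean timing vector

        genuine = template + rng.normal(0, 0.02, (200, n_features))   # same user
        impostor = rng.normal(0.2, 0.05, (200, n_features))           # other users

        def score(samples):
            """Manhattan distance of each sample to the enrolled template."""
            return np.abs(samples - template).sum(axis=1)

        g, i = score(genuine), score(impostor)
        thresholds = np.sort(np.concatenate([g, i]))
        frr = np.array([(g > t).mean() for t in thresholds])    # false reject rate
        far = np.array([(i <= t).mean() for t in thresholds])   # false accept rate
        eer_idx = np.argmin(np.abs(frr - far))
        print(f"EER ≈ {(frr[eer_idx] + far[eer_idx]) / 2:.4f}")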

  16. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children's Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children's Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  17. Automated spectral classification and the GAIA project

    Science.gov (United States)

    Lasala, Jerry; Kurtz, Michael J.

    1995-01-01

    Two-dimensional spectral types for each of the stars observed in the global astrometric interferometer for astrophysics (GAIA) mission would provide additional information for galactic structure and stellar evolution studies, as well as helping in the identification of unusual objects and populations. The classification of the large quantity of spectra generated requires that automated techniques be implemented. Approaches for automatic classification are reviewed, and a metric-distance method is discussed. In tests, the metric-distance method produced spectral types with mean errors comparable to those of human classifiers working at similar resolution. Data and equipment requirements for an automated classification survey are discussed. A program of auxiliary observations is proposed to yield spectral types and radial velocities for the GAIA-observed stars.

  18. Classification rules for Indian Rice diseases

    Directory of Open Access Journals (Sweden)

    A. Nithya

    2011-01-01

    Full Text Available Many techniques have been developed for learning rules and relationships automatically from diverse data sets, to simplify the often tedious and error-prone process of acquiring knowledge from empirical data. The decision tree is one learning algorithm that possesses advantages making it suitable for discovering classification rules in data mining applications. Decision trees are a widely used learning method, do not require any prior knowledge of the data distribution, and work well on noisy data. They have been applied to classify rice diseases based on their symptoms. This paper is intended to discover classification rules for Indian rice diseases using the C4.5 decision tree algorithm. Expert systems have been used in agriculture since the early 1980s. Several systems have been developed in different countries, including the USA, Europe, and Egypt, for plant-disorder diagnosis, management and other production aspects. This paper explores what classification rules can do in the agricultural domain.

  19. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Abstract not captured; the record text contains only report front matter: a contents listing (graphical data analysis, general statistics and confidence intervals, goodness-of-fit test, conclusions) and fragments of a table of transient-error rates (MTTF) per system, e.g. the CMUA PDP-10 (ECL, parity) and the Cm* LSI-11 (NMOS, diagnostics).]

  20. Neural network parameters affecting image classification

    Directory of Open Access Journals (Sweden)

    K.C. Tiwari

    2001-07-01

    Full Text Available The study assesses the behaviour and impact of various neural network parameters and their effects on the classification accuracy of remotely sensed images, and resulted in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts through the identification of suitable areas for vehicular movement. Five parameters, namely training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor, were selected. In each case, sets of values were decided based on earlier reported works. Neural network-based classifications were carried out for as many as 450 combinations of these parameters. Finally, a graphical analysis of the results obtained was carried out to understand the relationships among these parameters. A table of recommended values of these parameters for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments as well as a more intricate analysis of the relationships.
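
    The kind of parameter sweep described above (training-set size, hidden layers, hidden nodes, learning rate and momentum varied in combination) can be organized as in the hedged scikit-learn sketch below; the placeholder per-pixel features and the reduced grid are illustrative, not the 450 combinations run in the study.

        # Hedged sketch: a small grid over neural-network parameters of the kind
        # varied in the study (hidden nodes, learning rate, momentum).
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1500, 4))        # placeholder: 4 spectral bands per pixel
        y = rng.integers(0, 6, size=1500)     # placeholder landcover classes

        param_grid = {
            "hidden_layer_sizes": [(10,), (20,), (10, 10)],
            "learning_rate_init": [0.01, 0.1],
            "momentum": [0.5, 0.9],
        }
        base = MLPClassifier(solver="sgd", max_iter=300, random_state=0)
        search = GridSearchCV(base, param_grid, cv=3).fit(X, y)
        print("best parameters:", search.best_params_)
        print(f"best cross-validated accuracy: {search.best_score_:.3f}")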

  1. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  2. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1989-09-01

    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  3. Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    CERN Document Server

    Gnanasivam, P

    2012-01-01

    A novel method of gender classification from fingerprints is proposed, based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The classification is achieved by extracting the energy computed from all the sub-bands of the DWT, combined with the spatial features of the non-zero singular values obtained from the SVD of fingerprint images. A K-nearest neighbor (KNN) classifier is used. The method was tested on an internal database of 3,570 fingerprints, of which 1,980 were male and 1,590 were female. Finger-wise gender classification reaches 94.32% for the left-hand little fingers of female subjects and 95.46% for the left-hand index fingers of male subjects. Gender classification over any tested finger is 91.67% for male subjects and 84.69% for female subjects. An overall classification rate of 88.28% has been achieved.
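
    The feature construction described above (energies of all DWT sub-bands plus the singular values of the fingerprint image, classified with KNN) might be prototyped as in the sketch below; it assumes the PyWavelets and scikit-learn packages and uses random placeholder images, so it demonstrates only the shape of the pipeline, not the reported accuracy.

        # Hedged sketch: DWT sub-band energies + singular values as a gender feature
        # vector for fingerprint images, classified with K-nearest neighbors.
        import numpy as np
        import pywt
        from sklearn.neighbors import KNeighborsClassifier

        def features(img: np.ndarray, levels: int = 3, n_sv: int = 32) -> np.ndarray:
            """Energy of every DWT sub-band plus the leading singular values."""
            coeffs = pywt.wavedec2(img, "db1", level=levels)
            energies = [np.sum(coeffs[0] ** 2)]
            for (cH, cV, cD) in coeffs[1:]:
                energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
            sv = np.linalg.svd(img, compute_uv=False)[:n_sv]
            return np.concatenate([energies, sv])

        rng = np.random.default_rng(0)
        imgs = rng.random((100, 64, 64))          # placeholder fingerprint images
        labels = rng.integers(0, 2, size=100)     # placeholder 0 = female, 1 = male

        X = np.array([features(im) for im in imgs])
        knn = KNeighborsClassifier(n_neighbors=3).fit(X[:80], labels[:80])
        print(f"held-out accuracy: {knn.score(X[80:], labels[80:]):.2f}")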

  4. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration method are presented in this paper. The error model is built from the theory of multi-body systems and characteristic matrices, which establish the relationship between the cutting tool and the workpiece. Accurate calibration is difficult to achieve, but with laser instruments (a laser interferometer and a laser tracker) the errors can be identified accurately, which is beneficial for later compensation.

  5. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to determine its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is determined by the precision of the broadcast ephemeris uploaded by the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error is along the line-of-sight (LOS) direction. Meanwhile, the error depends on the relationship between the observer and the spatial constellation at a given time period.
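
    The geometric point made above (the ranging error contributed by an ephemeris error is essentially its projection onto the line of sight) can be written in a few lines; the satellite and receiver coordinates below are arbitrary placeholders used only to show the projection.

        # Hedged sketch: the pseudorange error caused by an ephemeris error is its
        # component along the satellite-to-receiver line of sight (LOS).
        import numpy as np

        sat_pos = np.array([15600e3, 7540e3, 20140e3])   # placeholder satellite ECEF (m)
        rcv_pos = np.array([1917e3, 6029e3, 801e3])      # placeholder receiver ECEF (m)
        eph_err = np.array([2.0, -1.0, 3.0])             # placeholder ephemeris error (m)

        los = rcv_pos - sat_pos
        los_unit = los / np.linalg.norm(los)

        range_error = np.dot(eph_err, los_unit)          # worst case when the error is along the LOS
        print(f"LOS-projected range error: {range_error:.2f} m "
              f"(of {np.linalg.norm(eph_err):.2f} m total ephemeris error)")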

  6. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background: Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods: We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results: A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions: This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
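
    A compact stand-in for the fitting procedure described above (an inverse power law fitted to the early part of a learning curve by weighted nonlinear least squares, then extrapolated) is sketched below with SciPy; the model form, the weights, and the synthetic learning-curve points are assumptions chosen for illustration.

        # Hedged sketch: fit an inverse power law to early learning-curve points by
        # weighted nonlinear least squares, then extrapolate to larger sample sizes.
        import numpy as np
        from scipy.optimize import curve_fit

        def inverse_power_law(x, a, b, c):
            """Accuracy model: asymptote a minus a decaying term b * x**(-c)."""
            return a - b * np.power(x, -c)

        # Placeholder observed learning curve (training-set size, accuracy).
        sizes = np.array([25, 50, 75, 100, 150, 200], dtype=float)
        accs = np.array([0.62, 0.70, 0.74, 0.76, 0.79, 0.81])
        weights = np.sqrt(sizes)                  # assumed: trust larger samples more

        popt, _ = curve_fit(
            inverse_power_law, sizes, accs,
            p0=[0.9, 1.0, 0.5],
            sigma=1.0 / weights, absolute_sigma=False,
            maxfev=10000,
        )
        for n in (500, 1000, 2000):
            print(f"predicted accuracy at n={n}: {inverse_power_law(n, *popt):.3f}")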

  7. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  8. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    Science.gov (United States)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, ×) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional CNNs, autoassociative morphological memories (AMM) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real valued patterns has been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models, which can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and

  9. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  10. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  11. Fast Wavelet-Based Visual Classification

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification.

  12. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  13. Aphasia Classification Using Neural Networks

    DEFF Research Database (Denmark)

    Axer, H.; Jantzen, Jan; Berks, G.

    2000-01-01

    of the Aachen Aphasia Test (AAT). First a coarse classification was achieved by using an assessment of spontaneous speech of the patient. This classifier produced correct results in 87% of the test cases. For a second test, data analysis tools were used to select four features out of the 30 available test...... features to yield a more accurate diagnosis. This second classifier produced correct results in 92% of the test cases. This test requires four AAT scores as input for the multilayer perceptron. In practice, the second test requires hours of work on behalf of the clinician, whereas the first test can...

  14. Information gathering for CLP classification

    OpenAIRE

    Ida Marcello; Felice Giordano; Francesca Marina Costamagna

    2011-01-01

    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level and a list of harmonised classifications is included in the Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list it must be self-classified, based on available information, according to the requireme...

  15. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  16. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  17. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  18. Error Correction of Loudspeakers

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde

    Throughout this thesis, the topic of electrodynamic loudspeaker unit design and modelling are reviewed. The research behind this project has been to study loudspeaker design, based on new possibilities introduced by including digital signal processing, and thereby achieving more freedom in loudsp...

  19. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  1. Automatic classification of background EEG activity in healthy and sick neonates

    Science.gov (United States)

    Löfhede, Johan; Thordstein, Magnus; Löfgren, Nils; Flisberg, Anders; Rosa-Zurera, Manuel; Kjellmer, Ingemar; Lindecrantz, Kaj

    2010-02-01

    The overall aim of our research is to develop methods for a monitoring system to be used at neonatal intensive care units. When monitoring a baby, a range of different types of background activity needs to be considered. In this work, we have developed a scheme for automatic classification of background EEG activity in newborn babies. EEG from six full-term babies who were displaying a burst suppression pattern while suffering from the after-effects of asphyxia during birth was included along with EEG from 20 full-term healthy newborn babies. The signals from the healthy babies were divided into four behavioural states: active awake, quiet awake, active sleep and quiet sleep. By using a number of features extracted from the EEG together with Fisher's linear discriminant classifier we have managed to achieve 100% correct classification when separating burst suppression EEG from all four healthy EEG types and 93% true positive classification when separating quiet sleep from the other types. The other three sleep stages could not be classified. When the pathological burst suppression pattern was detected, the analysis was taken one step further and the signal was segmented into burst and suppression, allowing clinically relevant parameters such as suppression length and burst suppression ratio to be calculated. The segmentation of the burst suppression EEG works well, with a probability of error around 4%.
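
    The classification step described above can be illustrated with a small, hedged sketch: Fisher-style linear discriminant analysis applied to per-epoch EEG feature vectors. The feature values and labels below are synthetic placeholders, not the clinical recordings used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical feature matrix: one row per EEG epoch; columns could be, e.g.,
# band powers, line length and entropy-type measures (placeholder values here).
n_epochs, n_features = 200, 6
X_healthy = rng.normal(loc=0.0, scale=1.0, size=(n_epochs, n_features))
X_burst_suppression = rng.normal(loc=1.5, scale=1.0, size=(n_epochs, n_features))

X = np.vstack([X_healthy, X_burst_suppression])
y = np.array([0] * n_epochs + [1] * n_epochs)    # 0 = healthy EEG, 1 = burst suppression

clf = LinearDiscriminantAnalysis()               # Fisher-style linear discriminant
print("cross-validated accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```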

  2. Linear time distances between fuzzy sets with applications to pattern matching and classification.

    Science.gov (United States)

    Lindblad, Joakim; Sladoje, Nataša

    2014-01-01

    We present four novel point-to-set distances defined for fuzzy or gray-level image data, two based on integration over α-cuts and two based on the fuzzy distance transform. We explore their theoretical properties. Inserting the proposed point-to-set distances in existing definitions of set-to-set distances, among which are the Hausdorff distance and the sum of minimal distances, we define a number of distances between fuzzy sets. These set distances are directly applicable for comparing gray-level images or fuzzy segmented objects, but also for detecting patterns and matching parts of images. The distance measures integrate shape and intensity/membership of observed entities, providing a highly applicable tool for image processing and analysis. Performance evaluation of derived set distances in real image processing tasks is conducted and presented. It is shown that the considered distances have a number of appealing theoretical properties and exhibit very good performance in template matching and object classification for fuzzy segmented images as well as when applied directly on gray-level intensity images. Examples include recognition of hand written digits and identification of virus particles. The proposed set distances perform excellently on the MNIST digit classification task, achieving the best reported error rate for classification using only rigid body transformations and a kNN classifier.
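
    A rough sketch of one of the constructions mentioned above, a point-to-fuzzy-set distance obtained by integrating crisp point-to-set distances over alpha-cuts, is given below. The membership image and the exact definition used in the paper are assumptions here; the code only illustrates the alpha-cut idea.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def point_to_fuzzy_set_distance(membership, point, n_levels=20):
    """Average distance from `point` to the alpha-cuts of a fuzzy set.

    membership : 2-D array with values in [0, 1] (fuzzy/gray-level object)
    point      : (row, col) pixel coordinates
    """
    alphas = np.linspace(1.0 / n_levels, 1.0, n_levels)
    distances = []
    for alpha in alphas:
        cut = membership >= alpha                      # crisp alpha-cut
        if not cut.any():                              # empty cut: use a large penalty
            distances.append(np.hypot(*membership.shape))
            continue
        # Distance from every pixel to the nearest pixel belonging to the cut.
        dist_map = distance_transform_edt(~cut)
        distances.append(dist_map[point])
    return float(np.mean(distances))                   # approximates the integral over alpha

# Tiny example: a fuzzy blob and a query point outside it.
membership = np.zeros((64, 64))
membership[20:40, 20:40] = 0.9
print(point_to_fuzzy_set_distance(membership, (5, 5)))
```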

  3. Error rate information in attention allocation pilot models

    Science.gov (United States)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  4. Computationally efficient target classification in multispectral image data with Deep Neural Networks

    Science.gov (United States)

    Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca

    2016-10-01

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classifications tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
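
    A minimal sketch (PyTorch) of the kind of network the abstract describes is given below: a small fully convolutional net whose input stacks the 3 RGB channels with the 25 VIS-NIR channels (28 channels total) and which predicts one of 8 scene-labelling classes per pixel. The layer sizes are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class MultispectralSegNet(nn.Module):
    def __init__(self, in_channels=28, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)   # per-pixel logits

    def forward(self, x):                       # x: (batch, 28, H, W)
        return self.classifier(self.features(x))

model = MultispectralSegNet()
rgb = torch.randn(1, 3, 128, 128)               # placeholder RGB frame
vis_nir = torch.randn(1, 25, 128, 128)          # placeholder 25-channel snapshot frame
logits = model(torch.cat([rgb, vis_nir], dim=1))
print(logits.shape)                             # (1, 8, 128, 128): class scores per pixel
```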

  5. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduce a novel "Firewall Complexity" (FC) measure, that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, ``small is (still) beautiful''. However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  6. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was employed simultaneously. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  7. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into a EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement assisted codes coherent.

  8. Concepts of Classification and Taxonomy. Phylogenetic Classification

    CERN Document Server

    Fraix-Burnet, Didier

    2016-01-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical since it was intended to distinguish between edible and toxic foods, or harmless and dangerous animals. Simple resemblance was used and has been used for centuries. Basically, until the XVIIIth...

  9. Deep Learning in Label-free Cell Classification

    National Research Council Canada - National Science Library

    Chen, Claire Lifan; Mahjoubfar, Ata; Tai, Li-Chia; Blaby, Ian K; Huang, Allen; Niazi, Kayvan Reza; Jalali, Bahram

    2016-01-01

    .... Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record high accuracy in label-free cell classification...

  10. Support vector classification algorithm based on variable parameter linear programming

    Institute of Scientific and Technical Information of China (English)

    Xiao Jianhua; Lin Jian

    2007-01-01

    To solve the problems of SVM in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the optimization problem of classification, decreasing the computation time and reducing the complexity compared with the original model. The adjusted penalty parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.
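
    As a hedged illustration of the general idea of training a linear classifier by solving a linear program rather than a quadratic program, the sketch below implements a standard L1-regularised soft-margin formulation with scipy's linprog; it is not the specific variable-parameter model proposed by the authors, and the penalty value and data are placeholders.

```python
import numpy as np
from scipy.optimize import linprog

def lp_linear_classifier(X, y, C=1.0):
    """X: (n, d) samples, y: labels in {-1, +1}. Returns (w, b)."""
    n, d = X.shape
    # Decision variables: [w_plus (d), w_minus (d), b (1), slack (n)], w = w_plus - w_minus.
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # Margin constraints y_i((w+ - w-).x_i + b) + slack_i >= 1, rewritten as <= for linprog.
    A_ub = np.hstack([-(y[:, None] * X), (y[:, None] * X), -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d]
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(+1, 0.5, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)
w, b = lp_linear_classifier(X, y)
print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
```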

  11. Hot complaint intelligent classification based on text mining

    Directory of Open Access Journals (Sweden)

    XIA Haifeng

    2013-10-01

    Full Text Available The complaint recognizer system plays an important role in ensuring the correct classification of hot complaints and in improving the service quality of the telecommunications industry. Customer complaints in the telecommunications industry have the particularity that they must be handled within a limited time, which causes errors in the classification of hot complaints. The paper presents a model for intelligent classification of hot complaints based on text mining, which can classify a hot complaint at the correct level of the complaint navigation. The examples show that the model can classify complaint texts efficiently.

  12. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification

    Directory of Open Access Journals (Sweden)

    R. Sathya

    2013-02-01

    Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-based learning algorithms, and in the present study we found that, although the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model offers an efficient solution and classification in the present study.

  13. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  14. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  15. Classifications for Proliferative Vitreoretinopathy (PVR): An Analysis of Their Use in Publications over the Last 15 Years

    Directory of Open Access Journals (Sweden)

    Salvatore Di Lauro

    2016-01-01

    Full Text Available Purpose. To evaluate the current and suitable use of current proliferative vitreoretinopathy (PVR) classifications in clinical publications related to treatment. Methods. A PubMed search was undertaken using the term “proliferative vitreoretinopathy therapy”. Outcome parameters were the reported PVR classification and PVR grades. The way the classifications were used in comparison to the original description was analyzed. Classification errors were also included. It was also noted whether classifications were used for comparison before and after pharmacological or surgical treatment. Results. 138 papers were included. 35 of them (25.4%) presented no classification reference or did not use one. 103 publications (74.6%) used a standardized classification. The updated Retina Society Classification, the first Retina Society Classification, and the Silicone Study Classification were cited in 56.3%, 33.9%, and 3.8% of papers, respectively. Furthermore, 3 authors (2.9%) used modified-customized classifications and 4 (3.8%) classification errors were identified. When the updated Retina Society Classification was used, only 10.4% of authors used a full C grade description. Finally, only 2 authors reported PVR grade before and after treatment. Conclusions. Our findings suggest that current classifications are of limited value in clinical practice due to the inconsistent and limited use and that it may be of benefit to produce a revised classification.

  16. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show 0.9% of medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors are mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to revise unidosis carts and to use a computerized prescription system to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error diminishes to 0.3%.

  17. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
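
    As a hedged illustration of the idea (not necessarily the exact formulation used by the authors), consider a model problem $L(u) = 0$ with numerical solution $u_h$ on a single grid. The discretization error $\epsilon = u - u_h$ approximately satisfies a transport equation of the same form as the governing equation, driven by a residual source term:

    $$ L(\epsilon) \approx -R_h, \qquad R_h = L(\tilde{u}_h), $$

    where $\tilde{u}_h$ is a smooth, differentiable reconstruction of the numerical solution (in the abstract above, locally fitted curves blended with a weighted spline) and $R_h$ is the residual obtained by substituting that reconstruction into the continuous operator. Solving this error equation with the same solver and grid then yields an estimate of the discretization error field.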

  18. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  19. Contextual segment-based classification of airborne laser scanner data

    Science.gov (United States)

    Vosselman, George; Coenen, Maximilian; Rottensteiner, Franz

    2017-06-01

    Classification of point clouds is needed as a first step in the extraction of various types of geo-information from point clouds. We present a new approach to contextual classification of segmented airborne laser scanning data. Potential advantages of segment-based classification are easily offset by segmentation errors. We combine different point cloud segmentation methods to minimise both under- and over-segmentation. We propose a contextual segment-based classification using a Conditional Random Field. Segment adjacencies are represented by edges in the graphical model and characterised by a range of features of points along the segment borders. A mix of small and large segments allows the interaction between nearby and distant points. Results of the segment-based classification are compared to results of a point-based CRF classification. Whereas only a small advantage of the segment-based classification is observed for the ISPRS Vaihingen dataset with 4-7 points/m2, the percentage of correctly classified points in a 30 points/m2 dataset of Rotterdam amounts to 91.0% for the segment-based classification vs. 82.8% for the point-based classification.

  20. Hyperspectral image classification based on NMF Features Selection Method

    Science.gov (United States)

    Abe, Bolanle T.; Jordaan, J. A.

    2013-12-01

    Hyperspectral instruments are capable of collecting hundreds of images corresponding to wavelength channels for the same area on the earth's surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and pose a problem known as the curse of dimensionality. In addition, high correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is crucial for obtaining good classification accuracy. This paper presents a new feature selection technique. A Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain reduced, relevant features in the input domain of each class label, aiming to reduce the classification error and the dimensionality of the classification problem. The Indian Pines dataset from Northwest Indiana is used to evaluate the performance of the proposed method through feature selection and classification experiments. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is selected as the tool to implement the classification using Support Vector Machines and a Neural Network. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how the feature size affects classification accuracy. The results obtained show that the performance of the classifiers is significant. The study makes a positive contribution to the problems of hyperspectral imagery by exploring NMF, SVMs and NN to improve classification accuracy. The performance of the classifiers is valuable for decision makers considering tradeoffs between method accuracy and method complexity.
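
    A compact, hedged sketch of the processing chain described above is given below: the spectral dimension is reduced with non-negative matrix factorisation and the reduced features are fed to an SVM. The random data stands in for a labelled hyperspectral cube such as Indian Pines, and using NMF as a generic reducer is an illustrative simplification of the per-class band-selection scheme in the paper.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_pixels, n_bands, n_classes = 600, 200, 4
X = rng.random((n_pixels, n_bands))               # placeholder reflectance spectra
y = rng.integers(0, n_classes, size=n_pixels)     # placeholder land-cover labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    MinMaxScaler(),                               # NMF needs non-negative input
    NMF(n_components=15, init="nndsvda", max_iter=500, random_state=0),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```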

  1. Automated protein subfamily identification and classification.

    Directory of Open Access Journals (Sweden)

    Duncan P Brown

    2007-08-01

    Full Text Available Function prediction by homology is widely used to provide preliminary functional annotations for genes for which experimental evidence of function is unavailable or limited. This approach has been shown to be prone to systematic error, including percolation of annotation errors through sequence databases. Phylogenomic analysis avoids these errors in function prediction but has been difficult to automate for high-throughput application. To address this limitation, we present a computationally efficient pipeline for phylogenomic classification of proteins. This pipeline uses the SCI-PHY (Subfamily Classification in Phylogenomics) algorithm for automatic subfamily identification, followed by subfamily hidden Markov model (HMM) construction. A simple and computationally efficient scoring scheme using family and subfamily HMMs enables classification of novel sequences to protein families and subfamilies. Sequences representing entirely novel subfamilies are differentiated from those that can be classified to subfamilies in the input training set using logistic regression. Subfamily HMM parameters are estimated using an information-sharing protocol, enabling subfamilies containing even a single sequence to benefit from conservation patterns defining the family as a whole or in related subfamilies. SCI-PHY subfamilies correspond closely to functional subtypes defined by experts and to conserved clades found by phylogenetic analysis. Extensive comparisons of subfamily and family HMM performances show that subfamily HMMs dramatically improve the separation between homologous and non-homologous proteins in sequence database searches. Subfamily HMMs also provide extremely high specificity of classification and can be used to predict entirely novel subtypes. The SCI-PHY Web server at http://phylogenomics.berkeley.edu/SCI-PHY/ allows users to upload a multiple sequence alignment for subfamily identification and subfamily HMM construction. Biologists wishing to

  2. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$ ($0 \leq p_e < 0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  3. Common errors in disease mapping

    Directory of Open Access Journals (Sweden)

    Ricardo Ocaña-Riola

    2010-05-01

    Full Text Available Many morbid-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.

  4. Solid hydrometeor classification and riming degree estimation from pictures collected with a Multi-Angle Snowflake Camera

    Science.gov (United States)

    Praz, Christophe; Roulet, Yves-Alain; Berne, Alexis

    2017-04-01

    A new method to automatically classify solid hydrometeors based on Multi-Angle Snowflake Camera (MASC) images is presented. For each individual image, the method relies on the calculation of a set of geometric and texture-based descriptors to simultaneously identify the hydrometeor type (among six predefined classes), estimate the degree of riming and detect melting snow. The classification tasks are achieved by means of a regularized multinomial logistic regression (MLR) model trained over more than 3000 MASC images manually labeled by visual inspection. In a second step, the probabilistic information provided by the MLR is weighted over the three stereoscopic views of the MASC in order to assign a unique label to each hydrometeor. The accuracy and robustness of the proposed algorithm are evaluated on data collected in the Swiss Alps and in Antarctica. The algorithm achieves high performance, with a hydrometeor-type classification accuracy and Heidke skill score of 95 % and 0.93, respectively. The degree of riming is evaluated by introducing a riming index ranging between zero (no riming) and one (graupel) and characterized by a probable error of 5.5 %. A validation study is conducted through a comparison with an existing classification method based on two-dimensional video disdrometer (2DVD) data and shows that the two methods are consistent.
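
    A minimal, hedged sketch of the two steps outlined above is given below: a regularised multinomial logistic regression over per-image descriptors, followed by a simple fusion of the class probabilities from the three stereoscopic MASC views. The feature values, class names and fusion rule (plain averaging) are assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

classes = ["small_particle", "column", "plate", "aggregate", "graupel", "combination"]
rng = np.random.default_rng(7)

# Placeholder training set: geometric/texture descriptors for labelled images.
X_train = rng.normal(size=(300, 12))
y_train = rng.integers(0, len(classes), size=300)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
clf.fit(X_train, y_train)

# One hydrometeor seen by the three cameras -> three descriptor vectors.
views = rng.normal(size=(3, 12))
probs = clf.predict_proba(views)          # (3, n_classes) per-view probabilities
fused = probs.mean(axis=0)                # combine the three views
print("predicted type:", classes[int(np.argmax(fused))])
```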

  5. Library Classification 2020

    Science.gov (United States)

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  6. Multiple sparse representations classification

    NARCIS (Netherlands)

    E. Plenge (Esben); S.K. Klein (Stefan); W.J. Niessen (Wiro); E. Meijering (Erik)

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In t

  8. Work complexity assessment, nursing interventions classification, and nursing outcomes classification: making connections.

    Science.gov (United States)

    Scherb, Cindy A; Weydt, Alice P

    2009-01-01

    When nurses understand what interventions are needed to achieve desired patient outcomes, they can more easily define their practice. Work Complexity Assessment (WCA) is a process that helps nurses to identify interventions performed on a routine basis for their specific patient population. This article describes the WCA process and links it to the Nursing Interventions Classification (NIC) and the Nursing Outcomes Classification (NOC). WCA, NIC, and NOC are all tools that help nurses understand the work they do and the outcomes they achieve, and that thereby acknowledge and validate nursing's contribution to patient care.

  9. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back propagation. Instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...
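
    A toy, hedged illustration of the training idea mentioned above is given below: instead of backpropagating gradients, the weights of a small network are found by a derivative-free optimiser (Powell's direction-set method, via scipy). The tiny synthetic dataset and network size are placeholders, not the Cohn-Kanade setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 4))                       # placeholder facial features
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # placeholder binary "emotion" label

n_in, n_hidden = X.shape[1], 5
n_params = n_in * n_hidden + n_hidden + n_hidden + 1   # W1, b1, w2, b2

def unpack(theta):
    i = 0
    W1 = theta[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = theta[i:i + n_hidden]; i += n_hidden
    w2 = theta[i:i + n_hidden]; i += n_hidden
    b2 = theta[i]
    return W1, b1, w2, b2

def forward(theta, X):
    W1, b1, w2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))    # sigmoid output unit

def error(theta):                                  # mean squared error, as in classic MLPs
    return np.mean((forward(theta, X) - y) ** 2)

theta0 = rng.normal(scale=0.1, size=n_params)
result = minimize(error, theta0, method="Powell", options={"maxiter": 200})
accuracy = np.mean((forward(result.x, X) > 0.5) == y)
print("training accuracy: %.2f" % accuracy)
```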

  10. Classifier in Age classification

    Directory of Open Access Journals (Sweden)

    B. Santhi

    2012-12-01

    Full Text Available The face is an important feature of human beings, and we can derive various properties of a human by analyzing the face. The objective of the study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This study addresses the limitations of existing classifiers, as it uses the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification. This improves the accuracy of the classification, as it outperforms the existing methods.
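
    A condensed, hedged sketch of the pipeline in the abstract is given below: grey-level co-occurrence (GLCM) texture features extracted from face images and fed to an SVM with two labels ("child" / "old"). The images are random placeholders, so the numbers are meaningless; the point is the feature-extraction/classification flow.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # older skimage: greycomatrix/greycoprops
from sklearn.svm import SVC

def glcm_features(image):
    """image: 2-D uint8 array. Returns a small vector of GLCM texture features."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)   # placeholder face crops
labels = np.array([0] * 20 + [1] * 20)                             # 0 = child, 1 = old

X = np.array([glcm_features(img) for img in images])
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```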

  11. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positive per image.
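
    A rough, hedged sketch of the first stage described above is given below: a 2-D linear prediction error filter whose coefficients are fitted by least squares over the image, with high-error pixels kept as microcalcification candidates. The causal neighbourhood, threshold and synthetic image are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(6)
image = rng.normal(100.0, 2.0, size=(128, 128))
image[60:62, 60:62] += 40.0                        # a bright speck standing in for a calcification

# Causal neighbours used for prediction: left, upper-left, up, upper-right.
offsets = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]
rows, cols = image.shape
targets, predictors = [], []
for r in range(1, rows):
    for c in range(1, cols - 1):
        targets.append(image[r, c])
        predictors.append([image[r + dr, c + dc] for dr, dc in offsets])
targets = np.array(targets)
predictors = np.array(predictors)

coeffs, *_ = np.linalg.lstsq(predictors, targets, rcond=None)   # 2-D prediction filter
errors = np.abs(targets - predictors @ coeffs).reshape(rows - 1, cols - 2)

threshold = errors.mean() + 5.0 * errors.std()
candidates = np.argwhere(errors > threshold)       # candidate pixels passed to the second stage
print("candidate pixels:", len(candidates))
```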

  12. Leader as achiever.

    Science.gov (United States)

    Dienemann, Jacqueline

    2002-01-01

    This article examines one outcome of leadership: productive achievement. Without achievement one is judged to not truly be a leader. Thus, the ideal leader must be a visionary, a critical thinker, an expert, a communicator, a mentor, and an achiever of organizational goals. This article explores the organizational context that supports achievement, measures of quality nursing care, fiscal accountability, leadership development, rewards and punishments, and the educational content and teaching strategies to prepare graduates to be achievers.

  13. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable: they are avoidable and preventable events. The people affected by the consequences of surgical mistakes range from temporary injury in 60% of cases, to permanent injury in 33%, and death in 7%. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them even dying during or immediately after surgery. The UN body put the number of surgeries taking place every year globally at 234 million, and said surgery had become common, with one in every 25 people undergoing it at any given time. 50% of never events are preventable. Evidence suggests that up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. Even though complete prevention may not be possible with such a system, we can reduce the error percentage [2]. To change the present attitude towards the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  14. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed, to a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitudes between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  15. Design and evaluation of neural classifiers application to skin lesion classification

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1995-01-01

    Addresses design and evaluation of neural classifiers for the problem of skin lesion classification. By using Gauss-Newton optimization for the entropic cost function in conjunction with pruning by Optimal Brain Damage and a new test error estimate, the authors show that this scheme is capable of optimizing the architecture of neural classifiers. Furthermore, error-reject tradeoff theory indicates that the resulting neural classifiers for the skin lesion classification problem are near-optimal...

  16. Current error estimates for LISA spurious accelerations

    Energy Technology Data Exchange (ETDEWEB)

    Stebbins, R T [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bender, P L [JILA-University of Colorado, Boulder, CO (United States); Hanson, J [Stanford University, Stanford, CA (United States); Hoyle, C D [University of Trento, Trento (Italy); Schumaker, B L [Jet Propulsion Laboratory, Pasadena, CA (United States); Vitale, S [University of Trento, Trento (Italy)

    2004-03-07

    The performance of the LISA gravitational wave detector depends critically on limiting spurious accelerations of the fiducial masses. Consequently, the requirements on allowable acceleration levels must be carefully allocated based on estimates of the achievable limits on spurious accelerations from all disturbances. Changes in the allocation of requirements are being considered, and are proposed here. The total spurious acceleration error requirement would remain unchanged, but a few new error sources would be added, and the allocations for some specific error sources would be changed. In support of the recommended revisions in the requirements budget, estimates of plausible acceleration levels for 17 of the main error sources are discussed. In most cases, the formula for calculating the size of the effect is known, but there may be questions about the values of various parameters to use in the estimates. Different possible parameter values have been discussed, and a representative set is presented. Improvements in our knowledge of the various experimental parameters will come from planned experimental and modelling studies, supported by further theoretical work.

  17. ERRORS AND DIFFICULTIES IN TRANSLATING LEGAL TEXTS

    Directory of Open Access Journals (Sweden)

    Camelia, CHIRILA

    2014-11-01

    Full Text Available Nowadays the accurate translation of legal texts has become highly important as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts to other languages faces many difficulties and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method that I have used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.

  18. Kappa Coefficients for Circular Classifications

    NARCIS (Netherlands)

    Warrens, Matthijs J.; Pratiwi, Bunga C.

    2016-01-01

    Circular classifications are classification scales with categories that exhibit a certain periodicity. Since linear scales have endpoints, the standard weighted kappas used for linear scales are not appropriate for analyzing agreement between two circular classifications. A family of kappa coefficie
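
    A small, hedged sketch of a kappa-type agreement coefficient with weights adapted to a circular scale is given below: the disagreement weight between two categories is their distance along the circle rather than along a line. The exact coefficients in the cited work may differ; this only illustrates the circular-weighting idea.

```python
import numpy as np

def circular_weighted_kappa(rater_a, rater_b, n_categories):
    """rater_a, rater_b: integer category labels in 0..n_categories-1."""
    k = n_categories
    # Observed proportions of each (a, b) pair and expected proportions under independence.
    observed = np.zeros((k, k))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Circular (periodic) disagreement weights: distance along the circle.
    idx = np.arange(k)
    diff = np.abs(idx[:, None] - idx[None, :])
    weights = np.minimum(diff, k - diff)
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Example: two raters assigning 8 periodic categories (e.g. wind-direction sectors).
a = np.array([0, 1, 7, 3, 4, 4, 6, 7, 0, 2])
b = np.array([1, 1, 0, 3, 5, 4, 6, 6, 7, 2])
print(circular_weighted_kappa(a, b, n_categories=8))
```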

  19. Bioradiolocation-based sleep stage classification.

    Science.gov (United States)

    Tataraidze, Alexander; Korostovtseva, Lyudmila; Anishchenko, Lesya; Bochkarev, Mikhail; Sviryaev, Yurii; Ivashov, Sergey

    2016-08-01

    This paper presents a method for classifying wakefulness, REM, light and deep sleep based on the analysis of respiratory activity and body motions acquired by a bioradar. The method was validated using data of 32 subjects without sleep-disordered breathing, who underwent a polysomnography study in a sleep laboratory. We achieved Cohen's kappa of 0.49 in the wake-REM-light-deep sleep classification, 0.55 for the wake-REM-NREM classification and 0.57 for the sleep/wakefulness determination. The results might be useful for the development of unobtrusive sleep monitoring systems for diagnostics, prevention, and management of sleep disorders.

  20. Role of clinical pharmacists' interventions in detection and prevention of medication errors in a medical ward.

    Science.gov (United States)

    Khalili, Hossein; Farsaei, Shadi; Rezaee, Haleh; Dashti-Khavidaki, Simin

    2011-04-01

    Frequency and type of medication errors and role of clinical pharmacists in detection and prevention of these errors were evaluated in this study. During this interventional study, clinical pharmacists monitored 861 patients' medical records and detected, reported, and prevented medication errors in the infectious disease ward of a major referral teaching hospital in Tehran, Iran. Error was defined as any preventable events that lead to inappropriate medication use related to the health care professionals or patients regardless of outcomes. Classification of the errors was done based on Pharmaceutical Care Network Europe Foundation drug-related problem coding. During the study period, 112 medication errors (0.13 errors per patient) were detected by clinical pharmacists. Physicians, nurses, and patients were responsible for 55 (49.1%), 54 (48.2%), and 3 (2.7%) of medication errors, respectively. Drug dosing, choice, use and interactions were the most causes of error in medication processes, respectively. All of these errors were detected, reported, and prevented by infectious diseases ward clinical pharmacists. Medication errors occur frequently in medical wards. Clinical pharmacists' interventions can effectively prevent these errors. The types of errors indicate the need for continuous education and implementation of clinical pharmacist's interventions.

  1. Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on the intersatellite baseline measurement using dual-frequency GPS, errors induced by InSAR spatial baseline measurement are studied in detail. The classification and characteristics of the errors are analyzed, and error models are set up. Simulations of single factors and of total error sources are selected to evaluate the impacts of errors on spatial baseline measurement. Single factor simulations are used to analyze the impact of an error of a single type, while total error source simulations are used to analyze the impacts of error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for the spatial baseline determination, and carrier phase noise of the GPS observations and the fixing error of the GPS receiver antenna are the main factors in errors related to GPS measurement. In addition, according to the error values listed in this paper, 1 mm level InSAR spatial baseline determination should be achievable.

  2. The development of the TNM classification of gastric cancer.

    Science.gov (United States)

    Wittekind, Christian

    2015-08-01

    The first tumor, node, metastasis (TNM) classification for stomach tumors was published in the second edition of the TNM classification of malignant tumors in 1974 and was followed by additional editions up to the seventh edition published in 2010. At the Buffalo Meeting in 2008 a harmonization between the Eastern (Japanese) and Western stomach tumor classifications was achieved, with only minor remaining differences. The present TNM classification of stomach tumors has been criticized, but it can be considered generally accepted worldwide. For generating data based on this new TNM classification it is important to use TNM and pTNM correctly. Decisions on therapy and the estimation of prognosis are based on TNM. New molecular factor studies will be correlated with and based on the results of the TNM classification. © 2015 Japanese Society of Pathology and Wiley Publishing Asia Pty Ltd.

  3. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit training samples of each class and all the training samples to represent the testing sample, respectively, and subsequently conduct classification on the basis of the representation residual. LRC method can be viewed as a “locality representation” method because it just uses the training samples of each class to represent the testing sample and it cannot embody the effectiveness of the “globality representation.” On the contrary, it seems that CRC method cannot own the benefit of locality of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
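
    A short, hedged sketch of the "locality" branch described above, linear regression classification (LRC), is given below: the test sample is regressed on the training samples of each class separately and assigned to the class with the smallest reconstruction residual. The data are random placeholders, not face images.

```python
import numpy as np

def lrc_predict(X_train, y_train, x_test):
    """X_train: (n, d) rows are training samples, y_train: class labels, x_test: (d,)."""
    best_class, best_residual = None, np.inf
    for c in np.unique(y_train):
        Xc = X_train[y_train == c].T                 # (d, n_c): class-specific dictionary
        beta, *_ = np.linalg.lstsq(Xc, x_test, rcond=None)
        residual = np.linalg.norm(x_test - Xc @ beta)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

rng = np.random.default_rng(5)
d, n_per_class = 40, 8                               # feature dimension larger than class size
mean0, mean1 = np.zeros(d), np.full(d, 2.0)
X_train = np.vstack([rng.normal(mean0, 1.0, (n_per_class, d)),
                     rng.normal(mean1, 1.0, (n_per_class, d))])
y_train = np.repeat([0, 1], n_per_class)
x_test = rng.normal(mean1, 1.0)
print("predicted class:", lrc_predict(X_train, y_train, x_test))
```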

  4. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  5. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  6. Inborn errors of metabolism: a clinical overview

    Directory of Open Access Journals (Sweden)

    Ana Maria Martins

    1999-11-01

    Full Text Available CONTEXT: Inborn errors of metabolism cause hereditary metabolic diseases (HMD) and classically they result from the lack of activity of one or more specific enzymes or defects in the transportation of proteins. OBJECTIVES: A clinical review of inborn errors of metabolism (IEM) to give a practical approach to the physician with figures and tables to help in understanding the more common groups of these disorders. DATA SOURCE: A systematic review of the clinical and biochemical basis of IEM in the literature, especially considering the last ten years and a classic textbook (Scriver CR et al, 1995). SELECTION OF STUDIES: A selection of 108 references about IEM by experts in the subject was made. Clinical cases are presented with the peculiar symptoms of various diseases. DATA SYNTHESIS: IEM are frequently misdiagnosed because the general practitioner, or pediatrician in the neonatal or intensive care units, does not think about this diagnosis until the more common causes have been ruled out. This review includes inheritance patterns and clinical and laboratory findings of the more common IEM diseases within a clinical classification that gives a general idea about these disorders. A summary of treatment types for metabolic inherited diseases is given. CONCLUSIONS: IEM are not rare diseases, unlike previous thinking about them, and IEM patients form part of the clientele in emergency rooms at general hospitals and in intensive care units. They are also to be found in neurological, pediatric, obstetrics, surgical and psychiatric clinics seeking diagnoses, prognoses and therapeutic or supportive treatment.

  7. Discriminant analysis with errors in variables

    CERN Document Server

    Loustau, Sébastien

    2012-01-01

    The effect of measurement error in discriminant analysis is investigated. Given observations $Z = X + \epsilon$, where $\epsilon$ denotes a random noise, the goal is to predict the density of $X$ among two possible candidates $f$ and $g$. We suppose that we have at our disposal two learning samples. The aim is to approach the best possible decision rule $G^*$ defined as a minimizer of the Bayes risk. In the free-noise case $(\epsilon=0)$, minimax fast rates of convergence are well-known under the margin assumption in discriminant analysis (see \cite{mammen}) or in the more general classification framework (see \cite{tsybakov2004,AT}). In this paper we intend to establish similar results in the noisy case, i.e. when dealing with errors in variables. In particular, we discuss two possible complexity assumptions that can be set on the problem, which may alternatively concern the regularity of $f-g$ or the boundary of $G^*$. We prove minimax lower bounds for both of these problems and explain how these rates can be atta...

  8. [Classifications in forensic medicine and their logical basis].

    Science.gov (United States)

    Kovalev, A V; Shmarov, L A; Ten'kov, A A

    2014-01-01

    The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.

  9. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  10. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  11. CLASSIFICATIONS OF EEG SIGNALS FOR MENTAL TASKS USING ADAPTIVE RBF NETWORK

    Institute of Scientific and Technical Information of China (English)

    薛建中; 郑崇勋; 闫相国

    2004-01-01

    Objective This paper presents classification of mental tasks based on EEG signals using an adaptive Radial Basis Function (RBF) network with optimal centers and widths for Brain-Computer Interface (BCI) schemes. Methods Initial centers and widths of the network are selected by a cluster estimation method based on the distribution of the training set. Using a conjugate gradient descent method, they are optimized during the training phase according to a regularized error function that considers the influence of their changes on the output values. Results The optimization process improves the performance of the RBF network, and its best recognition rate over three task pairs and four subjects reaches 87.0%. Moreover, the network runs fast because it has fewer hidden-layer neurons. Conclusion The adaptive RBF network with optimal centers and widths has a high recognition rate and runs fast. It may be a promising classifier for on-line BCI schemes.
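
    A minimal sketch of such an RBF-network classifier, for illustration only: centers are initialized by k-means, widths from the cluster spread, and output weights by a least-squares fit. The conjugate-gradient refinement of centers and widths described in the record is not reproduced; data and class labels are placeholders, and NumPy/scikit-learn are assumed.

        # Illustrative RBF-network classifier (not the authors' implementation).
        import numpy as np
        from sklearn.cluster import KMeans

        class SimpleRBFNet:
            def __init__(self, n_centers=10):
                self.n_centers = n_centers

            def _design(self, X):
                # Gaussian activation for every (sample, center) pair.
                d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
                return np.exp(-d2 / (2.0 * self.widths_ ** 2))

            def fit(self, X, y):
                km = KMeans(n_clusters=self.n_centers, n_init=10, random_state=0).fit(X)
                self.centers_ = km.cluster_centers_
                # Width of each center = mean distance of its members to the center.
                self.widths_ = np.array([
                    np.linalg.norm(X[km.labels_ == k] - c, axis=1).mean() + 1e-8
                    for k, c in enumerate(self.centers_)])
                self.classes_, y_idx = np.unique(y, return_inverse=True)
                targets = np.eye(len(self.classes_))[y_idx]            # one-hot targets
                self.W_, *_ = np.linalg.lstsq(self._design(X), targets, rcond=None)
                return self

            def predict(self, X):
                return self.classes_[(self._design(X) @ self.W_).argmax(axis=1)]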

  12. Compensation of motion error in a high accuracy AFM

    Science.gov (United States)

    Cui, Yuguo; Arai, Yoshikazu; He, Gaofa; Asai, Takemi; Gao, Wei

    2008-10-01

    An atomic force microscope (AFM) system composed of an air slide, an air spindle and a probe unit is used for large-area measurement with a spiral scanning strategy. The motion error introduced by the air slide and the air spindle increases with the measurement area, and the measurement accuracy therefore decreases. In order to achieve high-speed, high-accuracy measurement, the probe scans along the X-direction in constant-height mode driven by the air slide and, at the same time, is moved along the Z-direction by a piezoactuator according to the behaviour of the motion error. Using this error compensation method, a profile measurement experiment on a micro-structured surface has been carried out. The experimental result shows that this method is effective for eliminating the motion error and can achieve high-speed, high-precision measurement of micro-structured surfaces.

  13. A Two Step Data Mining Approach for Amharic Text Classification

    Directory of Open Access Journals (Sweden)

    Seffi Gebeyehu

    2016-08-01

    Full Text Available Traditionally, text classifiers are built from labeled training examples (supervised learning). Labeling is usually done manually by human experts (or the users), which is a labor-intensive and time-consuming process. In the past few years, researchers have investigated various forms of semi-supervised learning to reduce the burden of manual labeling. This paper aims to show that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. The paper implements an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and two classifiers: Naive Bayes (NB) and locally weighted learning (LWL). NB first trains a classifier using the available labeled documents and probabilistically labels the unlabeled documents, while LWL uses a class of function approximation to build a model around the current point of interest. An experiment conducted on a mixture of labeled and unlabeled Amharic text documents showed that the new method achieved a significant performance improvement in comparison with supervised LWL and NB. The result also pointed out that the use of unlabeled data with EM reduces the absolute classification error by 27.6%. In general, since unlabeled documents are much less expensive and easier to collect than labeled documents, this method will be useful for text categorization tasks including online data sources such as web pages, e-mails and newsgroup postings. If one uses this method, building text categorization systems will be significantly faster and less expensive than with the supervised learning approach.
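
    A rough self-training (hard-EM) sketch of the NB-plus-unlabeled-data idea, assuming scikit-learn and SciPy; the locally weighted learning component and the soft (probabilistic) E-step of the record are not reproduced, and the document lists are placeholders.

        # E-step: pseudo-label the unlabeled pool; M-step: refit NB on both pools.
        import numpy as np
        from scipy.sparse import vstack
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        def nb_em_selftrain(labeled_docs, labels, unlabeled_docs, n_iter=5):
            vec = CountVectorizer()
            X_l = vec.fit_transform(labeled_docs)
            X_u = vec.transform(unlabeled_docs)
            labels = np.asarray(labels)
            clf = MultinomialNB().fit(X_l, labels)
            for _ in range(n_iter):
                pseudo = clf.predict(X_u)                      # E-step (hard labels)
                X_all = vstack([X_l, X_u])
                y_all = np.concatenate([labels, pseudo])
                clf = MultinomialNB().fit(X_all, y_all)        # M-step
            return vec, clf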

  14. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. should be provided with much more labeled data.
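
    An illustrative per-class Gaussian-mixture classifier in the spirit of the record above, assuming scikit-learn; the semi-supervised refinement with unlabeled samples is omitted here, and all data are placeholders.

        # One GMM per class; classify by maximum class-conditional log-likelihood
        # plus log prior.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        class PerClassGMM:
            def __init__(self, n_components=3):
                self.n_components = n_components

            def fit(self, X, y):
                self.classes_ = np.unique(y)
                self.priors_ = np.array([(y == c).mean() for c in self.classes_])
                self.gmms_ = [GaussianMixture(n_components=self.n_components,
                                              random_state=0).fit(X[y == c])
                              for c in self.classes_]
                return self

            def predict(self, X):
                scores = np.column_stack([g.score_samples(X) + np.log(p)
                                          for g, p in zip(self.gmms_, self.priors_)])
                return self.classes_[scores.argmax(axis=1)]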

  15. Using History to Teach Scientific Method: The Role of Errors

    Science.gov (United States)

    Giunta, Carmen J.

    2001-05-01

    Including tales of error along with tales of discovery is desirable in any use of history of science to teach about science. Tales of error, particularly when they involve justly well-regarded historical figures, serve to avoid two pitfalls to which use of historical material in science teaching is otherwise susceptible. Acknowledging the false steps of great scientists avoids putting those scientists on a pedestal and illustrates that there is no automatic or mechanical scientific method. This paper lists five kinds of error with examples of each from the development of chemistry in the 18th and 19th centuries: erroneous theories (such as phlogiston), seeing a new phenomenon everywhere one seeks it (e.g., Lavoisier and the decomposition of water), theories erroneous in detail but nonetheless fruitful (e.g., Dalton's atomic theory), rejection of correct theories (e.g., Avogadro's hypothesis), and incoherent insights (e.g., J. A. R. Newlands' classification of the elements).

  16. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
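
    For illustration, a sketch of the classic convex (0.632) bootstrap error estimator for LDA, assuming scikit-learn and NumPy; the exact, sample-size-dependent weight derived in the paper is replaced here by the fixed 0.632 value.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def convex_bootstrap_error(X, y, n_boot=100, weight=0.632, seed=0):
            rng = np.random.default_rng(seed)
            n = len(y)
            resub = (LinearDiscriminantAnalysis().fit(X, y).predict(X) != y).mean()
            zero_errors = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)                  # bootstrap sample
                oob = np.setdiff1d(np.arange(n), idx)        # left-out points
                if oob.size == 0 or len(np.unique(y[idx])) < 2:
                    continue
                clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
                zero_errors.append((clf.predict(X[oob]) != y[oob]).mean())
            zero = float(np.mean(zero_errors))               # basic (zero) bootstrap error
            return (1 - weight) * resub + weight * zero      # convex combination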

  17. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order type buildings). A systemic model of error causation is proposed and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  18. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process, and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst case or RMS error on the work piece. This procedure has limited ability to differentiate between low spatial frequency form errors versus high frequency surface finish errors. Therefore the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part. If the machine

  19. Introduction to error correcting codes in quantum computers

    CERN Document Server

    Salas, P J

    2006-01-01

    The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, implementing gates and quantum error correction will be considered error free. Finally we will relax this unrealistic assumption, introducing the quantum fault-tolerant concept. The existence of an error threshold permits us to conclude that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: < 5.2 × 10⁻⁵. The encoding is made by means of the [[7,1,3

  20. Reflection error correction of gas turbine blade temperature

    Science.gov (United States)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.

  1. Finite Block-Length Achievable Rates for Queuing Timing Channels

    OpenAIRE

    2011-01-01

    The exponential server timing channel is known to be the simplest, and in some sense canonical, queuing timing channel. The capacity of this infinite-memory channel is known. Here, we discuss practical finite-length restrictions on the codewords and attempt to understand the amount of maximal rate that can be achieved for a target error probability. By using Markov chain analysis, we prove a lower bound on the maximal channel coding rate achievable at blocklength $n$ and error probability $...

  2. Comparing Science Achievement Constructs: Targeted and Achieved

    Science.gov (United States)

    Ferrara, Steve; Duncan, Teresa

    2011-01-01

    This article illustrates how test specifications based solely on academic content standards, without attention to other cognitive skills and item response demands, can fall short of their targeted constructs. First, the authors inductively describe the science achievement construct represented by a statewide sixth-grade science proficiency test.…

  3. Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring

    Science.gov (United States)

    Bello, Juan Pablo; Farnsworth, Andrew; Robbins, Matt; Keen, Sara; Klinck, Holger; Kelling, Steve

    2016-01-01

    Automatic classification of animal vocalizations has great potential to enhance the monitoring of species movements and behaviors. This is particularly true for monitoring nocturnal bird migration, where automated classification of migrants’ flight calls could yield new biological insights and conservation applications for birds that vocalize during migration. In this paper we investigate the automatic classification of bird species from flight calls, and in particular the relationship between two different problem formulations commonly found in the literature: classifying a short clip containing one of a fixed set of known species (N-class problem) and the continuous monitoring problem, the latter of which is relevant to migration monitoring. We implemented a state-of-the-art audio classification model based on unsupervised feature learning and evaluated it on three novel datasets, one for studying the N-class problem including over 5000 flight calls from 43 different species, and two realistic datasets for studying the monitoring scenario comprising hundreds of thousands of audio clips that were compiled by means of remote acoustic sensors deployed in the field during two migration seasons. We show that the model achieves high accuracy when classifying a clip to one of N known species, even for a large number of species. In contrast, the model does not perform as well in the continuous monitoring case. Through a detailed error analysis (that included full expert review of false positives and negatives) we show the model is confounded by varying background noise conditions and previously unseen vocalizations. We also show that the model needs to be parameterized and benchmarked differently for the continuous monitoring scenario. Finally, we show that despite the reduced performance, given the right conditions the model can still characterize the migration pattern of a specific species. The paper concludes with directions for future research. PMID:27880836

  4. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Directory of Open Access Journals (Sweden)

    Muhammad Faisal Siddiqui

    Full Text Available A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities

  5. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Science.gov (United States)

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the
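
    A hedged sketch of the DWT → PCA → kernel-SVM pipeline described in the two records above, assuming PyWavelets and scikit-learn; an ordinary RBF-kernel SVC stands in for LS-SVM, and the image arrays and labels are placeholders.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def dwt_features(img, wavelet="haar", level=2):
            # Keep the low-frequency approximation coefficients as the feature vector.
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            return coeffs[0].ravel()

        def build_model(images, labels, n_components=20):
            X = np.array([dwt_features(im) for im in images])
            model = make_pipeline(StandardScaler(),
                                  PCA(n_components=n_components),
                                  SVC(kernel="rbf", C=10.0, gamma="scale"))
            return model.fit(X, labels)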

  6. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... process in March 2012 (77 FR 5379). When verified by a futures classification, Smith-Doxey data serves as... Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed... for the addition of an optional cotton futures classification procedure--identified and known...

  7. Learning Apache Mahout classification

    CERN Document Server

    Gupta, Ashish

    2015-01-01

    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  8. Update on diabetes classification.

    Science.gov (United States)

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all classification schemes of diabetes mellitus continue to fall short of this goal.

  9. [Classification of cardiomyopathy].

    Science.gov (United States)

    Asakura, Masanori; Kitakaze, Masafumi

    2014-01-01

    Cardiomyopathy is a group of cardiovascular diseases with poor prognosis. Some patients with dilated cardiomyopathy need heart transplantations due to severe heart failure. Some patients with hypertrophic cardiomyopathy die unexpectedly due to malignant ventricular arrhythmias. Various phenotypes of cardiomyopathies are due to the heterogeneous group of diseases. The classification of cardiomyopathies is important and indispensable in the clinical situation. However, their classification has not been established, because the causes of cardiomyopathies have not been fully elucidated. We usually use definition and classification offered by WHO/ISFC task force in 1995. Recently, several new definitions and classifications of the cardiomyopathies have been published by American Heart Association, European Society of Cardiology and Japanese Circulation Society.

  10. Carbohydrate terminology and classification

    National Research Council Canada - National Science Library

    Cummings, J H; Stephen, A M

    2007-01-01

    ...) and polysaccharides (DP ≥ 10). Within this classification, a number of terms are used such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc...

  11. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    Full Text Available The ideal state of continuous one-piece flow may never be achieved. Still the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified resulting in a process-based framework for inventory classification that can be used for flow design.

  12. Text Classification Using Sentential Frequent Itemsets

    Institute of Scientific and Technical Information of China (English)

    Shi-Zhu Liu; He-Ping Hu

    2007-01-01

    Text classification techniques mostly rely on single-term analysis of the document data set, while many concepts, especially specific ones, are usually conveyed by sets of terms. To achieve a more accurate text classifier, more informative features, including frequently co-occurring words in the same sentence and their weights, are particularly important in such scenarios. In this paper, we propose a novel approach using sentential frequent itemsets, a concept that comes from association rule mining, for text classification; it views a sentence rather than a document as a transaction and uses a variable-precision rough-set-based method to evaluate each sentential frequent itemset's contribution to the classification. Experiments over the Reuters and newsgroup corpora are carried out, which validate the practicability of the proposed system.

  13. Pattern Classification of Signals Using Fisher Kernels

    Directory of Open Access Journals (Sweden)

    Yashodhan Athavale

    2012-01-01

    Full Text Available The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research work has indicated that Fisher kernels have shown substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we will exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we have collected financial time-series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher Kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into feature space, which is created as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher Kernel usage fits really well for both kinds of signals, with low classification error rates.
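
    A simplified Fisher-vector sketch along these lines, assuming scikit-learn and NumPy: a diagonal-covariance GMM is fitted to local signal descriptors, each signal is encoded by the gradient of its average log-likelihood with respect to the GMM means, and a linear SVM separates the classes. This follows the general Fisher-kernel recipe rather than the authors' exact implementation, and all data are placeholders.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        def fisher_vector(frames, gmm):
            # frames: (T, D) local descriptors from one signal; gmm must use
            # covariance_type="diag".
            resp = gmm.predict_proba(frames)                      # (T, K) posteriors
            diff = frames[:, None, :] - gmm.means_[None, :, :]    # (T, K, D)
            diff /= np.sqrt(gmm.covariances_)[None, :, :]         # whiten per component
            fv = (resp[:, :, None] * diff).mean(axis=0)           # gradient w.r.t. means
            fv /= np.sqrt(gmm.weights_)[:, None]
            return fv.ravel()

        # Usage sketch:
        #   gmm = GaussianMixture(n_components=5, covariance_type="diag").fit(all_frames)
        #   X = np.array([fisher_vector(f, gmm) for f in per_signal_frames])
        #   clf = SVC(kernel="linear").fit(X, labels)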

  14. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors; a clear knowledge of the causes of these errors will help students learn English better.

  15. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, this article explores the effect of errors and error analysis on SLA, and offers some advice to language teachers and language learners.

  16. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
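
    A toy sketch of an expected-classification table under a simple normal measurement-error model (the record works with polytomous IRT instead), assuming NumPy and SciPy; the cut scores and true-score values are placeholders. For each examinee's true score, the probability mass of the observed score is spread over the reporting categories and accumulated into a true-by-observed table.

        import numpy as np
        from scipy.stats import norm

        def expected_classification_table(true_scores, sem, cuts):
            # cuts: interior cut scores defining the reporting categories.
            cuts = np.asarray(cuts, dtype=float)
            edges = np.concatenate([[-np.inf], cuts, [np.inf]])
            n_cat = len(edges) - 1
            table = np.zeros((n_cat, n_cat))
            for theta in true_scores:
                true_cat = int(np.searchsorted(cuts, theta, side="right"))
                # Probability that the observed score lands in each category.
                p = (norm.cdf(edges[1:], loc=theta, scale=sem)
                     - norm.cdf(edges[:-1], loc=theta, scale=sem))
                table[true_cat] += p
            return table / len(true_scores)   # rows: true category, columns: observed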

  17. Completion of the classification

    CERN Document Server

    Strade, Helmut

    2012-01-01

    This is the last of three volumes on "Simple Lie Algebras over Fields of Positive Characteristic" by Helmut Strade, presenting the state of the art of the structure and classification of Lie algebras over fields of positive characteristic. In this monograph the proof of the Classification Theorem presented in the first volume is concluded. It collects all the important results on the topic, which so far can be found only in scattered scientific literature.

  18. Twitter content classification

    OpenAIRE

    2010-01-01

    This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial and phatic communications, with a split-level classification scheme that offers broad categorization and specific sub-categories for deeper insight into the real-world application of the service.

  19. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  20. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for approximation of a stochastic integral with respect to continuous semimartingale by Riemann sum with general stochastic partition is studied. Effective discretization schemes of which asymptotic conditional mean-squared error attains a lower bound are constructed. Two applications are given; efficient delta hedging strategies with transaction costs and effective discretization schemes for the Euler-Maruyama approximation are constructed.

  1. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  2. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 year age group (67.6%), in less-experienced personnel (58.7%), at the MSc educational level (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  3. Determination of diametral error using finite elements and experimental method

    Directory of Open Access Journals (Sweden)

    A. Karabulut

    2010-01-01

    Full Text Available This study concerns experimental and numerical analysis of a workpiece bound on one side on a lathe. The cutting force deflects the workpiece during the turning process. The deflection is estimated without contact using a Laser Distance Sensor (LDS). Diametral values are also measured from different sides of the workpiece after each turning operation. It is observed that the diametral error varies with the amount of deflection: the diametral error reached a peak where the deflection reached a peak. The finite element model is verified by the experimental results, and the factors causing the diametral error are determined.

  4. Error resilient image transmission based on virtual SPIHT

    Science.gov (United States)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors; a single bit error could potentially lead to decoder derailment. In this paper, we integrate new error resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across sub-bands is preserved, a high source coding efficiency can be achieved. The scheme is essentially a tree-based coding, thus error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise we increase the number of trees to guarantee error resilience to the channel. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of set partition trees is introduced: the decoding of any sub-tree halts if a violation of the self-constraint relationship occurs in that tree. So the bits impacted by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme can achieve much greater benefits in error resilience.

  5. Classification of mistakes in patient care in a Nigerian hospital.

    Science.gov (United States)

    Iyayi, Festus

    2009-12-01

    Recent discussions on improving health outcomes in the hospital setting have emphasized the importance of the classification of mistakes in health care institutions. These discussions suggest that the existence of a shared classificatory scheme among members of the health team ensures that errors in patient care are recognised as significant events that require systematic action, as opposed to defensive, one-dimensional behaviours within the health institution. In Nigeria, discussions of errors in patient care are rare in the literature; discussions of the classification of errors in patient care are rarer still. This study represents a first attempt to deal with this significant problem and examines whether and how mistakes in patient care are classified across five professional health groups in one of Nigeria's largest tertiary health care institutions. The study shows that there are wide variations within and between professional health groups in the classification of errors in patient care. The implications of the absence of a classificatory scheme for errors in patient care for service improvement and organisational learning in the hospital environment are discussed.

  6. Real-Time Minimization of Tracking Error for Aircraft Systems

    Science.gov (United States)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

    This technology presents a novel, stable, discrete-time adaptive law for flight control in a Direct adaptive control (DAC) framework. Where errors are not present, the original control design has been tuned for optimal performance. Adaptive control works towards achieving nominal performance whenever the design has modeling uncertainties/errors or when the vehicle suffers substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.

  7. Segmentation and Classification of Bone Marrow Cells Images Using Contextual Information for Medical Diagnosis of Acute Leukemias.

    Science.gov (United States)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A; Diaz-Hernandez, Raquel; Peregrina, Hayde; Olmos, Ivan; Alonso, Jose E; Lobato, Ruben

    2015-01-01

    Morphological identification of acute leukemia is a powerful tool used by hematologists to determine the family of such a disease. In some cases, experienced physicians are even able to determine the leukemia subtype of the sample. However, the identification process may have error rates up to 40% (when classifying acute leukemia subtypes) depending on the physician's experience and the sample quality. This problem raises the need to create automatic tools that provide hematologists with a second opinion during the classification process. Our research presents a contextual analysis methodology for the detection of acute leukemia subtypes from bone marrow cells images. We propose a cells separation algorithm to break up overlapped regions. In this phase, we achieved an average accuracy of 95% in the evaluation of the segmentation process. In a second phase, we extract descriptive features to the nucleus and cytoplasm obtained in the segmentation phase in order to classify leukemia families and subtypes. We finally created a decision algorithm that provides an automatic diagnosis for a patient. In our experiments, we achieved an overall accuracy of 92% in the supervised classification of acute leukemia families, 84% for the lymphoblastic subtypes, and 92% for the myeloblastic subtypes. Finally, we achieved accuracies of 95% in the diagnosis of leukemia families and 90% in the diagnosis of leukemia subtypes.

  8. Segmentation and Classification of Bone Marrow Cells Images Using Contextual Information for Medical Diagnosis of Acute Leukemias

    Science.gov (United States)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Diaz-Hernandez, Raquel; Peregrina, Hayde; Olmos, Ivan; Alonso, Jose E.; Lobato, Ruben

    2015-01-01

    Morphological identification of acute leukemia is a powerful tool used by hematologists to determine the family of such a disease. In some cases, experienced physicians are even able to determine the leukemia subtype of the sample. However, the identification process may have error rates up to 40% (when classifying acute leukemia subtypes) depending on the physician’s experience and the sample quality. This problem raises the need to create automatic tools that provide hematologists with a second opinion during the classification process. Our research presents a contextual analysis methodology for the detection of acute leukemia subtypes from bone marrow cells images. We propose a cells separation algorithm to break up overlapped regions. In this phase, we achieved an average accuracy of 95% in the evaluation of the segmentation process. In a second phase, we extract descriptive features to the nucleus and cytoplasm obtained in the segmentation phase in order to classify leukemia families and subtypes. We finally created a decision algorithm that provides an automatic diagnosis for a patient. In our experiments, we achieved an overall accuracy of 92% in the supervised classification of acute leukemia families, 84% for the lymphoblastic subtypes, and 92% for the myeloblastic subtypes. Finally, we achieved accuracies of 95% in the diagnosis of leukemia families and 90% in the diagnosis of leukemia subtypes. PMID:26107374

  9. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    Errors from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU), which form a complex navigation system with a multitude of error sources, were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
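
    Error budgets of this kind typically roll up independent 1-sigma error contributions per axis by root-sum-square; a minimal sketch with placeholder numbers rather than the STS-1 values:

        import math

        def rss_budget(error_sources_arcsec):
            # Combine independent 1-sigma error contributions (arc seconds) per axis.
            return math.sqrt(sum(e * e for e in error_sources_arcsec))

        # Example with made-up contributions:
        #   rss_budget([40.0, 35.0, 50.0])   # -> combined 1-sigma alignment error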

  10. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.

  11. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which merror tail is guaranteed for catalytic coupling strengths (K) of order of a. We find that the hypercycle becomes more stable than the chains only for K of order of a2. Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n<=4.

  12. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts: one generates the error and the other propagates input errors to the output. This paper suggests replacing the error-generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.

  13. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted across a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported between 2009 and 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety.

  14. Improving patient safety in radiotherapy by learning from near misses, incidents and errors.

    Science.gov (United States)

    Williams, M V

    2007-05-01

    Radiotherapy incidents involving a major overdose such as that which affected a patient in Glasgow in 2006 are rare. The publicity surrounding this patient's treatment and the subsequent publication of the enquiry by the Scottish Executive have led to a re-evaluation of procedures in many departments. However, other incidents and near misses that might also generate learning are often surrounded by obsessive secrecy. With the passage of time, even those incidents that have been subject to a public enquiry are lost from view. Indeed, the report on the incident in Glasgow draws attention to strong parallels with that in North Staffordshire, the report of which is not freely available despite being in the public domain. A web-based system to archive and make available previously published reports should be relatively simple to establish. A greater challenge is to achieve open reporting of near misses, incidents and errors. The key elements would be the effective use of keywords, a system of classification and a searchable anonymized database with free access. There should be a well designed system for analysis, response and feedback. This would ensure the dissemination of learning. The development of a more open culture for reports under the Ionising Radiation (Medical Exposure) Regulations (IR(ME)R) is essential: at the very least, their main findings and recommendations should be routinely published. These changes should help us to achieve greater safety for our patients.

  15. Photometric Supernova Classification with Machine Learning

    Science.gov (United States)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
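
    An illustrative stand-in for the "features plus boosted decision trees" stage of such a pipeline, assuming scikit-learn: a gradient-boosted tree classifier trained on a precomputed feature matrix (e.g. SALT2 fit parameters or wavelet coefficients) and scored by the AUC metric used above. The feature and label arrays are placeholders.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        def bdt_auc(features, is_type_ia, seed=0):
            X_tr, X_te, y_tr, y_te = train_test_split(
                features, is_type_ia, test_size=0.3, random_state=seed,
                stratify=is_type_ia)
            clf = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
            scores = clf.predict_proba(X_te)[:, 1]   # probability of the Ia class
            return roc_auc_score(y_te, scores)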

  16. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity

    OpenAIRE

    Hussain, Shaista; Basu, Arindam

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  17. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses using Structural Plasticity

    OpenAIRE

    Shaista eHussain; Arindam eBasu

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  18. Classification of Scenes into Indoor/Outdoor

    Directory of Open Access Journals (Sweden)

    R. Raja

    2014-12-01

    Full Text Available An effective model for scene classification is essential to access desired images from large-scale databases. This study presents an efficient scene classification approach that integrates low-level features to reduce the semantic gap between the visual features and the richness of human perception. The objective of the study is to categorize an image into an indoor or outdoor scene using relevant low-level features such as color and texture. The color feature from the HSV color model, the texture feature through the GLCM and the entropy computed from the UV color space form the feature vector. To support automatic scene classification, a Support Vector Machine (SVM) is applied to the low-level features for categorizing a scene as indoor/outdoor. Since the combination of these image features exhibits a distinctive disparity between images containing indoor or outdoor scenes, the proposed method achieves better performance, with a classification accuracy of about 92.44%. The proposed method has been evaluated on IITM-SCID2 (Scene Classification Image Database) and a dataset of 3442 images collected from the web.
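
    A feature-extraction sketch along the lines described above (HSV colour statistics plus GLCM texture and an entropy term), assuming scikit-image and scikit-learn as stand-ins for the authors' implementation; the entropy here is taken from the grayscale image rather than the UV space, and all data are placeholders.

        import numpy as np
        from skimage.color import rgb2gray, rgb2hsv
        from skimage.feature import graycomatrix, graycoprops
        from skimage.measure import shannon_entropy
        from sklearn.svm import SVC

        def scene_features(rgb):
            hsv = rgb2hsv(rgb)
            colour = np.concatenate([np.histogram(hsv[..., k], bins=8, range=(0, 1),
                                                  density=True)[0] for k in range(3)])
            gray = (rgb2gray(rgb) * 255).astype(np.uint8)
            glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            texture = np.concatenate([graycoprops(glcm, p).ravel()
                                      for p in ("contrast", "homogeneity", "energy")])
            return np.concatenate([colour, texture, [shannon_entropy(gray)]])

        # Usage sketch:
        #   X = np.array([scene_features(im) for im in images])
        #   clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)   # indoor/outdoor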

  19. Data selection in EEG signals classification.

    Science.gov (United States)

    Wang, Shuaifang; Li, Yan; Wen, Peng; Lai, David

    2016-03-01

    Alcoholism can be detected by analyzing electroencephalogram (EEG) signals. However, analyzing multi-channel EEG signals is a challenging task, which often requires complicated calculations and long execution times. This paper proposes three data selection methods to extract representative data from the EEG signals of alcoholics: principal component analysis based on graph entropy (PCA-GE), channel selection based on the graph entropy (GE) difference, and mathematical-combination channel selection. For comparison purposes, the data selected by the three methods are then classified by three classifiers: the J48 decision tree, the K-nearest neighbor and Kstar. The experimental results show that the proposed methods are successful in selecting data without compromising the classification accuracy in discriminating the EEG signals of alcoholics and non-alcoholics. Among them, the proposed PCA-GE method uses only 29.69% of the whole data and 29.5% of the computation time but achieves a 94.5% classification accuracy. The channel selection method based on the GE difference also reaches a 91.67% classification accuracy while using only 29.69% of the full size of the original data. Using as little data as possible without sacrificing the final classification accuracy is useful for online EEG analysis and classification application design.
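
    A rough channel-selection sketch in the spirit of the PCA-based method above, assuming scikit-learn: channels are ranked by their contribution to the leading principal components and only the top fraction is kept. The graph-entropy weighting of the paper is not reproduced, and the data matrix is a placeholder.

        import numpy as np
        from sklearn.decomposition import PCA

        def select_channels(X, keep_fraction=0.3, n_components=5):
            # X: (n_samples, n_channels) matrix of multi-channel EEG values.
            pca = PCA(n_components=n_components).fit(X)
            # Channel importance = squared loadings weighted by explained variance.
            importance = (pca.components_ ** 2
                          * pca.explained_variance_ratio_[:, None]).sum(axis=0)
            n_keep = max(1, int(round(keep_fraction * X.shape[1])))
            return np.argsort(importance)[::-1][:n_keep]   # indices of kept channels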

  20. LDA boost classification: boosting by topics

    Science.gov (United States)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional Vector Space Model can easily lead to the curse of dimensionality and feature sparsity problems, which seriously affect classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting methodology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved Naïve Bayes (NB) is designed as the weak classifier, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving the accuracy by integrating the weak classifiers into a strong classifier in a more rational way. Mutual Information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost performs categorization in a low-dimensional space and has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime consumption is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
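
    A minimal sketch of the "topics as features" idea, assuming scikit-learn: documents are mapped to LDA topic proportions and a boosted ensemble is trained on that low-dimensional representation. Here AdaBoost with its default tree stumps stands in for the paper's Naïve Bayes weak learners and Cute Integration weighting, and the documents and labels are placeholders.

        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline

        def lda_boost(docs, labels, n_topics=50):
            model = make_pipeline(
                CountVectorizer(stop_words="english"),            # bag of words
                LatentDirichletAllocation(n_components=n_topics,  # topic proportions
                                          random_state=0),
                AdaBoostClassifier(n_estimators=200, random_state=0))
            return model.fit(docs, labels)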

  1. Classification based polynomial image interpolation

    Science.gov (United States)

    Lenke, Sebastian; Schröder, Hartmut

    2008-02-01

    Due to the fast migration to high-resolution displays for home and office environments, there is a strong demand for high-quality picture scaling. This is caused on the one hand by large picture sizes and on the other hand by the enhanced visibility of picture artifacts on these displays [1]. There are many proposals for enhanced spatial interpolation adaptively matched to picture content such as edges. The drawback of these approaches is the normally integer and often limited interpolation factor. In order to achieve rational factors, there exist combinations of adaptive and non-adaptive linear filters, but due to the non-adaptive step the overall quality is notably limited. In this paper we present a content-adaptive polyphase interpolation method which uses "offline" trained filter coefficients and "online" linear filtering that depends on a simple classification of the input situation. Furthermore, we present a new approach to a content-adaptive interpolation polynomial, which allows arbitrary polyphase interpolation factors at runtime and further improves the overall interpolation quality. The main goal of our new approach is to optimize interpolation quality by adapting higher-order polynomials directly to the image content. In addition, we derive filter constraints for enhanced picture quality. Furthermore, we extend the classification-based filtering to the temporal dimension in order to use it for intermediate image interpolation.

  2. 3-PRS serial-parallel machine tool error calibration and parameter identification

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jun-wei; DAI Jun; HUANG Jun-jie

    2009-01-01

    The 3-PRS serial-parallel machine tool consists of a 3-degree-of-freedom (DOF) implementation platform and a 2-DOF X-Y platform. Error modeling and parameter identification methods were derived for this machine tool. The 3-PRS serial-parallel machine tool was studied, covering error analysis, error modeling, identification of the error parameters, and the measurement equipment used to measure the mechanism error. In order to achieve geometric parameter calibration and error compensation of the serial-parallel machine tool, the nominal structural parameters in the controller were adjusted by identifying the actual structure of the machine tool. With the establishment of a vector-space dimension chain, error analysis, error modeling, error measurement and error compensation can be carried out.

  3. Multilayer perceptron, fuzzy sets, and classification

    Science.gov (United States)

    Pal, Sankar K.; Mitra, Sushmita

    1992-01-01

    A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.
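
    A minimal sketch of the input/output convention described above, assuming scikit-learn: each raw feature is replaced by membership values for the linguistic terms low/medium/high (a simple triangular membership is used here instead of the paper's membership functions), a standard MLP is trained on these memberships, and the predicted class probabilities play the role of fuzzy class memberships. The iris data and all parameters are illustrative.

        import numpy as np
        from sklearn.datasets import load_iris
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        def make_fuzzifier(X_train):
            """Return a function mapping each feature to low/medium/high memberships."""
            lo, hi = X_train.min(axis=0), X_train.max(axis=0)
            centres = np.stack([lo, (lo + hi) / 2, hi])        # one centre per linguistic term
            width = (hi - lo) / 2 + 1e-9
            def fuzzify(X):
                return np.hstack([np.clip(1 - np.abs(X - c) / width, 0, 1) for c in centres])
            return fuzzify

        X, y = load_iris(return_X_y=True)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        fuzzify = make_fuzzifier(Xtr)
        mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
        mlp.fit(fuzzify(Xtr), ytr)
        print("accuracy:", mlp.score(fuzzify(Xte), yte))
        print("fuzzy class memberships of the first test sample:",
              mlp.predict_proba(fuzzify(Xte))[0])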

  4. Assessing Measures of Order Flow Toxicity via Perfect Trade Classification

    DEFF Research Database (Denmark)

    Andersen, Torben G.; Bondarenko, Oleg

    . The VPIN metric involves decomposing volume into active buys and sells. We use the best-bid-offer (BBO) files from the CME Group to construct (near) perfect trade classification measures for the E-mini S&P 500 futures contract. We investigate the accuracy of the ELO Bulk Volume Classification (BVC) scheme...... and find it inferior to a standard tick rule based on individual transactions. Moreover, when VPIN is constructed from accurate classification, it behaves in a diametrically opposite way to BVC-VPIN. We also find the latter to have forecast power for short-term volatility solely because it generates...... systematic classification errors that are correlated with trading volume and return volatility. When controlling for trading intensity and volatility, the BVC-VPIN measure has no incremental predictive power for future volatility. We conclude that VPIN is not suitable for measuring order flow imbalances....
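
    For reference, the "standard tick rule based on individual transactions" mentioned in the record can be sketched in a few lines of Python: each trade is signed by comparing its price with the previous price, and zero-tick trades inherit the previous sign (which makes the comparison effectively against the last different price). Only the tick rule is shown; VPIN and the BVC scheme are not reproduced here.

        import numpy as np

        def tick_rule(prices):
            """Classify each trade as +1 (buy) or -1 (sell) from transaction prices alone."""
            signs = np.ones(len(prices), dtype=int)            # first trade defaults to +1
            for i in range(1, len(prices)):
                if prices[i] > prices[i - 1]:
                    signs[i] = 1
                elif prices[i] < prices[i - 1]:
                    signs[i] = -1
                else:
                    signs[i] = signs[i - 1]                     # zero tick: carry the last sign
            return signs

        prices = np.array([100.00, 100.25, 100.25, 100.00, 99.75, 99.75, 100.00])
        print(tick_rule(prices))                                # [ 1  1  1 -1 -1 -1  1]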

  5. Texture Classification Using Sparse Frame-Based Representations

    Directory of Open Access Journals (Sweden)

    Skretting Karl

    2006-01-01

    Full Text Available A new method for supervised texture classification, denoted by frame texture classification method (FTCM, is proposed. The method is based on a deterministic texture model in which a small image block, taken from a texture region, is modeled as a sparse linear combination of frame elements. FTCM has two phases. In the design phase a frame is trained for each texture class based on given texture example images. The design method is an iterative procedure in which the representation error, given a sparseness constraint, is minimized. In the classification phase each pixel in a test image is labeled by analyzing its spatial neighborhood. This block is represented by each of the frames designed for the texture classes under consideration, and the frame giving the best representation gives the class. The FTCM is applied to nine test images of natural textures commonly used in other texture classification work, yielding excellent overall performance.
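
    The classification phase of such a minimum-representation-error scheme can be sketched as follows, using scikit-learn's orthogonal matching pursuit as the sparse solver. The per-class dictionaries below stand in for the frames produced by the design phase, which is not reproduced; dimensions and the sparseness level are illustrative.

        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def classify_block(block, dictionaries, n_nonzero=4):
            """Assign a texture class to one vectorised image block.

            `dictionaries` is a list of (block_dim x n_atoms) matrices, one frame per texture
            class.  The block is sparsely approximated with each frame and the class whose
            frame gives the smallest representation error wins.
            """
            errors = []
            for D in dictionaries:
                w = orthogonal_mp(D, block, n_nonzero_coefs=n_nonzero)
                errors.append(np.linalg.norm(block - D @ w))
            return int(np.argmin(errors))

        rng = np.random.default_rng(0)
        block_dim, n_atoms = 64, 32                              # e.g. 8x8 neighbourhood blocks
        dictionaries = [rng.standard_normal((block_dim, n_atoms)) for _ in range(3)]
        block = dictionaries[1][:, :3] @ rng.standard_normal(3)  # built from class 1's frame
        print("predicted class:", classify_block(block, dictionaries))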

  6. Surface errors in the course of machining precision optics

    Science.gov (United States)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate in such zonal processes. This work addresses the formation of surface errors from grinding to polishing, analyzing the surfaces after each machining step with non-contact interferometric methods. The errors on the surface can be distinguished as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed such that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already form in the early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools that have a lateral size below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  7. R-Peak Detection using Daubechies Wavelet and ECG Signal Classification using Radial Basis Function Neural Network

    Science.gov (United States)

    Rai, H. M.; Trivedi, A.; Chatterjee, K.; Shukla, S.

    2014-01-01

    This paper employs the Daubechies wavelet transform (WT) for R-peak detection and a radial basis function neural network (RBFNN) to classify electrocardiogram (ECG) signals. Five types of ECG beats were classified: normal beats, paced beats, left bundle branch block (LBBB) beats, right bundle branch block (RBBB) beats and premature ventricular contractions (PVC). 500 QRS complexes were arbitrarily extracted from 26 records in the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, which are available on the Physionet website. Every QRS complex was represented by 21 points, p1 to p21, and the QRS complexes of each record were categorized according to beat type. System performance was computed using four evaluation metrics: sensitivity, positive predictivity, specificity and classification error rate. The experimental results show that the average sensitivity, positive predictivity, specificity and classification error rate are 99.8%, 99.60%, 99.90% and 0.12%, respectively, with the RBFNN classifier. The overall accuracies achieved for the back-propagation neural network (BPNN), multilayer perceptron (MLP), support vector machine (SVM) and RBFNN classifiers are 97.2%, 98.8%, 99% and 99.6%, respectively. The accuracy and processing time of the RBFNN are higher than or comparable with those of the BPNN, MLP and SVM classifiers.
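
    The R-peak detection stage can be sketched with PyWavelets and SciPy as below: decompose the ECG with a Daubechies wavelet, keep only the detail levels where QRS energy concentrates, reconstruct, square and pick peaks. The wavelet, decomposition depth, kept levels and threshold are illustrative choices, and the RBFNN beat-classification stage of the paper is not included.

        import numpy as np
        import pywt
        from scipy.signal import find_peaks

        def detect_r_peaks(ecg, fs=360, wavelet="db4"):
            """Rough R-peak detector: wavelet band-pass + squaring + peak picking."""
            coeffs = pywt.wavedec(ecg, wavelet, level=5)        # [cA5, cD5, cD4, cD3, cD2, cD1]
            kept = [np.zeros_like(c) for c in coeffs]
            kept[2], kept[3] = coeffs[2], coeffs[3]             # keep d4 and d3 (~11-45 Hz at 360 Hz)
            qrs_band = pywt.waverec(kept, wavelet)[: len(ecg)]
            energy = qrs_band ** 2
            peaks, _ = find_peaks(energy, height=0.3 * energy.max(), distance=int(0.25 * fs))
            return peaks

        # synthetic test signal: one sharp "R peak" per second plus noise
        fs = 360
        ecg = 0.05 * np.random.randn(10 * fs)
        ecg[np.arange(fs // 2, len(ecg), fs)] += 1.0
        print("detected beats:", len(detect_r_peaks(ecg, fs)))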

  8. Concepts of Classification and Taxonomy Phylogenetic Classification

    Science.gov (United States)

    Fraix-Burnet, D.

    2016-05-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  9. EMD-DWT based transform domain feature reduction approach for quantitative multi-class classification of breast lesions.

    Science.gov (United States)

    Ara, Sharmin R; Bashar, Syed Khairul; Alam, Farzana; Hasan, Md Kamrul

    2017-09-01

    Using a large set of ultrasound features does not necessarily ensure improved quantitative classification of breast tumors; rather, it often degrades the performance of a classifier. In this paper, we propose an effective feature reduction approach in the transform domain for improved multi-class classification of breast tumors. Feature transformation methods, such as empirical mode decomposition (EMD) and the discrete wavelet transform (DWT), followed by a filter- or wrapper-based subset selection scheme, are used to extract a set of non-redundant and more informative transform-domain features through decorrelation of an optimally ordered sequence of N ultrasonic bi-modal (i.e., quantitative ultrasound and elastography) features. The proposed transform-domain bi-modal reduced feature set is used with different conventional classifiers to classify 201 breast tumors into benign-malignant as well as BI-RADS⩽3, 4, and 5 categories. For the latter case, an inadmissible error probability is defined for the subset selection using a wrapper/filter. The classifiers use ground truth from histopathology/cytology for the binary (i.e., benign-malignant) separation of tumors and then bi-modal BI-RADS scores from the radiologists for separating malignant tumors into BI-RADS categories 4 and 5. A comparative performance analysis of several widely used conventional classifiers is also presented to assess their efficacy with the proposed transform-domain reduced feature set for classification of breast tumors. The results show that our transform-domain bi-modal reduced feature set achieves improvements of 5.35%, 3.45%, and 3.98%, respectively, in sensitivity, specificity, and accuracy as compared to the original-domain optimal feature set for benign-malignant classification of breast tumors. In quantitative classification of breast tumors into BI-RADS categories ⩽3, 4, and 5, the proposed transform-domain reduced feature set attains improvements of 3.49%, 9.07%, and 3.06%, respectively, in

  10. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  11. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  12. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author illustrates that algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  13. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  14. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, under the condition that the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when this probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active oscillators and inactive oscillators.
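
    The setup studied in this record can be imitated numerically in a few lines of Python: N globally coupled Stuart-Landau oscillators, a fraction of which are made inactive (negative bifurcation parameter), with Gaussian perturbations of that parameter playing the role of the random errors. The order parameter |Z| indicates whether the global oscillation survives. The integration scheme and all parameter values below are illustrative, not those of the paper.

        import numpy as np

        def order_parameter(ratio_inactive, noise_std=0.0, N=200, K=4.0, omega=3.0,
                            alpha_active=2.0, alpha_inactive=-2.0, dt=0.01, steps=5000, seed=0):
            """|Z| after transients for globally coupled Stuart-Landau oscillators."""
            rng = np.random.default_rng(seed)
            alpha = np.full(N, alpha_active)
            alpha[: int(ratio_inactive * N)] = alpha_inactive   # inactive sub-population
            alpha += noise_std * rng.standard_normal(N)         # random errors on the parameter
            z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
            for _ in range(steps):                              # explicit Euler integration
                dz = (alpha + 1j * omega - np.abs(z) ** 2) * z + K * (z.mean() - z)
                z = z + dt * dz
            return abs(z.mean())

        for p in (0.3, 0.6, 0.9):
            print(f"inactive ratio {p}: |Z| = {order_parameter(p, noise_std=0.1):.3f}")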

  15. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behavior have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent measurement errors in the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, under the condition that the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when this probability is less than the threshold. These findings provide an alternative candidate for controlling the critical value of the aging transition in coupled oscillator systems, which in practice are composed of active oscillators and inactive oscillators. PMID:28198430

  16. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
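
    The core ELM recipe referred to above (fixed random input weights, a nonlinear hidden layer, and output weights obtained in closed form by regularised least squares) can be sketched in plain NumPy as follows. The paper's receptive-field sparsification of the input weights and the back-propagation refinement are not reproduced, and the small scikit-learn digits set stands in for MNIST.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X, y = load_digits(return_X_y=True)
        X = X / 16.0                                            # scale pixel values to [0, 1]
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

        n_hidden, ridge = 2000, 1e-3
        W = rng.standard_normal((X.shape[1], n_hidden))         # fixed random input weights
        b = rng.standard_normal(n_hidden)
        hidden = lambda X: np.tanh(X @ W + b)                   # random nonlinear projection

        H = hidden(Xtr)
        T = np.eye(10)[ytr]                                     # one-hot targets
        # the only "training": ridge-regularised least squares for the output weights
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
        correct = (hidden(Xte) @ beta).argmax(axis=1) == yte
        print(f"ELM test accuracy on the digits data: {correct.mean():.3f}")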

  17. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.

  18. Evaluation of partial classification algorithms using ROC curves.

    Science.gov (United States)

    Tusch, G

    1995-01-01

    When using computer programs for decision support in clinical routine, an assessment or a comparison of the underlying classification algorithms is essential. In classical (forced) classification, the classification rule always selects exactly one alternative. A number of proven discriminant measures are available here, e.g. sensitivity and error rate. For probabilistic classification, a series of additional measures has been developed [1]. However, for many clinical applications, there are models where an observation is classified into several classes (partial classification), e.g., models from artificial intelligence, decision analysis, or fuzzy set theory. In partial classification, the discriminatory ability (Murphy) can, in most practical cases, be adjusted a priori to any level. Here the usual measures do not apply. We investigate the preconditions for assessment and comparison based on medical decision theory. We focus on problems in the medical domain and establish a methodological framework. When using partial classification procedures, a ROC analysis in the classical sense is no longer appropriate. In forced classification for two classes, the problem is to find a cutoff point on the ROC curve, while in partial classification two such points have to be found. They characterize the elements being classified as coming from both classes. This extends to several classes. We propose measures corresponding to the usual discriminant measures for forced classification (e.g., sensitivity and error rate) and demonstrate the effects using the ROC approach. For this purpose, we extend the existing method for forced classification in a mathematically sound manner. Algorithms for the construction of thresholds can easily be adapted. Two specific measurement models, based on parametric and non-parametric approaches, will be introduced. The basic methodology is suitable for all partial classification problems, whereas the extended ROC analysis assumes a rank order of the

  19. Implementation of neural networks for classification of moss and lichen samples on the basis of gamma-ray spectrometric analysis.

    Science.gov (United States)

    Dragović, Snezana; Onjia, Antonije; Dragović, Ranko; Bacić, Goran

    2007-07-01

    Mosses and lichens play an important role in biomonitoring. The objective of this study is to develop a neural network model to classify these plants according to geographical origin. A three-layer feed-forward neural network was used. The activities of radionuclides ((226)Ra, (238)U, (235)U, (40)K, (232)Th, (134)Cs, (137)Cs and (7)Be) detected in plant samples by gamma-ray spectrometry were used as inputs to the neural network. Five different training algorithms with different numbers of samples in the training sets were tested and compared, in order to find the one with the minimum root mean square error. The best predictive power for the classification of plants from 12 regions was achieved using a network with 5 hidden-layer nodes and 3,000 training epochs, trained with the online back-propagation randomized training algorithm. Applying this model to experimental data resulted in satisfactory classification of moss and lichen samples in terms of their geographical origin. The average classification rate obtained in this study was (90.7 +/- 4.8)%.
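
    A minimal sketch of such a classifier in scikit-learn follows: a small feed-forward network (five hidden nodes, as in the record) maps eight radionuclide activities to one of twelve regions of origin. The activity values generated below are synthetic stand-ins for the gamma-ray spectrometry measurements.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_regions, n_samples, n_nuclides = 12, 480, 8             # Ra-226 ... Be-7

        # synthetic data: each region has its own mean activity per radionuclide
        region = rng.integers(0, n_regions, n_samples)
        region_means = rng.uniform(5, 50, size=(n_regions, n_nuclides))
        X = region_means[region] * rng.lognormal(sigma=0.2, size=(n_samples, n_nuclides))

        Xtr, Xte, ytr, yte = train_test_split(X, region, random_state=0)
        model = make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(5,), max_iter=3000, random_state=0))
        model.fit(Xtr, ytr)
        print(f"classification rate by region of origin: {model.score(Xte, yte):.2f}")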

  20. Perceptual Classification Images from Vernier Acuity Masked by Noise

    Science.gov (United States)

    Ahumada, A. J.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Letting external noise rather than internal noise limit discrimination performance allows information to be extracted about the observer's stimulus classification rule. A perceptual classification image is the correlation over trials between the noise amplitude at a spatial location and the observer's responses. If, for example, the observer followed the rule of the ideal observer, the perceptual classification image would be an estimate of the ideal observer filter, the difference between the two unmasked images being discriminated. Perceptual classification images were estimated for a vernier discrimination task. The display screen had 48 pixels per degree horizontally and vertically. The no-offset image had a dark horizontal line of 4 pixels, a 1 pixel space, and 4 more dark pixels. Classification images were based on 1600 discrimination trials with the line contrast adjusted to keep the error rate near 25 percent. In the offset image, the second line was one pixel higher. Unlike the ideal observer filter (a horizontal dipole), the observer perceptual classification images are strongly oriented. Fourier transforms of the classification images had a peak amplitude near one cycle per degree and an orientation near 25 degrees. The spatial spread is much more than image blur predicts, and probably indicates the spatial position uncertainty in the task.
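
    The estimator behind a perceptual classification image can be sketched with a small simulation: a hypothetical linear-template observer responds to external noise fields (plus internal noise), and the classification image is the mean noise field on one response minus the mean on the other. For simplicity the sketch uses one-dimensional noise-only trials rather than the vernier stimuli of the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_pix = 20000, 64                     # 1-D "images" keep the example small

        # hypothetical observer template (the quantity the classification image should recover)
        template = np.zeros(n_pix)
        template[28:32], template[32:36] = -1.0, 1.0

        noise = rng.standard_normal((n_trials, n_pix))            # external noise fields
        internal = 0.5 * rng.standard_normal(n_trials)            # observer's internal noise
        responses = noise @ template + internal > 0               # simulated yes/no responses

        # classification image: mean noise on "yes" trials minus mean noise on "no" trials
        ci = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
        corr = np.corrcoef(ci, template)[0, 1]
        print(f"correlation between estimated classification image and template: {corr:.2f}")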

  1. Supernova Photometric Classification Challenge

    CERN Document Server

    Kessler, Richard; Jha, Saurabh; Kuhlmann, Stephen

    2010-01-01

    We have publicly released a blinded mix of simulated SNe, with types (Ia, Ib, Ic, II) selected in proportion to their expected rate. The simulation is realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point spread function and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia type SNe are based on spectroscopically confirmed light curves that include unpublished non-Ia samples donated from the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). We challenge scientists to run their classification algorithms and report a type for each SN. A spectroscopically confirmed subset is provided for training. The goals of this challenge are to (1) learn the relative strengths and weaknesses of the different classification algorithms, (2) use the results to improve classification algorithms, and (3) understand what spectroscopically confirmed sub-...

  2. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition......, a good metric is required to measure distance or similarity between feature points so that the classification becomes feasible. Furthermore, in order to build a successful classifier, one needs to deeply understand how classifiers work. This thesis focuses on these three aspects of classification...... to segment breast tissue and pectoral muscle area from the background in mammogram. The second focus is the choices of metric and its influence to the feasibility of a classifier, especially on k-nearest neighbors (k-NN) algorithm, with medical applications on breast cancer prediction and calcification...

  3. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E;

    2015-01-01

    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were...... HE, protein contact dermatitis/contact urticaria, hyperkeratotic endogenous eczema and vesicular endogenous eczema, respectively. An additional diagnosis was given if symptoms indicated that factors additional to the main diagnosis were of importance for the disease. RESULTS: Four hundred and twenty......%) could not be classified. 38% had one additional diagnosis and 26% had two or more additional diagnoses. Eczema on feet was found in 30% of the patients, statistically significantly more frequently associated with hyperkeratotic and vesicular endogenous eczema. CONCLUSION: We find that the classification...

  4. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    Schemes for the classification of dwellings according to different building performances have been proposed in the last years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms...... exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.

  5. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  6. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...... of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems....

  7. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors occurring during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided, according to their nature and place of origin, into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, the liquid junction potential, as well as sensor wiring, ambient light and temperature, is presented.

  8. 'No delays achiever'.

    Science.gov (United States)

    2007-05-01

    The latest version of the NHS Institute for Innovation and Improvement's 'no delays achiever', a web based tool created to help NHS organisations achieve the 18-week target for GP referrals to first treatment, is available at www.nodelaysachiever.nhs.uk.

  9. Sequence Classification: 890773 [

    Lifescience Database Archive (English)

    Full Text Available oline as sole nitrogen source; deficiency of the human homolog causes HPII, an autosomal recessive inborn error of metabolism; Put2p || http://www.ncbi.nlm.nih.gov/protein/6321826 ...

  10. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  11. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    Full Text Available The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that have been evaluated are the following: (1) bare soil, (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.), (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.), (4) alfalfa (Medicago sativa L.), (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.), (6) urban soil, (7) olive groves (Olea europaea L.) and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered with high-resolution images, while achieving reliabilities better than those obtained with traditional methods.

  12. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard of quality demanded by consumers has posed a new challenge in today's context, where the tape-based process has transitioned to the file-based process: the transition necessitated digitizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors requires a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing) system, which is a familiar tool for quality control agents.

  13. Quantum error correction for beginners.

    Science.gov (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  14. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.

  15. Training strategy for convolutional neural networks in pedestrian gender classification

    Science.gov (United States)

    Ng, Choon-Boon; Tay, Yong-Haur; Goi, Bok-Min

    2017-06-01

    In this work, we studied a strategy for training a convolutional neural network for pedestrian gender classification with a limited amount of labeled training data. Unsupervised learning by k-means clustering on pedestrian images was used to learn the filters that initialize the first layer of the network. As a form of pre-training, supervised learning for the related task of pedestrian classification was performed. Finally, the network was fine-tuned for gender classification. We found that this strategy improved the network's generalization ability in gender classification, achieving better test results than random weight initialization, and being slightly more beneficial than merely initializing the first-layer filters by unsupervised learning. This shows that unsupervised learning followed by pre-training with pedestrian images is an effective strategy for learning useful features for pedestrian gender classification.
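
    The unsupervised first-layer initialization step can be sketched with scikit-learn as below: k-means clustering on normalised grayscale patches yields a filter bank whose centroids can be copied into the first convolutional layer before the supervised pre-training and fine-tuning stages (not shown). Patch size, filter count and the toy images are illustrative.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.image import extract_patches_2d

        def learn_first_layer_filters(images, patch_size=(5, 5), n_filters=32, seed=0):
            """Learn first-layer filters by k-means on normalised grayscale patches."""
            rng = np.random.RandomState(seed)
            patches = np.vstack([
                extract_patches_2d(img, patch_size, max_patches=200, random_state=rng)
                .reshape(-1, patch_size[0] * patch_size[1])
                for img in images
            ])
            patches -= patches.mean(axis=1, keepdims=True)         # remove each patch's mean
            patches /= patches.std(axis=1, keepdims=True) + 1e-8   # and normalise its contrast
            km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed).fit(patches)
            return km.cluster_centers_.reshape(n_filters, *patch_size)

        # toy usage with random "pedestrian" images (64 x 32 grayscale)
        images = [np.random.rand(64, 32) for _ in range(20)]
        print("filter bank shape:", learn_first_layer_filters(images).shape)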

  16. An Efficient Audio Classification Approach Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lhoucine Bahatti

    2016-05-01

    Full Text Available In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper deals with a new audio classification method which improves feature extraction according to the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. The enhancement provided by this work also lies in the proposal of an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

  17. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong...... support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may...... be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural...

  18. Information gathering for CLP classification.

    Science.gov (United States)

    Marcello, Ida; Giordano, Felice; Costamagna, Francesca Marina

    2011-01-01

    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP stipulates that harmonised classification is performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances), for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  19. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP stipulates that harmonised classification is performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances), for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  20. Bosniak Classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2014-01-01

    Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as the “Gold Standard” (GS), established...... to the calculated weighted κ, all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts categorized as Bosniak II, IIF, and III. These results show that radiologists who evaluate complex renal cysts routinely may apply the Bosniak classification...

  1. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms...... of descriptors, number of classes, and class intervals occurred between national schemes. However, a proposal “acoustic classification scheme for dwellings” has been developed recently in the European COST Action TU0901 with 32 member countries. This proposal has been accepted as an ISO work item. This paper...

  2. Classification of iconic images

    OpenAIRE

    Zrianina, Mariia; Kopf, Stephan

    2016-01-01

    Iconic images represent an abstract topic and use a presentation that is intuitively understood within a certain cultural context. For example, the abstract topic “global warming” may be represented by a polar bear standing alone on an ice floe. Such images are widely used in media and their automatic classification can help to identify high-level semantic concepts. This paper presents a system for the classification of iconic images. It uses a variation of the Bag of Visual Words approach wi...

  3. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    One of the simplest, and yet most consistently well-performing, sets of classifiers is the naïve Bayes models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific... parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naïve Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  4. Constructing criticality by classification

    DEFF Research Database (Denmark)

    Machacek, Erika

    2017-01-01

    This paper explores the role of expertise, the nature of criticality, and their relationship to securitisation as mineral raw materials are classified. It works with the construction of risk along the liberal logic of security to explore how "key materials" are turned into "critical materials"......, legitimizing a criticality discourse. Specifically, the paper introduces a typology delineating the inferences made by the experts from their produced recommendations in the classification of rare earth element criticality. The paper argues that the classification is a specific process of constructing risk...

  5. Development of a classification system for cup anemometers - CLASSCUP

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels

    2003-01-01

    Errors associated with the measurements of the wind speed are the major sources of uncertainties in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show a significant and not acceptable difference. The European CLASSCUP research project posed the objectives to quantify the errors associated with the use of cup anemometers, to determine the requirements for an optimum design of a cup anemometer, and to develop a classification system for quantification of systematic errors of cup anemometers. The present report describes this proposed classification system. A classification method for cup anemometers has been developed, which proposes general external operational ranges to be used. A normal category range connected to ideal sites of the IEC power performance standard was made, and another extended category range for complex terrain...

  6. [Achievement of therapeutic objectives].

    Science.gov (United States)

    Mantilla, Teresa

    2014-07-01

    Therapeutic objectives for patients with atherogenic dyslipidemia are achieved by improving patient compliance and adherence. Clinical practice guidelines address the importance of treatment compliance for achieving objectives. The combination of a fixed dose of pravastatin and fenofibrate increases the adherence by simplifying the drug regimen and reducing the number of daily doses. The good tolerance, the cost of the combination and the possibility of adjusting the administration to the patient's lifestyle helps achieve the objectives for these patients with high cardiovascular risk. Copyright © 2014 Sociedad Española de Arteriosclerosis y Elsevier España, S.L. All rights reserved.

  7. Adoptees' Educational Achievements

    DEFF Research Database (Denmark)

    Olsen, Rikke Fuglsang

    This study analyses educational achievement at age 20 for 3,180 non-kin adoptees and at age 25 for 1,559 non-kin adoptees in Denmark by comparing them to non-adoptees. The study also analyses whether there are within-group differences in the educational achievement of non-kin adoptees according to country of origin. The results suggest that the relatively small gap between non-kin adoptees’ and non-adoptees’ educational achievements widens between ages 20 and 25. Moreover, the results show some differences in educational outcomes among non-kin adoptees with different countries of origin.

  8. Harmless error analysis: How do judges respond to confession errors?

    Science.gov (United States)

    Wallace, D Brian; Kassin, Saul M

    2012-04-01

    In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

  9. VO2 estimation using 6-axis motion sensor with sports activity classification.

    Science.gov (United States)

    Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke

    2016-08-01

    In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports of diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate energy expenditure; however, its accuracy depends on the intensities of the various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. Different coefficients of a linear regression model are prepared for different types of activities, determined from training data obtained by experiments, and the best-suited model is used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimal model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework using a set of experimental data consisting of VO2 and motion data of people exercising at a wide range of intensities, measured by a VO2 meter and a motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
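
    The two-stage structure described above (classify the activity first, then apply that activity's regression model) can be sketched with scikit-learn as follows. The random-forest classifier, the per-activity linear regressions and the synthetic motion features are stand-ins; the paper's actual features and coefficients are not reproduced.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LinearRegression

        class ActivityAwareVO2Estimator:
            """Classify the activity, then apply that activity's own regression model."""

            def fit(self, X, activity, vo2):
                self.clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, activity)
                self.models = {a: LinearRegression().fit(X[activity == a], vo2[activity == a])
                               for a in np.unique(activity)}       # one linear model per activity
                return self

            def predict(self, X):
                acts = self.clf.predict(X)
                out = np.empty(len(X))
                for a, model in self.models.items():
                    mask = acts == a
                    if mask.any():
                        out[mask] = model.predict(X[mask])
                return out

        # toy data: activity is derivable from the motion features, VO2 has per-activity slopes
        rng = np.random.default_rng(0)
        X = rng.standard_normal((600, 6))
        activity = np.digitize(X[:, 1], [-0.5, 0.5])               # three activity classes
        vo2 = 5 + 3 * activity + X[:, 0] * (activity + 1) + 0.2 * rng.standard_normal(600)
        est = ActivityAwareVO2Estimator().fit(X[:500], activity[:500], vo2[:500])
        rmse = np.sqrt(np.mean((est.predict(X[500:]) - vo2[500:]) ** 2))
        print(f"toy hold-out RMSE: {rmse:.2f}")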

  10. Investigation of an automatic sleep stage classification by means of multiscorer hypnogram.

    Science.gov (United States)

    Figueroa Helland, V C; Gapelyuk, A; Suhrbier, A; Riedl, M; Penzel, T; Kurths, J; Wessel, N

    2010-01-01

    Scoring sleep visually based on polysomnography is an important but time-consuming element of sleep medicine. Whereas computer software assists human experts in the assignment of sleep stages to polysomnogram epochs, their performance is usually insufficient. This study evaluates the possibility to fully automatize sleep staging considering the reliability of the sleep stages available from human expert sleep scorers. We obtain features from EEG, ECG and respiratory signals of polysomnograms from ten healthy subjects. Using the sleep stages provided by three human experts, we evaluate the performance of linear discriminant analysis on the entire polysomnogram and only on epochs where the three experts agree in their sleep stage scoring. We show that in polysomnogram intervals, to which all three scorers assign the same sleep stage, our algorithm achieves 90% accuracy. This high rate of agreement with the human experts is accomplished with only a small set of three frequency features from the EEG. We increase the performance to 93% by including ECG and respiration features. In contrast, on intervals of ambiguous sleep stage, the sleep stage classification obtained from our algorithm agrees with the human consensus scorer in approximately 61% of cases. These findings suggest that machine classification is highly consistent with human sleep staging and that error in the algorithm's assignments is rather a problem of lack of well-defined criteria for human experts to judge certain polysomnogram epochs than an insufficiency of computational procedures.
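
    The staging step described above is essentially linear discriminant analysis on a small per-epoch feature set. A hedged sketch, assuming band-power features and consensus labels already exist (the file names are hypothetical):

```python
# Illustrative sketch of the LDA staging step: a small set of EEG band-power
# features (optionally plus ECG/respiration features) per 30-s epoch is mapped
# to a sleep stage. Feature extraction and consensus labels are assumed given.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.load("epoch_features.npy")     # hypothetical: (n_epochs, n_features)
y = np.load("consensus_stages.npy")   # hypothetical: expert-consensus stages

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=10).mean()
print(f"10-fold cross-validated agreement with consensus scoring: {acc:.2%}")
```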

  11. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  12. Pauli Exchange Errors in Quantum Computation

    CERN Document Server

    Ruskai, M B

    2000-01-01

    We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.

  13. Compensatory neurofuzzy model for discrete data classification in biomedical

    Science.gov (United States)

    Ceylan, Rahime

    2015-03-01

    Biomedical data is separated into two main categories: signals and discrete data. Accordingly, studies in this area concern either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. In the same way, many models exist in the literature for the classification of discrete data, such as the results of blood analysis or biopsy in the medical process. No single algorithm achieves a high accuracy rate on the classification of both signals and discrete data. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model is applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in the classification of the breast cancer dataset, and a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  14. Words semantic orientation classification based on HowNet

    Institute of Scientific and Technical Information of China (English)

    LI Dun; MA Yong-tao; GUO Jian-li

    2009-01-01

    Based on text orientation classification, a new measurement approach to the semantic orientation of words was proposed. According to the integrated and detailed definitions of words in HowNet, seed sets including words with intense orientations were built up. The orientation similarity between the seed words and the given word was then calculated using the sentiment weight priority to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the measurement approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.
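
    To make the seed-based measure concrete, here is a hedged sketch: the HowNet-specific word similarity is abstracted behind a `similarity` callable (for instance an embedding cosine), and the seed lists and threshold are illustrative, not the paper's.

```python
# Hedged sketch of seed-based orientation scoring. The HowNet similarity is
# replaced by a placeholder `similarity(word_a, word_b) -> float` function;
# seed words and the decision threshold are illustrative assumptions.
def orientation_score(word, pos_seeds, neg_seeds, similarity):
    """Positive score suggests positive orientation, negative suggests negative."""
    pos = sum(similarity(word, s) for s in pos_seeds) / len(pos_seeds)
    neg = sum(similarity(word, s) for s in neg_seeds) / len(neg_seeds)
    return pos - neg

def classify_orientation(word, pos_seeds, neg_seeds, similarity, threshold=0.0):
    score = orientation_score(word, pos_seeds, neg_seeds, similarity)
    return "positive" if score > threshold else "negative"
```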

  15. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key-phrase extraction and matching. Easily extendable both to new classes and to new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public health related web pages created by a double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired precision or recall can be achieved.
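
    A minimal sketch of key-phrase matching with a confidence threshold, in the spirit of the model described above (no training data: each class is defined only by a hand-curated phrase list). The class names, phrase lists and threshold are illustrative placeholders.

```python
# Key-phrase matching classifier: count per-class phrase hits and return the
# best class if it passes a threshold, otherwise abstain. Phrase lists are
# illustrative, not the curated lists used in the paper.
def classify_page(text, class_phrases, threshold=2):
    """Return the best-matching class, or None if below the threshold."""
    text = text.lower()
    scores = {cls: sum(text.count(p.lower()) for p in phrases)
              for cls, phrases in class_phrases.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

class_phrases = {
    "vaccination": ["vaccine", "immunization schedule", "booster dose"],
    "nutrition": ["dietary intake", "balanced diet", "micronutrient"],
}
print(classify_page("Advice on booster dose and vaccine safety ...", class_phrases))
```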

  16. Surgical options for correction of refractive error following cataract surgery.

    Science.gov (United States)

    Abdelghany, Ahmed A; Alio, Jorge L

    2014-01-01

    Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.

  17. Overview of Quantum Error Prevention and Leakage Elimination

    CERN Document Server

    Byrd, M S; Lidar, D A; Byrd, Mark S.; Wu, Lian-Ao; Lidar, Daniel A.

    2004-01-01

    Quantum error prevention strategies will be required to produce a scalable quantum computing device and are of central importance in this regard. Progress in this area has been quite rapid in the past few years. In order to provide an overview of the achievements in this area, we discuss the three major classes of error prevention strategies, the abilities of these methods and the shortcomings. We then discuss the combinations of these strategies which have recently been proposed in the literature. Finally we present recent results in reducing errors on encoded subspaces using decoupling controls. We show how to generally remove mixing of an encoded subspace with external states (termed leakage errors) using decoupling controls. Such controls are known as "leakage elimination operations" or "LEOs."

  18. Academic Achievement Among Juvenile Detainees.

    Science.gov (United States)

    Grigorenko, Elena L; Macomber, Donna; Hart, Lesley; Naples, Adam; Chapman, John; Geib, Catherine F; Chart, Hilary; Tan, Mei; Wolhendler, Baruch; Wagner, Richard

    2015-01-01

    The literature has long pointed to heightened frequencies of learning disabilities (LD) within the population of law offenders; however, a systematic appraisal of these observations, careful estimation of these frequencies, and investigation of their correlates and causes have been lacking. Here we present data collected from all youth (1,337 unique admissions, mean age 14.81, 20.3% females) placed in detention in Connecticut (January 1, 2010-July 1, 2011). All youth completed a computerized educational screener designed to test a range of performance in reading (word and text levels) and mathematics. A subsample (n = 410) received the Wide Range Achievement Test, in addition to the educational screener. Quantitative (scale-based) and qualitative (grade-equivalence-based) indicators were then analyzed for both assessments. Results established the range of LD in this sample from 13% to 40%, averaging 24.9%. This work provides a systematic exploration of the type and severity of word and text reading and mathematics skill deficiencies among juvenile detainees and builds the foundation for subsequent efforts that may link these deficiencies to both more formal, structured, and variable definitions and classifications of LD, and to other types of disabilities (e.g., intellectual disability) and developmental disorders (e.g., ADHD) that need to be conducted in future research.

  19. Study on error analysis and accuracy improvement for aspheric profile measurement

    Science.gov (United States)

    Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

    2017-06-01

    Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point is located, producing significantly incorrect surface errors. This paper studies the simulated results for an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the bigger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are performed to analyze the X-axis and Y-axis rotational angles respectively. The actual profile errors obtained from multiple profile measurements around the X-axis are then calculated according to the proposed analysis flow chart. The aim of the multiple-measurement strategy is to achieve the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

  20. Shark Teeth Classification

    Science.gov (United States)

    Brown, Tom; Creel, Sally; Lee, Velda

    2009-01-01

    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  1. Sandwich classification theorem

    Directory of Open Access Journals (Sweden)

    Alexey Stepanov

    2015-09-01

    Full Text Available The present note arises from the author's talk at the conference "Ischia Group Theory 2014". For subgroups F ≤ N of a group G denote by Lat(F,N) the set of all subgroups of N containing F. Let D be a subgroup of G. In this note we study the lattice LL = Lat(D,G) and the lattice LL′ of subgroups of G normalized by D. We say that LL satisfies the sandwich classification theorem if LL splits into a disjoint union of sandwiches Lat(F, N_G(F)) over all subgroups F such that the normal closure of D in F coincides with F. Here N_G(F) denotes the normalizer of F in G. A similar notion of sandwich classification is introduced for the lattice LL′. If D is perfect, i.e. coincides with its commutator subgroup, then it turns out that the sandwich classification theorems for LL and LL′ are equivalent. We also show how to find the basic subgroup F of sandwiches for LL′ and review sandwich classification theorems in algebraic groups over rings.

  2. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    …as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics in the process as well as to model dependences between attributes.

  3. An automated cirrus classification

    Science.gov (United States)

    Gryspeerdt, Edward; Quaas, Johannes; Sourdeval, Odran; Goren, Tom

    2017-04-01

    Cirrus clouds play an important role in determining the radiation budget of the earth, but our understanding of the lifecycle and controls on cirrus clouds remains incomplete. Cirrus clouds can have very different properties and development depending on their environment, particularly during their formation. However, the relevant factors often cannot be distinguished using commonly retrieved satellite data products (such as cloud optical depth). In particular, the initial cloud phase has been identified as an important factor in cloud development, but although back-trajectory based methods can provide information on the initial cloud phase, they are computationally expensive and depend on the cloud parametrisations used in re-analysis products. In this work, a classification system (Identification and Classification of Cirrus, IC-CIR) is introduced. Using re-analysis and satellite data, cirrus clouds are separated in four main types: frontal, convective, orographic and in-situ. The properties of these classes show that this classification is able to provide useful information on the properties and initial phase of cirrus clouds, information that could not be provided by instantaneous satellite retrieved cloud properties alone. This classification is designed to be easily implemented in global climate models, helping to improve future comparisons between observations and models and reducing the uncertainty in cirrus clouds properties, leading to improved cloud parametrisations.

  4. Classifications in popular music

    NARCIS (Netherlands)

    van Venrooij, A.; Schmutz, V.; Wright, J.D.

    2015-01-01

    The categorical system of popular music, such as genre categories, is a highly differentiated and dynamic classification system. In this article we present work that studies different aspects of these categorical systems in popular music. Following the work of Paul DiMaggio, we focus on four questions…

  5. Nearest convex hull classification

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); P.J.F. Groenen (Patrick); J.C. Bioch (Cor)

    2006-01-01

    Consider the classification task of assigning a test object to one of two or more possible groups, or classes. An intuitive way to proceed is to assign the object to that class, to which the distance is minimal. As a distance measure to a class, we propose here to use the distance to the convex hull of that class.
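
    A sketch of the idea under the stated reading: the distance from a test point to the convex hull of a class's training points is the solution of a small quadratic program, and the object is assigned to the class with the smallest such distance. The solver choice and data layout are illustrative assumptions.

```python
# Nearest-convex-hull classification sketch: distance from x to the convex
# hull of each class's training points, computed as a small QP with SLSQP.
import numpy as np
from scipy.optimize import minimize

def hull_distance(x, class_points):
    """Distance from x to the convex hull of class_points (one point per row)."""
    n = len(class_points)
    w0 = np.full(n, 1.0 / n)                                  # start at the centroid
    objective = lambda w: np.sum((x - w @ class_points) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    res = minimize(objective, w0, method="SLSQP",
                   bounds=[(0, 1)] * n, constraints=cons)
    return np.sqrt(res.fun)

def nearest_hull_classify(x, classes):
    """classes: dict mapping label -> (n_i, d) array of training points."""
    return min(classes, key=lambda c: hull_distance(x, classes[c]))
```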

  6. Principles for ecological classification

    Science.gov (United States)

    Dennis H. Grossman; Patrick Bourgeron; Wolf-Dieter N. Busch; David T. Cleland; William Platts; G. Ray; C. Robins; Gary Roloff

    1999-01-01

    The principal purpose of any classification is to relate common properties among different entities to facilitate understanding of evolutionary and adaptive processes. In the context of this volume, it is to facilitate ecosystem stewardship, i.e., to help support ecosystem conservation and management objectives.

  7. Improving Student Question Classification

    Science.gov (United States)

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  8. Classification of waste packages

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, H.P.; Sauer, M.; Rojahn, T. [Versuchsatomkraftwerk GmbH, Kahl am Main (Germany)

    2001-07-01

    A barrel gamma scanning unit has been in use at the VAK for the classification of radioactive waste materials since 1998. The unit provides the facility operator with the data required for classification of waste barrels. Once these data have been entered into the AVK data processing system, the radiological status of raw waste as well as pre-treated and processed waste can be tracked from the point of origin to the point at which the waste is delivered to a final storage. Since the barrel gamma scanning unit was commissioned in 1998, approximately 900 barrels have been measured and the relevant data required for classification collected and analyzed. Based on the positive results of experience in the use of the mobile barrel gamma scanning unit, the VAK now offers the classification of barrels as a service to external users. Depending upon waste quantity accumulation, this measurement unit offers facility operators a reliable, time-saving and cost-effective means of identifying and documenting the radioactivity inventory of barrels scheduled for final storage. (orig.)

  9. Event Classification using Concepts

    NARCIS (Netherlands)

    Boer, M.H.T. de; Schutte, K.; Kraaij, W.

    2013-01-01

    The semantic gap is one of the challenges in the GOOSE project. In this paper a Semantic Event Classification (SEC) system is proposed as an initial step in tackling the semantic gap challenge in the GOOSE project. This system uses semantic text analysis, multiple feature detectors using the BoW

  10. Munitions Classification Library

    Science.gov (United States)

    2016-04-04

    the MM and TEMTADS 2x2 systems, with dynamic data handling for these systems on the horizon. Using UX-Analyze, a data processor can apply physics ... classification libraries. Three different sensor systems were used during the initial phase of the data collection: a modified...

  11. Recurrent neural collective classification.

    Science.gov (United States)

    Monner, Derek D; Reggia, James A

    2013-12-01

    With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.

  13. Crowdsourcing for error detection in cortical surface delineations

    DEFF Research Database (Denmark)

    Ganz, Melanie; Kondermann, Daniel; Andrulis, Jonas

    2017-01-01

    crowdsourcing platform were asked to annotate errors in automatic cortical surface delineations on 100 central, coronal slices of MR images. RESULTS: On average, annotations for 100 images were obtained in less than an hour. When using expert annotations as reference, the crowd on average achieves a sensitivity...

  14. Energy efficient error-correcting coding for wireless systems

    NARCIS (Netherlands)

    Shao, Xiaoying

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal not only suffers from multi-path fading but also from noise and interference from other users of the wireless channel, which causes unreliable communications. To achieve high-quality communications, error-correcting coding is required to…

  15. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

    design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.

  16. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  18. A-posteriori error estimation for second order mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard

    2012-01-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  19. Error-thresholds for qudit-based topological quantum memories

    Science.gov (United States)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2014-03-01

    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.
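
    For intuition about the classical mapping mentioned above, the following is a minimal sketch of single-spin Metropolis sampling of a q-state Potts model on a square lattice with quenched bond disorder. It is not the authors' large-scale simulation; the lattice size, disorder fraction and temperature are arbitrary illustrative values.

```python
# Minimal Metropolis sketch of a q-state Potts model with quenched bond
# disorder on an L x L square lattice with periodic boundaries. Parameter
# values are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
L, q, T, p = 16, 3, 1.0, 0.1          # lattice size, Potts states, temperature, flipped-bond fraction
spins = rng.integers(q, size=(L, L))
# Quenched couplings on horizontal (Jx) and vertical (Jy) bonds: -1 with probability p
Jx = rng.choice([1.0, -1.0], size=(L, L), p=[1 - p, p])
Jy = rng.choice([1.0, -1.0], size=(L, L), p=[1 - p, p])

def site_energy(s, i, j, value):
    """Energy of the four bonds touching site (i, j) if it held `value`."""
    e = 0.0
    e -= Jx[i, j] * (value == s[i, (j + 1) % L])
    e -= Jx[i, (j - 1) % L] * (value == s[i, (j - 1) % L])
    e -= Jy[i, j] * (value == s[(i + 1) % L, j])
    e -= Jy[(i - 1) % L, j] * (value == s[(i - 1) % L, j])
    return e

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        new = rng.integers(q)
        dE = site_energy(spins, i, j, new) - site_energy(spins, i, j, spins[i, j])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = new
```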

  20. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, a superpixel density-based clustering algorithm is proposed for automatic image classification and outlier identification. Pixel location coordinates and grey values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because working at the pixel level dramatically increases the computational complexity, the image is first preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density and distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method does not require human intervention, categorizes images faster than the density clustering algorithm, and achieves effective automatic classification and outlier extraction.
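
    A hedged sketch of the density/distance clustering step described above, in the spirit of density-peak clustering on superpixel features (coordinates plus grey value). The normalization, thresholds and outlier rule below are assumptions for illustration, not the paper's exact discrimination law.

```python
# Density-peak style clustering on superpixel features: local density (rho),
# distance to the nearest denser point (delta), normalized decision values.
import numpy as np

def density_peaks(features, dc, center_thresh, outlier_thresh):
    """features: (n, d) array, e.g. superpixel (x, y, mean grey value) rows."""
    dist = np.linalg.norm(features[:, None] - features[None, :], axis=2)
    rho = (dist < dc).sum(axis=1) - 1                       # local density
    delta = np.empty(len(features))                         # distance to nearest denser point
    for i in range(len(features)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()
    rho_n, delta_n = rho / rho.max(), delta / delta.max()   # normalized decision values
    centers = np.where(rho_n * delta_n > center_thresh)[0]  # high density AND high distance
    outliers = np.where((delta_n > outlier_thresh) & (rho <= np.median(rho)))[0]
    return centers, outliers
```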

  1. Computational error and complexity in science and engineering computational error and complexity

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering" pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation happens to be the interface between the mathematical model/problem and the real world application. One needs to obtain good quality numerical values for any real-world implementation; mathematical symbols alone are of no use to engineers/technologists. The computational complexity of the numerical method used to solve the mathematical model, computed along with the solution, will on the other hand tell us how much computational effort has been spent to achieve that quality of result. Anyone who wants the specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent for the solution. The computed error as well as the complexity provide a scientifically convincing answer to these questions. Specifically some of the disciplines in which the book w...

  2. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider...... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models....

  3. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    Science.gov (United States)

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  4. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by firstly classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation which is an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research firstly investigates the various fast search algorithms in vector quantization (VQ) and the potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
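
    The step being accelerated is a k-nearest-neighbour lookup over fingercode vectors. The sketch below is not the authors' pyramid-based search; it illustrates a simpler classic VQ speed-up (partial distance elimination) applied to the same lookup, just to make the search problem concrete.

```python
# Partial-distance-elimination k-NN over fingercode vectors: the running
# squared distance is abandoned as soon as it exceeds the current k-th best.
# This is a generic VQ fast-search trick, not the paper's pyramid search.
import numpy as np

def knn_partial_distance(query, codebook, k=5):
    """Return indices of the k nearest fingercodes in `codebook`."""
    best = []                                  # list of (squared_distance, index)
    worst = np.inf
    for idx, vec in enumerate(codebook):
        d = 0.0
        for a, b in zip(query, vec):
            d += (a - b) ** 2
            if d >= worst:                     # cannot beat the current k-th best
                break
        else:
            best.append((d, idx))
            best.sort()
            best = best[:k]
            if len(best) == k:
                worst = best[-1][0]
    return [idx for _, idx in best]
```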

  5. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly than type II error, (2) 22 ratios distinguish between bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year than prediction using the Hair et al. optimum cutting score, and (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved able to minimize the cost incurred from classification errors.
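
    The idea of a cost-weighted optimal cut-off can be sketched as follows, assuming a lower score indicates higher bankruptcy risk; the score array, labels and scanning procedure are illustrative, with only the 59.83:1 cost ratio taken from the abstract.

```python
# Choose the cut-off score that minimizes total expected misclassification
# cost when a type I error (missing a bankruptcy) is far more costly than a
# type II error (a false alarm). Data arrays are placeholders.
import numpy as np

def optimal_cutoff(scores, is_bankrupt, cost_I=59.83, cost_II=1.0):
    """scores: risk scores (lower = riskier); is_bankrupt: boolean array.
    Firms scoring below the cut-off are predicted bankrupt."""
    best_cut, best_cost = None, np.inf
    for cut in np.unique(scores):
        pred_bankrupt = scores < cut
        n_type_I = np.sum(is_bankrupt & ~pred_bankrupt)    # missed bankruptcies
        n_type_II = np.sum(~is_bankrupt & pred_bankrupt)   # healthy firms flagged
        cost = cost_I * n_type_I + cost_II * n_type_II
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost
```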

  6. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Directory of Open Access Journals (Sweden)

    Hongxia Li

    2013-08-01

    Full Text Available With the development of computer science and information technology, libraries are becoming increasingly digital and networked. The library digitization process converts books into digital information, whose high-quality preservation and management are achieved by computer technology as well as text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, realizing independent component analysis for complex network text classification. Through ICA clustering of independent components, characteristic words are clustered and extracted for text classification, and the visualization of text retrieval is improved. Finally, we make a comparative analysis of the collocation algorithm and the ICA clustering algorithm through text classification and keyword search experiments, reporting the clustering degree and accuracy of each algorithm. Simulation analysis shows that the ICA clustering algorithm improves the clustering degree of text classification by 1.2% and accuracy by up to 11.1%, improving the efficiency and accuracy of text classification retrieval. It also provides a theoretical reference for text retrieval classification of eBooks.

  7. Using quadtree segmentation to support error modelling in categorical raster data

    NARCIS (Netherlands)

    Bruin, de S.; Wit, de A.J.W.; Oort, van P.A.J.

    2004-01-01

    This paper explores the use of quadtree segmentation of a land-cover map to improve error modelling by (1) accounting for variation in classification accuracy among differently sized homogeneous map regions and (2) improving the statistical properties of map realizations generated by sequential indicator simulation.

  8. Multiclass Bayes error estimation by a feature space sampling technique

    Science.gov (United States)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
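
    The paper computes the minimum probability of error analytically over a transformed feature space; as a simpler point of reference, the same quantity for Gaussian classes can also be estimated by Monte Carlo sampling from the class statistics. A hedged sketch (not the paper's algorithm):

```python
# Monte Carlo estimate of the Bayes (minimum) error for M Gaussian classes:
# sample from each class, assign by maximum prior-weighted likelihood, and
# count misclassifications. This is a reference sketch, not the paper's method.
import numpy as np
from scipy.stats import multivariate_normal

def bayes_error_mc(means, covs, priors, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    M = len(means)
    errors = 0
    counts = rng.multinomial(n_samples, priors)
    for c in range(M):
        x = rng.multivariate_normal(means[c], covs[c], size=counts[c])
        # posterior-proportional score of every class for each sample
        scores = np.column_stack([
            priors[k] * multivariate_normal(means[k], covs[k]).pdf(x)
            for k in range(M)
        ])
        errors += np.sum(scores.argmax(axis=1) != c)
    return errors / n_samples
```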

  10. On quantifying post-classification subpixel landcover changes

    Science.gov (United States)

    Silván-Cárdenas, Jose L.; Wang, Le

    2014-12-01

    The post-classification change matrix is well defined for hard classifications. However, for soft classifications, where partial membership of pixels to landcover classes is allowed, there is no single definition for such a matrix. In this paper, we argue that a natural definition of the post-classification change matrix for subpixel classifications can be given in terms of a constrained optimization problem, according to which the change matrix should allow an optimal prediction of the subpixel landcover fractions at the latest date from those of the earliest date. We first show that the traditional change matrix for crisp classification corresponds to the optimal solution of the unconstrained problem. Then, the formulation is generalized for subpixel classifications by incorporating certain constraints pertaining to desirable properties of a change matrix, thus resulting in a constrained least squares (CLS) change matrix. In addition, based on intuitive criteria, a generalized product (GPROD) was parameterized in terms of an exponent parameter and used to derive another change matrix. It was shown that when the exponent parameter of the GPROD operator tends to infinity, one of the most commonly used methods for map comparison from subpixel fractions, namely the MINPROD composite operator, results. The three matrices (CLS, GPROD and MINPROD) were tested on both simulated and real subpixel changes derived from QuickBird and Landsat TM images. Results indicated that, for small exponent values (0-0.5), the GPROD matrix yielded the lowest errors of estimated landcover changes, whereas the MINPROD generally yielded the highest errors for the same estimations.
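
    One plausible reading of the constrained-least-squares definition, sketched for a single pixel: find a non-negative change matrix whose row sums reproduce the date-1 class fractions and whose column sums best predict the date-2 fractions. The exact constraint set used in the paper may differ; this is an assumption made for illustration.

```python
# Per-pixel constrained-least-squares change matrix sketch: M[i, j] is the
# fraction changing from class i (date 1) to class j (date 2), with row sums
# fixed to the date-1 fractions and column sums fitted to the date-2 fractions.
import numpy as np
from scipy.optimize import minimize

def cls_change_matrix(f1, f2):
    """f1, f2: landcover fraction vectors of length K that each sum to 1."""
    K = len(f1)
    def loss(m_flat):
        M = m_flat.reshape(K, K)
        return np.sum((M.sum(axis=0) - f2) ** 2)        # predict date-2 fractions
    cons = [{"type": "eq", "fun": lambda m, i=i: m.reshape(K, K)[i].sum() - f1[i]}
            for i in range(K)]
    res = minimize(loss, np.outer(f1, f2).ravel(), method="SLSQP",
                   bounds=[(0, 1)] * (K * K), constraints=cons)
    return res.x.reshape(K, K)
```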

  11. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2008-11-01

    Full Text Available Abstract: This paper presents a new vehicle classification and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is based on measuring the dynamic strain caused by vehicles across pavement to obtain the corresponding vehicle parameters – wheelbase and number of axles – to then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. According to the special arrangement of the sensors and the different times a vehicle arrives at the sensors, one can estimate the vehicle's speed accurately, and from it the vehicle wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is a lot of overlap between vehicle wheelbase patterns. Therefore, directly setting up a fixed threshold for vehicle classification often leads to low-accuracy results. Using a machine learning pattern recognition method is believed to be one of the most effective ways to deal with this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors to automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments are introduced in this paper. Two support vector machine classification algorithms (one-against-all, one-against-one) are used to classify single sensor data and multiple sensor combination data. Comparison of the two classification methods shows that the classification accuracy is very close using single-sensor or multiple-sensor data. Our results indicate that using multiclass SVM-based fusion of multiple sensor data significantly improves the classification accuracy.
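
    A short sketch of the SVM fusion step described above, assuming wheelbase and axle-count features have already been extracted from the strain signals (the data files are placeholders). scikit-learn's SVC implements the one-against-one scheme internally, and OneVsRestClassifier provides the one-against-all variant for comparison.

```python
# Compare one-against-one and one-against-all multiclass SVMs on vehicle
# features estimated from the strain sensors. Feature/label files are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("vehicle_features.npy")   # hypothetical: e.g. [wheelbases..., axle_count, speed]
y = np.load("vehicle_classes.npy")    # hypothetical: 5 classes, small cars to combination trucks

ovo = SVC(kernel="rbf", C=10.0)                        # one-against-one (SVC default)
ova = OneVsRestClassifier(SVC(kernel="rbf", C=10.0))   # one-against-all
for name, model in [("one-vs-one", ovo), ("one-vs-all", ova)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```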

  12. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    Science.gov (United States)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  13. POSITION ERROR IN STATION-KEEPING SATELLITE

    Science.gov (United States)

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  14. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  15. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  16. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  17. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.

  18. A Conceptual Framework to use Remediation of Errors Based on Multiple External Remediation Applied to Learning Objects

    Directory of Open Access Journals (Sweden)

    Maici Duarte Leite

    2014-09-01

    Full Text Available This paper presents the application of some concepts of Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses the remediation of errors with Multiple External Representations (MERs) in Learning Objects (LO). To this end, the development of an LO for teaching the Pythagorean Theorem through this framework is demonstrated. This study explored the error remediation process through a classification of mathematical errors, providing support for the use of MERs in the remediation of errors. The main objective of the proposed framework is to assist the individual learner in recovering from a mistake made during interaction with the LO, whether through carelessness or lack of knowledge. Initially, we present a compilation of the classification of mathematical errors and their relationship with MERs. Later, the concepts involved in the proposed conceptual framework are presented. Finally, an experiment with an LO developed with an authoring tool called FARMA, using the conceptual framework for teaching the Pythagorean Theorem, is presented.

  19. Updated Classification System for Proximal Humeral Fractures

    Science.gov (United States)

    Guix, José M. Mora; Pedrós, Juan Sala; Serrano, Alejandro Castaño

    2009-01-01

    Proximal humeral fractures can restrict daily activities and, therefore, deserve efficient diagnoses that minimize complications and sequelae. For good diagnosis and treatment, patient characteristics, variability in the forms of the fractures presented, and the technical difficulties in achieving fair results with surgical treatment should all be taken into account. Current classification systems for these fractures are based on anatomical and pathological principles, and not on systematic image reading. These fractures can appear in many different forms, with many characteristics that must be identified. However, many current classification systems lack good reliability, both inter-observer and intra-observer, for different image types. A new approach to image reading, following a well-designed set and sequence of variables to check, is needed. We previously reported such an image reading system. In the present study, we report a classification system based on this image reading system. Here we define 21 fracture characteristics and apply them along with classical Codman approaches to classify fractures. We base this novel system for classifying proximal humeral fractures on a review of scientific literature and improvements to our image reading protocol. Patient status, fracture characteristics and surgeon circumstances have been important issues in developing this system. PMID:19574487

  20. Classification of Sporting Activities Using Smartphone Accelerometers

    Directory of Open Access Journals (Sweden)

    Noel E. O'Connor

    2013-04-01

    Full Text Available In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT. Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today’s society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches; a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset, comprised of soccer and field-hockey activities. The average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
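
    A hedged sketch of the DWT feature extraction described above: each tri-axial accelerometer window is decomposed with a discrete wavelet transform and the per-level energies are used as features for a classifier. The mother wavelet, window handling and decomposition depth below are arbitrary examples of exactly the parameters the paper varies.

```python
# DWT-based feature extraction for accelerometer windows, followed by an SVM.
# Wavelet, level and data layout are illustrative choices, not the paper's.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(window, wavelet="db4", level=4):
    """window: 1-D accelerometer signal for one axis."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per sub-band

def featurize(windows):
    """windows: (n_windows, n_samples, 3) tri-axial accelerometer windows."""
    return np.array([np.concatenate([dwt_features(w[:, ax]) for ax in range(3)])
                     for w in windows])

# clf = SVC().fit(featurize(train_windows), train_labels)   # placeholder data
```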