WorldWideScience

Sample records for achieved classification error

  1. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  2. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
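    A common formalization of the MEE idea (a sketch in generic notation, not necessarily the book's exact formulation) is to minimize the Rényi quadratic entropy of the classification errors $e_i$, estimated with a Gaussian Parzen window of width $\sigma$:

    $$ \hat{H}_2(e) = -\log \hat{V}(e), \qquad \hat{V}(e) = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} G_{\sigma\sqrt{2}}(e_i - e_j), $$

    so minimizing the error entropy is equivalent to maximizing the information potential $\hat{V}(e)$.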

  3. Assessing the Statistical Significance of the Achieved Classification Error of Classifiers Constructed using Serum Peptide Profiles, and a Prescription for Random Sampling Repeated Studies for Massive High-Throughput Genomic and Proteomic Studies

    Directory of Open Access Journals (Sweden)

    William L Bigbee

    2005-01-01

    …source of patient-specific information with high potential impact on the early detection and classification of cancer and other diseases. The new profiling technology comes, however, with numerous challenges and concerns. Particularly important are concerns of reproducibility of classification results and their significance. In this work we describe a computational validation framework, called PACE (Permutation-Achieved Classification Error), that lets us assess, for a given classification model, the significance of the Achieved Classification Error (ACE) on the profile data. The framework compares the performance statistic of the classifier on true data samples and checks if these are consistent with the behavior of the classifier on the same data with randomly reassigned class labels. A statistically significant ACE increases our belief that a discriminative signal was found in the data. The advantage of PACE analysis is that it can be easily combined with any classification model and is relatively easy to interpret. PACE analysis does not protect researchers against confounding in the experimental design, or other sources of systematic or random error. We use PACE analysis to assess significance of classification results we have achieved on a number of published data sets. The results show that many of these datasets indeed possess a signal that leads to a statistically significant ACE.
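    As a rough illustration of the permutation logic behind PACE (a hedged Python sketch using scikit-learn; the classifier, cross-validation setup and p-value convention are assumptions, not the authors' code):

```python
# Minimal sketch of a PACE-style permutation test (illustrative, not the authors' code).
# It compares the cross-validated error on the true labels with the error distribution
# obtained after randomly permuting the class labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pace_p_value(X, y, n_permutations=1000, random_state=0):
    rng = np.random.default_rng(random_state)
    clf = SVC(kernel="linear")
    # Achieved Classification Error (ACE) on the true labels.
    ace = 1.0 - cross_val_score(clf, X, y, cv=5).mean()
    # Null distribution: errors achieved on randomly relabeled data.
    null_errors = []
    for _ in range(n_permutations):
        y_perm = rng.permutation(y)
        null_errors.append(1.0 - cross_val_score(clf, X, y_perm, cv=5).mean())
    # p-value: fraction of permuted runs that do at least as well as the true labels.
    p = (1 + sum(e <= ace for e in null_errors)) / (1 + n_permutations)
    return ace, p
```

    A small p-value then indicates that the achieved error is unlikely under randomly reassigned labels, i.e., that a discriminative signal is present.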

  4. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni

    2011-05-01

    Background: Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results: We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME to Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions: HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.

  5. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically guided state-space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, provides a prototype simulation tool for neurolinguistic research, and forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  6. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  7. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Directory of Open Access Journals (Sweden)

    Muhsin Hassan

    2013-08-01

    The aim of this work is to demonstrate the capability of the Kalman filter to reduce Support Vector Machine (SVM) classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive white Gaussian noise was added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM, and a hybrid of Kalman filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman filter and Support Vector Machine combination in a single system produces higher accuracy than the other two techniques.
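    A minimal sketch of the hybrid idea, assuming a simple scalar random-walk Kalman filter as the denoiser and a generic scikit-learn SVM; the noise parameters and kernel are illustrative choices, not the paper's settings:

```python
# Hedged sketch of the Kalman-filter-plus-SVM idea: denoise each noisy signal with a
# simple scalar Kalman filter, then classify with an SVM.
import numpy as np
from sklearn.svm import SVC

def kalman_denoise(z, process_var=1e-4, meas_var=1e-1):
    """1-D random-walk Kalman filter applied sample by sample."""
    x, p = z[0], 1.0                     # state estimate and its variance
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p = p + process_var              # predict
        K = p / (p + meas_var)           # Kalman gain
        x = x + K * (zk - x)             # update with the noisy measurement
        p = (1.0 - K) * p
        out[k] = x
    return out

def train_kf_svm(X_noisy, y):
    # Denoise every sample (row) before feeding the SVM.
    X_denoised = np.vstack([kalman_denoise(row) for row in X_noisy])
    return SVC(kernel="rbf").fit(X_denoised, y)
```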

  8. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting the image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.

  9. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  10. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Directory of Open Access Journals (Sweden)

    David Ayllón

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible.

  11. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Codes (ECOC) have emerged as one of the most promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, this method decomposes the classes into subsets by embedding a binary tree structure. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method can achieve better classification performance than common ECOC design methods.
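    For readers unfamiliar with ECOC, a hedged sketch of the general framework follows; scikit-learn's random coding design is used as a stand-in and is not the tree-based coding method proposed in the paper:

```python
# Hedged illustration of the ECOC framework itself: each class receives a binary
# code word, and one binary classifier is trained per code bit.
from sklearn.datasets import load_iris
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
# code_size controls the code length relative to the number of classes.
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0)
print(cross_val_score(ecoc, X, y, cv=5).mean())
```

    The design question studied in the paper is precisely how to choose those code words (here random) so that generalization error across the induced binary problems is minimized.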

  12. Measuring the achievable error of query sets under differential privacy

    CERN Document Server

    Li, Chao

    2012-01-01

    A common goal of privacy research is to release synthetic data that satisfies a formal privacy guarantee and can be used by an analyst in place of the original data. To achieve reasonable accuracy, a synthetic data set must be tuned to support a specified set of queries accurately, sacrificing fidelity for other queries. This work considers methods for producing synthetic data under differential privacy and investigates what makes a set of queries "easy" or "hard" to answer. We consider answering sets of linear counting queries using the matrix mechanism, a recent differentially-private mechanism that can reduce error by adding complex correlated noise adapted to a specified workload. Our main result is a novel lower bound on the minimum total error required to simultaneously release answers to a set of workload queries. The bound reveals that the hardness of a query workload is related to the spectral properties of the workload when it is represented in matrix form. The bound is tight and, because it satisfi...

  13. CLASSIFICATION OF CRYOSOLS: SIGNIFICANCE, ACHIEVEMENTS AND CHALLENGES

    Institute of Scientific and Technical Information of China (English)

    CHEN Jie; GONG Zi-tong; CHEN Zhi-cheng; TAN Man-zhi

    2003-01-01

    International concern about the effects of global change on permafrost-affected soils and the responses of permafrost terrestrial landscapes to such change has been increasing in the last two decades. To achieve a variety of goals, including determining soil carbon stocks and dynamics in the Northern Hemisphere, understanding soil degradation, and finding the best ways to protect the fragile ecosystems of permafrost environments, further development of Cryosol classification is in great demand. In this paper the existing Cryosol classifications contained in three representative soil taxonomies are introduced, and the problems in the practical application of the defining criteria used for category differentiation in these taxonomic systems are discussed. Meanwhile, the resumption and reconstruction of Chinese Cryosol classification within a taxonomic framework is proposed. The advantages that Chinese pedologists have in dealing with Cryosol classification, and the challenges that they face, are analyzed. Finally, several suggestions on the further development of a taxonomic framework for Cryosol classification are put forward.

  14. The Effect of Error Correction vs. Error Detection on Iranian Pre-Intermediate EFL Learners' Writing Achievement

    Science.gov (United States)

    Abedi, Razie; Latifi, Mehdi; Moinzadeh, Ahmad

    2010-01-01

    This study tries to answer some long-standing questions in the field of writing instruction regarding the most effective ways to give feedback on students' errors in writing, by comparing the effect of error correction and error detection on the improvement of students' writing ability. In order to achieve this goal, 60 pre-intermediate English learners…

  15. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable the factors to be targeted to improve the level of patient safety management.

  16. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Science.gov (United States)

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  17. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishap incidents are attributed to human error. As part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  18. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    Science.gov (United States)

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  19. Insight into error hiding: exploration of nursing students' achievement goal orientations.

    Science.gov (United States)

    Dunn, Karee E

    2014-02-01

    An estimated 50% of medication errors go unreported, and error hiding is costly to hospitals and patients. This study explored one issue that may facilitate error hiding. Descriptive statistics were used to examine nursing students' achievement goal orientations in a high-fidelity simulation course. Results indicated that although this sample of nursing students held high mastery goal orientations, they also held moderate levels of performance-approach and performance-avoidance goal orientations. These goal orientations indicate that this sample is at high risk for error hiding, which places the benefits that are typically gleaned from a strong mastery orientation at risk. Understanding variables, such as goal orientation, that can be addressed in nursing education to reduce error hiding is an area of research that needs to be further explored. This article discusses the study results and evidence-based instructional practices for this sample's achievement goal orientation profile.

  20. Early math and reading achievement are associated with the error positivity

    Directory of Open Access Journals (Sweden)

    Matthew H. Kim

    2016-12-01

    Executive functioning (EF) and motivation are associated with academic achievement and error-related ERPs. The present study explores whether early academic skills predict variability in the error-related negativity (ERN) and error positivity (Pe). Data from 113 three- to seven-year-old children in a Go/No-Go task revealed that stronger early reading and math skills predicted a larger Pe. Closer examination revealed that this relation was quadratic and significant for children performing at or near grade level, but not significant for above-average achievers. Early academics did not predict the ERN. These findings suggest that the Pe – which reflects individual differences in motivational processes as well as attention – may be associated with early academic achievement.

  1. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  2. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    In real-world applications, the images of faces vary with illumination, facial expression, and pose. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
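    A hedged sketch of the mirror-face augmentation combined with a minimum-squared-error classifier; the regularization constant and the least-squares formulation are illustrative assumptions, not taken from the paper:

```python
# Sketch: horizontally flipped training images are added as virtual samples, and a
# linear map to one-hot class targets is fit by regularized least squares.
import numpy as np

def train_msec_with_mirrors(images, labels, n_classes, lam=1e-2):
    # images: (n, h, w) array; mirror each image left-right to create virtual samples.
    mirrored = images[:, :, ::-1]
    X = np.vstack([images.reshape(len(images), -1),
                   mirrored.reshape(len(images), -1)]).astype(float)
    y = np.concatenate([labels, labels])
    T = np.eye(n_classes)[y]                      # one-hot class targets
    # Regularized least squares: W = (X^T X + lam I)^-1 X^T T
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
    return W

def predict_msec(W, images):
    X = images.reshape(len(images), -1).astype(float)
    return np.argmax(X @ W, axis=1)               # class with the largest response
```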

  3. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.

  4. An Optimization Approach of Deriving Bounds between Entropy and Error from Joint Distribution: Case Study for Binary Classifications

    Directory of Open Access Journals (Sweden)

    Bao-Gang Hu

    2016-02-01

    In this work, we propose a new approach of deriving the bounds between entropy and error from a joint distribution through an optimization means. The specific case study is given on binary classifications. Two basic types of classification errors are investigated, namely, the Bayesian and non-Bayesian errors. The consideration of non-Bayesian errors is due to the fact that most classifiers result in non-Bayesian solutions. For both types of errors, we derive the closed-form relations between each bound and the error components. When Fano's lower bound in a diagram of “Error Probability vs. Conditional Entropy” is realized based on the approach, its interpretations are enlarged by including non-Bayesian errors and the two situations along with independence properties of the variables. A new upper bound for the Bayesian error is derived with respect to the minimum prior probability, which is generally tighter than Kovalevskij's upper bound.
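    For reference, the Fano bound referred to above can be written in its standard textbook form (this is the classical inequality, not the paper's new bound):

    $$ H(Y \mid \hat{Y}) \;\le\; h(P_e) + P_e \log\big(|\mathcal{Y}| - 1\big), \qquad h(p) = -p\log p - (1-p)\log(1-p), $$

    so for binary classification ($|\mathcal{Y}| = 2$) the second term vanishes and $P_e \ge h^{-1}\big(H(Y \mid \hat{Y})\big)$, which is the lower-bound curve in the “Error Probability vs. Conditional Entropy” diagram.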

  5. Decision support system for determining the contact lens for refractive errors patients with classification ID3

    Science.gov (United States)

    Situmorang, B. H.; Setiawan, M. P.; Tosida, E. T.

    2017-01-01

    Refractive errors are abnormalities in the refraction of light such that images do not focus precisely on the retina, resulting in blurred vision [1]. Refractive errors require the patient to wear glasses or contact lenses so that eyesight returns to normal. The appropriate glasses or contact lenses differ from person to person, influenced by patient age, the amount of tear production, the vision prescription, and astigmatism. Because the eye is a vital organ of the human body for seeing, accuracy in determining which glasses or contact lenses will be used is required. This research aims to develop a decision support system that can produce the right contact lens recommendation for refractive error patients with 100% accuracy. The Iterative Dichotomiser 3 (ID3) classification method generates gain and entropy values of attributes that include the sample data code, the age of the patient, astigmatism, the rate of tear production, the vision prescription, and the classes that will affect the outcome of the decision tree. The eye specialist test on the training data gave an accuracy rate of 96.7% and an error rate of 3.3%; the test using a confusion matrix gave an accuracy rate of 96.1% and an error rate of 3.1%; for the testing data, the accuracy rate was 100% and the error rate was 0%.
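    A small sketch of the ID3 attribute-selection step described above, computing entropy and information gain on a toy, contact-lens-style dataset; the attribute values and labels are made up for illustration, not the study's data:

```python
# ID3 picks, at each node, the attribute with the largest information gain.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    base = entropy(labels)
    remainder = 0.0
    for value in set(r[attr_index] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr_index] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return base - remainder

# Toy rows: (age, prescription, tear production) -> recommended lens (hypothetical).
rows = [("young", "myope", "normal"), ("young", "hyperope", "reduced"),
        ("old", "myope", "normal"), ("old", "hyperope", "normal")]
labels = ["soft", "none", "hard", "none"]
best_attribute = max(range(3), key=lambda i: information_gain(rows, labels, i))
```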

  6. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images

    Directory of Open Access Journals (Sweden)

    Henry Joutsijoki

    2016-01-01

    The purpose of this paper is to examine how well human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good), which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variant classifiers. We use Scale-Invariant Feature Transform (SIFT) based features in classification. The best accuracy (62.4%) is obtained with the ternary complete ECOC coding design and the k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. The ECOC methods examined are promising techniques for solving this challenging problem.

  7. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Energy Technology Data Exchange (ETDEWEB)

    Korn, E L

    1978-08-01

    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.

  8. Reversible watermarking based on invariant image classification and dynamical error histogram shifting.

    Science.gov (United States)

    Pan, W; Coatrieux, G; Cuppens, N; Cuppens, F; Roux, Ch

    2011-01-01

    In this article, we present a novel reversible watermarking scheme. Its originality lies in identifying parts of the image that can be watermarked additively with the most adapted lossless modulation: Pixel Histogram Shifting (PHS) or Dynamical Error Histogram Shifting (DEHS). This classification process makes use of a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark addition. In that way, the watermark embedder and reader remain synchronized through this reference image. DEHS is also an original contribution of this work. It shifts prediction errors between the image and its reference image, taking care of the local specificities of the image, thus dynamically. Conducted experiments, on different medical image test sets issued from different modalities and some natural images, show that our method can insert more data with lower distortion than the most recent and efficient methods of the literature.
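    A toy sketch of generic prediction-error histogram shifting on an integer error sequence (the general DEHS idea only; it omits the classification step and the image-specific details of the scheme above):

```python
# Errors equal to the histogram peak carry one payload bit each; errors above the
# peak are shifted right by one to free the bin at peak + 1. Extraction inverts both.
import numpy as np

def embed(errors, bits):
    # errors: integer prediction errors; bits: iterable of 0/1 payload bits.
    e = errors.copy()
    peak = np.bincount(e - e.min()).argmax() + e.min()   # most frequent error value
    bits = list(bits)
    out = np.empty_like(e)
    for i, v in enumerate(e):
        if v > peak:
            out[i] = v + 1                 # shift to keep the bin peak + 1 free
        elif v == peak and bits:
            out[i] = v + bits.pop(0)       # 0 stays at peak, 1 moves to peak + 1
        else:
            out[i] = v
    return out, peak

def extract(marked, peak, n_bits):
    bits, restored = [], marked.copy()
    for i, v in enumerate(marked):
        if v in (peak, peak + 1) and len(bits) < n_bits:
            bits.append(v - peak)          # recover the embedded bit
            restored[i] = peak             # restore the original error value
        elif v > peak + 1:
            restored[i] = v - 1            # undo the shift
    return bits, restored
```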

  9. Software platform for managing the classification of error- related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.

  10. Factors that affect large subunit ribosomal DNA amplicon sequencing studies of fungal communities: classification method, primer choice, and error.

    Directory of Open Access Journals (Sweden)

    Teresita M Porter

    Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: (1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); (2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and (3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross-validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50-100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys.

  11. Systematic classification of unseeded batch crystallization systems for achievable shape and size analysis

    Science.gov (United States)

    Acevedo, David; Nagy, Zoltan K.

    2014-05-01

    The purpose of the current work is to develop a systematic classification scheme for crystallization systems considering simultaneous size and shape variations, and to study the effect of temperature profiles on the achievable final shape of crystals for various crystallization systems. A classification method is proposed based on the simultaneous consideration of the effect of temperature profiles on the nucleation and growth rates of two different characteristic crystal dimensions. Hence the approach provides a direct indication of the extent to which crystal shape may be controlled for a particular system class by manipulating the supersaturation. A multidimensional population balance model (PBM) was implemented for unseeded crystallization processes of four different compounds. The interplay between the nucleation and growth mechanisms and its effect on the final aspect ratio (AR) was investigated, and it was shown that for nucleation-dominated systems the AR is independent of the supersaturation profile. The simulation results, also confirmed experimentally, show that most crystallization systems tend to achieve an equilibrium shape; hence the variation in the aspect ratio that can be achieved by manipulating the supersaturation is limited, in particular when nucleation is also taken into account as a competing phenomenon.

  12. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    Science.gov (United States)

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  13. Human Error Classification for the Permit to Work System by SHERPA in a Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    Arash Ghasemi

    2015-12-01

    Background & objective: Occupational accidents may occur in any type of activity. Daily activities such as repair and maintenance are among the work phases with the highest risk. Despite the issuance of work permits or work license systems for controlling the risks of non-routine activities, the high rate of accidents during such activities indicates the inadequacy of these systems. A major portion of this inadequacy is attributed to human errors. It is therefore necessary to identify and control the probable human errors during the issuing of permits. Methods: In the present study, the probable errors for four categories of work permits were identified using the SHERPA method. Then, an expert team analyzed 25500 permits issued during a period of approximately one year. The most frequent human errors and their types were determined. Results: The "Excavation" and "Entry to confined space" permits had the most errors. Approximately 28.5 percent of all errors were related to excavation permits. Implementation errors were the most frequent error type in the taxonomy; for every category of permit, about 40% of all errors were attributed to implementation errors. Conclusion: The results may indicate weak points in the practical training of the permit-to-work system. Human error identification methods can be used to predict and decrease human errors.

  14. Classification of English language learner writing errors using a parallel corpus with SVM

    OpenAIRE

    Flanagan, Brendan; Yin, Chengjiu; Suzuki, Takahiko; Hirokawa, Sachio

    2014-01-01

    In order to overcome mistakes, learners need feedback to prompt reflection on their errors. This is a particularly important issue in education systems as the system effectiveness in finding errors or mistakes could have an impact on learning. Finding errors is essential to providing appropriate guidance in order for learners to overcome their flaws. Traditionally the task of finding errors in writing takes time and effort. The authors of this paper have a long-term research goal of creating ...

  15. The Concurrent Validity of the Diagnostic Analysis of Reading Errors as a Predictor of the English Achievement of Lebanese Students.

    Science.gov (United States)

    Saigh, Philip A.; Khairallah, Shereen

    1983-01-01

    The concurrent validity of the Diagnostic Analysis of Reading Errors (DARE) subtests was studied, based on the responses of Lebanese secondary and postsecondary students relative to their achievement in an English course or on a standardized test of English proficiency. The results indicate that the DARE is not a viable predictor of English…

  16. Stochastic analysis of multiple-passband spectral classifications systems affected by observation errors

    Science.gov (United States)

    Tsokos, C. P.

    1980-01-01

    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  17. Medication errors in outpatient setting of a tertiary care hospital: classification and root cause analysis

    Directory of Open Access Journals (Sweden)

    Sunil Basukala

    2015-12-01

    Conclusions: Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Hence, a focus on easy-to-use and inexpensive techniques for medication error reduction should be used to have the greatest impact. [Int J Basic Clin Pharmacol 2015; 4(6): 1235-1240]

  18. Error-based Hybrid Classification Algorithm

    Institute of Scientific and Technical Information of China (English)

    丛雪燕

    2014-01-01

    A new error-based hybrid classification approach is presented for classifying data sets with binary target variables, with the aim of increasing classification accuracy. Real data sets are used to test the proposed approach and to compare it with other hybrid algorithms and with existing single classification methods. The results show that the method greatly improves performance; in particular, when the rate of disagreement between the two component predictors is high, the hybrid approach significantly improves prediction accuracy.

  19. An Integrated Method of Multiradar Quantitative Precipitation Estimation Based on Cloud Classification and Dynamic Error Analysis

    Directory of Open Access Journals (Sweden)

    Yong Huang

    2017-01-01

    Relationships between the radar reflectivity factor and rainfall are different in various precipitation cloud systems. In this study, the cloud systems are first classified into five categories with radar and satellite data to improve the radar quantitative precipitation estimation (QPE) algorithm. Second, the errors of multiradar QPE algorithms are assumed to be different in convective and stratiform clouds. The QPE data are then derived with the Z-R, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC) methods, based on error analysis over the Huaihe River Basin. In the case of the flood in early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of estimating precipitation for multiple precipitation types.

  20. Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands

    CERN Document Server

    Dick, Josef

    2010-01-01

    We study numerical approximations of integrals $\int_{[0,1]^s} f(\mathbf{x}) \,\mathrm{d}\mathbf{x}$ by averaging the function at some sampling points. Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples). Quasi-Monte Carlo (QMC) sampling on the other hand achieves a convergence of order $N^{-1+\varepsilon}$, for any $\varepsilon > 0$. Randomized QMC (RQMC), a combination of MC and QMC, achieves a RMSE of order $N^{-3/2+\varepsilon}$. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\varepsilon}$ (where $s \ge 1$ is the dimension). QMC, RQMC and RQMC with local antithetic sampling require that the integrand has some smoothness (for instance, bounded variation). Stronger smoothness assumptions on the integrand do not improve the convergence of the above algorithms further. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the RMS...
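    A quick numerical illustration of the quoted rates, assuming plain Monte Carlo versus a scrambled Sobol rule (a standard RQMC construction, not the higher-order scrambled nets introduced in the paper):

```python
# Hedged sketch: RMSE of plain MC vs. scrambled Sobol (RQMC) for a smooth integrand
# on [0,1]^2 whose exact integral is 1.
import numpy as np
from scipy.stats import qmc

def integrand(x):
    # Product of (1 + 0.5*(x_j - 0.5)); each factor integrates to 1 over [0,1].
    return np.prod(1 + 0.5 * (x - 0.5), axis=1)

def rmse(estimates, truth=1.0):
    return np.sqrt(np.mean((np.asarray(estimates) - truth) ** 2))

s, m, reps = 2, 10, 50                  # dimension, 2^m points, replications
N = 2 ** m
mc = [integrand(np.random.default_rng(r).random((N, s))).mean() for r in range(reps)]
rqmc = [integrand(qmc.Sobol(d=s, scramble=True, seed=r).random_base2(m)).mean()
        for r in range(reps)]
print(f"MC RMSE   ~ {rmse(mc):.2e}")    # expected to decay like N^(-1/2)
print(f"RQMC RMSE ~ {rmse(rqmc):.2e}")  # expected to decay faster for smooth f
```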

  1. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification

    Science.gov (United States)

    Spinnato, J.; Roubaud, M.-C.; Burle, B.; Torrésani, B.

    2015-06-01

    Objective. The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals that account for inter-trial variability, suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small size and unbalanced datasets, as often encountered in BCI-type experiments. Approach. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Main results. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. Significance. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.

  2. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  3. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
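    A hedged sketch of harmonic-product-spectrum pitch extraction, the pitch estimator named in the abstract; the windowing and number of harmonics are illustrative choices, not the paper's settings:

```python
# Harmonic product spectrum (HPS): multiply the magnitude spectrum by decimated
# copies of itself so that the fundamental frequency bin is reinforced.
import numpy as np

def hps_pitch(frame, sample_rate, n_harmonics=4):
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]                  # spectrum compressed by factor h
        hps[:len(decimated)] *= decimated
    peak_bin = np.argmax(hps[1:]) + 1              # skip the DC bin
    return peak_bin * sample_rate / len(frame)     # bin index -> frequency in Hz
```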

  4. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Science.gov (United States)

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  5. Do the Kinds of Achievement Errors Made by Students Diagnosed with ADHD Vary as a Function of Their Reading Ability?

    Science.gov (United States)

    Pagirsky, Matthew S.; Koriakin, Taylor A.; Avitia, Maria; Costa, Michael; Marchis, Lavinia; Maykel, Cheryl; Sassu, Kari; Bray, Melissa A.; Pan, Xingyu

    2017-01-01

    A large body of research has documented the relationship between attention-deficit hyperactivity disorder (ADHD) and reading difficulties in children; however, there have been no studies to date that have examined errors made by students with ADHD and reading difficulties. The present study sought to determine whether the kinds of achievement…

  6. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
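    Schematically, the regularized learning problem described above can be written as follows (the notation and weighting constants are assumptions, not the paper's exact formulation):

    $$ \min_{\mathbf{w}} \; \sum_{i=1}^{n} \ell\big(y_i, f_{\mathbf{w}}(\mathbf{x}_i)\big) \;+\; \lambda_1 \|\mathbf{w}\|_2^2 \;-\; \lambda_2\, I\big(f_{\mathbf{w}}(\mathbf{x});\, y\big), $$

    where the first term is the classification error, the second controls classifier complexity, and the mutual information term $I(\cdot\,;\cdot)$ is estimated via entropy estimation and the whole objective is minimized by gradient descent.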

  7. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi `SWARS'

    Science.gov (United States)

    Kumar, Somesh; Pratap Singh, Manu; Goel, Rajkumar; Lavania, Rajesh

    2013-12-01

    In this work, the performance of a feedforward neural network trained with gradient descent of distributed error and the genetic algorithm (GA) is evaluated for the recognition of handwritten 'SWARS' of Hindi curve script. The performance index for the feedforward multilayer neural network is considered here with distributed instantaneous unknown error, i.e. a different error for each layer. The objective of the GA is to make the search process more efficient in determining the optimal weight vectors from the population. The GA is applied with the distributed error. The fitness function of the GA is taken as the mean square of the distributed error, which is different for each layer. Hence convergence is obtained only when the minimum of the different errors is determined. It has been found that the proposed method of gradient descent of distributed error with the GA, known as the hybrid distributed evolutionary technique for multilayer feedforward networks, performs better in terms of accuracy, epochs and the number of optimal solutions for the given training and test pattern sets of the pattern recognition problem.

  8. Supervised, Multivariate, Whole-brain Reduction Did Not Help to Achieve High Classification Performance in Schizophrenia Research

    Directory of Open Access Journals (Sweden)

    Eva Janousova

    2016-08-01

    We examined how penalized linear discriminant analysis with resampling, which is a supervised, multivariate, whole-brain reduction technique, can help schizophrenia diagnostics and research. In an experiment with magnetic resonance brain images of 52 first-episode schizophrenia patients and 52 healthy controls, this method allowed us to select brain areas relevant to schizophrenia, such as the left prefrontal cortex, the anterior cingulum, the right anterior insula, the thalamus and the hippocampus. Nevertheless, the classification performance based on such reduced data was not significantly better than the classification of data reduced by mass univariate selection using a t-test or unsupervised multivariate reduction using principal component analysis. Moreover, we found no important influence of the type of imaging features, namely local deformations or grey matter volumes, or of the classification method, specifically linear discriminant analysis or linear support vector machines, on the classification results. However, we ascertained a significant effect of the cross-validation setting on classification performance, as classification results were overestimated even though the resampling was performed during the selection of brain imaging features. Therefore, it is critically important to perform cross-validation in all steps of the analysis (not only during classification) in case there is no external validation set, to avoid optimistically biasing the results of classification studies.
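    A short sketch of the cross-validation point made above: putting feature selection inside the pipeline ensures it is re-fit within every fold rather than once on the full data. The components here are generic scikit-learn stand-ins, not the study's penalized LDA:

```python
# Nested-style validation: feature selection and classification are wrapped in one
# pipeline, so cross_val_score re-runs both steps inside every fold.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=104, n_features=2000, n_informative=20,
                           random_state=0)          # synthetic stand-in data
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=100)),       # univariate selection per fold
    ("clf", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(pipe, X, y, cv=10)          # unbiased accuracy estimate
print(scores.mean())
```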

  9. A multitemporal probabilistic error correction approach to SVM classification of alpine glacier exploiting sentinel-1 images (Conference Presentation)

    Science.gov (United States)

    Callegari, Mattia; Marin, Carlo; Notarnicola, Claudia; Carturan, Luca; Covi, Federico; Galos, Stephan; Seppi, Roberto

    2016-10-01

    In mountain regions and their forelands, glaciers are a key source of melt water during the middle and late ablation season, when most of the winter snow has already melted. Furthermore, alpine glaciers are recognized as sensitive indicators of climatic fluctuations. Monitoring glacier extent changes and glacier surface characteristics (i.e. snow, firn and bare ice coverage) is therefore important for both hydrological applications and climate change studies. Satellite remote sensing data have been widely employed for glacier surface classification. Many approaches exploit optical data, such as from Landsat. Despite the intuitive visual interpretation of optical images and the demonstrated capability to discriminate glacial surfaces thanks to the combination of different bands, one of the main disadvantages of available high-resolution optical sensors is their dependence on cloud conditions and low revisit frequency. Therefore, operational monitoring strategies relying only on optical data have serious limitations. Since SAR data are insensitive to clouds, they are potentially a valid alternative to optical data for glacier monitoring. Compared to past SAR missions, the new Sentinel-1 mission provides much higher revisit frequency (two acquisitions every 12 days) over the entire European Alps, and this number will be doubled once Sentinel-1B is in orbit (April 2016). In this work we present a method for glacier surface classification exploiting dual-polarimetric Sentinel-1 data. The method consists of a supervised approach based on a Support Vector Machine (SVM). In addition to the VV and VH signals, we tested the contribution of the local incidence angle, extracted from a digital elevation model and orbital information, as an auxiliary input feature in order to account for topographic effects. By exploiting impossible temporal transitions between different classes (e.g. if at a given date one pixel is classified as rock it cannot be classified as

  10. A new design criterion and construction method for space-time trellis codes based on classification of error events

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The known design criteria for Space-Time Trellis Codes (STTC) on slow Rayleigh fading channels are the rank, determinant and trace criteria. These criteria are unsatisfactory both in ease of use and in the performance they deliver. By classifying the error events of STTC, a new criterion is presented for slow Rayleigh fading channels. Based on this criterion, an effective and straightforward multi-step method is proposed to construct codes with better performance; the method reduces the computational cost of the code search to a manageable level. Simulation results show that the computer-searched codes have the same or even better performance than previously reported codes.

  11. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above-ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  12. Discriminative Structured Dictionary Learning for Image Classification

    Institute of Scientific and Technical Information of China (English)

    王萍; 兰俊花; 臧玉卫; 宋占杰

    2016-01-01

    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.
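
    One common way to write such a joint objective, sketched here in a generic label-consistent form that may differ in its details from the authors' exact formulation, is:

        \min_{D, W, \{x_i\}} \; \sum_i \Big( \|y_i - D x_i\|_2^2
            + \alpha \, \|h_i - W x_i\|_2^2
            + \beta \, \|q_i - x_i\|_2^2 \Big)
            + \lambda \sum_i \|x_i\|_1

    where y_i is a training sample with sparse code x_i over the shared structured dictionary D, h_i is its class-label vector so the second term is the classification error of the linear classifier W, and q_i is an "ideal" discriminative code that pushes same-class codes to be similar and different-class codes to be dissimilar (the inhomogeneous representation error term).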

  13. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...

  14. Rhythm Analysis by Heartbeat Classification in the Electrocardiogram (Review article of the research achievements of the members of the Centre of Biomedical Engineering, Bulgarian Academy of Sciences)

    Directory of Open Access Journals (Sweden)

    Irena Jekova

    2009-08-01

    Full Text Available The morphological and rhythm analysis of the electrocardiogram (ECG) is based on ventricular beat detection, measurement of wave parameters such as amplitudes, widths, polarities, intervals and relations between them, and a subsequent classification supporting the diagnostic process. A number of algorithms for detection and classification of the QRS complexes have been developed by researchers in the Centre of Biomedical Engineering - Bulgarian Academy of Sciences, and are reviewed in this material. Combined criteria have been introduced dealing with the QRS areas and amplitudes, the waveshapes evaluated by steep slopes and sharp peaks, vectorcardiographic (VCG) loop descriptors, and RR interval irregularities. Algorithms have been designed for application on a single ECG lead, a synthesized lead derived from multichannel synchronous recordings, or simultaneous multilead analysis. Some approaches are based on template matching and cross-correlation, or rely on a continuous updating of adaptive thresholds. Various beat classification methods have been designed involving discriminant analysis, K-nearest neighbours, fuzzy sets, genetic algorithms, neural networks, etc. The efficiency of the developed methods has been assessed using internationally recognized arrhythmia ECG databases with annotated beats and rhythm disturbances. In general, values of specificity and sensitivity competitive with those reported in the literature have been achieved.

  15. Fabrication of a magnetic-tunnel-junction-based nonvolatile logic-in-memory LSI with content-aware write error masking scheme achieving 92% storage capacity and 79% power reduction

    Science.gov (United States)

    Natsui, Masanori; Tamakoshi, Akira; Endoh, Tetsuo; Ohno, Hideo; Hanyu, Takahiro

    2017-04-01

    A magnetic-tunnel-junction (MTJ)-based video coding hardware with an MTJ-write-error-rate relaxation scheme as well as a nonvolatile storage capacity reduction technique is designed and fabricated in a 90 nm MOS and 75 nm perpendicular MTJ process. The proposed MTJ-oriented dynamic error masking scheme suppresses the effect of write operation errors on the operation result of the LSI, which increases the acceptable MTJ write error rate by up to 7.8 times with less than 6% area overhead, while achieving a 79% power reduction compared with that of the static-random-access-memory-based design.

  16. Random forest for gene selection and microarray data classification.

    Science.gov (United States)

    Moorthy, Kohbalan; Mohamad, Mohd Saberi

    2011-01-01

    A random forest method has been selected to perform both gene selection and classification of the microarray data. In this embedded method, the selection of the smallest possible set of genes with the lowest error rate is the key factor in achieving the highest classification accuracy. Hence, an improved gene selection method using random forest has been proposed to obtain the smallest subset of genes as well as the biggest subset of genes prior to classification. The option of selecting the biggest subset is provided to assist researchers who intend to use the informative genes for further research. The enhanced random forest gene selection performed better in terms of selecting the smallest as well as the biggest subset of informative genes with the lowest out-of-bag error rates. Furthermore, the classification performed on the selected subset of genes using random forest has led to lower prediction error rates compared to the existing method and other similar available methods.
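
    A minimal sketch of the general idea, iteratively shrinking the gene set and keeping the subset with the lowest out-of-bag (OOB) error, is shown below; the halving schedule, forest size and synthetic data are assumptions for illustration, not the authors' exact procedure.

        # Sketch: random-forest-based gene selection keeping the subset with the
        # lowest out-of-bag error. Data and the halving schedule are placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 2000))        # 60 samples x 2000 genes (synthetic)
        y = rng.integers(0, 2, size=60)

        genes = np.arange(X.shape[1])
        best = (np.inf, genes)
        while len(genes) > 10:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
            rf.fit(X[:, genes], y)
            oob_error = 1.0 - rf.oob_score_
            if oob_error <= best[0]:
                best = (oob_error, genes)
            # drop the least important half of the remaining genes and refit
            order = np.argsort(rf.feature_importances_)[::-1]
            genes = genes[order[: len(genes) // 2]]
        print("best OOB error %.3f with %d genes" % (best[0], len(best[1])))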

  17. Achievements in mental health outcome measurement in Australia: Reflections on progress made by the Australian Mental Health Outcomes and Classification Network (AMHOCN)

    Directory of Open Access Journals (Sweden)

    Burgess Philip

    2012-05-01

    Full Text Available Abstract Background Australia’s National Mental Health Strategy has emphasised the quality, effectiveness and efficiency of services, and has promoted the collection of outcomes and casemix data as a means of monitoring these. All public sector mental health services across Australia now routinely report outcomes and casemix data. Since late-2003, the Australian Mental Health Outcomes and Classification Network (AMHOCN) has received, processed, analysed and reported on outcome data at a national level, and played a training and service development role. This paper documents the history of AMHOCN’s activities and achievements, with a view to providing lessons for others embarking on similar exercises. Method We conducted a desktop review of relevant documents to summarise the history of AMHOCN. Results AMHOCN has operated within a framework that has provided an overarching structure to guide its activities but has been flexible enough to allow it to respond to changing priorities. With no precedents to draw upon, it has undertaken activities in an iterative fashion with an element of ‘trial and error’. It has taken a multi-pronged approach to ensuring that data are of high quality: developing innovative technical solutions; fostering ‘information literacy’; maximising the clinical utility of data at a local level; and producing reports that are meaningful to a range of audiences. Conclusion AMHOCN’s efforts have contributed to routine outcome measurement gaining a firm foothold in Australia’s public sector mental health services.

  18. Achieving the "triple aim" for inborn errors of metabolism: a review of challenges to outcomes research and presentation of a new practice-based evidence framework.

    Science.gov (United States)

    Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania

    2013-06-01

    Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases.

  19. Research on Controller Errors Classification and Analysis Model Based on Information Processing Theory%基于信息加工的管制人误分类分析模型研究

    Institute of Scientific and Technical Information of China (English)

    罗晓利; 秦凤姣; 孟斌; 李海龙

    2015-01-01

    On the basis of comparing and analyzing existing human error identification models, and in light of the controller's task characteristics, the paper constructs an air traffic controller (ATCo) error classification and analysis system model that integrates the advantages of different human error analysis models and is grounded in cognitive psychology. The model takes into account controller-related conditions during task execution, and controller errors can be analyzed in terms of "causes and human error type identification", "information processing", and "internal and psychological mechanisms". Finally, an unsafe ATC incident is investigated using the model. The results show that the model can be used to recognize, analyze and prevent controller errors.

  20. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Institute of Scientific and Technical Information of China (English)

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi

    2014-01-01

    To address the low classification accuracy and poor utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features, and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
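
    The decision rule in the last step, assigning a pixel to the class whose training atoms reconstruct it with the smallest error, can be sketched as follows; the sparse solver and dictionary layout are generic assumptions, not the exact GNWSF–SRC implementation.

        # Sketch of sparse-representation classification (SRC): code a test sample
        # over the training dictionary, then pick the class whose atoms give the
        # smallest reconstruction error. Feature extraction is omitted here.
        import numpy as np
        from sklearn.linear_model import orthogonal_mp

        def src_predict(D, labels, x, n_nonzero=10):
            """D: (d, n) dictionary of training samples (columns l2-normalized),
            labels: (n,) class of each column, x: (d,) test sample."""
            coef = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero)
            residuals = {}
            for c in np.unique(labels):
                coef_c = np.where(labels == c, coef, 0.0)   # keep only class-c coefficients
                residuals[c] = np.linalg.norm(x - D @ coef_c)
            return min(residuals, key=residuals.get)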

  1. Análisis y Clasificación de Errores Cometidos por Alumnos de Secundaria en los Procesos de Sustitución Formal, Generalización y Modelización en Álgebra (Secondary Students´ Error Analysis and Classification in Formal Substitution, Generalization and Modelling Process in Algebra

    Directory of Open Access Journals (Sweden)

    Raquel M. Ruano

    2008-01-01

    Full Text Available We present a study with secondary-school students on three specific processes of algebraic language: formal substitution, generalization, and modelling. From the responses to a questionnaire, we develop a classification of the errors made and analyze their possible origins. Finally, we formulate some didactical implications that follow from these results.

  2. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  3. The Research and Application of the Multi-classification Algorithm of Error-Correcting Codes Based on Support Vector Machine%基于SVM的纠错编码多分类算法的研究与应用

    Institute of Scientific and Technical Information of China (English)

    祖文超; 苑津莎; 王峰; 刘磊

    2012-01-01

    In order to improve the accuracy of transformer fault diagnosis, a multiclass classification algorithm based on error-correcting output codes combined with SVM is proposed. A mathematical model of transformer fault diagnosis is set up according to support vector machine theory. First, the error-correcting code matrix is used to construct several mutually independent support vector machines, so that the accuracy of the classification model can be improved. Finally, dissolved gas analysis (DGA) data from transformer oil are used as the training and testing samples of the error-correcting-code SVM to realize transformer fault diagnosis, and the algorithm is additionally checked with UCI data. The multiclass classification algorithm was verified with VS2008 combined with Libsvm, and the results show that the method has high classification accuracy.
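
    The coding idea can be illustrated with scikit-learn's generic output-code wrapper around an SVM; this is a hedged sketch with synthetic stand-in DGA features and hypothetical fault classes, not the authors' coding matrix or their Libsvm setup.

        # Sketch: error-correcting output codes (ECOC) with SVM base learners.
        # Feature names and fault classes below are assumptions for illustration.
        import numpy as np
        from sklearn.multiclass import OutputCodeClassifier
        from sklearn.svm import SVC

        # Synthetic stand-in for DGA features, e.g. [H2, CH4, C2H6, C2H4, C2H2] in ppm.
        rng = np.random.default_rng(0)
        X = rng.lognormal(mean=3.0, sigma=1.0, size=(200, 5))
        y = rng.integers(0, 4, size=200)          # 4 hypothetical fault classes

        # code_size > 1 gives a redundant (error-correcting) code matrix; each column
        # trains one binary SVM, and prediction picks the class whose codeword is
        # closest to the vector of binary outputs.
        ecoc = OutputCodeClassifier(SVC(kernel="rbf", gamma="scale"), code_size=2.0, random_state=0)
        ecoc.fit(X, y)
        print(ecoc.predict(X[:5]))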

  4. Modulation classification based on spectrogram

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated and evaluated by theoretical analysis and extensive simulation studies. The method taking moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arriving sequence of symbols.
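
    A toy sketch of the PCA-on-spectrogram idea is given below; the simulated signals, feature sizes and the nearest-neighbour classifier are illustrative assumptions rather than the methods evaluated in the paper.

        # Sketch: treat log-magnitude spectrograms as feature vectors, reduce them
        # with PCA, and classify. Signal simulation is a crude real-valued M-PSK toy.
        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def spec_image(sig, fs=1.0):
            """Log-magnitude spectrogram flattened into a feature vector."""
            _, _, S = spectrogram(sig, fs=fs, nperseg=64, noverlap=32)
            return np.log10(S + 1e-12).ravel()

        def make_signal(n_levels, rng, n_sym=64, sps=16, noise=0.2):
            """Toy M-PSK baseband signal (real part only, for illustration)."""
            phase = rng.integers(0, n_levels, n_sym) * 2 * np.pi / n_levels
            sym = np.repeat(np.cos(phase), sps)
            return sym + noise * rng.normal(size=sym.size)

        rng = np.random.default_rng(0)
        X = np.array([spec_image(make_signal(m, rng)) for m in (2, 4) for _ in range(100)])
        y = np.repeat([0, 1], 100)                    # 0 = BPSK-like, 1 = QPSK-like

        feats = PCA(n_components=10).fit_transform(X)  # PCA on spectrogram "images"
        clf = KNeighborsClassifier(n_neighbors=5).fit(feats, y)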

  5. 试论洛克白板说思想的得失%Discussion on the Achievements and Errors of the Theory of Tabula Rasa of Locke

    Institute of Scientific and Technical Information of China (English)

    杜晶; 傅长吉

    2014-01-01

    Locke's theory of tabula rasa had a far-reaching impact on modern Western philosophy. The theory opposed the doctrine of innate ideas, proposed that the cognitive subject plays an active role, and traced the origin and development of ideas to nature, and in these respects its influence was positive. It is not difficult to find, however, that Locke understood the origin, destination and development of cognition one-sidedly and exaggerated or restrained the role of the mind. In spite of this, the theory of tabula rasa still has great significance: it reminds people to master correct modes of thinking and methods of understanding in the process of knowing the world, to test and develop that understanding in nature with their natural abilities, and thereby to realize their own value.

  6. Research on Software Error Behavior Classification Based on Software Failure Chain%基于软件失效链的软件错误行为分类研究

    Institute of Scientific and Technical Information of China (English)

    刘义颖; 江建慧

    2015-01-01

    Software is now widely used and the requirements on software reliability are increasingly high, so it is necessary to study the software defect–error–failure process in order to prevent failures in advance and reduce the losses caused by them. Studying the attributes that describe software error behavior helps to characterize different error behaviors uniquely, supports communication among developers, and provides a basis for building software fault pattern libraries, software fault prediction and software fault injection. Based on software failure chain theory, the paper analyzes the causal chain formed by software defects, errors and failures and, from the causal relations of the defect–error–failure chain, further analyzes the relations between the attribute sets that describe the anomalies of each stage. On the basis of the existing IEEE standard classification of software anomalies, an error attribute set is derived from the defect and failure attribute sets, a classification method for software error behaviors is given together with the attribute sets and reference values, and the rationality of the attributes is verified experimentally with an attribute reduction algorithm based on the criteria of minimal correlation and maximal dependency.

  7. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    2013-01-01

    …in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  8. Signal Classification for Acoustic Neutrino Detection

    CERN Document Server

    Neff, M; Enzenhöfer, A; Graf, K; Hößl, J; Katz, U; Lahmann, R; Richardt, C

    2011-01-01

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which are forming the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal to find a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosting Trees using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  9. Signal classification for acoustic neutrino detection

    Energy Technology Data Exchange (ETDEWEB)

    Neff, M., E-mail: max.neff@physik.uni-erlangen.de [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany); Anton, G.; Enzenhoefer, A.; Graf, K.; Hoessl, J.; Katz, U.; Lahmann, R.; Richardt, C. [Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen Centre for Astroparticle Physics, Erwin-Rommel-Str. 1, 91058 Erlangen (Germany)

    2012-01-11

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with similar signature, which are forming the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal to find a robust and effective way to perform this task. For a well-trained model, a testing error on the level of 1% is achieved for strong classifiers like Random Forest and Boosting Trees using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  10. Improved Error Thresholds for Measurement-Free Error Correction

    Science.gov (United States)

    Crow, Daniel; Joynt, Robert; Saffman, M.

    2016-09-01

    Motivated by limitations and capabilities of neutral atom qubits, we examine whether measurement-free error correction can produce practical error thresholds. We show that this can be achieved by extracting redundant syndrome information, giving our procedure extra fault tolerance and eliminating the need for ancilla verification. The procedure is particularly favorable when multiqubit gates are available for the correction step. Simulations of the bit-flip, Bacon-Shor, and Steane codes indicate that coherent error correction can produce threshold error rates that are on the order of 10⁻³ to 10⁻⁴, comparable with or better than measurement-based values, and much better than previous results for other coherent error correction schemes. This indicates that coherent error correction is worthy of serious consideration for achieving protected logical qubits.

  11. On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data

    CERN Document Server

    Richards, Joseph W; Butler, Nathaniel R; Bloom, Joshua S; Brewer, John M; Crellin-Quick, Arien; Higgins, Justin; Kennedy, Rachel; Rischard, Maxime

    2011-01-01

    With the coming data deluge from synoptic surveys, there is a growing need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly-observed variables based on a small number of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics ("features"), detail methods to robustly estimate periodic light-curve features, introduce tree-ensemble methods for accurate variable star classification, and show how to rigorously evaluate the classification results using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% overall classification error using the random forest classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying sam...

  12. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  13. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  14. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso algorithm is available in the R package msgl. Its performance scales well with the problem size as illustrated by one of the examples considered - a 50 class classification problem with 10 k features, which amounts to estimating 500 k parameters. © 2013 Elsevier Inc. All rights reserved.
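
    For context, the penalty combined with the multinomial loss can be written in the standard sparse group lasso form (notation assumed here, not quoted from the paper):

        \hat{\beta} = \arg\min_{\beta} \; \ell(\beta)
            + \lambda \Big( (1-\alpha) \sum_{g=1}^{G} \sqrt{p_g}\,\lVert \beta^{(g)} \rVert_2
            + \alpha \, \lVert \beta \rVert_1 \Big)

    where \ell is a convex loss (here the multinomial negative log-likelihood), \beta^{(g)} collects the coefficients of group g of size p_g, and \alpha in [0,1] interpolates between the pure group lasso (\alpha = 0) and the pure lasso (\alpha = 1).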

  15. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced, confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  16. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper displays the result of the authors’ research regarding the incorporation of Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to allow the design of a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  17. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, and the prevention and management of medication errors, supported by clear tables that are easy to understand.

  18. A new classification of glaucomas

    Directory of Open Access Journals (Sweden)

    Bordeianu CD

    2014-09-01

    Full Text Available Constantin-Dan Bordeianu Private Practice, Ploiesti, Prahova, Romania Purpose: To suggest a new glaucoma classification that is pathogenic, etiologic, and clinical. Methods: After discussing the logical pathway used in criteria selection, the paper presents the new classification and compares it with the classification currently in use, that is, the one issued by the European Glaucoma Society in 2008. Results: The paper proves that the new classification is clear (being based on a coherent and consistently followed set of criteria), is comprehensive (framing all forms of glaucoma), and helps in understanding the disease (in that it uses a logical framing system). The great advantage is that it facilitates therapeutic decision making in that it offers direct therapeutic suggestions and avoids errors leading to disasters. Moreover, the scheme remains open to any new development. Conclusion: The suggested classification is a pathogenic, etiologic, and clinical classification that fulfills the conditions of an ideal classification. The suggested classification is the first classification in which the main criterion is consistently used for the first 5 to 7 crossings until its differentiation capabilities are exhausted. Then, secondary criteria (etiologic and clinical) pick up the relay until each form finds its logical place in the scheme. In order to avoid unclear aspects, the genetic criterion is no longer used, being replaced by age, one of the clinical criteria. The suggested classification brings only benefits to all categories of ophthalmologists: the beginners will have a tool to better understand the disease and to ease their decision making, whereas the experienced doctors will have their practice simplified. For all doctors, errors leading to therapeutic disasters will be less likely to happen. Finally, researchers will have the object of their work gathered in the group of glaucoma with unknown or uncertain pathogenesis, whereas

  19. Improve mask inspection capacity with Automatic Defect Classification (ADC)

    Science.gov (United States)

    Wang, Crystal; Ho, Steven; Guo, Eric; Wang, Kechang; Lakkapragada, Suresh; Yu, Jiao; Hu, Peter; Tolani, Vikram; Pang, Linyong

    2013-09-01

    As optical lithography continues to extend into low-k1 regime, resolution of mask patterns continues to diminish. The adoption of RET techniques like aggressive OPC, sub-resolution assist features combined with the requirements to detect even smaller defects on masks due to increasing MEEF, poses considerable challenges for mask inspection operators and engineers. Therefore a comprehensive approach is required in handling defects post-inspections by correctly identifying and classifying the real killer defects impacting the printability on wafer, and ignoring nuisance defect and false defects caused by inspection systems. This paper focuses on the results from the evaluation of Automatic Defect Classification (ADC) product at the SMIC mask shop for the 40nm technology node. Traditionally, each defect is manually examined and classified by the inspection operator based on a set of predefined rules and human judgment. At SMIC mask shop due to the significant total number of detected defects, manual classification is not cost-effective due to increased inspection cycle time, resulting in constrained mask inspection capacity, since the review has to be performed while the mask stays on the inspection system. Luminescent Technologies Automated Defect Classification (ADC) product offers a complete and systematic approach for defect disposition and classification offline, resulting in improved utilization of the current mask inspection capability. Based on results from implementation of ADC in SMIC mask production flow, there was around 20% improvement in the inspection capacity compared to the traditional flow. This approach of computationally reviewing defects post mask-inspection ensures no yield loss by qualifying reticles without the errors associated with operator mis-classification or human error. The ADC engine retrieves the high resolution inspection images and uses a decision-tree flow to classify a given defect. Some identification mechanisms adopted by ADC to

  20. Rademacher Complexity in Neyman-Pearson Classification

    Institute of Scientific and Technical Information of China (English)

    Min HAN; Di Rong CHEN; Zhao Xu SUN

    2009-01-01

    The Neyman-Pearson (NP) criterion is one of the most important approaches in hypothesis testing. It is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds which are based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.

  1. Improved neural network algorithm for classification of UAV imagery related to Wenchuan earthquake

    Science.gov (United States)

    Lin, Na; Yang, Wunian; Wang, Bin

    2009-06-01

    When the Wenchuan earthquake struck, the terrain of the region changed violently. Unmanned aerial vehicle (UAV) remote sensing is effective in extracting first-hand information, and the resulting high-resolution images are of great importance in disaster management and relief operations. A back-propagation (BP) neural network is an artificial neural network which combines a multi-layer feed-forward network with the error back-propagation algorithm. It has a strong input-output mapping capability and does not require the objects to be identified to obey a particular distribution law; it has strong non-linear modelling and error-tolerant capabilities. Remotely sensed image classification with BP networks can achieve high accuracy and satisfactory error tolerance, but the algorithm also has drawbacks such as slow convergence and the risk of being trapped in local minima. In order to solve these problems, we have improved the algorithm by introducing a self-adaptive learning rate and adding a momentum factor. A high-resolution UAV aerial image of Taoguan District in Wenchuan County is used as the data source. First, we preprocess the UAV aerial images and rectify the geometric distortion in the images. Training samples are then selected and purified, and the image is classified using the improved BP neural network algorithm. Finally, we compare this classification result with the maximum likelihood classification (MLC) result. Numerical comparison shows that the overall accuracy of the maximum likelihood classification is 83.8%, while that of the improved BP neural network classification is 89.7%. The testing results indicate that the latter is better.
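
    The two modifications can be sketched with a generic gradient-descent update; the adaptation factors and the toy quadratic loss below are textbook placeholder choices, not the parameter settings used in the paper.

        # Sketch: one weight update with a momentum term and a self-adaptive
        # learning rate (grow the rate while the loss decreases, shrink it when
        # the loss increases). Factors are generic, not the authors' settings.
        import numpy as np

        def update(w, grad, velocity, lr, prev_loss, loss,
                   momentum=0.9, lr_up=1.05, lr_down=0.7):
            lr = lr * lr_up if loss < prev_loss else lr * lr_down
            velocity = momentum * velocity - lr * grad     # momentum term
            return w + velocity, velocity, lr

        # toy usage on a 1-D quadratic loss L(w) = 0.5 * w**2
        w, v, lr, prev = 5.0, 0.0, 0.1, np.inf
        for _ in range(50):
            loss, grad = 0.5 * w**2, w
            w, v, lr = update(w, grad, v, lr, prev, loss)
            prev = loss
        print(w)   # converges toward the minimum at w = 0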

  2. A deep learning approach to the classification of 3D CAD models

    Institute of Scientific and Technical Information of China (English)

    Fei-wei QIN; Lu-ye LI; Shu-ming GAO; Xiao-ling YANG; Xiang CHEN

    2014-01-01

    Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error prone. At the same time, the automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are selected and extracted from 3D CAD models first, and then pre-processed as high dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.

  3. Error Analysis in Composition of Iranian Lower Intermediate Students

    Science.gov (United States)

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in the writing tasks of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  4. Binary Error Correcting Network Codes

    CERN Document Server

    Wang, Qiwen; Li, Shuo-Yen Robert

    2011-01-01

    We consider network coding for networks experiencing worst-case bit-flip errors, and argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. We propose a new metric for errors under this model. Using this metric, we prove a new Hamming-type upper bound on the network capacity. We also show a commensurate lower bound based on GV-type codes that can be used for error-correction. The codes used to attain the lower bound are non-coherent (do not require prior knowledge of network topology). The end-to-end nature of our design enables our codes to be overlaid on classical distributed random linear network codes. Further, we free internal nodes from having to implement potentially computationally intensive link-by-link error-correction.

  5. Corpus Analysis of the Various Errors Made by the Student

    Institute of Scientific and Technical Information of China (English)

    刘启迪

    2012-01-01

    The software Wordsmith has been commonly used in corpus linguistics. In this paper, the author used the Concord tool in Wordsmith to analyze various errors made by a student. Five passages written by the student are used. After annotating the errors, the author uses Concord to sort the annotated errors and make a classification chart of them. All the errors are classified into two categories: errors caused by carelessness and errors caused by language ability. The analysis shows that there are mainly three kinds of errors in the first category and five kinds of errors in the second category.

  6. ACCUWIND - Methods for classification of cup anemometers

    DEFF Research Database (Denmark)

    Dahlberg, J.-Å.; Friis Pedersen, Troels; Busche, P.

    2006-01-01

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annex I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result

  7. 山区LIDAR点云数据的阶层次粗差探测与分析%GROSS ERROR DETECTION AND ANALYSIS BY HIERARCHICAL CLASSIFICATION OF MOUNTAINOUS LIDAR DATA

    Institute of Scientific and Technical Information of China (English)

    李芸; 杨志强; 杨博

    2012-01-01

    Gross error detection is one of the important data processing steps for mountainous LIDAR point cloud data. By analysing the spatial distribution characteristics of gross errors, the gross errors in the original LIDAR point cloud can be divided into extreme outliers, outlier clusters and isolated points. On this basis, a hierarchical approach to gross error detection in mountainous airborne LIDAR point cloud data is proposed and verified on experimental data. The experimental results show that the method can effectively remove gross errors from the original mountainous LIDAR point cloud and, to a certain extent, improves the effect of point cloud pre-processing.
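
    A rough sketch of such a hierarchical filter is given below; the robust elevation threshold, search radius and neighbour count are illustrative assumptions, not the rules or thresholds used in the paper.

        # Sketch: (1) drop extreme elevation outliers with a robust global threshold,
        # then (2) flag isolated points by counting neighbours within a search radius.
        import numpy as np
        from scipy.spatial import cKDTree

        def remove_gross_errors(points, z_sigma=5.0, radius=5.0, min_neighbors=3):
            """points: (n, 3) array of x, y, z LiDAR returns."""
            z = points[:, 2]
            med = np.median(z)
            mad = np.median(np.abs(z - med)) + 1e-9
            keep = np.abs(z - med) < z_sigma * 1.4826 * mad      # extreme outliers
            pts = points[keep]
            tree = cKDTree(pts[:, :2])                           # isolated points
            counts = np.array([len(tree.query_ball_point(p, radius)) - 1 for p in pts[:, :2]])
            return pts[counts >= min_neighbors]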

  8. Experimental demonstration of topological error correction

    OpenAIRE

    2012-01-01

    Scalable quantum computing can only be achieved if qubits are manipulated fault-tolerantly. Topological error correction - a novel method which combines topological quantum computing and quantum error correction - possesses the highest known tolerable error rate for a local architecture. This scheme makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the first experimental demonstration of topological error correction with a...

  9. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    Science.gov (United States)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis on activities involving potential security threats such as holding a gun are explored. An automotive 24 GHz radar on chip was used to collect the data and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65 % on classifying running vs. walking, 17.3 % error on armed walking vs. unarmed walking, and 22 % on classifying six different actions.
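
    A small CNN of the kind used for such spectrogram classification might be sketched as follows; the layer sizes, 64x64 input resolution and six-class output head are illustrative assumptions, not the architecture used in the paper.

        # Sketch: a compact CNN over single-channel spectrogram patches.
        import torch
        import torch.nn as nn

        class SpectrogramCNN(nn.Module):
            def __init__(self, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
                    nn.Linear(128, n_classes),
                )

            def forward(self, x):              # x: (batch, 1, 64, 64) spectrograms
                return self.classifier(self.features(x))

        model = SpectrogramCNN()
        logits = model(torch.randn(8, 1, 64, 64))                 # dummy batch
        loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 6, (8,)))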

  10. 6. SINIF ÖĞRENCİLERİNİN YAZMA BECERİLERİNDE GÖRÜLEN HATALARIN SINIFLANDIRILMASI / CLASSIFICATION OF THE OBSERVED ERRORS IN WRITING SKILLS OF 6TH GRADE STUDENTS

    Directory of Open Access Journals (Sweden)

    Rabia Sena AKBABA

    2016-09-01

    Writing is a skill acquired in the primary school period and, like the other language skills, it is needed and used throughout life. This skill is expected to develop in parallel with grade level; in this sense, students are expected to develop their writing skill by producing texts with fewer errors as their grade level rises. How far this expectation is met varies from student to student: while some students produce error-free texts in direct proportion to their writing education, others write texts below, or far below, the expected level. Students who produce texts with errors need to be evaluated individually so that those errors can be reduced to a minimum. Detecting errors in writing education is important for understanding how and in what way those errors can be eliminated, which is why the location of the errors in the texts must first be determined. In this study, the texts written by the students were evaluated in the light of error topics determined by the researchers. The errors are classified under four headings - word, sentence, paragraph and text - and the study attempts, using a data analysis technique, to establish where the writing errors appear most often in the examined texts. The analysis shows that the writing errors at the word level consist of spelling errors; at the sentence level, errors of punctuation and then of ambiguity are observed most; and at the paragraph level, the error found most frequently is that students are unable to provide a meaningful transition between paragraphs. This study is important in that it is based on practice, examines the errors found in the texts under various headings with reference to the literature, determines where students' writing errors occur most, and offers suggestions on how the errors can be eliminated.

  11. Audio Classification from Time-Frequency Texture

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.

  12. Classification problem in CBIR

    OpenAIRE

    Tatiana Jaworska

    2013-01-01

    At present a great deal of research is being done in different aspects of Content-Based Im-age Retrieval (CBIR). Image classification is one of the most important tasks in image re-trieval that must be dealt with. The primary issue we have addressed is: how can the fuzzy set theory be used to handle crisp image data. We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  13. Classification of DNA nucleotides with transverse tunneling currents

    DEFF Research Database (Denmark)

    Pedersen, Jonas Nyvold; Boynton, Paul; Ventra, Massimiliano Di

    2016-01-01

    It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing … e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates…

  14. Classification problem in CBIR

    Directory of Open Access Journals (Sweden)

    Tatiana Jaworska

    2013-04-01

    Full Text Available At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can the fuzzy set theory be used to handle crisp image data. We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine taking into account data mining.

  15. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  16. Experimental demonstration of topological error correction.

    Science.gov (United States)

    Yao, Xing-Can; Wang, Tian-Xiong; Chen, Hao-Ze; Gao, Wei-Bo; Fowler, Austin G; Raussendorf, Robert; Chen, Zeng-Bing; Liu, Nai-Le; Lu, Chao-Yang; Deng, You-Jin; Chen, Yu-Ao; Pan, Jian-Wei

    2012-02-22

    Scalable quantum computing can be achieved only if quantum bits are manipulated in a fault-tolerant fashion. Topological error correction--a method that combines topological quantum computation with quantum error correction--has the highest known tolerable error rate for a local architecture. The technique makes use of cluster states with topological properties and requires only nearest-neighbour interactions. Here we report the experimental demonstration of topological error correction with an eight-photon cluster state. We show that a correlation can be protected against a single error on any quantum bit. Also, when all quantum bits are simultaneously subjected to errors with equal probability, the effective error rate can be significantly reduced. Our work demonstrates the viability of topological error correction for fault-tolerant quantum information processing.

  17. High-Performance Neural Networks for Visual Object Classification

    CERN Document Server

    Cireşan, Dan C; Masci, Jonathan; Gambardella, Luca M; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.

  18. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project posed the objectives to quantify the errors associated with the use of cup anemometers, and to develop a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements in annex I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction have been implemented in the classification process in order to assess the robustness of methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  19. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  20. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...

  1. Semiparametric Gaussian copula classification

    OpenAIRE

    Zhao, Yue; Wegkamp, Marten

    2014-01-01

    This paper studies the binary classification of two distributions with the same Gaussian copula in high dimensions. Under this semiparametric Gaussian copula setting, we derive an accurate semiparametric estimator of the log density ratio, which leads to our empirical decision rule and a bound on its associated excess risk. Our estimation procedure takes advantage of the potential sparsity as well as the low noise condition in the problem, which allows us to achieve faster convergence rate of...

  2. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production....... Finally, pedagogical implication of CFL is discussed and future research is suggested. Keywords: error analysis, comparative sentences, comparative structure ‘‘bǐ - 比’, Chinese as a foreign language (CFL), written production...

  3. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    OpenAIRE

    Kleiss, R. H. P.; Lazopoulos, A.

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...

  4. Improved Algorithm of Pattern Classification and Recognition Applied in a Coal Dust Sensor

    Institute of Scientific and Technical Information of China (English)

    MA Feng-ying; SONG Shu

    2007-01-01

    To resolve the conflicting requirements of measurement precision and real-time speed, an improved algorithm for pattern classification and recognition was developed. The angular distribution of diffracted light varies with particle size. These patterns can be classified into groups using a classification scheme based on reference dust samples. After such classification, patterns can be recognized easily and rapidly by minimizing the variance between the reference-pattern and dust-sample eigenvectors. Simulation showed that the maximum recognition speed improves 20-fold, which enables a single-chip, real-time inversion algorithm. An increased number of reference patterns reduced the errors in total and respirable coal dust measurements. Coal mine experiments verify that the sensor achieves an accuracy of 95%. The results indicate that the improved algorithm effectively enhances the precision and real-time capability of the coal dust sensor.
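    The rule described above assigns a measured diffraction pattern to the reference group it deviates from the least. A hedged numpy sketch of that nearest-reference decision follows; the reference patterns, the 64-angle sampling and the variance criterion are illustrative assumptions, not the paper's calibration data.

```python
# Sketch of a nearest-reference-pattern rule: a measured diffraction pattern is assigned
# to the reference group whose pattern it deviates from the least (minimum residual variance).
import numpy as np

def classify_pattern(sample: np.ndarray, references: np.ndarray) -> int:
    """Return the index of the reference pattern with minimum variance of the difference."""
    diffs = references - sample            # shape: (n_references, n_angles)
    variances = diffs.var(axis=1)          # spread of the residual for each reference
    return int(np.argmin(variances))

# Hypothetical example: 3 reference angular-scattering patterns sampled at 64 angles.
rng = np.random.default_rng(0)
references = np.abs(rng.normal(size=(3, 64)))
sample = references[1] + 0.05 * rng.normal(size=64)   # a noisy observation of class 1
print(classify_pattern(sample, references))            # -> 1
```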

  5. Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification

    CERN Document Server

    Bouselmi, Ghazi; Illina, Irina; Haton, Jean-Paul

    2007-01-01

    In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques.

  6. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of diabetic retinopathy using mfERG data. MfERG records were obtained from the eyes of healthy individuals and patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. The amplitude value of each point on all the mfERG local responses was treated as a potential feature for classifying the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained, and the final classification decision for each record was made by voting over all the classifiers' outputs. When the method was applied to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.
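    The record combines several Fisher linear classifiers, each trained on its own feature subset, and votes over their outputs. The following scikit-learn sketch shows that ensemble-voting pattern; the feature subsets and synthetic data are assumptions, not the mfERG features of the paper.

```python
# Illustrative sketch: several Fisher (linear discriminant) classifiers, each trained on a
# different feature subset, vote on the final label.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_voting_lda(X, y, feature_subsets):
    return [LinearDiscriminantAnalysis().fit(X[:, subset], y) for subset in feature_subsets]

def predict_by_vote(classifiers, feature_subsets, X):
    votes = np.stack([clf.predict(X[:, subset])
                      for clf, subset in zip(classifiers, feature_subsets)])
    # majority vote over the individual classifiers' outputs
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 12))
y = (X[:, 0] + X[:, 5] > 0).astype(int)
subsets = [[0, 1, 2], [4, 5, 6], [8, 9, 10]]     # hypothetical selected feature subsets
clfs = train_voting_lda(X, y, subsets)
print(predict_by_vote(clfs, subsets, X[:5]))
```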

  7. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now...... well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  8. Automatic cloud classification of whole sky images

    Directory of Open Access Journals (Sweden)

    A. Heinle

    2010-05-01

    The recently increasing development of whole sky imagers enables temporally and spatially high-resolution sky observations. One application already performed in most cases is the estimation of fractional sky cover. A distinction between different cloud types, however, is still in progress. Here, an automatic cloud classification algorithm is presented, based on a set of mainly statistical features describing the color as well as the texture of an image. The k-nearest-neighbour classifier is used due to its high performance in solving complex issues, simplicity of implementation and low computational complexity. Seven different sky conditions are distinguished: high thin clouds (cirrus and cirrostratus), high patched cumuliform clouds (cirrocumulus and altocumulus), stratocumulus clouds, low cumuliform clouds, thick clouds (cumulonimbus and nimbostratus), stratiform clouds and clear sky. Based on leave-one-out cross-validation, the algorithm achieves an accuracy of about 97%. In addition, a test run on random images is presented, still outperforming previous algorithms by yielding a success rate of about 75%, or up to 88% if only "serious" errors with respect to radiation impact are considered. Reasons for the decrease in accuracy are discussed, and ideas to further improve the classification results, especially in problematic cases, are investigated.
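    The evaluation scheme reported above (a k-nearest-neighbour classifier scored with leave-one-out cross-validation) can be sketched as below with scikit-learn; the feature matrix stands in for the colour/texture statistics of sky images, and the number of features, classes and neighbours are assumptions.

```python
# Sketch of k-NN classification evaluated by leave-one-out cross-validation.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(60, 8))                       # 60 images, 8 statistical features each (assumed)
y = rng.integers(0, 7, size=60)                    # 7 sky-condition classes

knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=LeaveOneOut())   # one held-out image per fold
print("LOOCV accuracy:", scores.mean())
```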

  9. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  10. Optimized Time-Gated Fluorescence Spectroscopy for the Classification and Recycling of Fluorescently Labeled Plastics.

    Science.gov (United States)

    Fomin, Petr; Zhelondz, Dmitry; Kargel, Christian

    2016-08-29

    For the production of high-quality parts from recycled plastics, a very high purity of the plastic waste to be recycled is mandatory. The incorporation of fluorescent tracers ("markers") into plastics during the manufacturing process helps overcome typical problems of non-tracer based optical classification methods. Despite the unique emission spectra of fluorescent markers, classification becomes difficult when the host plastics exhibit (strong) autofluorescence that spectrally overlaps the marker fluorescence. Increasing the marker concentration is not an option from an economic perspective and might also adversely affect the properties of the plastics. A measurement approach that suppresses the autofluorescence in the acquired signal is time-gated fluorescence spectroscopy (TGFS). Unfortunately, TGFS is associated with a lower signal-to-noise (S/N) ratio, which results in larger classification errors. In order to optimize the S/N ratio we investigate and validate the best TGFS parameters (derived from a model for the fluorescence signal) for plastics labeled with four specifically designed fluorescent markers. In this study we also demonstrate the implementation of TGFS on a measurement and classification prototype system and determine its performance. A mean sensitivity of 99.93% and a mean precision of 99.80% were achieved, proving that a highly reliable classification of plastics can be achieved in practice.

  11. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Directory of Open Access Journals (Sweden)

    Lev V. Utkin

    2012-01-01

    A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters is reduced to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  12. Evaluation criteria for software classification inventories, accuracies, and maps

    Science.gov (United States)

    Jayroe, R. R., Jr.

    1976-01-01

    Statistical criteria are presented for modifying the contingency table used to evaluate tabular classification results obtained from remote sensing and ground truth maps. The modified table contains information on the spatial complexity of the test site, on the relative location of classification errors, and on the agreement of the classification maps with ground truth maps, and it reduces back to the original information normally found in a contingency table.
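    The contingency (confusion) table underlying such map evaluations can be built directly from a classified map and a ground-truth map, as in the sketch below; the class maps, the number of classes and the 80% agreement are hypothetical.

```python
# Minimal sketch of a contingency table comparing a classified map with a ground-truth map,
# plus overall and producer (per-class) accuracies derived from it.
import numpy as np

def contingency_table(ground_truth: np.ndarray, classified: np.ndarray, n_classes: int):
    table = np.zeros((n_classes, n_classes), dtype=int)
    for truth, pred in zip(ground_truth.ravel(), classified.ravel()):
        table[truth, pred] += 1
    return table

rng = np.random.default_rng(7)
truth = rng.integers(0, 3, size=(20, 20))                 # ground-truth class map (assumed)
pred = np.where(rng.random((20, 20)) < 0.8, truth,        # ~80% agreement by construction
                rng.integers(0, 3, size=(20, 20)))

table = contingency_table(truth, pred, n_classes=3)
overall_accuracy = np.trace(table) / table.sum()
producer_accuracy = np.diag(table) / table.sum(axis=1)    # per-class accuracy w.r.t. ground truth
print(table, overall_accuracy, producer_accuracy)
```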

  13. Error Analysis: Past, Present, and Future

    Science.gov (United States)

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  14. Neuromuscular disease classification system

    Science.gov (United States)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M.; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on subjective visual assessment of biopsies from patients by the pathologist specialist. A system for objective analysis and classification of muscular dystrophies and neurogenic atrophies through muscle biopsy images of fluorescence microscopy is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, based on modeling the biopsy as a graph, where each fiber is a node and two nodes are connected if the corresponding fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database consisting of 91 images was used: 71 images for the training step and 20 as the test set. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns.

  15. Errors in medicine administration - profile of medicines: knowing and preventing

    OpenAIRE

    Reis, Adriano Max Moreira; Marques, Tatiane Cristina; Opitz, Simone Perufo; Silva, Ana Elisa Bauer de Camargo; Gimenes, Fernanda Raphael Escobar; Teixeira, Thalyta Cardoso Alux; Lima, Rhanna Emanuela Fontenele; Cassiani, Silvia Helena De Bortoli

    2010-01-01

    OBJECTIVES: To describe the pharmacological characteristics of medicines involved in administration errors and determine the frequency of errors with potentially dangerous medicines and low therapeutic index, in clinical units of five teaching hospitals, in Brazil. METHODS: Multicentric study, descriptive and exploratory, using the non-participant observation technique (during the administration of 4958 doses of medicines) and the anatomical therapeutic chemical classification (ATC). RESULTS:...

  16. Spelling in adolescents with dyslexia: errors and modes of assessment.

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia.

  17. Detection and Classification of Whale Acoustic Signals

    Science.gov (United States)

    Xian, Yin

    This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this, one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve high classification rate on the whale

  18. The application of Aronson's taxonomy to medication errors in nursing.

    Science.gov (United States)

    Johnson, Maree; Young, Helen

    2011-01-01

    Medication administration is a frequent nursing activity that is prone to error. In this study of 318 self-reported medication incidents (including near misses), very few resulted in patient harm: 7% required intervention or prolonged hospitalization or caused temporary harm. Aronson's classification system provided an excellent framework for analysis of the incidents, with a close connection between the type of error and the change strategy to minimize medication incidents. Taking a behavioral approach to medication error classification has provided helpful strategies for nurses, such as nurse-call cards on patient lockers when patients are absent and checking of medication sign-off by outgoing and incoming staff at handover.

  19. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machine error modeling.

  20. Multilingual documentation and classification.

    Science.gov (United States)

    Donnelly, Kevin

    2008-01-01

    Health care providers around the world have used classification systems for decades as a basis for documentation, communications, statistical reporting, reimbursement and research. In more recent years machine-readable medical terminologies have taken on greater importance with the adoption of electronic health records and the need for greater granularity of data in clinical systems. Use of a clinical terminology harmonised with classifications, implemented within a clinical information system, will enable the delivery of many patient health benefits including electronic clinical decision support, disease screening and enhanced patient safety. In order to be usable these systems must be translated into the language of use, without losing meaning. It is evident that today one system cannot meet all requirements which call for collaboration and harmonisation in order to achieve true interoperability on a multilingual basis.

  1. AN ANALYSIS OF GRAMMATICAL ERRORS ON SPEAKING ACTIVITIES

    Directory of Open Access Journals (Sweden)

    Merlyn Simbolon

    2015-09-01

    This study aims to analyze the grammatical errors and to provide a description of errors in speaking activities using the simple present and present progressive tenses made by the second-year students of the English Education Department, Palangka Raya University. The subjects of this study were 30 students. This research applied a qualitative approach to describe the types, sources and causes of students' errors, taken from an oral essay test consisting of questions using the simple present and present progressive tenses. The errors were identified and classified according to the Linguistic Category Taxonomy and Richard's classification, as well as the possible sources and causes of errors. The findings showed that the errors made by students fell into 6 aspects: errors in the production of verb groups, errors in the distribution of verb groups, errors in the use of articles, errors in the use of prepositions, errors in the use of questions, and miscellaneous errors. With regard to sources and causes, it was found that intra-lingual interference was the major source of errors (82.55%), with overgeneralization as the major cause of the errors (44.71%). Keywords: grammatical errors, speaking skill, speaking activities

  2. Inborn errors of metabolism

    Science.gov (United States)

    ... metabolism. A few of them are: fructose intolerance, galactosemia, maple syrup urine disease (MSUD), and phenylketonuria (PKU). Alternative names: Metabolism - inborn errors of. References: Bodamer OA. Approach to inborn errors of ...

  3. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    Science.gov (United States)

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
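    The estimator described above reports a weighted mean of the errors obtained with the different tuning-parameter values rather than the error of the single best value. The sketch below illustrates one plausible reading of that idea (weights taken as the frequency with which each tuning value wins on subsampling splits); this weighting rule is an assumption for illustration, not necessarily the exact estimator of Bernau et al.

```python
# Hedged sketch of a subsampling-based tuning-bias correction: report a weighted mean of the
# errors of all tuning values instead of the error of the apparently best one.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def corrected_error(X, y, c_grid, n_subsamples=25, seed=0):
    rng = np.random.default_rng(seed)
    errors = np.zeros(len(c_grid))
    wins = np.zeros(len(c_grid))
    for _ in range(n_subsamples):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=int(rng.integers(1e6)))
        fold_err = np.array([
            1.0 - SVC(C=c).fit(X_tr, y_tr).score(X_te, y_te) for c in c_grid])
        errors += fold_err
        wins[np.argmin(fold_err)] += 1           # which tuning value looked best on this split
    errors /= n_subsamples
    weights = wins / wins.sum()                  # assumed weighting: selection frequency
    return float(np.dot(weights, errors))        # weighted mean over tuning values

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 10)); y = (X[:, 0] > 0).astype(int)
print(corrected_error(X, y, c_grid=[0.1, 1.0, 10.0]))
```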

  4. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  5. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  6. Unsupervised classification of operator workload from brain signals

    Science.gov (United States)

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin

    2016-06-01

    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.

  7. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach, and we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  8. Three-Class Mammogram Classification Based on Descriptive CNN Features

    Science.gov (United States)

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing these subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461
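    The 2D-DWT step of the CNN-DW pipeline splits each enhanced patch into four subbands (one approximation and three detail bands). A short PyWavelets sketch of that decomposition follows; the wavelet family ('haar') and the random patch are assumptions for illustration.

```python
# Sketch of a 2D discrete wavelet decomposition of an image patch into four subbands.
import numpy as np
import pywt

patch = np.random.rand(64, 64)                        # stand-in for an enhanced mammogram patch
cA, (cH, cV, cD) = pywt.dwt2(patch, 'haar')           # approximation, horizontal, vertical, diagonal

subbands = {'LL': cA, 'LH': cH, 'HL': cV, 'HH': cD}
for name, band in subbands.items():
    print(name, band.shape)                           # each subband is 32x32; features (e.g. DSIFT)
                                                      # would be extracted from each band downstream
```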

  9. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Directory of Open Access Journals (Sweden)

    Richard Zavalas

    2014-03-01

    Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study with the application of LiDAR and underwater video groundtruth data using automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were largely accurate (>70%), with varying results for the classification of individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful for differentiating the canopy structure of macroalgae communities (i.e., canopy structure classification), such as canopy-forming kelp versus erect fine branching algae. In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.
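    The workflow above classifies habitats from LiDAR-derived predictors with a decision tree. Since the QUEST algorithm is not available in common Python libraries, the sketch below substitutes a CART tree from scikit-learn to show the same workflow shape; the predictor names and the synthetic data are assumptions.

```python
# Hedged sketch of decision-tree habitat classification from LiDAR-derived predictors
# (CART stands in for QUEST here).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(11)
n = 500
features = np.column_stack([
    rng.uniform(0, 30, n),      # depth from bathymetry (m)
    rng.uniform(0, 1, n),       # reflectance
    rng.uniform(0, 5, n),       # rugosity (terrain derivative)
    rng.uniform(0, 360, n),     # aspect (terrain derivative)
])
habitat = (features[:, 0] < 15).astype(int) + (features[:, 2] > 2.5).astype(int)  # 3 synthetic classes

X_tr, X_te, y_tr, y_te = train_test_split(features, habitat, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("map accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```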

  10. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    Institute of Scientific and Technical Information of China (English)

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei

    2013-01-01

    Background The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by a lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns as potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, the KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of farther-away correspondences, which were more likely noise from inter-subject variability. Finally, the Bayes classifier was applied to the distribution parameters for the classification of AD. Results Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.
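    The general idea of KDE-based Bayes classification (fit one density estimate per class, then assign a new observation to the class with the highest prior-weighted density) can be sketched as below with scipy; the one-dimensional feature, the class means and the equal priors are synthetic assumptions, not the paper's local-feature statistics.

```python
# Sketch of KDE-based Bayes classification: one kernel density estimate per class,
# decision by maximum prior-weighted density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
feat_ad = rng.normal(loc=-1.0, scale=0.7, size=200)    # feature values for AD subjects (assumed)
feat_ctrl = rng.normal(loc=+1.0, scale=0.7, size=200)  # feature values for controls (assumed)

kde_ad, kde_ctrl = gaussian_kde(feat_ad), gaussian_kde(feat_ctrl)
prior_ad = prior_ctrl = 0.5                            # equal priors assumed

def classify(x: float) -> str:
    score_ad = prior_ad * kde_ad(x)[0]
    score_ctrl = prior_ctrl * kde_ctrl(x)[0]
    return "AD" if score_ad > score_ctrl else "control"

print(classify(-0.8), classify(1.2))
```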

  11. Ambiguity and Concepts in Real Time Online Internet Traffic Classification

    Directory of Open Access Journals (Sweden)

    Hamza Awad Hamza Ibrahim

    2014-03-01

    Internet traffic classification has gained significant attention in recent years. Identifying Internet applications in real time is one of the most significant challenges in network traffic classification. Most of the proposed classification methods are limited to offline classification and cannot support online classification. This paper aims to highlight the ambiguity in the definition of online classification. Therefore, some of the previous online classification works are discussed and analyzed, to check how far real-time online classification was actually achieved. The results indicate that most of the previous works consider real Internet traffic but do not perform real-time online classification. In addition, the paper describes a real-time classifier, proposed and used in [1][2][3], to show how to perform real-time online classification.

  12. Multi-Level Audio Classification Architecture

    Directory of Open Access Journals (Sweden)

    Jozef Vavrek

    2015-01-01

    A multi-level classification architecture for solving a binary discrimination problem is proposed in this paper. The main idea of the proposed solution is derived from the fact that solving one binary discrimination problem multiple times can reduce the overall misclassification error. We aimed our effort towards building a classification architecture employing a combination of multiple binary SVM (Support Vector Machine) classifiers for solving the two-class discrimination problem. Therefore, we developed a binary discrimination architecture employing the SVM classifier (BDASVM) with the intention of using it for the classification of broadcast news (BN) audio data. The fundamental element of the BDASVM is the binary decision (BD) algorithm that performs discrimination between each pair of acoustic classes utilizing a decision function modeled by a separating hyperplane. The overall classification accuracy depends on finding the optimal parameters for the discrimination function, which results in higher computational complexity. The final form of the proposed BDASVM is created by combining four BDSVM discriminators supplemented by a decision table. Experimental results show that the proposed classification architecture can decrease the overall classification error in comparison with the binary decision tree SVM (BDTSVM) architecture.
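    The architecture above combines several pairwise binary SVM discriminators whose decisions are merged through a decision table. A hedged scikit-learn sketch of the same pattern follows, with a simple pairwise vote standing in for the decision table; the four classes and the synthetic features are assumptions.

```python
# Hedged sketch: one binary SVM per pair of acoustic classes, combined by pairwise voting
# (a simple stand-in for the decision table of the BDASVM record above).
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

rng = np.random.default_rng(9)
classes = [0, 1, 2, 3]                                   # e.g. speech / music / noise / silence (assumed)
X = np.vstack([rng.normal(loc=c, size=(40, 6)) for c in classes])
y = np.repeat(classes, 40)

pairwise = {}
for a, b in combinations(classes, 2):                    # one binary SVM per class pair
    mask = np.isin(y, [a, b])
    pairwise[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])

def predict(x):
    votes = np.zeros(len(classes))
    for clf in pairwise.values():
        votes[int(clf.predict(x.reshape(1, -1))[0])] += 1
    return int(np.argmax(votes))                         # class with the most pairwise wins

print(predict(X[5]), predict(X[130]))
```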

  13. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  14. Providing an Approach to Locating the Semantic Error of Application Using Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Abdollah Rahimi

    2016-12-01

    Regardless of the effort invested in producing a computer program, the program may still contain bugs and defects; in fact, larger and more complex programs are more likely to contain errors. The purpose of this paper is to present an approach for detecting erroneous behavior of an application using a clustering technique. Because the program follows different execution paths for different inputs, it is impossible to discover all errors in the program before the software is delivered. Monitoring all execution paths before delivery is very difficult, perhaps impossible, so many errors remain hidden in the program and are only revealed after delivery. Previously proposed solutions compare information from successful and unsuccessful executions, expressed as predicates (determinants), and report the program points suspected of containing the error to the programmer. The main problem is that this analysis treats the runtime information of each predicate in isolation, ignoring dependencies between predicates, which prevents these methods from detecting certain types of errors. To solve this problem, this paper provides a new solution based on analyzing the runtime behavior of execution paths while taking the interactions between determinants into account. For this purpose, a clustering method is used to classify execution-path graphs based on their similarities and ultimately to identify the areas suspected of error in the faulty code paths. Evaluation of the proposed strategy on a collection of real programs shows that the proposed approach detects errors more accurately than previous methods.

  15. Analysis of the influencing factors of nursing related medication errors based on the conceptual framework of international classification of patient safety

    Institute of Scientific and Technical Information of China (English)

    朱晓萍; 田梅梅; 施雁; 孙晓; 龚美芳; 毛雅芬

    2014-01-01

    Objective To identify the influencing factors of nursing-related medication errors and to put forward effective prevention and control measures. Methods One thousand three hundred and forty-three cases of medication errors from 15 tertiary hospitals in the Shanghai Nursing Quality Control Center were chosen. The influencing factors were analyzed with a research tool constructed from the levels of influencing factors in the international classification of patient safety (ICPS), using the method of content analysis. Results Medication errors occurred most frequently (62.84%) during the period 8:00-16:00, which involves the most therapeutic nursing. Nursing-related medication errors happened frequently in elderly patients over 70 years old (32.45%), suggesting that the self-care and communication abilities of elderly patients were weak and that elderly patients were at the highest risk of medication errors. The outcomes of patient safety events in the ICPS were divided into 5 levels: none, mild, moderate, severe and death. The proportions of the 1 343 medication errors at these levels were 91.88%, 3.35%, 2.76%, 2.01% and 0%, respectively. The influencing factors of nursing-related medication errors in the 1 343 cases occurred 3 185 times; ordered from high to low frequency, they were routine violations, "negligence" and "fault" among technical mistakes, "misapplication of good rules" among rule-based errors, knowledge-based mistakes, communication, and illusion. Conclusions Applying the influencing factors of the ICPS helps nursing managers discriminate system and process defects from the perspective of human error, and can improve the management of patient safety.

  16. EXPLICIT ERROR ESTIMATES FOR MIXED AND NONCONFORMING FINITE ELEMENTS

    Institute of Scientific and Technical Information of China (English)

    Shipeng Mao; Zhong-Ci Shi

    2009-01-01

    In this paper, we study the explicit expressions of the constants in the error estimates of the lowest order mixed and nonconforming finite element methods. We start with an explicit relation between the error constant of the lowest order Raviart-Thomas interpolation error and the geometric characters of the triangle. This gives an explicit error constant of the lowest order mixed finite element method. Furthermore, similar results can be extended to the nonconforming P1 scheme based on its close connection with the lowest order Raviart-Thomas method. Meanwhile, such explicit a priori error estimates can be used as computable error bounds, which are also consistent with the maximal angle condition for the optimal error estimates of mixed and nonconforming finite element methods. Mathematics subject classification: 65N12, 65N15, 65N30, 65N50.

  17. Acetylcholine mediates behavioral and neural post-error control

    NARCIS (Netherlands)

    Danielmeier, C.; Allen, E.A.; Jocham, G.; Onur, O.A.; Eichele, T.; Ullsperger, M.

    2015-01-01

    Humans often commit errors when they are distracted by irrelevant information and no longer focus on what is relevant to the task at hand. Adjustments following errors are essential for optimizing goal achievement. The posterior medial frontal cortex (pMFC), a key area for monitoring errors, has bee

  18. A gender-based analysis of Iranian EFL learners' types of written errors

    Directory of Open Access Journals (Sweden)

    Faezeh Boroomand

    2013-05-01

    Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male learners and 50 female learners), presents different classifications and subdivisions of errors, and carries out an analysis of these errors. After detecting the most frequently committed errors in each classification, the findings reveal significant differences between the error frequencies of the two male and female groups (with a higher error frequency in female written productions).

  19. Selective ablation of Copper-Indium-Diselenide solar cells monitored by laser-induced breakdown spectroscopy and classification methods

    Energy Technology Data Exchange (ETDEWEB)

    Diego-Vallejo, David [Technische Universität Berlin, Institute of Optics and Atomic Physics, Straße des 17, Juni 135, 10623 Berlin (Germany); Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Ashkenasi, David, E-mail: d.ashkenasi@lmtb.de [Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Lemke, Andreas [Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany); Eichler, Hans Joachim [Technische Universität Berlin, Institute of Optics and Atomic Physics, Straße des 17, Juni 135, 10623 Berlin (Germany); Laser- und Medizin- Technologie Berlin GmbH (LMTB), Applied Laser Technology, Fabeckstr. 60-62, 14195 Berlin (Germany)

    2013-09-01

    Laser-induced breakdown spectroscopy (LIBS) and two classification methods, i.e. linear correlation and artificial neural networks (ANN), are used to monitor P1, P2 and P3 scribing steps of Copper-Indium-Diselenide (CIS) solar cells. Narrow channels featuring complete removal of desired layers with minimum damage on the underlying film are expected to enhance efficiency of solar cells. The monitoring technique is intended to determine that enough material has been removed to reach the desired layer based on the analysis of plasma emission acquired during multiple pass laser scribing. When successful selective scribing is achieved, a high degree of similarity between test and reference spectra has to be identified by classification methods in order to stop the scribing procedure and avoid damaging the bottom layer. Performance of linear correlation and artificial neural networks is compared and evaluated for two spectral bandwidths. By using experimentally determined combinations of classifier and analyzed spectral band for each step, classification performance achieves errors of 7, 1 and 4% for steps P1, P2 and P3, respectively. The feasibility of using plasma emission for the supervision of processing steps of solar cell manufacturing is demonstrated. This method has the potential to be implemented as an online monitoring procedure assisting the production of solar cells. - Highlights: • LIBS and two classification methods were used to monitor CIS solar cells processing. • Selective ablation of thin-film solar cells was improved with inspection system. • Customized classification method and analyzed spectral band enhanced performance.
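    One of the two monitoring classifiers above is a linear correlation between the acquired LIBS spectrum and a reference spectrum, with a high similarity indicating that the target layer has been reached. The numpy sketch below illustrates that correlation check; the spectra and the 0.9 threshold are synthetic assumptions.

```python
# Sketch of correlation-based spectral matching for process monitoring: compare a test
# LIBS spectrum with a reference spectrum and stop scribing once similarity is high enough.
import numpy as np

def correlation_similarity(test_spectrum: np.ndarray, reference_spectrum: np.ndarray) -> float:
    """Pearson correlation between test and reference emission spectra."""
    return float(np.corrcoef(test_spectrum, reference_spectrum)[0, 1])

rng = np.random.default_rng(2)
reference = np.abs(rng.normal(size=1024))                 # reference plasma emission spectrum (assumed)
good_scribe = reference + 0.1 * rng.normal(size=1024)     # spectrum after reaching the target layer
bad_scribe = np.abs(rng.normal(size=1024))                # spectrum from the wrong layer

THRESHOLD = 0.9                                           # assumed decision threshold
for spectrum in (good_scribe, bad_scribe):
    r = correlation_similarity(spectrum, reference)
    print(r, "stop scribing" if r > THRESHOLD else "continue scribing")
```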

  20. Maintenance error reduction strategies in nuclear power plants, using root cause analysis.

    Science.gov (United States)

    Wu, T M; Hwang, S L

    1989-06-01

    This study proposes a conceptual model of maintenance tasks to facilitate the identification of root causes of human errors in carrying out such tasks in nuclear power plants. Based on this model, an external/internal classification scheme was developed to discover the root causes of human errors. As a consequence, certain policies pertaining to human error prevention or correction were proposed.

  1. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  2. We need to talk about error: causes and types of error in veterinary practice.

    Science.gov (United States)

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence based system for their classification. Causes of error were identified from retrospective record review of 678 claims to the profession's leading indemnity insurer and nine focus groups (average N per group=8) with vets, nurses and support staff were performed using critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error.

  3. HYBRID INTERNET TRAFFIC CLASSIFICATION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    Li Jun; Zhang Shunyi; Lu Yanqing; Yan Junrong

    2009-01-01

    Accurate and real-time classification of network traffic is significant to network operation and management tasks such as QoS differentiation, traffic shaping and security surveillance. However, with many newly emerged P2P applications using dynamic port numbers, masquerading techniques, and payload encryption to avoid detection, traditional classification approaches turn out to be ineffective. In this paper, we present a layered hybrid system to classify current Internet traffic, motivated by the variety of network activities and their requirements for traffic classification. The proposed method can achieve fast and accurate traffic classification with low overheads and robustness to accommodate both known and unknown/encrypted applications. Furthermore, it is feasible for use in the context of real-time traffic classification. Our experimental results show the distinct advantages of the proposed classification system, compared with the one-step Machine Learning (ML) approach.

  4. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic, and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application of video-surveillance to a seaport entrance is presented and, more particularly, the different steps enabling the classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.

  5. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge regarding current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
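    The core adaptation idea above, adjusting the Reed-Solomon parity budget according to feedback from the client, can be sketched as a simple mapping from the reported loss rate to a parity level. The thresholds and parity values below are assumptions, and the actual RS encoding/decoding is omitted.

```python
# Sketch of feedback-driven parity adaptation: the server maps the loss rate reported by
# the client to a number of Reed-Solomon parity symbols per block.
def parity_symbols_for(loss_rate: float) -> int:
    """Map an observed symbol-loss rate to an RS parity budget (per 255-symbol block, assumed)."""
    if loss_rate < 0.01:
        return 8      # mild protection under a clean channel
    if loss_rate < 0.05:
        return 32
    return 64         # heavy protection under a lossy channel

# Hypothetical feedback reports from the client, e.g. one per group of pictures.
for reported_loss in (0.002, 0.03, 0.12):
    print(f"loss={reported_loss:.3f} -> nsym={parity_symbols_for(reported_loss)}")
```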

  6. Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

    CERN Document Server

    Kleiss, R H

    2006-01-01

    While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
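    For context, the "standard Monte Carlo error estimator" referred to above is the sample standard deviation of the integrand divided by the square root of the number of points, which is valid only for independent points. The sketch below shows that estimator on an arbitrary example integrand.

```python
# Sketch of the standard Monte Carlo estimate and its error for N independent points:
# integral ~ mean(f), error ~ std(f) / sqrt(N). The integrand here is an arbitrary example.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x = rng.random(N)                      # independent uniform points on [0, 1]
f = np.exp(-x * x)                     # example integrand

estimate = f.mean()
error = f.std(ddof=1) / np.sqrt(N)     # relies on independence; this is exactly what fails
                                       # to reflect the Quasi-Monte Carlo improvement
print(estimate, "+/-", error)
```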

  7. Implications of Error Analysis Studies for Academic Interventions

    Science.gov (United States)

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  8. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  9. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  10. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  11. Systematic error mitigation in multiple field astrometry

    CERN Document Server

    Gai, Mario

    2011-01-01

    Combination of more than two fields provides constraints on the systematic error of simultaneous observations. The concept is investigated in the context of the Gravitation Astrometric Measurement Experiment (GAME), which aims at measurement of the PPN parameter $\\gamma$ at the $10^{-7}-10^{-8}$ level. Robust self-calibration and control of systematic error is crucial to the achievement of the precision goal. The present work is focused on the concept investigation and practical implementation strategy of systematic error control over four simultaneously observed fields, implementing a "double differential" measurement technique. Some basic requirements on geometry, observing and calibration strategy are derived, discussing the fundamental characteristics of the proposed concept.

  12. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  13. Linear approximation for measurement errors in phase shifting interferometry

    Science.gov (United States)

    van Wingerden, Johannes; Frankena, Hans J.; Smorenburg, Cornelis

    1991-07-01

    This paper shows how measurement errors in phase shifting interferometry (PSI) can be described to a high degree of accuracy in a linear approximation. System error sources considered here are light source instability, imperfect reference phase shifting, mechanical vibrations, nonlinearity of the detector, and quantization of the detector signal. The measurement inaccuracies resulting from these errors are calculated in linear approximation for several formulas commonly used for PSI. The results are presented in tables for easy calculation of the measurement error magnitudes for known system errors. In addition, this paper discusses the measurement error reduction which can be achieved by choosing an appropriate phase calculation formula.
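
    As a rough illustration of the kind of analysis summarized above (not taken from the paper), the following sketch simulates a four-step phase-shifting measurement with a slightly miscalibrated reference phase shifter and compares the recovered phase with the true one; the signal parameters and the 1% step error are illustrative assumptions.

```python
import numpy as np

# Four-step phase-shifting interferometry: with nominal shifts 0, pi/2, pi, 3pi/2
# the phase is recovered as phi = atan2(I4 - I2, I1 - I3).
A, B = 1.0, 0.8                                  # background and modulation (illustrative)
phi_true = np.linspace(0.0, 2.0 * np.pi, 361)    # true phases to test

nominal = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
eps = 0.01                                       # assumed 1% phase-shifter miscalibration
actual = nominal * (1.0 + eps)                   # linear phase-step error

# Intensities actually recorded with the erroneous steps
I = A + B * np.cos(phi_true[:, None] + actual[None, :])

# Phase recovered with the *nominal* four-step formula
phi_meas = np.arctan2(I[:, 3] - I[:, 1], I[:, 0] - I[:, 2])
err = np.angle(np.exp(1j * (phi_meas - phi_true)))   # wrap the difference to [-pi, pi]

print(f"peak-to-valley phase error: {np.ptp(err):.4f} rad")
print(f"rms phase error:            {err.std():.4f} rad")
# For small eps the error grows roughly linearly with eps, the kind of dependence
# that error tables of this sort make explicit.
```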

  14. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  15. Neural network classification - A Bayesian interpretation

    Science.gov (United States)

    Wan, Eric A.

    1990-01-01

    The relationship between minimizing a mean squared error and finding the optimal Bayesian classifier is reviewed. This provides a theoretical interpretation for the process by which neural networks are used in classification. A number of confidence measures are proposed to evaluate the performance of the neural network classifier within a statistical framework.
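
    A minimal sketch of the point being reviewed, under assumed toy conditions: a network trained by minimizing mean squared error against 0/1 class targets approximates the Bayes posterior P(y=1|x). The two-Gaussian generative model and the use of scikit-learn's MLPRegressor are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Two equiprobable classes with 1-D Gaussian likelihoods (illustrative setup)
n = 20000
y = rng.integers(0, 2, n)
x = np.where(y == 1, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# A small network trained by minimizing mean squared error against 0/1 targets
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(x.reshape(-1, 1), y.astype(float))

# Analytic Bayes posterior P(y=1 | x) for comparison
grid = np.linspace(-3, 3, 7)
p1 = norm.pdf(grid, 1.0, 1.0)
p0 = norm.pdf(grid, -1.0, 1.0)
posterior = p1 / (p0 + p1)

pred = net.predict(grid.reshape(-1, 1))
for g, p, q in zip(grid, posterior, pred):
    print(f"x = {g:+.1f}   Bayes P(y=1|x) = {p:.3f}   MSE-trained output = {q:.3f}")
```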

  16. Errors and mistakes in breast ultrasound diagnostics.

    Science.gov (United States)

    Jakubowski, Wiesław; Dobruch-Sobczak, Katarzyna; Migda, Bartosz

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved breast disease diagnostics. Nevertheless, as with every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound are presented, including those caused by artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of overall gain, the time-gain curve, or the depth range. Errors dependent on the examiner, which result in an incorrect BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors are listed. Methods for minimizing the number of errors are discussed, including those related to appropriate examination technique, taking into account data from the case history, and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples are given of errors resulting from the technical conditions of the method and of examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  17. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  18. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  19. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4% of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20% of reports. Fortunately, most of them are minor errors or, if serious, are found and corrected promptly; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The rate of misdiagnosis/misinterpretation rises in the emergency setting and in the early part of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of the subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and, finally, neuroradiological emergencies. To minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, directly with the treatment team.

  20. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologists' professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive. They can also be described as descriptive, interpretative, or decision-related. Perceptive errors comprise false positives and false negatives (non-identification or erroneous identification); cognitive errors comprise knowledge-based and psychological errors.

  1. Errors generated with the use of rectangular collimation

    Energy Technology Data Exchange (ETDEWEB)

    Parks, E.T. (Department of Allied Health, Western Kentucky University, Bowling Green (USA))

    1991-04-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques.

  2. Featureless Classification of Light Curves

    CERN Document Server

    Kügler, Sven Dennis; Polsterer, Kai Lars

    2015-01-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a plain vector that can be fed directly into a classifier. In the literature, various statistical features derived from time series serve as a representation. Typically, the usefulness of the derived features is judged in an empirical fashion according to their predictive power. In this work, an alternative to the feature-based approach is investigated. In this new representation the time series is described by a density model. Similarity between each pair of time series is quantified by the distance between their respective models. The density model captures all the information available, also including measurement errors. Hence, we view this model as a generalisation of the static features, which can be derived directly, e.g., as ...

  3. A Literature Review of Research on Error Analysis Abroad

    Institute of Scientific and Technical Information of China (English)

    肖倩

    2014-01-01

    Error constitutes an important part of interlanguage. Error analysis is an approach influenced by behaviorism and based on cognitive theory. The aim of error analysis is to explore the errors made by second language learners and the mental processes of their second language acquisition, which is of great importance to both learners and teachers. However, as a research tool, error analysis has its limitations. In order to better understand and make the best use of error analysis, its background, definition, basic assumptions, classification, procedure, explanation, implications and applications are illustrated. Its limitations are analyzed from the perspectives of its nature and its definition of categories. This review of the literature abroad sheds light on implications for second language teaching.

  4. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  5. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...

  6. Minimax Optimal Rates of Convergence for Multicategory Classifications

    Institute of Scientific and Technical Information of China (English)

    Di Rong CHEN; Xu YOU

    2007-01-01

    In the problem of classification (or pattern recognition), given a set of n samples, we attempt to construct a classifier g_n with a small misclassification error. It is important to study the convergence rates of the misclassification error as n tends to infinity. It is known that such a rate cannot exist for the set of all distributions. In this paper we obtain the optimal convergence rates for a class of distributions D(λ,ω) in multicategory classification and nonstandard binary classification.

  7. Contextualizing Object Detection and Classification.

    Science.gov (United States)

    Chen, Qiang; Song, Zheng; Dong, Jian; Huang, Zhongyang; Hua, Yang; Yan, Shuicheng

    2015-01-01

    We investigate how to iteratively and mutually boost object classification and detection performance by taking the outputs from one task as the context of the other one. While context models have been quite popular, previous works mainly concentrate on co-occurrence relationship within classes and few of them focus on contextualization from a top-down perspective, i.e. high-level task context. In this paper, our system adopts a new method for adaptive context modeling and iterative boosting. First, the contextualized support vector machine (Context-SVM) is proposed, where the context takes the role of dynamically adjusting the classification score based on the sample ambiguity, and thus the context-adaptive classifier is achieved. Then, an iterative training procedure is presented. In each step, Context-SVM, associated with the output context from one task (object classification or detection), is instantiated to boost the performance for the other task, whose augmented outputs are then further used to improve the former task by Context-SVM. The proposed solution is evaluated on the object classification and detection tasks of PASCAL Visual Object Classes Challenge (VOC) 2007, 2010 and SUN09 data sets, and achieves the state-of-the-art performance.

  8. Motion error compensation of multi-legged walking robots

    Science.gov (United States)

    Wang, Liangwen; Chen, Xuedong; Wang, Xinjie; Tang, Weigang; Sun, Yi; Pan, Chunmei

    2012-07-01

    Because of errors in the structure and kinematic parameters of multi-legged walking robots, the motion trajectory of a robot diverges from the ideal motion requirements during movement. Since existing error compensation is usually applied to the control of manipulator arms, error compensation for multi-legged robots has seldom been explored. To reduce the kinematic error of such robots, a feedforward motion error compensation method for multi-legged mobile robots is proposed to improve their motion precision. The locus error of the robot body is measured while the robot moves along a given track. The error of the driven joint variables is then obtained from an error calculation model in terms of the locus error of the robot body. This error value is used to compensate the driven joint variables and to modify the control model of the robot, which then drives the robot according to the modified control model. A model of the relation between the robot's locus errors and the errors in its kinematic variables is set up to achieve the kinematic error compensation. On the basis of the inverse kinematics of a multi-legged walking robot, the relation between the error of the motion trajectory and the driven joint variables is discussed, and a set of equations is obtained that relates the error of the driven joint variables, the structure parameters, and the error of the robot's locus. Taking the robot MiniQuad moving along a straight-line tread as an example, the motion error compensation is studied, and the actual locus errors of the robot body are measured before and after compensation. In the tests, the deviations of the robot centroid coordinates in the x- and z-directions are more than halved. The kinematic errors of the robot body are thus reduced effectively by the feedforward motion error compensation method.

  9. Minimally-sized balanced decomposition schemes for multi-class classification

    NARCIS (Netherlands)

    Smirnov, E.N.; Moed, M.; Nalbantov, G.I.; Sprinkhuizen-Kuyper, I.G.

    2011-01-01

    Error-Correcting Output Coding (ECOC) is a well-known class of decomposition schemes for multi-class classification. It allows representing any multiclass classification problem as a set of binary classification problems. Due to code redundancy, ECOC schemes can significantly improve generalization performance.
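
    A minimal sketch of the general ECOC idea (not of the minimally-sized balanced schemes studied in the paper): draw a random binary code matrix, train one binary learner per column, and decode by Hamming distance. The random code, the logistic-regression base learner, and the iris data are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def random_code(n_classes, code_len, rng):
    # Draw +/-1 columns until each contains both labels (avoids degenerate dichotomies)
    cols = []
    while len(cols) < code_len:
        c = rng.choice([-1, 1], size=n_classes)
        if len(np.unique(c)) == 2:
            cols.append(c)
    return np.stack(cols, axis=1)

rng = np.random.default_rng(1)
X, y = load_iris(return_X_y=True)
code = random_code(n_classes=3, code_len=8, rng=rng)   # one codeword row per class

# Train one binary learner per column (dichotomy)
learners = []
for j in range(code.shape[1]):
    bin_y = code[y, j]                  # relabel each sample by its class's j-th bit
    learners.append(LogisticRegression(max_iter=1000).fit(X, bin_y))

# Decode: predict the bits, then pick the class whose codeword is closest (Hamming)
bits = np.column_stack([clf.predict(X) for clf in learners])
dist = (bits[:, None, :] != code[None, :, :]).sum(axis=2)
pred = dist.argmin(axis=1)
print("training accuracy of the ECOC ensemble:", (pred == y).mean())
```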

  10. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  11. Integrating TM and Ancillary Geographical Data with Classification Trees for Land Cover Classification of Marsh Area

    Institute of Scientific and Technical Information of China (English)

    NA Xiaodong; ZHANG Shuqing; ZHANG Huaiqing; LI Xiaofeng; YU Huan; LIU Chunyue

    2009-01-01

    The main objective of this research is to determine the capacity of land cover classification that combines spectral and textural features of Landsat TM imagery with ancillary geographical data in wetlands of the Sanjiang Plain, Heilongjiang Province, China. Semi-variograms and Z-test values were calculated to assess the separability of grey-level co-occurrence texture measures and to maximize the difference between land cover types. The degree of spatial autocorrelation showed that window sizes of 3×3 pixels and 11×11 pixels were most appropriate for Landsat TM image texture calculations. The texture analysis showed that co-occurrence entropy, dissimilarity, and variance texture measures, derived from the Landsat TM spectral bands and vegetation indices, provided the most significant statistical differentiation between land cover types. Subsequently, a Classification and Regression Tree (CART) algorithm was applied to three different combinations of predictors: 1) TM imagery alone (TM-only); 2) TM imagery plus image texture (TM+TXT model); and 3) all predictors including TM imagery, image texture and additional ancillary GIS information (TM+TXT+GIS model). Compared with traditional Maximum Likelihood Classification (MLC) supervised classification, the three classification-tree predictive models reduced the overall error rate significantly. Image texture measures and ancillary geographical variables suppressed the speckle noise effectively and markedly reduced the classification error rate for marsh. For the classification-tree model making use of all available predictors, the omission error rate was 12.90% and the commission error rate was 10.99% for marsh. The developed method is portable, relatively easy to implement and should be applicable in other settings and over larger extents.
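
    A toy sketch of the experimental design described above, with synthetic arrays standing in for the TM bands, texture measures, and ancillary GIS layers; a single classification tree is trained on each of the three predictor combinations. All names and numbers are illustrative, not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 3000

# Synthetic stand-ins for the three predictor groups (purely illustrative)
spectral = rng.normal(size=(n, 6))      # six TM reflectance bands
texture = rng.normal(size=(n, 3))       # entropy, dissimilarity, variance measures
ancillary = rng.normal(size=(n, 2))     # e.g. elevation, distance to river
labels = (spectral[:, 3] + 0.8 * texture[:, 0] + 0.6 * ancillary[:, 0]
          + 0.3 * rng.normal(size=n) > 0).astype(int)   # "marsh" vs "other"

combos = {
    "TM only":          spectral,
    "TM + TXT":         np.hstack([spectral, texture]),
    "TM + TXT + GIS":   np.hstack([spectral, texture, ancillary]),
}
for name, X in combos.items():
    tree = DecisionTreeClassifier(max_depth=6, random_state=0)
    acc = cross_val_score(tree, X, labels, cv=5).mean()
    print(f"{name:15s} cross-validated accuracy: {acc:.3f}")
```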

  12. Investigation of fluorescence spectra disturbances influencing the classification performance of fluorescently labeled plastic flakes

    Science.gov (United States)

    Fomin, Petr; Brunner, Siegfried; Kargel, Christian

    2013-04-01

    The recycling of plastic products becomes increasingly attractive not only from an environmental point of view, but also economically. For recycled (engineering) plastic products with the highest possible quality, plastic sorting technologies must provide clean and virtually mono-fractional compositions from a mixture of many different types of (shredded) plastics. In order to put this high quality sorting into practice, the labeling of virgin plastics with specific fluorescent markers at very low concentrations (ppm level or less) during their manufacturing process is proposed. The emitted fluorescence spectra represent "optical fingerprints" - each being unique for a particular plastic - which we use for plastic identification and classification purposes. In this study we quantify the classification performance using our prototype measurement system and 15 different plastic types when various influence factors most relevant in practice cause disturbances of the fluorescence spectra emitted from the labeled plastics. The results of these investigations help optimize the development and incorporation of appropriate fluorescent markers as well as the classification algorithms and overall measurement system in order to achieve the lowest possible classification error rates.

  13. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG are discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context are discussed.

  14. Quadratic dynamical decoupling with nonuniform error suppression

    Energy Technology Data Exchange (ETDEWEB)

    Quiroz, Gregory; Lidar, Daniel A. [Department of Physics and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States); Departments of Electrical Engineering, Chemistry, and Physics, and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States)

    2011-10-15

    We analyze numerically the performance of the near-optimal quadratic dynamical decoupling (QDD) single-qubit decoherence error suppression method [J. West et al., Phys. Rev. Lett. 104, 130501 (2010)]. The QDD sequence is formed by nesting two optimal Uhrig dynamical decoupling sequences for two orthogonal axes, comprising $N_1$ and $N_2$ pulses, respectively. Varying these numbers, we study the decoherence suppression properties of QDD directly by isolating the errors associated with each system basis operator present in the system-bath interaction Hamiltonian. Each individual error scales with the lowest order of the Dyson series, therefore immediately yielding the order of decoherence suppression. We show that the error suppression properties of QDD are dependent upon the parities of $N_1$ and $N_2$, and near-optimal performance is achieved for general single-qubit interactions when $N_1 = N_2$.

  15. Hyperspectral image classification using functional data analysis.

    Science.gov (United States)

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
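
    A simplified sketch of the idea, under stated assumptions: each pixel's spectrum is treated as a sampled curve, lightly smoothed, reduced to principal-component scores, and then classified. The synthetic spectra, the smoothing window, and the logistic-regression classifier are illustrative stand-ins for the paper's functional principal component analysis.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
bands = np.linspace(400, 2500, 200)          # wavelength grid in nm (illustrative)

def spectra(cls, n):
    # Two synthetic material classes with different smooth spectral shapes
    base = np.sin(bands / 300.0 + 0.4 * cls)
    return base + 0.15 * rng.normal(size=(n, bands.size))

X = np.vstack([spectra(0, 500), spectra(1, 500)])
y = np.repeat([0, 1], 500)

# "Functional" view: smooth each sampled curve, then keep principal-component scores
X_smooth = uniform_filter1d(X, size=5, axis=1)
scores = PCA(n_components=10).fit_transform(X_smooth)

Xtr, Xte, ytr, yte = train_test_split(scores, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("test accuracy on PCA scores of smoothed spectra:", clf.score(Xte, yte))
```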

  16. Automatic web services classification based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    陈立; 张英; 宋自林; 苗壮

    2013-01-01

    With the development of web services technology, the number of services available on the internet is growing day by day. To achieve automatic and accurate service classification, which benefits service-related tasks, a rough set theory based method for service classification is proposed. First, the service descriptions are preprocessed and represented as vectors. Inspired by discernibility-matrix-based attribute reduction in rough set theory, and taking into account the characteristics of the decision table for service classification, a method based on continuous discernibility matrices is proposed for dimensionality reduction. Finally, service classification is performed automatically. In the experiments, the proposed method achieves satisfactory classification results in all five test categories, which shows that it is accurate and could be used in practical web services classification.

  17. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added asessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  18. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
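
    A minimal sketch of the error-propagation idea (not the paper's 28-curve Lagrange fits): a wind-speed series is pushed through an illustrative, linearly interpolated power curve at its nominal value and at ±10%, and the spread of the total output is reported. All curve points and speeds are assumptions.

```python
import numpy as np

# Illustrative power curve of a generic turbine: (wind speed in m/s, power in kW)
v_curve = np.array([3, 5, 7, 9, 11, 13, 15, 25], dtype=float)
p_curve = np.array([0, 150, 500, 1100, 1800, 2000, 2000, 2000], dtype=float)

def power(v):
    """Interpolated turbine output; zero below cut-in and above cut-out."""
    return np.interp(v, v_curve, p_curve, left=0.0, right=0.0)

v = np.array([4.2, 6.8, 7.5, 9.9, 12.3, 14.0])   # measured speeds (illustrative)
rel_err = 0.10                                   # assumed 10% speed measurement error

p_nominal = power(v)
p_low, p_high = power(v * (1 - rel_err)), power(v * (1 + rel_err))

total = p_nominal.sum()
spread = (p_high.sum() - p_low.sum()) / 2
print(f"estimated output {total:.0f} kW  +/- {spread:.0f} kW "
      f"({100 * spread / total:.1f}% from a {100 * rel_err:.0f}% speed error)")
```

    How much of the speed error survives in the power estimate depends on where the speeds fall on the curve (steep mid-range versus flat rated region), which is why the paper propagates the full probability density rather than single points.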

  19. Wind Power Error Estimation in Resource Assessments

    Science.gov (United States)

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  20. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  1. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue.…

  2. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  3. Error Reduction for Visible Watermarking in Still Images

    Institute of Scientific and Technical Information of China (English)

    Ljubisa Radunovic; 王朔中; et al.

    2002-01-01

    Different digital watermarking techniques and their applications are briefly reviewed. A solution to a practical problem with visible image marking is presented, together with experimental results and discussion. The main focus is on reduction of the error caused by the mark addition and subtraction. Classification of the image based on its mean gray level and adjustment of out-of-range gray levels are implemented.

  4. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    Science.gov (United States)

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María

    2014-01-01

    Nowadays, scientific writers are required not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified to propose a classification of the categories they contain. This study…

  5. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the

  6. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correcting provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in a Chinese language classroom, and it may also have wider implications for other languages.

  7. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    “Errare humanum est” is a well-known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve while giving proper feedback.

  8. Real-life GOLD 2011 implementation: the management of COPD lacks correct classification and adequate treatment.

    Directory of Open Access Journals (Sweden)

    Vladimir Koblizek

    Chronic obstructive pulmonary disease (COPD) is a serious, yet preventable and treatable, disease. The success of its treatment relies largely on the proper implementation of recommendations, such as the recently released Global Strategy for Diagnosis, Management, and Prevention of COPD (GOLD 2011), of late December 2011. The primary objective of this study was to examine the extent to which GOLD 2011 is being used correctly among Czech respiratory specialists, in particular with regard to the correct classification of patients. The secondary objective was to explore what effect an erroneous classification has on inadequate use of inhaled corticosteroids (ICS). In order to achieve these goals, a multi-center, cross-sectional study was conducted, consisting of a general questionnaire and patient-specific forms. A subjective classification into the GOLD 2011 categories was examined, and then compared with the objectively computed one. Based on 1,355 patient forms, a discrepancy between the subjective and objective classifications was found in 32.8% of cases. The most common reason for incorrect classification was an error in the assessment of symptoms, which resulted in underestimation in 23.9% of cases, and overestimation in 8.9% of the patients' records examined. The specialists seeing more than 120 patients per month were most likely to misclassify their condition, and were found to have done so in 36.7% of all patients seen. While examining the subjectively driven ICS prescription, it was found that 19.5% of patients received ICS not according to guideline recommendations, while in 12.2% of cases the ICS were omitted, contrary to guideline recommendations. Furthermore, with consideration to the objectively-computed classification, it was discovered that 15.4% of patients received ICS unnecessarily, whereas in 15.8% of cases, ICS were erroneously omitted. It was therefore concluded that Czech specialists tend either to under-prescribe or overuse

  9. Floating-Point Numbers with Error Estimates (revised)

    CERN Document Server

    Masotti, Glauco

    2012-01-01

    The study addresses the problem of precision in floating-point (FP) computations. A method for estimating the errors which affect intermediate and final results is proposed and a summary of many software simulations is discussed. The basic idea consists of representing FP numbers by means of a data structure collecting value and estimated error information. Under certain constraints, the estimate of the absolute error is accurate and has a compact statistical distribution. By monitoring the estimated relative error during a computation (an ad-hoc definition of relative error has been used), the validity of results can be ensured. The error estimate enables the implementation of robust algorithms, and the detection of ill-conditioned problems. A dynamic extension of number precision, under the control of error estimates, is advocated, in order to compute results within given error bounds. A reduced time penalty could be achieved by a specialized FP processor. The realization of a hardwired processor incorporat...
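
    A toy sketch of the kind of data structure the abstract describes, under assumed propagation rules: each value carries a running absolute-error estimate that is updated through addition and multiplication with a first-order term plus a unit-roundoff contribution. The class name, the propagation formulas, and the cancellation example are illustrative, not the paper's implementation.

```python
import sys
from dataclasses import dataclass

EPS = sys.float_info.epsilon   # unit roundoff of double precision

@dataclass
class EFloat:
    """A float paired with a running estimate of its absolute error (toy sketch)."""
    value: float
    err: float = 0.0

    @classmethod
    def of(cls, x):
        # Assume the input itself is only known to within one rounding error
        return cls(float(x), abs(x) * EPS)

    def __add__(self, other):
        v = self.value + other.value
        e = self.err + other.err + abs(v) * EPS      # propagated + new rounding error
        return EFloat(v, e)

    def __mul__(self, other):
        v = self.value * other.value
        e = (abs(self.value) * other.err + abs(other.value) * self.err
             + abs(v) * EPS)
        return EFloat(v, e)

    @property
    def rel_err(self):
        return self.err / abs(self.value) if self.value else float("inf")

# Classic cancellation: (a + b) nearly cancels, so the relative error blows up
a, b, c = EFloat.of(1.0e8), EFloat.of(-1.0e8 + 1.0), EFloat.of(1.0e-3)
r = (a + b) + c
print(f"result = {r.value}, abs error estimate = {r.err:.2e}, "
      f"rel error estimate = {r.rel_err:.2e}")
```

    Monitoring the estimated relative error this way is what lets such a scheme flag ill-conditioned steps like the cancellation above, where the result's relative uncertainty is many orders of magnitude larger than machine precision.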

  10. Radar clutter classification

    Science.gov (United States)

    Stehwien, Wolfgang

    1989-11-01

    The problem of classifying radar clutter as found on air traffic control radar systems is studied. An algorithm based on Bayes decision theory and the parametric maximum a posteriori probability classifier is developed to perform this classification automatically. This classifier employs a quadratic discriminant function and is optimum for feature vectors that are distributed according to the multivariate normal density. Separable clutter classes are most likely to arise from the analysis of the Doppler spectrum. Specifically, a feature set based on the complex reflection coefficients of the lattice prediction error filter is proposed. The classifier is tested using data recorded from L-band air traffic control radars. The Doppler spectra of these data are examined; the properties of the feature set computed using these data are studied in terms of both the marginal and multivariate statistics. Several strategies involving different numbers of features, class assignments, and data set pretesting according to Doppler frequency and signal to noise ratio were evaluated before settling on a workable algorithm. Final results are presented in terms of experimental misclassification rates and simulated and classified plane position indicator displays.
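
    A minimal sketch of a parametric maximum a posteriori classifier with a quadratic discriminant, assuming Gaussian class models; the 2-D features are made-up stand-ins for the reflection-coefficient features described above, and the class parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mean, cov, n):
    return rng.multivariate_normal(mean, cov, size=n)

# Made-up 2-D feature vectors for three clutter classes (stand-ins for the
# lattice prediction-error filter reflection coefficients used in the paper)
params = {
    "ground":  (np.array([0.0, 0.0]), np.array([[0.05, 0.00], [0.00, 0.05]])),
    "weather": (np.array([0.5, 0.2]), np.array([[0.10, 0.03], [0.03, 0.08]])),
    "birds":   (np.array([0.2, 0.6]), np.array([[0.08, -0.02], [-0.02, 0.12]])),
}
train = {k: simulate(m, c, 500) for k, (m, c) in params.items()}

# Fit per-class Gaussian models with equal priors
fits = {}
for k, X in train.items():
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    fits[k] = (mu, np.linalg.inv(S), np.log(np.linalg.det(S)), np.log(1 / 3))

def discriminant(x, mu, S_inv, logdet, logprior):
    """Quadratic discriminant: log posterior up to a common constant."""
    d = x - mu
    return -0.5 * logdet - 0.5 * d @ S_inv @ d + logprior

def classify(x):
    return max(fits, key=lambda k: discriminant(x, *fits[k]))

# Experimental misclassification rate on a fresh sample from each class
errors, total = 0, 0
for k, (m, c) in params.items():
    for x in simulate(m, c, 200):
        errors += classify(x) != k
        total += 1
print(f"experimental misclassification rate: {errors / total:.3f}")
```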

  11. Introduction to precision machine design and error assessment

    CERN Document Server

    Mekid, Samir

    2008-01-01

    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and ControlThe book begins with a brief overview of precision engineering and applications before introdu

  12. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    Background: Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results: We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors. Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average
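
    A small sketch of the majority-voting ensemble idea on stand-in data (scikit-learn's digits set rather than the Subcellular Location Features), with three heterogeneous base classifiers combined by a hard vote; the particular models and hyperparameters are assumptions.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Three heterogeneous base classifiers (stand-ins for the tuned models in the paper)
nn = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64,),
                                                   max_iter=1000, random_state=0))
svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=5, gamma="scale"))
svm_poly = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=5))

ensemble = VotingClassifier(
    estimators=[("nn", nn), ("svm_rbf", svm_rbf), ("svm_poly", svm_poly)],
    voting="hard",                      # simple majority vote
)

for name, model in [("neural net", nn), ("SVM (rbf)", svm_rbf),
                    ("SVM (poly)", svm_poly), ("majority-vote ensemble", ensemble)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:24s} 5-fold accuracy: {acc:.3f}")
```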

  13. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that produce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  14. Classification of DNA nucleotides with transverse tunneling currents

    Science.gov (United States)

    Nyvold Pedersen, Jonas; Boynton, Paul; Di Ventra, Massimiliano; Jauho, Antti-Pekka; Flyvbjerg, Henrik

    2017-01-01

    It has been theoretically suggested and experimentally demonstrated that fast and low-cost sequencing of DNA, RNA, and peptide molecules might be achieved by passing such molecules between electrodes embedded in a nanochannel. The experimental realization of this scheme faces major challenges, however. In realistic liquid environments, typical currents in tunneling devices are of the order of picoamps. This corresponds to only six electrons per microsecond, and this number affects the integration time required to do current measurements in real experiments. This limits the speed of sequencing, though current fluctuations due to Brownian motion of the molecule average out during the required integration time. Moreover, data acquisition equipment introduces noise, and electronic filters create correlations in time-series data. We discuss how these effects must be included in the analysis of, e.g., the assignment of specific nucleobases to current signals. As the signals from different molecules overlap, unambiguous classification is impossible with a single measurement. We argue that the assignment of molecules to a signal is a standard pattern classification problem and calculation of the error rates is straightforward. The ideas presented here can be extended to other sequencing approaches of current interest.
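
    A toy model of the counting statistics mentioned above, with invented numbers: electron counts in an integration window are treated as Poisson, two bases differ only in their mean current, and a count threshold classifies each window; the error rate falls as the integration time grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative mean currents for two nucleobases, in electrons per microsecond
# (the abstract notes that ~1 pA corresponds to roughly six electrons per microsecond)
rate_a, rate_b = 6.0, 7.5

def error_rate(window_us, n_trials=200_000):
    """Single-window misclassification rate for a simple count threshold."""
    counts_a = rng.poisson(rate_a * window_us, n_trials)
    counts_b = rng.poisson(rate_b * window_us, n_trials)
    threshold = 0.5 * (rate_a + rate_b) * window_us
    errs = (counts_a > threshold).mean() + (counts_b <= threshold).mean()
    return errs / 2

for window in (1, 10, 100, 1000):   # integration time in microseconds
    print(f"integration {window:5d} us -> error rate {error_rate(window):.3f}")
```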

  15. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  16. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons underlying these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  17. Classification of cultivated plants.

    NARCIS (Netherlands)

    Brandenburg, W.A.

    1986-01-01

    Agricultural practice demands principles for classification, starting from the basal entity in cultivated plants: the cultivar. In establishing biosystematic relationships between wild, weedy and cultivated plants, the species concept needs re-examination. Combining of botanic classification, based

  18. Achievable Precision for Optical Ranging Systems

    Science.gov (United States)

    Moision, Bruce; Erkmen, Baris I.

    2012-01-01

    Achievable RMS errors in estimating the phase, frequency, and intensity of a direct-detected intensity-modulated optical pulse train are presented. For each parameter, the Cramer-Rao-Bound (CRB) is derived and the performance of the Maximum Likelihood estimator is illustrated. Approximations to the CRBs are provided, enabling an intuitive understanding of estimator behavior as a function of the signaling parameters. The results are compared to achievable RMS errors in estimating the same parameters from a sinusoidal waveform in additive white Gaussian noise. This establishes a framework for a performance comparison of radio frequency (RF) and optical science. Comparisons are made using parameters for state-of-the-art deep-space RF and optical links. Degradations to the achievable errors due to clock phase noise and detector jitter are illustrated.

  19. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  20. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  1. Decentralized estimation of sensor systematic error and target state vector

    Institute of Scientific and Technical Information of China (English)

    贺明科; 王正明; 朱炬波

    2003-01-01

    An accurate estimation of the sensor systematic error is significant for improving the performance of a target tracking system. Existing methods usually append the bias states directly to the variable states to form augmented state vectors and utilize the conventional Kalman estimator to obtain state vector estimates. Doing so is computationally expensive, and much work has been devoted to decoupling the variable states from the systematic error. However, decentralized estimation of systematic errors, reduction of the amount of computation, and decentralized track fusion are far from being realized. This paper addresses the distributed track fusion problem in a multi-sensor tracking system in the presence of sensor bias. With this method, the variable states and the systematic error are decoupled, and decentralized systematic error estimation and track fusion are achieved. Simulation results verify that the method obtains accurate estimates of the systematic error and the state vector.
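
    For context, the conventional augmented-state approach that the abstract argues against can be sketched as follows: the sensor bias is appended to the target state and a standard Kalman filter is run over the augmented vector. The Python/NumPy sketch below assumes a simple linear target model and a constant additive bias; it illustrates that baseline, not the decentralized, decoupled estimator proposed in the paper.

```python
import numpy as np

# Assumed linear model: target state x = (position, velocity), constant sensor bias b.
# Augmented state z = [x; b]; the measurement sees position plus bias.
dt = 1.0
F_x = np.array([[1.0, dt], [0.0, 1.0]])           # target dynamics
F = np.block([[F_x, np.zeros((2, 1))],
              [np.zeros((1, 2)), np.eye(1)]])      # bias modelled as constant
H = np.array([[1.0, 0.0, 1.0]])                    # measurement = position + bias
Q = np.diag([1e-3, 1e-3, 1e-6])                    # process noise (bias nearly static)
R = np.array([[0.5]])                              # measurement noise

def kalman_step(z_est, P, meas):
    """One predict/update cycle of the augmented-state Kalman filter."""
    z_pred = F @ z_est
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    z_new = z_pred + K @ (meas - H @ z_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return z_new, P_new

# Toy usage: true bias 0.8, noisy position measurements of a moving target.
rng = np.random.default_rng(0)
z_est, P = np.zeros(3), np.eye(3)
true_x, true_b = np.array([0.0, 1.0]), 0.8
for _ in range(50):
    true_x = F_x @ true_x
    meas = np.array([true_x[0] + true_b + rng.normal(0, 0.7)])
    z_est, P = kalman_step(z_est, P, meas)
print("estimated bias:", z_est[2])
```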

  2. Random Forest Classification of Wetland Landcovers from Multi-Sensor Data in the Arid Region of Xinjiang, China

    Directory of Open Access Journals (Sweden)

    Shaohong Tian

    2016-11-01

    Full Text Available Wetland classification from remotely sensed data is usually difficult due to extensive seasonal vegetation dynamics and hydrological fluctuation. This study presents a random forest classification approach for the retrieval of wetland landcover in arid regions by fusing Pléiade-1B data with multi-date Landsat-8 data. The segmentation of the Pléiade-1B multispectral image data was performed based on an object-oriented approach, and geometric and spectral features were extracted for the segmented image objects. Normalized difference vegetation index (NDVI) series data were also calculated from the multi-date Landsat-8 data, reflecting vegetation phenological changes over the growth cycle. The feature set extracted from the two sensors' data was optimized and employed to create the random forest model for the classification of the wetland landcovers along the Ertix River in northern Xinjiang, China. Comparison with other classification methods, such as support vector machine and artificial neural network classifiers, indicates that the random forest classifier can achieve accurate classification with an overall accuracy of 93% and a Kappa coefficient of 0.92. The classification accuracy of the farming lands and water bodies, which have distinct boundaries with the surrounding land covers, was improved by 5%–10% by making use of geometric shape properties. To remove the difficulty in the classification caused by the similar spectral features of the vegetation covers, the phenological differences and the textural information from the gray-level co-occurrence matrix were incorporated into the classification, and the main wetland vegetation covers in the study area were derived from the two sensors' data. The inclusion of phenological information in the classification reduced the classification errors, and the overall accuracy was improved by approximately 10%. The results show that the proposed random forest
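
    As a rough illustration of the kind of pipeline described above (per-object geometric and spectral features fused with a multi-date NDVI series, classified by a random forest), here is a minimal scikit-learn sketch. The feature layout and the data are invented placeholders; this is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-object feature matrix: geometric + spectral features from the
# high-resolution image and an NDVI time series from multi-date imagery.
rng = np.random.default_rng(42)
n_objects = 1000
X_geom_spec = rng.normal(size=(n_objects, 10))    # e.g. shape, area, band means
X_ndvi_series = rng.normal(size=(n_objects, 6))   # e.g. NDVI at 6 acquisition dates
X = np.hstack([X_geom_spec, X_ndvi_series])
y = rng.integers(0, 5, size=n_objects)            # 5 land-cover classes (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print("overall accuracy:", accuracy_score(y_te, y_pred))
print("kappa:", cohen_kappa_score(y_te, y_pred))
print("out-of-bag score:", clf.oob_score_)
```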

  3. Stellar classification from single-band imaging using machine learning

    Science.gov (United States)

    Kuntzer, T.; Tewes, M.; Courbin, F.

    2016-06-01

    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is well possible.
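
    A minimal sketch of the PCA-plus-ANN pipeline outlined above, using scikit-learn on placeholder data (flattened star-image cutouts with integer spectral-class labels). The component count and network size are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Placeholder data: each row is a flattened PSF/diffraction-pattern cutout,
# each label one of 13 spectral subclasses (0 = O5 ... 12 = M5).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 32 * 32))
y = rng.integers(0, 13, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(
    PCA(n_components=20),                       # dimensionality reduction
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```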

  4. Recursive training of neural networks for classification.

    Science.gov (United States)

    Aladjem, M

    2000-01-01

    A method for recursive training of neural networks for classification is proposed. It searches for the discriminant functions corresponding to several small local minima of the error function. The novelty of the proposed method lies in the transformation of the data into new training data with a deflated minimum of the error function and iteration to obtain the next solution. A simulation study and a character recognition application indicate that the proposed method has the potential to escape from local minima and to direct the local optimizer to new solutions.

  5. VOCAL SEGMENT CLASSIFICATION IN POPULAR MUSIC

    DEFF Research Database (Denmark)

    Feng, Ling; Nielsen, Andreas Brinch; Hansen, Lars Kai

    2008-01-01

    This paper explores the vocal and non-vocal music classification problem within popular songs. A newly built labeled database covering 147 popular songs is announced. It is designed for classifying signals from 1sec time windows. Features are selected for this particular task, in order to capture......-validated training and test setup. The database is divided in two different ways: with/without artist overlap between training and test sets, so as to study the so called ‘artist effect’. The performance and results are analyzed in depth: from error rates to sample-to-sample error correlation. A voting scheme...

  6. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…

  7. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary ca

  8. Texture Image Classification Based on Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    DENG Wei-bing; LI Hai-fei; SHI Ya-li; YANG Xiao-hui

    2014-01-01

    For a texture image, by recognizing the class of every pixel of the image, the image can be partitioned into disjoint regions of uniform texture. This paper proposes a texture image classification algorithm based on Gabor wavelets. In this algorithm, the characteristics of the image are obtained from every pixel and its neighborhood, and the algorithm can transfer information between neighborhoods of different sizes. Experiments on the standard Brodatz texture image dataset show that the proposed algorithm achieves good classification rates.
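
    A minimal sketch of per-pixel Gabor-filter texture features fed to a simple classifier, using scikit-image and scikit-learn. The filter frequencies, orientations and classifier are illustrative choices, not those of the paper, and the two synthetic textures merely stand in for Brodatz patches.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), thetas=(0, np.pi / 4, np.pi / 2)):
    """Stack magnitude responses of a small Gabor filter bank, per pixel."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            feats.append(np.sqrt(real ** 2 + imag ** 2))
    return np.stack(feats, axis=-1)               # shape (H, W, n_filters)

# Toy usage with two synthetic textures (placeholders for Brodatz patches).
rng = np.random.default_rng(0)
tex_a = rng.normal(size=(64, 64))
tex_b = np.sin(np.linspace(0, 40 * np.pi, 64))[None, :] + 0.3 * rng.normal(size=(64, 64))

X, y = [], []
for label, tex in enumerate([tex_a, tex_b]):
    f = gabor_features(tex).reshape(-1, 9)        # one feature vector per pixel
    X.append(f)
    y.append(np.full(f.shape[0], label))
X, y = np.vstack(X), np.concatenate(y)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```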

  9. Feature extraction and classification algorithms for high dimensional data

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  10. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.

  11. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  12. Raptor Codes for Use in Opportunistic Error Correction

    NARCIS (Netherlands)

    Zijnge, T.; Schiphorst, R.; Shao, X.; Slump, C.H.; Goseling, Jasper; Weber, Jos H.

    2010-01-01

    In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for Coded OFDM systems. Opportunistic error correction [3] tries to recover information when it is available with the least effort. This is achieved by using Fountain codes in a COFDM system, which

  13. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  14. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  15. Error bars in experimental biology

    OpenAIRE

    2007-01-01

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what er...

  16. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

    Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.

  17. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  18. Distance-based classification of keystroke dynamics

    Science.gov (United States)

    Tran Nguyen, Ngoc

    2016-07-01

    This paper uses keystroke dynamics for user authentication. The relationship between the distance metrics and the data template was analyzed for the first time, and a new distance-based algorithm for keystroke dynamics classification was proposed. The results of the experiments on the CMU keystroke dynamics benchmark dataset were evaluated with an equal error rate of 0.0614. Classifiers using the proposed distance metric outperform existing top-performing keystroke dynamics classifiers that use traditional distance metrics.
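
    A minimal sketch of distance-based keystroke-dynamics verification with equal-error-rate evaluation. It uses a plain Manhattan distance to the user's mean timing vector (the paper's proposed metric differs) and random placeholder data rather than the CMU benchmark.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Placeholder timing vectors (hold / latency times): genuine user vs. impostors.
genuine = rng.normal(0.20, 0.03, size=(200, 31))
impostor = rng.normal(0.26, 0.05, size=(200, 31))

train, test_genuine = genuine[:100], genuine[100:]
template = train.mean(axis=0)                      # the user's timing template

def score(samples, template):
    """Anomaly score: Manhattan distance to the template (smaller = more genuine)."""
    return np.abs(samples - template).sum(axis=1)

scores = np.concatenate([score(test_genuine, template), score(impostor, template)])
labels = np.concatenate([np.ones(len(test_genuine)), np.zeros(len(impostor))])

# ROC over the negated score so larger values mean "more genuine"; EER is where FPR = FNR.
fpr, tpr, _ = roc_curve(labels, -scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fpr - fnr))]
print("approximate equal error rate:", eer)
```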

  19. Classification rules for Indian Rice diseases

    Directory of Open Access Journals (Sweden)

    A. Nithya

    2011-01-01

    Full Text Available Many techniques have been developed for learning rules and relationships automatically from diverse data sets, to simplify the often tedious and error-prone process of acquiring knowledge from empirical data. The decision tree is one learning algorithm that possesses certain advantages making it suitable for discovering classification rules for data mining applications. Decision trees are a widely used learning method, do not require any prior knowledge of the data distribution, and work well on noisy data. They have been applied to classify rice diseases based on their symptoms. This paper sets out to discover classification rules for Indian rice diseases using the C4.5 decision tree algorithm. Expert systems have been used in agriculture since the early 1980s. Several systems have been developed in different countries, including the USA, Europe, and Egypt, for plant-disorder diagnosis, management and other production aspects. This paper explores what classification rules can do in the agricultural domain.
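
    To illustrate decision-tree rule discovery of the kind described, here is a small sketch using scikit-learn's CART implementation (not C4.5) with invented symptom features and disease labels; the fitted tree is printed as readable IF-THEN style rules.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical symptom features for rice plants and disease labels.
feature_names = ["leaf_spot_size", "lesion_color_index", "stem_rot", "humidity"]
X = [
    [0.8, 2, 0, 0.9], [0.7, 2, 0, 0.8],   # e.g. leaf blast
    [0.1, 0, 1, 0.7], [0.2, 0, 1, 0.6],   # e.g. stem rot
    [0.3, 1, 0, 0.4], [0.4, 1, 0, 0.5],   # e.g. brown spot
]
y = ["leaf_blast", "leaf_blast", "stem_rot", "stem_rot", "brown_spot", "brown_spot"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted tree can be rendered as readable classification rules.
print(export_text(tree, feature_names=feature_names))
```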

  20. Neural network parameters affecting image classification

    Directory of Open Access Journals (Sweden)

    K.C. Tiwari

    2001-07-01

    Full Text Available The study assesses the behaviour and impact of various neural network parameters on the classification accuracy of remotely sensed images, and resulted in the successful classification of an IRS-1B LISS II image of Roorkee and its surrounding areas using neural network classification techniques. The method can be applied to various defence applications, such as the identification of enemy troop concentrations and logistical planning in deserts by identifying areas suitable for vehicular movement. Five parameters, namely training sample size, number of hidden layers, number of hidden nodes, learning rate and momentum factor, were selected. In each case, sets of values were decided based on earlier reported work. Neural network-based classifications were carried out for as many as 450 combinations of these parameters. Finally, a graphical analysis of the results obtained was carried out to understand the relationships among these parameters. A table of recommended values for these parameters for achieving 90 per cent and higher classification accuracy was generated and used in the classification of an IRS-1B LISS II image. The analysis suggests the existence of an intricate relationship among these parameters and calls for a wider series of classification experiments as well as a more intricate analysis of the relationships.
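
    A minimal sketch of sweeping the five parameters named above for a neural-network classifier, using scikit-learn's MLPClassifier on placeholder pixel data. The value grids and the data are assumptions for illustration; the original work used its own network implementation on an IRS-1B LISS II scene.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))                 # e.g. 4 spectral bands per pixel
y = rng.integers(0, 3, size=600)              # 3 land-cover classes (placeholder)

results = []
# Sweep training sample size, hidden layers, hidden nodes, learning rate, momentum.
for n_train, layers, nodes, lr, momentum in itertools.product(
        [100, 300], [1, 2], [8, 16], [0.01, 0.1], [0.5, 0.9]):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=n_train, random_state=0, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(nodes,) * layers,
                        solver="sgd", learning_rate_init=lr, momentum=momentum,
                        max_iter=300, random_state=0)
    clf.fit(X_tr, y_tr)
    results.append(((n_train, layers, nodes, lr, momentum), clf.score(X_te, y_te)))

# Report the parameter combination with the highest hold-out accuracy.
best = max(results, key=lambda r: r[1])
print("best parameters:", best[0], "accuracy:", best[1])
```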

  1. Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    CERN Document Server

    Gnanasivam, P

    2012-01-01

    A novel method of gender classification from fingerprints is proposed based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The classification is achieved by extracting the energy computed from all the sub-bands of the DWT combined with the spatial features of non-zero singular values obtained from the SVD of fingerprint images. K-nearest neighbour (KNN) is used as the classifier. The method is evaluated on an internal database of 3570 fingerprints, of which 1980 were male fingerprints and 1590 were female. Finger-wise gender classification is achieved, reaching 94.32% for the left-hand little fingers of female subjects and 95.46% for the left-hand index fingers of male subjects. Gender classification over any tested finger is attained as 91.67% for male subjects and 84.69% for female subjects, respectively. An overall classification rate of 88.28% has been achieved.
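
    A minimal sketch of the feature construction described above, combining DWT sub-band energies with the leading singular values of the fingerprint image and classifying with k-nearest neighbours. It uses PyWavelets and scikit-learn on random placeholder images, not the authors' internal database.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwt_svd_features(img, wavelet="db1", level=3, n_singular=20):
    """Energy of every DWT sub-band plus the leading singular values of the image."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]
    for (cH, cV, cD) in coeffs[1:]:
        energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
    s = np.linalg.svd(img, compute_uv=False)[:n_singular]
    return np.concatenate([energies, s])

# Placeholder "fingerprints": random images standing in for male/female samples.
rng = np.random.default_rng(0)
imgs = rng.normal(size=(100, 64, 64))
labels = rng.integers(0, 2, size=100)          # 0 = male, 1 = female (placeholder)

X = np.array([dwt_svd_features(im) for im in imgs])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:80], labels[:80])
print("hold-out accuracy:", knn.score(X[80:], labels[80:]))
```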

  2. Diagnostic errors in pediatric radiology

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, George A.; Voss, Stephan D. [Children' s Hospital Boston, Department of Radiology, Harvard Medical School, Boston, MA (United States); Melvin, Patrice R. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Graham, Dionne A. [Children' s Hospital Boston, The Program for Patient Safety and Quality, Boston, MA (United States); Harvard Medical School, The Department of Pediatrics, Boston, MA (United States)

    2011-03-15

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  3. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1989-09-01

    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  4. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  5. Algorithms for Hyperspectral Endmember Extraction and Signature Classification with Morphological Dendritic Networks

    Science.gov (United States)

    Schmalz, M.; Ritter, G.

    Accurate multispectral or hyperspectral signature classification is key to the nonimaging detection and recognition of space objects. Additionally, signature classification accuracy depends on accurate spectral endmember determination [1]. Previous approaches to endmember computation and signature classification were based on linear operators or neural networks (NNs) expressed in terms of the algebra (R, +, ×) [1,2]. Unfortunately, class separation in these methods tends to be suboptimal, and the number of signatures that can be accurately classified often depends linearly on the number of NN inputs. This can lead to poor endmember distinction, as well as potentially significant classification errors in the presence of noise or densely interleaved signatures. In contrast to traditional CNNs, autoassociative morphological memories (AMM) are a construct similar to Hopfield autoassociative memories defined on the (R, +, ∨, ∧) lattice algebra [3]. Unlimited storage and perfect recall of noiseless real-valued patterns has been proven for AMMs [4]. However, AMMs suffer from sensitivity to specific noise models that can be characterized as erosive and dilative noise. On the other hand, the prior definition of a set of endmembers corresponds to material spectra lying on vertices of the minimum convex region covering the image data. These vertices can be characterized as morphologically independent patterns. It has further been shown that AMMs can be based on dendritic computation [3,6]. These techniques yield improved accuracy and class segmentation/separation ability in the presence of highly interleaved signature data. In this paper, we present a procedure for endmember determination based on AMM noise sensitivity, which employs morphological dendritic computation. We show that detected endmembers can be exploited by AMM-based classification techniques, to achieve accurate signature classification in the presence of noise, closely spaced or interleaved signatures, and

  6. Transient Error Data Analysis.

    Science.gov (United States)

    1979-05-01

    [Abstract not available: the record contains only OCR fragments of the report's table of contents (graphical data analysis; general statistics and confidence intervals; goodness-of-fit test; conclusions) and of a table of transient-error rates and MTTF per system technology (e.g., CMUA PDP-10 ECL with parity, Cm* LSI-11 NMOS with diagnostics), together with a log summary covering a 1542-hour span beginning 17-Feb-79.]

  7. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
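
    A minimal sketch of fitting an inverse power law learning curve by weighted nonlinear least squares and extrapolating it to larger sample sizes, using SciPy. The model form follows the general description in the abstract; the learning-curve points and weights are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_power_law(x, a, b, c):
    """Learning-curve model: accuracy approaches a as the sample size x grows."""
    return a - b * np.power(x, -c)

# Placeholder learning-curve points: (annotated sample size, observed accuracy, std).
sizes = np.array([50, 100, 150, 200, 300, 400])
acc = np.array([0.71, 0.76, 0.79, 0.81, 0.83, 0.845])
std = np.array([0.04, 0.03, 0.025, 0.02, 0.015, 0.012])

# Weighted fit: points with smaller variance (larger samples) count more.
params, cov = curve_fit(inverse_power_law, sizes, acc, p0=[0.9, 1.0, 0.5],
                        sigma=std, absolute_sigma=True, maxfev=10000)

# Extrapolate the fitted curve to larger annotation budgets.
for n in (600, 1000, 2000):
    print(n, "predicted accuracy:", inverse_power_law(n, *params))
```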

  8. Fast Wavelet-Based Visual Classification

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification.

  9. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration method are presented in this paper. The error model is derived from multi-body system theory and characteristic matrices, which establish the relationship between the cutting tool and the workpiece in theory. Accuracy calibration is difficult to achieve, but with a laser-based approach (laser interferometer and laser tracker) the errors can be displayed accurately, which is beneficial for later compensation.

  10. Leader as achiever.

    Science.gov (United States)

    Dienemann, Jacqueline

    2002-01-01

    This article examines one outcome of leadership: productive achievement. Without achievement one is judged to not truly be a leader. Thus, the ideal leader must be a visionary, a critical thinker, an expert, a communicator, a mentor, and an achiever of organizational goals. This article explores the organizational context that supports achievement, measures of quality nursing care, fiscal accountability, leadership development, rewards and punishments, and the educational content and teaching strategies to prepare graduates to be achievers.

  11. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  12. Information gathering for CLP classification

    OpenAIRE

    Ida Marcello; Felice Giordano; Francesca Marina Costamagna

    2011-01-01

    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level and a list of harmonised classifications is included in the Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list it must be self-classified, based on available information, according to the requireme...

  13. Errors in CT colonography.

    Science.gov (United States)

    Trilisky, Igor; Ward, Emily; Dachman, Abraham H

    2015-10-01

    CT colonography (CTC) is a colorectal cancer screening modality which is becoming more widely implemented and has shown polyp detection rates comparable to those of optical colonoscopy. CTC has the potential to improve population screening rates due to its minimal invasiveness, no sedation requirement, potential for reduced cathartic examination, faster patient throughput, and cost-effectiveness. Proper implementation of a CTC screening program requires careful attention to numerous factors, including patient preparation prior to the examination, the technical aspects of image acquisition, and post-processing of the acquired data. A CTC workstation with dedicated software is required with integrated CTC-specific display features. Many workstations include computer-aided detection software which is designed to decrease errors of detection by detecting and displaying polyp-candidates to the reader for evaluation. There are several pitfalls which may result in false-negative and false-positive reader interpretation. We present an overview of the potential errors in CTC and a systematic approach to avoid them.

  14. Fast Fingerprint Classification with Deep Neural Network

    DEFF Research Database (Denmark)

    Michelsanti, Daniel; Guichi, Yanis; Ene, Andreea-Daniela

    2017-01-01

    Reducing the number of comparisons in automated fingerprint identification systems is essential when dealing with a large database. Fingerprint classification allows to achieve this goal by dividing fingerprints into several categories, but it presents still some challenges due to the large intra...

  15. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  16. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Rittner, Max

    1982-01-01

    The article reviews the development of mathematics error analysis as a means of diagnosing students' cognitive reasoning. Errors specific to addition, subtraction, multiplication, and division are described, and suggestions for remediation are provided. (CL)

  17. Linear time distances between fuzzy sets with applications to pattern matching and classification.

    Science.gov (United States)

    Lindblad, Joakim; Sladoje, Nataša

    2014-01-01

    We present four novel point-to-set distances defined for fuzzy or gray-level image data, two based on integration over α-cuts and two based on the fuzzy distance transform. We explore their theoretical properties. Inserting the proposed point-to-set distances in existing definitions of set-to-set distances, among which are the Hausdorff distance and the sum of minimal distances, we define a number of distances between fuzzy sets. These set distances are directly applicable for comparing gray-level images or fuzzy segmented objects, but also for detecting patterns and matching parts of images. The distance measures integrate shape and intensity/membership of observed entities, providing a highly applicable tool for image processing and analysis. Performance evaluation of derived set distances in real image processing tasks is conducted and presented. It is shown that the considered distances have a number of appealing theoretical properties and exhibit very good performance in template matching and object classification for fuzzy segmented images as well as when applied directly on gray-level intensity images. Examples include recognition of hand written digits and identification of virus particles. The proposed set distances perform excellently on the MNIST digit classification task, achieving the best reported error rate for classification using only rigid body transformations and a kNN classifier.
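
    The point-to-set distances above generalize the crisp distance transform to fuzzy data. As a rough illustration of the crisp building blocks only (not the paper's fuzzy definitions), the sketch below computes a distance-transform-based point-to-set map and a sum-of-minimal-distances set distance with SciPy; such a set distance could then drive, for example, a kNN template matcher.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def point_to_set_map(mask):
    """Euclidean distance from every pixel to the nearest pixel of a binary set."""
    # distance_transform_edt gives distances to the nearest zero, so invert the mask.
    return distance_transform_edt(~mask)

def sum_of_minimal_distances(mask_a, mask_b):
    """Symmetric set-to-set distance: average nearest-neighbour distance both ways."""
    d_to_b = point_to_set_map(mask_b)
    d_to_a = point_to_set_map(mask_a)
    return 0.5 * (d_to_b[mask_a].mean() + d_to_a[mask_b].mean())

# Toy usage: two small binary shapes (crisp stand-ins for fuzzy segmented objects).
a = np.zeros((32, 32), dtype=bool); a[8:16, 8:16] = True
b = np.zeros((32, 32), dtype=bool); b[12:20, 14:22] = True
print("sum of minimal distances:", sum_of_minimal_distances(a, b))
```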

  18. Automatic classification of background EEG activity in healthy and sick neonates

    Science.gov (United States)

    Löfhede, Johan; Thordstein, Magnus; Löfgren, Nils; Flisberg, Anders; Rosa-Zurera, Manuel; Kjellmer, Ingemar; Lindecrantz, Kaj

    2010-02-01

    The overall aim of our research is to develop methods for a monitoring system to be used at neonatal intensive care units. When monitoring a baby, a range of different types of background activity needs to be considered. In this work, we have developed a scheme for automatic classification of background EEG activity in newborn babies. EEG from six full-term babies who were displaying a burst suppression pattern while suffering from the after-effects of asphyxia during birth was included along with EEG from 20 full-term healthy newborn babies. The signals from the healthy babies were divided into four behavioural states: active awake, quiet awake, active sleep and quiet sleep. By using a number of features extracted from the EEG together with Fisher's linear discriminant classifier we have managed to achieve 100% correct classification when separating burst suppression EEG from all four healthy EEG types and 93% true positive classification when separating quiet sleep from the other types. The other three sleep stages could not be classified. When the pathological burst suppression pattern was detected, the analysis was taken one step further and the signal was segmented into burst and suppression, allowing clinically relevant parameters such as suppression length and burst suppression ratio to be calculated. The segmentation of the burst suppression EEG works well, with a probability of error around 4%.
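
    A minimal sketch of the classification stage described above, separating EEG feature vectors with Fisher's linear discriminant, using scikit-learn on placeholder features; the study's actual features and recordings are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder feature vectors (e.g. band powers, entropy) for EEG epochs:
# label 1 = burst suppression, label 0 = healthy background activity.
X_bs = rng.normal(loc=1.0, size=(120, 6))
X_healthy = rng.normal(loc=0.0, size=(480, 6))
X = np.vstack([X_bs, X_healthy])
y = np.concatenate([np.ones(120), np.zeros(480)])

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```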

  19. Computationally efficient target classification in multispectral image data with Deep Neural Networks

    Science.gov (United States)

    Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca

    2016-10-01

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computation effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly with errors occurring only around the border of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
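
    As a rough illustration of extending a ConvNet to higher-dimensional multispectral input, here is a minimal PyTorch sketch of a tiny per-pixel scene-labeling network. The channel count, layer sizes and number of classes are assumptions for illustration and are unrelated to the networks evaluated in the paper.

```python
import torch
import torch.nn as nn

class SmallSegNet(nn.Module):
    """Tiny fully convolutional network producing a per-pixel class map."""
    def __init__(self, in_channels=28, n_classes=8):   # e.g. 3 RGB + 25 VIS-NIR bands
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)  # per-pixel logits

    def forward(self, x):
        return self.classifier(self.features(x))

# Toy usage: one 28-channel multispectral frame, per-pixel class predictions.
net = SmallSegNet()
frame = torch.randn(1, 28, 64, 64)
logits = net(frame)                      # shape (1, 8, 64, 64)
prediction = logits.argmax(dim=1)        # per-pixel class labels
print(prediction.shape)
```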

  20. Classifications for Proliferative Vitreoretinopathy (PVR): An Analysis of Their Use in Publications over the Last 15 Years

    Directory of Open Access Journals (Sweden)

    Salvatore Di Lauro

    2016-01-01

    Full Text Available Purpose. To evaluate the current and suitable use of current proliferative vitreoretinopathy (PVR) classifications in clinical publications related to treatment. Methods. A PubMed search was undertaken using the term “proliferative vitreoretinopathy therapy”. Outcome parameters were the reported PVR classification and PVR grades. The way the classifications were used in comparison to the original description was analyzed. Classification errors were also included. It was also noted whether classifications were used for comparison before and after pharmacological or surgical treatment. Results. 138 papers were included. 35 of them (25.4%) presented no classification reference or did not use any one. 103 publications (74.6%) used a standardized classification. The updated Retina Society Classification, the first Retina Society Classification, and the Silicone Study Classification were cited in 56.3%, 33.9%, and 3.8% of papers, respectively. Furthermore, 3 authors (2.9%) used modified-customized classifications and 4 (3.8%) classification errors were identified. When the updated Retina Society Classification was used, only 10.4% of authors used a full C grade description. Finally, only 2 authors reported PVR grade before and after treatment. Conclusions. Our findings suggest that current classifications are of limited value in clinical practice due to their inconsistent and limited use, and that it may be of benefit to produce a revised classification.

  1. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification

    Directory of Open Access Journals (Sweden)

    R. Sathya

    2013-02-01

    Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-based learning algorithms, and in the present study we found that, although the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model also offers an efficient solution and classification.

  2. Support vector classification algorithm based on variable parameter linear programming

    Institute of Scientific and Technical Information of China (English)

    Xiao Jianhua; Lin Jian

    2007-01-01

    To solve the problems of SVM in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the optimization problem of classification, to decrease the computation time and to reduce its complexity when compared with the original model. The adjusted punishment parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.

  3. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. Feature Referenced Error Correction Apparatus.

    Science.gov (United States)

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  5. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of the Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and, in particular, a positive answer to Li and Singer's conjecture is given under a weaker assumption than that required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  6. Concepts of Classification and Taxonomy. Phylogenetic Classification

    CERN Document Server

    Fraix-Burnet, Didier

    2016-01-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited to any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical since it was intended to distinguish between edible and toxic foods, or kind and dangerous animals. Simple resemblance was used and has been used for centuries. Basically, until the XVIIIth...

  7. Error Correction of Loudspeakers

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde

    Throughout this thesis, the topics of electrodynamic loudspeaker unit design and modelling are reviewed. The research behind this project has been to study loudspeaker design based on new possibilities introduced by including digital signal processing, and thereby achieving more freedom in loudsp

  8. Error rate information in attention allocation pilot models

    Science.gov (United States)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  9. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure that applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  10. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into a EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement assisted codes coherent.

  11. Automated protein subfamily identification and classification.

    Directory of Open Access Journals (Sweden)

    Duncan P Brown

    2007-08-01

    Full Text Available Function prediction by homology is widely used to provide preliminary functional annotations for genes for which experimental evidence of function is unavailable or limited. This approach has been shown to be prone to systematic error, including percolation of annotation errors through sequence databases. Phylogenomic analysis avoids these errors in function prediction but has been difficult to automate for high-throughput application. To address this limitation, we present a computationally efficient pipeline for phylogenomic classification of proteins. This pipeline uses the SCI-PHY (Subfamily Classification in Phylogenomics) algorithm for automatic subfamily identification, followed by subfamily hidden Markov model (HMM) construction. A simple and computationally efficient scoring scheme using family and subfamily HMMs enables classification of novel sequences to protein families and subfamilies. Sequences representing entirely novel subfamilies are differentiated from those that can be classified to subfamilies in the input training set using logistic regression. Subfamily HMM parameters are estimated using an information-sharing protocol, enabling subfamilies containing even a single sequence to benefit from conservation patterns defining the family as a whole or in related subfamilies. SCI-PHY subfamilies correspond closely to functional subtypes defined by experts and to conserved clades found by phylogenetic analysis. Extensive comparisons of subfamily and family HMM performances show that subfamily HMMs dramatically improve the separation between homologous and non-homologous proteins in sequence database searches. Subfamily HMMs also provide extremely high specificity of classification and can be used to predict entirely novel subtypes. The SCI-PHY Web server at http://phylogenomics.berkeley.edu/SCI-PHY/ allows users to upload a multiple sequence alignment for subfamily identification and subfamily HMM construction. Biologists wishing to

  12. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  13. Experimental repetitive quantum error correction.

    Science.gov (United States)

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.

  14. Work complexity assessment, nursing interventions classification, and nursing outcomes classification: making connections.

    Science.gov (United States)

    Scherb, Cindy A; Weydt, Alice P

    2009-01-01

    When nurses understand what interventions are needed to achieve desired patient outcomes, they can more easily define their practice. Work Complexity Assessment (WCA) is a process that helps nurses to identify interventions performed on a routine basis for their specific patient population. This article describes the WCA process and links it to the Nursing Interventions Classification (NIC) and the Nursing Outcomes Classification (NOC). WCA, NIC, and NOC are all tools that help nurses understand the work they do and the outcomes they achieve, and that thereby acknowledge and validate nursing's contribution to patient care.

  15. Multiple sparse representations classification

    NARCIS (Netherlands)

    E. Plenge (Esben); S.K. Klein (Stefan); W.J. Niessen (Wiro); E. Meijering (Erik)

    2015-01-01

    textabstractSparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In t
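
    Although the abstract is truncated, the core SRC step can be sketched as follows: represent a test sample as a sparse combination of a dictionary whose columns are training samples, then assign the class whose atoms give the smallest reconstruction residual. The synthetic dictionary, the use of orthogonal matching pursuit as the sparse solver, and the sparsity level are assumptions for illustration.

        # Minimal sparse representation classification (SRC) sketch on synthetic data.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n_features, n_per_class, classes = 20, 15, [0, 1, 2]
        # Dictionary columns are training samples, grouped by class.
        D = np.hstack([rng.normal(loc=c, size=(n_features, n_per_class)) for c in classes])
        labels = np.repeat(classes, n_per_class)

        def src_classify(y, n_nonzero=5):
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, y)
            x = omp.coef_
            residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                         for c in classes}
            return min(residuals, key=residuals.get)      # class with smallest residual

        test = rng.normal(loc=1, size=n_features)           # sample drawn near class 1
        print("predicted class:", src_classify(test))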

  16. Library Classification 2020

    Science.gov (United States)

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  17. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arise when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient did not take the medication (14.36%) or was discharged without medication (12.77%), the medication was not provided by nurses (14.09%), was withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are reviewed before being sent to hospitalization units, the error diminishes to 0.3%.

  18. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  19. Common errors in disease mapping

    Directory of Open Access Journals (Sweden)

    Ricardo Ocaña-Riola

    2010-05-01

    Full Text Available Many morbid-mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.

  20. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$($0\\leq p_e<0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  1. Classifier in Age classification

    Directory of Open Access Journals (Sweden)

    B. Santhi

    2012-12-01

    Full Text Available Face is an important feature of human beings. We can derive various properties of a human by analyzing the face. The objective of this study is to design a classifier for age using facial images. Age classification is essential in many applications such as crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely child and old. This study addresses the limitations of existing classifiers, as it uses the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification. This improves classification accuracy, and the method outperforms existing approaches.
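
    A compact sketch of the GLCM-plus-SVM pipeline described above, using scikit-image and scikit-learn on synthetic texture patches; the chosen GLCM offsets, texture properties, patch generator and SVM settings are illustrative assumptions rather than the study's configuration.

        # GLCM texture features + SVM classification on synthetic patches.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)

        def glcm_features(patch):
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        def synthetic_patch(smooth):
            noise = 10 if smooth else 60                  # rougher texture for the second class
            base = rng.integers(80, 170, size=(32, 32))
            return np.clip(base + rng.normal(0, noise, (32, 32)), 0, 255).astype(np.uint8)

        X = np.array([glcm_features(synthetic_patch(i < 50)) for i in range(100)])
        y = np.array([0] * 50 + [1] * 50)                 # 0 = child, 1 = old (illustrative labels)
        print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())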

  2. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach to emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back propagation: instead of using the partial derivative of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  3. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positive per image.
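
    The first stage (candidate detection by a prediction error filter) can be sketched as below: predict each pixel from a few causal neighbours with least-squares coefficients and flag pixels whose prediction error exceeds a threshold. The neighbourhood, threshold rule, and synthetic image are assumptions; the published method additionally applies a statistical measure to the candidates and an SVM in the second stage.

        # Sketch of microcalcification candidate detection with a 2-D linear prediction error filter.
        import numpy as np

        rng = np.random.default_rng(2)
        img = rng.normal(100, 5, (64, 64))
        img[30, 30] += 40                                 # a bright, isolated "microcalcification"

        # Least-squares predictor of each pixel from its west, north and north-west neighbours.
        targets, neighbours = [], []
        for i in range(1, 64):
            for j in range(1, 64):
                targets.append(img[i, j])
                neighbours.append([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
        coeffs, *_ = np.linalg.lstsq(np.array(neighbours), np.array(targets), rcond=None)

        error = np.zeros_like(img)
        for i in range(1, 64):
            for j in range(1, 64):
                pred = np.array([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]]) @ coeffs
                error[i, j] = img[i, j] - pred

        threshold = error.mean() + 4 * error.std()        # assumed threshold rule
        print("candidate pixels:", np.argwhere(error > threshold).tolist())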

  4. Design and evaluation of neural classifiers application to skin lesion classification

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1995-01-01

    Addresses the design and evaluation of neural classifiers for the problem of skin lesion classification. By using Gauss-Newton optimization of the entropic cost function in conjunction with pruning by Optimal Brain Damage and a new test error estimate, the authors show that this scheme is capable of optimizing the architecture of neural classifiers. Furthermore, error-reject tradeoff theory indicates that the resulting neural classifiers for the skin lesion classification problem are near-optimal...

  5. Kappa Coefficients for Circular Classifications

    NARCIS (Netherlands)

    Warrens, Matthijs J.; Pratiwi, Bunga C.

    2016-01-01

    Circular classifications are classification scales with categories that exhibit a certain periodicity. Since linear scales have endpoints, the standard weighted kappas used for linear scales are not appropriate for analyzing agreement between two circular classifications. A family of kappa coefficie

  6. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a “locality representation” method because it uses only the training samples of each class to represent the testing sample, so it cannot embody the effectiveness of “globality representation.” Conversely, the CRC method does not enjoy the locality benefit of the general RBCM. We therefore propose to integrate CRC and LRC to perform more robust representation based classification. Experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
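
    The two representation schemes can be made concrete in a few lines: LRC represents the test sample using each class's training samples separately (ordinary least squares), while CRC represents it using all training samples jointly under an l2 penalty (ridge regression); both assign the class with the smallest class-wise residual. The synthetic data and the regularisation value are assumptions, and the integration strategy of the paper itself is not reproduced.

        # Linear regression classification (LRC) vs collaborative representation classification (CRC).
        import numpy as np

        rng = np.random.default_rng(3)
        classes = [0, 1, 2]
        # 40-dimensional samples, 10 per class, stored as columns.
        train = {c: rng.normal(loc=2 * c, size=(40, 10)) for c in classes}

        def lrc(y):
            residuals = {}
            for c, Xc in train.items():
                beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)        # per-class least squares
                residuals[c] = np.linalg.norm(y - Xc @ beta)
            return min(residuals, key=residuals.get)

        def crc(y, lam=0.1):
            X = np.hstack([train[c] for c in classes])               # all samples together
            alpha = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            residuals, start = {}, 0
            for c in classes:
                n = train[c].shape[1]
                residuals[c] = np.linalg.norm(y - train[c] @ alpha[start:start + n])
                start += n
            return min(residuals, key=residuals.get)

        test = rng.normal(loc=2, size=40)                            # drawn near class 1
        print("LRC:", lrc(test), " CRC:", crc(test))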

  7. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available An operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person, in the right place on their body. “Never events” are definable: they are avoidable and preventable events. The consequences of surgical mistakes range from temporary injury in 60% of those affected, to permanent injury in 33%, and death in 7%. The World Health Organization (WHO) [1] has said that over seven million people across the globe suffer preventable surgical injuries every year, a million of them dying during or immediately after surgery. The UN body put the number of surgeries taking place every year globally at 234 million, and said surgeries had become common, with one in every 25 people undergoing one at any given time. Fifty per cent of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident, an incident rate that would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and build a reliable system of working. With such a system, even though complete prevention may not be possible, we can reduce the error percentage [2]. To change the present concept of the patient, we first have to replace the word patient with medical customer; then our outlook also changes, and we will be more careful towards our customers.

  8. Finite Block-Length Achievable Rates for Queuing Timing Channels

    OpenAIRE

    2011-01-01

    The exponential server timing channel is known to be the simplest, and in some sense canonical, queuing timing channel. The capacity of this infinite-memory channel is known. Here, we discuss practical finite-length restrictions on the codewords and attempt to understand the maximal rate that can be achieved for a target error probability. By using Markov chain analysis, we prove a lower bound on the maximal channel coding rate achievable at blocklength $n$ and error probability $...

  9. Comparison of analytical error and sampling error for contaminated soil.

    Science.gov (United States)

    Gustavsson, Björn; Luthbom, Karin; Lagerkvist, Anders

    2006-11-16

    Investigation of soil from contaminated sites requires several sample handling steps that, most likely, will induce uncertainties in the sample. The theory of sampling describes seven sampling errors that can be calculated, estimated or discussed in order to get an idea of the size of the sampling uncertainties. With the aim of comparing the size of the analytical error to the total sampling error, these seven errors were applied, estimated and discussed for a case study of a contaminated site. The manageable errors were summarized, showing a range of three orders of magnitude between the examples. The comparisons show that the quotient between the total sampling error and the analytical error is larger than 20 in most calculation examples. Exceptions were samples taken in hot spots, where some components of the total sampling error get small and the analytical error gets large in comparison. Low concentration of contaminant, small extracted sample size and large particles in the sample contribute to the extent of uncertainty.

  10. ERRORS AND DIFFICULTIES IN TRANSLATING LEGAL TEXTS

    Directory of Open Access Journals (Sweden)

    Camelia, CHIRILA

    2014-11-01

    Full Text Available Nowadays the accurate translation of legal texts has become highly important as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts to other languages faces many difficulties and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian) that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method that I have used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.

  11. Error Modeling and Analysis for InSAR Spatial Baseline Determination of Satellite Formation Flying

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Spatial baseline determination is a key technology for interferometric synthetic aperture radar (InSAR) missions. Based on the intersatellite baseline measurement using dual-frequency GPS, errors induced by InSAR spatial baseline measurement are studied in detail. The classifications and characteristics of the errors are analyzed, and error models are set up. Simulations of single factors and of total error sources are selected to evaluate the impacts of errors on spatial baseline measurement. Single factor simulations are used to analyze the impact of an error of a single type, while total error sources simulations are used to analyze the impacts of error sources induced by GPS measurement, baseline transformation, and the entire spatial baseline measurement, respectively. Simulation results show that errors related to GPS measurement are the main error sources for the spatial baseline determination, and carrier phase noise of GPS observation and fixing error of GPS receiver antenna are the main factors of errors related to GPS measurement. In addition, according to the error values listed in this paper, 1 mm level InSAR spatial baseline determination should be realized.

  12. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class’s feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. should be provided with much more labeled data.
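
    A minimal sketch of the per-class GMM idea: fit one Gaussian mixture to the labelled samples of each class and label new samples by maximum likelihood. The component counts and synthetic data are assumptions, and the semi-supervised refinement that folds unlabelled samples into the EM updates is omitted here.

        # Per-class Gaussian Mixture Model classifier (supervised core of the GMM-based SSC idea).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        labeled = {0: rng.normal(0.0, 1.0, (40, 2)),      # a few labelled samples per class
                   1: rng.normal(4.0, 1.5, (40, 2))}

        models = {c: GaussianMixture(n_components=2, random_state=0).fit(X)
                  for c, X in labeled.items()}

        def classify(samples):
            scores = np.column_stack([models[c].score_samples(samples) for c in sorted(models)])
            return scores.argmax(axis=1)                  # class with the highest log-likelihood

        unlabeled = np.vstack([rng.normal(0.0, 1.0, (5, 2)), rng.normal(4.0, 1.5, (5, 2))])
        print(classify(unlabeled))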

  13. [Classifications in forensic medicine and their logical basis].

    Science.gov (United States)

    Kovalev, A V; Shmarov, L A; Ten'kov, A A

    2014-01-01

    The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.

  14. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  15. CLASSIFICATIONS OF EEG SIGNALS FOR MENTAL TASKS USING ADAPTIVE RBF NETWORK

    Institute of Scientific and Technical Information of China (English)

    薛建中; 郑崇勋; 闫相国

    2004-01-01

    Objective This paper presents classification of mental tasks based on EEG signals using an adaptive Radial Basis Function (RBF) network with optimal centers and widths for Brain-Computer Interface (BCI) schemes. Methods Initial centers and widths of the network are selected by a cluster estimation method based on the distribution of the training set. Using a conjugate gradient descent method, they are optimized during the training phase according to a regularized error function that considers the influence of their changes on the output values. Results The optimizing process improves the performance of the RBF network, and its best recognition rate over three task pairs and four subjects reaches 87.0%. Moreover, this network runs fast because it has fewer hidden layer neurons. Conclusion The adaptive RBF network with optimal centers and widths has a high recognition rate and runs fast. It may be a promising classifier for on-line BCI schemes.
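
    An RBF network of the kind described can be sketched with centres initialised by clustering, a width derived from the cluster spread, and a linear read-out trained on the RBF activations; the conjugate-gradient optimisation of centres and widths reported in the paper is replaced here by fixed values, and the data are synthetic stand-ins for EEG features.

        # RBF network sketch: k-means centres, one shared width, linear read-out.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        X = np.vstack([rng.normal(-2, 1, (100, 4)), rng.normal(2, 1, (100, 4))])
        y = np.array([0] * 100 + [1] * 100)               # two "mental task" classes (illustrative)

        km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
        centers = km.cluster_centers_
        width = np.mean([np.linalg.norm(X[km.labels_ == k] - centers[k], axis=1).mean()
                         for k in range(len(centers))])   # width from average cluster spread

        def rbf_features(samples):
            d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
            return np.exp(-(d ** 2) / (2 * width ** 2))

        readout = LogisticRegression(max_iter=1000).fit(rbf_features(X), y)
        print("training accuracy:", readout.score(rbf_features(X), y))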

  16. A Two Step Data Mining Approach for Amharic Text Classification

    Directory of Open Access Journals (Sweden)

    Seffi Gebeyehu

    2016-08-01

    Full Text Available Traditionally, text classifiers are built from labeled training examples (supervised). Labeling is usually done manually by human experts (or the users), which is a labor-intensive and time-consuming process. In the past few years, researchers have investigated various forms of semi-supervised learning to reduce the burden of manual labeling. This paper aims to show that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. In this paper, we implement an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and two classifiers: Naive Bayes (NB) and locally weighted learning (LWL). NB first trains a classifier using the available labeled documents and probabilistically labels the unlabeled documents, while LWL uses a class of function approximation to build a model around the current point of interest. An experiment conducted on a mixture of labeled and unlabeled Amharic text documents showed that the new method achieved a significant performance improvement in comparison with supervised LWL and NB. The result also pointed out that the use of unlabeled data with EM reduces the classification absolute error by 27.6%. In general, since unlabeled documents are much less expensive and easier to collect than labeled documents, this method will be useful for text categorization tasks including online data sources such as web pages, e-mails and news group postings. If one uses this method, building text categorization systems will be significantly faster and less expensive than the supervised learning approach.
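
    The EM-with-Naive-Bayes half of the approach can be sketched as a simple self-training loop: train NB on the labelled documents, label the unlabelled pool, retrain on everything, and repeat. The toy corpus, the iteration count, and the use of hard pseudo-labels instead of full posterior weighting are simplifications, and the locally weighted learning component is not shown.

        # EM-style loop combining labelled and unlabelled documents with Naive Bayes.
        import numpy as np
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        labeled_docs = ["goal scored in the match", "election results announced today"]
        labels = np.array([0, 1])                         # 0 = sport, 1 = politics (illustrative)
        unlabeled_docs = ["the striker scored twice", "voters went to the polls",
                          "the referee stopped the match", "parliament passed the bill"]

        vec = CountVectorizer().fit(labeled_docs + unlabeled_docs)
        X_lab, X_unlab = vec.transform(labeled_docs), vec.transform(unlabeled_docs)

        nb = MultinomialNB().fit(X_lab, labels)
        for _ in range(5):                                # EM-like iterations
            pseudo = nb.predict(X_unlab)                  # E-step (hard labels for simplicity)
            X_all = np.vstack([X_lab.toarray(), X_unlab.toarray()])
            y_all = np.concatenate([labels, pseudo])
            nb = MultinomialNB().fit(X_all, y_all)        # M-step

        print(dict(zip(unlabeled_docs, nb.predict(X_unlab))))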

  17. Discriminant analysis with errors in variables

    CERN Document Server

    Loustau, Sébastien

    2012-01-01

    The effect of measurement error in discriminant analysis is investigated. Given observations $Z=X+\epsilon$, where $\epsilon$ denotes a random noise, the goal is to predict the density of $X$ among two possible candidates $f$ and $g$. We suppose that we have at our disposal two learning samples. The aim is to approach the best possible decision rule $G^*$ defined as a minimizer of the Bayes risk. In the noise-free case $(\epsilon=0)$, minimax fast rates of convergence are well-known under the margin assumption in discriminant analysis (see \cite{mammen}) or in the more general classification framework (see \cite{tsybakov2004,AT}). In this paper we intend to establish similar results in the noisy case, i.e. when dealing with errors in variables. In particular, we discuss two possible complexity assumptions that can be set on the problem, which may alternatively concern the regularity of $f-g$ or the boundary of $G^*$. We prove minimax lower bounds for both of these problems and explain how these rates can be atta...

  18. Inborn errors of metabolism: a clinical overview

    Directory of Open Access Journals (Sweden)

    Ana Maria Martins

    1999-11-01

    Full Text Available CONTEXT: Inborn errors of metabolism cause hereditary metabolic diseases (HMD), and classically they result from the lack of activity of one or more specific enzymes or defects in the transportation of proteins. OBJECTIVES: A clinical review of inborn errors of metabolism (IEM) to give a practical approach to the physician, with figures and tables to help in understanding the more common groups of these disorders. DATA SOURCE: A systematic review of the clinical and biochemical basis of IEM in the literature, especially considering the last ten years and a classic textbook (Scriver CR et al., 1995). SELECTION OF STUDIES: A selection of 108 references about IEM by experts in the subject was made. Clinical cases are presented with the peculiar symptoms of various diseases. DATA SYNTHESIS: IEM are frequently misdiagnosed because the general practitioner, or pediatrician in the neonatal or intensive care units, does not think about this diagnosis until the more common causes have been ruled out. This review includes inheritance patterns and clinical and laboratory findings of the more common IEM diseases within a clinical classification that gives a general idea about these disorders. A summary of treatment types for metabolic inherited diseases is given. CONCLUSIONS: IEM are not rare diseases, unlike previous thinking about them, and IEM patients form part of the clientele in emergency rooms at general hospitals and in intensive care units. They are also to be found in neurological, pediatric, obstetrics, surgical and psychiatric clinics seeking diagnoses, prognoses and therapeutic or supportive treatment.

  19. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  20. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  1. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  2. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Directory of Open Access Journals (Sweden)

    Muhammad Faisal Siddiqui

    Full Text Available A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities
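
    The three stages of the pipeline (wavelet features, PCA reduction, RBF-kernel SVM with cross-validated hyper-parameters) can be sketched with PyWavelets and scikit-learn on synthetic slices; the wavelet, decomposition level, component count, and parameter grid below are assumptions rather than the published settings.

        # DWT -> PCA -> RBF-SVM sketch for two-class image classification.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import GridSearchCV, StratifiedKFold

        rng = np.random.default_rng(6)

        def dwt_features(image):
            approx, *_ = pywt.wavedec2(image, wavelet="haar", level=2)
            return approx.ravel()                         # keep only the level-2 approximation

        # Synthetic "normal" vs "abnormal" 32x32 slices (abnormal ones carry a bright blob).
        feats, y = [], []
        for k in range(60):
            img = rng.normal(0, 1, (32, 32))
            if k >= 30:
                img[10:14, 10:14] += 4.0
            feats.append(dwt_features(img))
            y.append(int(k >= 30))
        X, y = np.array(feats), np.array(y)

        X_red = PCA(n_components=10).fit_transform(X)     # dimensionality reduction
        search = GridSearchCV(SVC(kernel="rbf"),
                              {"C": [1, 10, 100], "gamma": ["scale", 0.01]},
                              cv=StratifiedKFold(n_splits=5))
        search.fit(X_red, y)
        print("best params:", search.best_params_, " CV accuracy:", search.best_score_)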

  3. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Science.gov (United States)

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in the medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to cater down the human error during identifying the diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applied k-fold stratified cross validation to enhance the generalization of the system. The method was tested by 340 patients' benchmark datasets of T1-weighted and T2-weighted scans. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the

  4. Towards the Automatic Classification of Avian Flight Calls for Bioacoustic Monitoring

    Science.gov (United States)

    Bello, Juan Pablo; Farnsworth, Andrew; Robbins, Matt; Keen, Sara; Klinck, Holger; Kelling, Steve

    2016-01-01

    Automatic classification of animal vocalizations has great potential to enhance the monitoring of species movements and behaviors. This is particularly true for monitoring nocturnal bird migration, where automated classification of migrants’ flight calls could yield new biological insights and conservation applications for birds that vocalize during migration. In this paper we investigate the automatic classification of bird species from flight calls, and in particular the relationship between two different problem formulations commonly found in the literature: classifying a short clip containing one of a fixed set of known species (N-class problem) and the continuous monitoring problem, the latter of which is relevant to migration monitoring. We implemented a state-of-the-art audio classification model based on unsupervised feature learning and evaluated it on three novel datasets, one for studying the N-class problem including over 5000 flight calls from 43 different species, and two realistic datasets for studying the monitoring scenario comprising hundreds of thousands of audio clips that were compiled by means of remote acoustic sensors deployed in the field during two migration seasons. We show that the model achieves high accuracy when classifying a clip to one of N known species, even for a large number of species. In contrast, the model does not perform as well in the continuous monitoring case. Through a detailed error analysis (that included full expert review of false positives and negatives) we show the model is confounded by varying background noise conditions and previously unseen vocalizations. We also show that the model needs to be parameterized and benchmarked differently for the continuous monitoring scenario. Finally, we show that despite the reduced performance, given the right conditions the model can still characterize the migration pattern of a specific species. The paper concludes with directions for future research. PMID:27880836

  5. Compensation of motion error in a high accuracy AFM

    Science.gov (United States)

    Cui, Yuguo; Arai, Yoshikazu; He, Gaofa; Asai, Takemi; Gao, Wei

    2008-10-01

    An atomic force microscope (AFM) system composed of an air slide, an air spindle and a probe unit is used for large-area measurement with a spiral scanning strategy. The motion error introduced by the air slide and the air spindle increases as the measurement area increases, and the measurement accuracy therefore decreases. In order to achieve high-speed and high-accuracy measurement, the probe scans along the X-direction in constant height mode driven by the air slide, and at the same time, based on the variation of the motion error, it moves along the Z-direction driven by a piezoactuator. Using this method of error compensation, a profile measurement experiment on a micro-structured surface has been carried out. The experimental result shows that this method is effective for eliminating motion error and can achieve high-speed and high-precision measurement of micro-structured surfaces.

  6. Using History to Teach Scientific Method: The Role of Errors

    Science.gov (United States)

    Giunta, Carmen J.

    2001-05-01

    Including tales of error along with tales of discovery is desirable in any use of history of science to teach about science. Tales of error, particularly when they involve justly well-regarded historical figures, serve to avoid two pitfalls to which use of historical material in science teaching is otherwise susceptible. Acknowledging the false steps of great scientists avoids putting those scientists on a pedestal and illustrates that there is no automatic or mechanical scientific method. This paper lists five kinds of error with examples of each from the development of chemistry in the 18th and 19th centuries: erroneous theories (such as phlogiston), seeing a new phenomenon everywhere one seeks it (e.g., Lavoisier and the decomposition of water), theories erroneous in detail but nonetheless fruitful (e.g., Dalton's atomic theory), rejection of correct theories (e.g., Avogadro's hypothesis), and incoherent insights (e.g., J. A. R. Newlands' classification of the elements).

  7. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... process in March 2012 (77 FR 5379). When verified by a futures classification, Smith-Doxey data serves as... Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed... for the addition of an optional cotton futures classification procedure--identified and known...

  8. Text Classification Using Sentential Frequent Itemsets

    Institute of Scientific and Technical Information of China (English)

    Shi-Zhu Liu; He-Ping Hu

    2007-01-01

    Text classification techniques mostly rely on single-term analysis of the document data set, while more concepts, especially the specific ones, are usually conveyed by sets of terms. To achieve a more accurate text classifier, more informative features, including frequent co-occurring words in the same sentence and their weights, are particularly important in such scenarios. In this paper, we propose a novel approach to text classification using sentential frequent itemsets, a concept from association rule mining, which views a sentence rather than a document as a transaction and uses a variable precision rough set based method to evaluate each sentential frequent itemset's contribution to the classification. Experiments over the Reuters and newsgroup corpora are carried out, which validate the practicability of the proposed system.
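
    The core idea (treat each sentence as a transaction of terms and mine frequent co-occurring term sets) can be sketched in plain Python; the toy documents and the minimum-support value are assumptions, and the variable precision rough set weighting used in the paper is not reproduced.

        # Treat each sentence as a transaction and count frequent term pairs.
        from itertools import combinations
        from collections import Counter

        documents = ["interest rates rise. markets fall on rate fears.",
                     "the team wins the cup. fans celebrate the cup win."]

        pair_counts = Counter()
        for doc in documents:
            for sentence in doc.split("."):
                terms = sorted(set(sentence.split()))
                pair_counts.update(combinations(terms, 2))

        min_support = 2                                   # assumed support threshold
        frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
        print(frequent)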

  9. Inventory classification based on decoupling points

    Directory of Open Access Journals (Sweden)

    Joakim Wikner

    2015-01-01

    Full Text Available The ideal state of continuous one-piece flow may never be achieved. Still the logistics manager can improve the flow by carefully positioning inventory to buffer against variations. Strategies such as lean, postponement, mass customization, and outsourcing all rely on strategic positioning of decoupling points to separate forecast-driven from customer-order-driven flows. Planning and scheduling of the flow are also based on classification of decoupling points as master scheduled or not. A comprehensive classification scheme for these types of decoupling points is introduced. The approach rests on identification of flows as being either demand based or supply based. The demand or supply is then combined with exogenous factors, classified as independent, or endogenous factors, classified as dependent. As a result, eight types of strategic as well as tactical decoupling points are identified resulting in a process-based framework for inventory classification that can be used for flow design.

  10. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
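
    The convex combination discussed above can be written down directly: the sketch below computes the resubstitution error and the out-of-bag bootstrap error for an LDA classifier and blends them with the fixed 0.632 weight, whereas the paper derives sample-size-dependent weights that can differ substantially from 0.632. The data and replicate count are illustrative.

        # Fixed-weight 0.632 convex bootstrap error estimator for LDA (illustrative).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 1, (25, 5)), rng.normal(1, 1, (25, 5))])
        y = np.array([0] * 25 + [1] * 25)

        resub = 1 - LinearDiscriminantAnalysis().fit(X, y).score(X, y)   # resubstitution error

        oob_errors = []
        for _ in range(200):
            idx = rng.integers(0, len(y), len(y))         # bootstrap sample (with replacement)
            oob = np.setdiff1d(np.arange(len(y)), idx)    # out-of-bag points
            if oob.size == 0 or len(np.unique(y[idx])) < 2:
                continue
            clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
            oob_errors.append(1 - clf.score(X[oob], y[oob]))
        boot = float(np.mean(oob_errors))

        w = 0.632                                         # fixed weight; the paper derives unbiased weights
        print("convex bootstrap estimate:", w * boot + (1 - w) * resub)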

  11. Update on diabetes classification.

    Science.gov (United States)

    Thomas, Celeste C; Philipson, Louis H

    2015-01-01

    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only diabetes in which it is possible to accurately diagnose by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease, should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all schemes of diabetes mellitus continue to fall short of this goal.

  12. Learning Apache Mahout classification

    CERN Document Server

    Gupta, Ashish

    2015-01-01

    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  13. Pattern Classification of Signals Using Fisher Kernels

    Directory of Open Access Journals (Sweden)

    Yashodhan Athavale

    2012-01-01

    Full Text Available The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research work has indicated that Fisher kernels have shown substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we will exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we have collected financial time-series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher Kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into feature space, which is created as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher Kernel usage fits really well for both kinds of signals, with low classification error rates.
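
    A simplified Fisher-vector sketch: fit a diagonal-covariance GMM to the pooled frames of all signals, map each time series to the gradient of its average log-likelihood with respect to the component means, and feed those vectors to an SVM. The synthetic series, the restriction to mean gradients, and the omission of the usual normalisations are simplifications of the Fisher kernel construction, not the study's procedure.

        # Simplified Fisher vectors (gradients w.r.t. GMM means only) + SVM classification.
        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.svm import SVC

        rng = np.random.default_rng(8)

        def make_series(trend):                           # synthetic series as lagged 2-D frames
            t = np.linspace(0, 1, 50)
            x = trend * t + rng.normal(0, 0.3, 50)
            return np.column_stack([x[:-1], x[1:]])

        series = [make_series(tr) for tr in [1.0] * 20 + [-1.0] * 20]
        labels = np.array([0] * 20 + [1] * 20)

        gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
        gmm.fit(np.vstack(series))

        def fisher_vector(frames):
            gamma = gmm.predict_proba(frames)             # responsibilities, shape (T, K)
            diff = frames[:, None, :] - gmm.means_[None, :, :]
            grad = (gamma[:, :, None] * diff / gmm.covariances_[None, :, :]).mean(axis=0)
            return (grad / np.sqrt(gmm.weights_)[:, None]).ravel()

        F = np.array([fisher_vector(s) for s in series])
        print("training accuracy:", SVC(kernel="linear").fit(F, labels).score(F, labels))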

  14. Completion of the classification

    CERN Document Server

    Strade, Helmut

    2012-01-01

    This is the last of three volumes about "Simple Lie Algebras over Fields of Positive Characteristic" by Helmut Strade, presenting the state of the art of the structure and classification of Lie algebras over fields of positive characteristic. In this monograph the proof of the Classification Theorem presented in the first volume is concluded. It collects all the important results on the topic which can be found only in scattered scientific literature so far.

  15. Twitter content classification

    OpenAIRE

    2010-01-01

    This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial and phatic communications with a split-level classification scheme that offers broad categorization and specific subcategories for deeper insight into the real-world application of the service.

  16. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.

  17. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated.

  18. 'No delays achiever'.

    Science.gov (United States)

    2007-05-01

    The latest version of the NHS Institute for Innovation and Improvement's 'no delays achiever', a web based tool created to help NHS organisations achieve the 18-week target for GP referrals to first treatment, is available at www.nodelaysachiever.nhs.uk.

  19. Introduction to error correcting codes in quantum computers

    CERN Document Server

    Salas, P J

    2006-01-01

    The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, implementing gates and quantum error correction will be considered error free. Finally we will relax this unrealistic assumption, introducing the quantum fault-tolerant concept. The existence of an error threshold allows one to conclude that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: < 5.2 × 10^-5. The encoding is made by means of the [[7,1,3

  20. Reflection error correction of gas turbine blade temperature

    Science.gov (United States)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting reflection error commonly encountered in temperature measurement of gas turbine blades.

  1. Classification of mistakes in patient care in a Nigerian hospital.

    Science.gov (United States)

    Iyayi, Festus

    2009-12-01

    Recent discussions on improving health outcomes in the hospital setting have emphasized the importance of classification of mistakes in health care institutions. These discussions indicate that the existence of a shared classificatory scheme among members of the health team signals that errors in patient care are recognised as significant events that require systematic action, as opposed to defensive, one-dimensional behaviours within the health institution. In Nigeria, discussions of errors in patient care are rare in the literature. Discussions of the classification of errors in patient care are even more rare. This study represents a first attempt to deal with this significant problem and examines whether and how mistakes in patient care are classified across five professional health groups in one of Nigeria's largest tertiary health care institutions. The study shows that there are wide variations within and between professional health groups in the classification of errors in patient care. The implications of the absence of a classificatory scheme for errors in patient care for service improvement and organisational learning in the hospital environment are discussed.

  2. Segmentation and Classification of Bone Marrow Cells Images Using Contextual Information for Medical Diagnosis of Acute Leukemias

    Science.gov (United States)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A.; Diaz-Hernandez, Raquel; Peregrina, Hayde; Olmos, Ivan; Alonso, Jose E.; Lobato, Ruben

    2015-01-01

    Morphological identification of acute leukemia is a powerful tool used by hematologists to determine the family of such a disease. In some cases, experienced physicians are even able to determine the leukemia subtype of the sample. However, the identification process may have error rates up to 40% (when classifying acute leukemia subtypes) depending on the physician’s experience and the sample quality. This problem raises the need to create automatic tools that provide hematologists with a second opinion during the classification process. Our research presents a contextual analysis methodology for the detection of acute leukemia subtypes from bone marrow cells images. We propose a cells separation algorithm to break up overlapped regions. In this phase, we achieved an average accuracy of 95% in the evaluation of the segmentation process. In a second phase, we extract descriptive features to the nucleus and cytoplasm obtained in the segmentation phase in order to classify leukemia families and subtypes. We finally created a decision algorithm that provides an automatic diagnosis for a patient. In our experiments, we achieved an overall accuracy of 92% in the supervised classification of acute leukemia families, 84% for the lymphoblastic subtypes, and 92% for the myeloblastic subtypes. Finally, we achieved accuracies of 95% in the diagnosis of leukemia families and 90% in the diagnosis of leukemia subtypes. PMID:26107374

  3. Segmentation and Classification of Bone Marrow Cells Images Using Contextual Information for Medical Diagnosis of Acute Leukemias.

    Science.gov (United States)

    Reta, Carolina; Altamirano, Leopoldo; Gonzalez, Jesus A; Diaz-Hernandez, Raquel; Peregrina, Hayde; Olmos, Ivan; Alonso, Jose E; Lobato, Ruben

    2015-01-01

    Morphological identification of acute leukemia is a powerful tool used by hematologists to determine the family of such a disease. In some cases, experienced physicians are even able to determine the leukemia subtype of the sample. However, the identification process may have error rates up to 40% (when classifying acute leukemia subtypes) depending on the physician's experience and the sample quality. This problem raises the need to create automatic tools that provide hematologists with a second opinion during the classification process. Our research presents a contextual analysis methodology for the detection of acute leukemia subtypes from bone marrow cells images. We propose a cells separation algorithm to break up overlapped regions. In this phase, we achieved an average accuracy of 95% in the evaluation of the segmentation process. In a second phase, we extract descriptive features to the nucleus and cytoplasm obtained in the segmentation phase in order to classify leukemia families and subtypes. We finally created a decision algorithm that provides an automatic diagnosis for a patient. In our experiments, we achieved an overall accuracy of 92% in the supervised classification of acute leukemia families, 84% for the lymphoblastic subtypes, and 92% for the myeloblastic subtypes. Finally, we achieved accuracies of 95% in the diagnosis of leukemia families and 90% in the diagnosis of leukemia subtypes.

  4. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors; having a clear knowledge of the causes of these errors will help students learn English better.

  5. Error Analysis And Second Language Acquisition

    Institute of Scientific and Technical Information of China (English)

    王惠丽

    2016-01-01

    Based on the theories of error and error analysis, the article explores the effect of error and error analysis on SLA, and thus offers some advice to language teachers and language learners.

  6. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  7. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  8. Discretization error of Stochastic Integrals

    CERN Document Server

    Fukasawa, Masaaki

    2010-01-01

    Asymptotic error distribution for the approximation of a stochastic integral with respect to a continuous semimartingale by a Riemann sum over a general stochastic partition is studied. Effective discretization schemes, whose asymptotic conditional mean-squared error attains a lower bound, are constructed. Two applications are given: efficient delta-hedging strategies with transaction costs, and effective discretization schemes for the Euler-Maruyama approximation.
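
    As a rough illustration of the kind of discretization error studied here (not the paper's general semimartingale setting or its optimal partitions), the sketch below approximates the Ito integral of W dW by a left-endpoint Riemann sum on an equidistant grid and compares it with the closed-form value W(T)^2/2 - T/2; the step count and number of Monte Carlo paths are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(0)
        T, n_steps, n_paths = 1.0, 200, 20000      # horizon, partition size, Monte Carlo paths
        dt = T / n_steps

        # Brownian increments and paths, one row per Monte Carlo path
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
        W = np.cumsum(dW, axis=1)
        W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])   # W at left endpoints

        # Left-endpoint (Ito) Riemann sum for the integral of W dW
        riemann = np.sum(W_left * dW, axis=1)

        # Exact Ito value: W(T)^2 / 2 - T / 2
        exact = 0.5 * W[:, -1] ** 2 - 0.5 * T

        err = riemann - exact
        print("mean squared discretization error:", np.mean(err ** 2))
        print("asymptotic value T*dt/2 for this integrand:", T * dt / 2)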

  9. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in the 40-50 years age group (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  10. Photometric Supernova Classification with Machine Learning

    Science.gov (United States)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
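
    The DES simulations and the SALT2/wavelet feature extraction are not reproduced here; the sketch below, using scikit-learn and synthetic stand-in features, only illustrates the second stage of such a pipeline: training boosted decision trees on a feature matrix and scoring the classifier with the AUC of the ROC curve.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        # Synthetic stand-in for extracted light-curve features (e.g. SALT2 or wavelet coefficients)
        n_sn, n_features = 2000, 10
        X = rng.normal(size=(n_sn, n_features))
        is_ia = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_sn)) > 0   # toy "type Ia" labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, is_ia, test_size=0.3, random_state=0)

        bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)   # the BDT stage
        bdt.fit(X_tr, y_tr)

        scores = bdt.predict_proba(X_te)[:, 1]
        print("AUC:", roc_auc_score(y_te, scores))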

  11. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity

    OpenAIRE

    Hussain, Shaista; Basu, Arindam

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  12. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses using Structural Plasticity

    OpenAIRE

    Shaista eHussain; Arindam eBasu

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  13. Determination of diametral error using finite elements and experimental method

    Directory of Open Access Journals (Sweden)

    A. Karabulut

    2010-01-01

    Full Text Available This study presents an experimental and numerical analysis of a workpiece clamped on one side on a lathe. The cutting force deflects the workpiece during the turning process. The amount of deflection is estimated without contact using a Laser Distance Sensor (LDS), and diametral values are measured on different sides of the workpiece after each turning operation. It is observed that the diametral error varies with the amount of deflection: the diametral error peaks where the deflection peaks. The finite element model is verified by the experimental results, and the factors causing the diametral error are determined.

  14. Real-Time Minimization of Tracking Error for Aircraft Systems

    Science.gov (United States)

    Garud, Sumedha; Kaneshige, John T.; Krishnakumar, Kalmanje S.; Kulkarni, Nilesh V.; Burken, John

    2013-01-01

    This technology presents a novel, stable, discrete-time adaptive law for flight control in a Direct adaptive control (DAC) framework. Where errors are not present, the original control design has been tuned for optimal performance. Adaptive control works towards achieving nominal performance whenever the design has modeling uncertainties/errors or when the vehicle suffers substantial flight configuration change. The baseline controller uses dynamic inversion with proportional-integral augmentation. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to a dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. If the system senses that at least one aircraft component is experiencing an excursion and the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, then the neural network (NN) modeling of aircraft operation may be changed.

  15. Error resilient image transmission based on virtual SPIHT

    Science.gov (United States)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error can potentially lead to decoder derailment. In this paper, we integrate new error-resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC, and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, high source coding efficiency can be achieved. The scheme is essentially tree-based coding, so error propagation is limited to each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve compression efficiency; otherwise we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error resilience of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of the set partition trees is introduced: the decoding of any sub-tree halts when a violation of the self-constraint relationship occurs in that tree. The bits affected by error propagation are thus limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves substantially better error resilience.

  16. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to a "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety.

  17. LDA boost classification: boosting by topics

    Science.gov (United States)

    Lei, La; Qiao, Guo; Qimin, Cao; Qitao, Li

    2012-12-01

    AdaBoost is an efficacious classification algorithm, especially in text categorization (TC) tasks. The methodology of setting up a classifier committee and voting on the documents for classification can achieve high categorization precision. However, the traditional vector space model easily leads to the curse of dimensionality and feature sparsity problems, which seriously degrade classification performance. This article proposes a novel classification algorithm called LDABoost, based on the boosting ideology, which uses Latent Dirichlet Allocation (LDA) to model the feature space. Instead of using words or phrases, LDABoost uses latent topics as the features; in this way, the feature dimension is significantly reduced. An improved naive Bayes (NB) classifier is designed as the weak learner, which keeps the efficiency advantage of the classic NB algorithm and has higher precision. Moreover, a two-stage iterative weighting method called Cute Integration is proposed for improving accuracy by integrating the weak classifiers into a strong classifier in a more rational way; mutual information is used as the metric for weight allocation. The voting information and the categorization decisions made by the basis classifiers are fully utilized for generating the strong classifier. Experimental results reveal that LDABoost, performing categorization in a low-dimensional space, has higher accuracy than traditional AdaBoost algorithms and many other classic classification algorithms. Moreover, its runtime is lower than that of different versions of AdaBoost and of TC algorithms based on support vector machines and neural networks.
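
    LDABoost itself (the improved NB weak learner, the boosting loop and the Cute Integration weighting) is not available as published code; the sketch below only illustrates the underlying idea of replacing the bag-of-words space with a small number of LDA topics before training a naive Bayes text classifier, using scikit-learn and a made-up toy corpus.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline

        docs = ["stocks rallied on strong earnings", "the team won the final match",
                "central bank raised interest rates", "striker scored twice in the derby"]
        labels = ["finance", "sport", "finance", "sport"]        # toy corpus and labels

        # Replace the high-dimensional term space with a few latent topics,
        # then train a (Gaussian) naive Bayes classifier on the topic proportions.
        model = make_pipeline(
            CountVectorizer(),
            LatentDirichletAllocation(n_components=2, random_state=0),
            GaussianNB(),
        )
        model.fit(docs, labels)
        print(model.predict(["the team scored in the final"]))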

  18. Classification of Scenes into Indoor/Outdoor

    Directory of Open Access Journals (Sweden)

    R. Raja

    2014-12-01

    Full Text Available An effective model for scene classification is essential to access desired images from large-scale databases. This study presents an efficient scene classification approach that integrates low-level features to reduce the semantic gap between visual features and the richness of human perception. The objective is to categorize an image into an indoor or outdoor scene using relevant low-level features such as color and texture. The color feature from the HSV color model, the texture feature through the GLCM, and the entropy computed from the UV color space form the feature vector. To support automatic scene classification, a Support Vector Machine (SVM) is applied to the low-level features to categorize a scene as indoor or outdoor. Since the combination of these image features exhibits a distinctive disparity between images containing indoor or outdoor scenes, the proposed method achieves better performance, with a classification accuracy of about 92.44%. The proposed method has been evaluated on IITM-SCID2 (Scene Classification Image Database) and a dataset of 3442 images collected from the web.
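
    The exact descriptor set and the IITM-SCID2 data are not included here; as a minimal sketch of the described feature vector (HSV colour statistics plus GLCM texture) feeding an SVM, the snippet below uses scikit-image and scikit-learn (graycomatrix/graycoprops are spelled greycomatrix/greycoprops in older scikit-image releases); the training call is left commented out because it needs a labelled image collection.

        import numpy as np
        from skimage.color import rgb2hsv, rgb2gray
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def scene_features(rgb_image):
            """HSV hue histogram + GLCM texture statistics for one uint8 RGB image."""
            hsv = rgb2hsv(rgb_image)
            color_hist, _ = np.histogram(hsv[..., 0], bins=16, range=(0.0, 1.0), density=True)

            gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
            glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
            texture = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]

            return np.concatenate([color_hist, texture])

        # X_train: list of RGB images, y_train: 0 = indoor, 1 = outdoor (placeholders)
        # clf = SVC(kernel="rbf").fit([scene_features(im) for im in X_train], y_train)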

  19. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
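
    The actual STS-1 error sources and their magnitudes are listed in the report, not here; the sketch below only shows the usual way independent 1-sigma contributions are combined in quadrature (root-sum-square) into a per-axis alignment budget, using invented example values.

        import numpy as np

        # Hypothetical 1-sigma contributions to a single alignment axis, in arc seconds
        # (illustrative values only; the real budget lists many more sources).
        error_sources = {
            "star tracker noise": 30.0,
            "tracker mounting":   40.0,
            "IMU gyro drift":     35.0,
            "navigation base":    25.0,
        }

        rss = np.sqrt(sum(v ** 2 for v in error_sources.values()))
        print(f"combined 1-sigma alignment error: {rss:.1f} arcsec per axis")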

  20. Measurement Error Models in Astronomy

    CERN Document Server

    Kelly, Brandon C

    2011-01-01

    I discuss the effects of measurement error on regression and density estimation. I review the statistical methods that have been developed to correct for measurement error that are most popular in astronomical data analysis, discussing their advantages and disadvantages. I describe functional models for accounting for measurement error in regression, with emphasis on the methods of moments approach and the modified loss function approach. I then describe structural models for accounting for measurement error in regression and density estimation, with emphasis on maximum-likelihood and Bayesian methods. As an example of a Bayesian application, I analyze an astronomical data set subject to large measurement errors and a non-linear dependence between the response and covariate. I conclude with some directions for future research.
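
    A tiny simulation can make the attenuation effect concrete: a covariate observed with error biases the naive regression slope towards zero, and the simplest method-of-moments correction divides by the reliability ratio. This is only a sketch of that one idea, not of the functional, structural or Bayesian methods reviewed in the record; all values are made up.

        import numpy as np

        rng = np.random.default_rng(2)
        n, beta = 5000, 2.0
        x_true = rng.normal(0.0, 1.0, n)
        y = beta * x_true + rng.normal(0.0, 0.5, n)

        sigma_u = 0.8                                  # assumed known measurement-error std dev
        x_obs = x_true + rng.normal(0.0, sigma_u, n)   # covariate observed with error

        # Naive least-squares slope is attenuated towards zero
        b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

        # Method-of-moments correction: divide by the reliability ratio
        reliability = (np.var(x_obs) - sigma_u ** 2) / np.var(x_obs)
        b_corrected = b_naive / reliability

        print(f"true {beta:.2f}  naive {b_naive:.2f}  corrected {b_corrected:.2f}")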

  1. Error Propagation in the Hypercycle

    CERN Document Server

    Campos, P R A; Stadler, P F

    1999-01-01

    We study analytically the steady-state regime of a network of n error-prone self-replicating templates forming an asymmetric hypercycle and its error tail. We show that the existence of a master template with a higher non-catalyzed self-replicative productivity, a, than the error tail ensures the stability of chains in which m < n - 1 templates coexist with the master species. The stability of these chains against the error tail is guaranteed for catalytic coupling strengths (K) of order of a. We find that the hypercycle becomes more stable than the chains only for K of order of a^2. Furthermore, we show that the minimal replication accuracy per template needed to maintain the hypercycle, the so-called error threshold, vanishes like sqrt(n/K) for large K and n <= 4.

  2. FPU-Supported Running Error Analysis

    OpenAIRE

    T. Zahradnický; R. Lórencz

    2010-01-01

    A-posteriori forward rounding error analyses tend to give sharper error estimates than a-priori ones, as they use actual data quantities. One such a-posteriori analysis – running error analysis – uses expressions consisting of two parts: one generates the error and the other propagates input errors to the output. This paper suggests replacing the error-generating term with an FPU-extracted rounding error estimate, which produces a sharper error bound.
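
    For readers unfamiliar with the technique, the snippet below shows classic (Higham-style) running error analysis for recursive summation: after every addition the absolute value of the computed partial sum is accumulated, and the final bound is that accumulator times the unit roundoff. It illustrates the error-generating term the paper proposes to replace with an FPU-extracted estimate, not the paper's own method.

        def sum_with_running_error(xs):
            """Recursive summation with an a-posteriori running rounding-error bound."""
            u = 2.0 ** -53                         # unit roundoff for IEEE double precision
            s, mu = 0.0, 0.0
            for x in xs:
                s = s + x                          # fl(s + x) adds at most |s + x| * u of error
                mu += abs(s)                       # accumulate the running-error term
            return s, mu * u                       # computed value and bound on its rounding error

        xs = [0.1] * 10**6
        s, bound = sum_with_running_error(xs)
        print("sum =", s, " running error bound =", bound,
              " difference from 1e5 =", abs(s - 1e5))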

  3. Classification based polynomial image interpolation

    Science.gov (United States)

    Lenke, Sebastian; Schröder, Hartmut

    2008-02-01

    Due to the fast migration of high-resolution displays into home and office environments, there is a strong demand for high-quality picture scaling. This is caused on the one hand by large picture sizes and on the other hand by the enhanced visibility of picture artifacts on these displays [1]. There are many proposals for enhanced spatial interpolation adaptively matched to picture content such as edges. The drawback of these approaches is the normally integer and often limited interpolation factor. In order to achieve rational factors, there exist combinations of adaptive and non-adaptive linear filters, but due to the non-adaptive step the overall quality is notably limited. We present in this paper a content-adaptive polyphase interpolation method which uses "offline" trained filter coefficients and "online" linear filtering depending on a simple classification of the input situation. Furthermore, we present a new approach to a content-adaptive interpolation polynomial, which allows arbitrary polyphase interpolation factors at runtime and further improves the overall interpolation quality. The main goal of our new approach is to optimize interpolation quality by adapting higher-order polynomials directly to the image content. In addition, we derive filter constraints for enhanced picture quality. Furthermore, we extend the classification-based filtering to the temporal dimension in order to use it for intermediate image interpolation.

  4. Assessing Measures of Order Flow Toxicity via Perfect Trade Classification

    DEFF Research Database (Denmark)

    Andersen, Torben G.; Bondarenko, Oleg

    . The VPIN metric involves decomposing volume into active buys and sells. We use the best-bid-offer (BBO) files from the CME Group to construct (near) perfect trade classification measures for the E-mini S&P 500 futures contract. We investigate the accuracy of the ELO Bulk Volume Classification (BVC) scheme...... and find it inferior to a standard tick rule based on individual transactions. Moreover, when VPIN is constructed from accurate classification, it behaves in a diametrically opposite way to BVC-VPIN. We also find the latter to have forecast power for short-term volatility solely because it generates...... systematic classification errors that are correlated with trading volume and return volatility. When controlling for trading intensity and volatility, the BVC-VPIN measure has no incremental predictive power for future volatility. We conclude that VPIN is not suitable for measuring order flow imbalances....
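
    The BBO-based classification and the VPIN construction from the paper are not reproduced here; the snippet below only implements the standard tick rule on individual transactions that the authors use as a benchmark, with a toy price series: each trade takes the sign of the last non-zero price change.

        import numpy as np

        def tick_rule(prices):
            """Classify each trade as buy (+1) or sell (-1) from the sign of the price change.

            Zero ticks inherit the sign of the last non-zero change; the first trade
            is left unclassified (0) for lack of a prior price.
            """
            signs = np.zeros(len(prices), dtype=int)
            last = 0
            for i in range(1, len(prices)):
                change = prices[i] - prices[i - 1]
                if change > 0:
                    last = 1
                elif change < 0:
                    last = -1
                signs[i] = last
            return signs

        prices = np.array([100.00, 100.25, 100.25, 100.00, 100.00, 100.50])
        print(tick_rule(prices))   # -> [ 0  1  1 -1 -1  1]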

  5. Texture Classification Using Sparse Frame-Based Representations

    Directory of Open Access Journals (Sweden)

    Skretting Karl

    2006-01-01

    Full Text Available A new method for supervised texture classification, denoted by frame texture classification method (FTCM, is proposed. The method is based on a deterministic texture model in which a small image block, taken from a texture region, is modeled as a sparse linear combination of frame elements. FTCM has two phases. In the design phase a frame is trained for each texture class based on given texture example images. The design method is an iterative procedure in which the representation error, given a sparseness constraint, is minimized. In the classification phase each pixel in a test image is labeled by analyzing its spatial neighborhood. This block is represented by each of the frames designed for the texture classes under consideration, and the frame giving the best representation gives the class. The FTCM is applied to nine test images of natural textures commonly used in other texture classification work, yielding excellent overall performance.

  6. Academic Achievement Among Juvenile Detainees.

    Science.gov (United States)

    Grigorenko, Elena L; Macomber, Donna; Hart, Lesley; Naples, Adam; Chapman, John; Geib, Catherine F; Chart, Hilary; Tan, Mei; Wolhendler, Baruch; Wagner, Richard

    2015-01-01

    The literature has long pointed to heightened frequencies of learning disabilities (LD) within the population of law offenders; however, a systematic appraisal of these observations, careful estimation of these frequencies, and investigation of their correlates and causes have been lacking. Here we present data collected from all youth (1,337 unique admissions, mean age 14.81, 20.3% females) placed in detention in Connecticut (January 1, 2010-July 1, 2011). All youth completed a computerized educational screener designed to test a range of performance in reading (word and text levels) and mathematics. A subsample (n = 410) received the Wide Range Achievement Test, in addition to the educational screener. Quantitative (scale-based) and qualitative (grade-equivalence-based) indicators were then analyzed for both assessments. Results established the range of LD in this sample from 13% to 40%, averaging 24.9%. This work provides a systematic exploration of the type and severity of word and text reading and mathematics skill deficiencies among juvenile detainees and builds the foundation for subsequent efforts that may link these deficiencies to both more formal, structured, and variable definitions and classifications of LD, and to other types of disabilities (e.g., intellectual disability) and developmental disorders (e.g., ADHD) that need to be conducted in future research.

  7. R-Peak Detection using Daubechies Wavelet and ECG Signal Classification using Radial Basis Function Neural Network

    Science.gov (United States)

    Rai, H. M.; Trivedi, A.; Chatterjee, K.; Shukla, S.

    2014-01-01

    This paper employed the Daubechies wavelet transform (WT) for R-peak detection and a radial basis function neural network (RBFNN) to classify electrocardiogram (ECG) signals. Five types of ECG beats were classified: normal beat, paced beat, left bundle branch block (LBBB) beat, right bundle branch block (RBBB) beat and premature ventricular contraction (PVC). 500 QRS complexes were arbitrarily extracted from 26 records in the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, which are available on the Physionet website. Each QRS complex was represented by 21 points, p1 to p21, and the QRS complexes of each record were categorized according to beat type. System performance was computed using four evaluation metrics: sensitivity, positive predictivity, specificity and classification error rate. The experimental results show that the average values of sensitivity, positive predictivity, specificity and classification error rate are 99.8%, 99.60%, 99.90% and 0.12%, respectively, with the RBFNN classifier. The overall accuracies achieved for the back propagation neural network (BPNN), multilayered perceptron (MLP), support vector machine (SVM) and RBFNN classifiers are 97.2%, 98.8%, 99% and 99.6%, respectively. The accuracy levels and processing time of the RBFNN are higher than or comparable with those of the BPNN, MLP and SVM classifiers.
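
    The wavelet detector and the RBFNN are not reproduced here; the sketch below only shows how the four reported evaluation metrics can be computed from confusion counts for one beat class treated one-vs-rest, using made-up labels.

        import numpy as np

        def beat_metrics(y_true, y_pred, positive_class):
            """Sensitivity, positive predictivity, specificity and error rate for one beat class."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            tp = np.sum((y_pred == positive_class) & (y_true == positive_class))
            fp = np.sum((y_pred == positive_class) & (y_true != positive_class))
            fn = np.sum((y_pred != positive_class) & (y_true == positive_class))
            tn = np.sum((y_pred != positive_class) & (y_true != positive_class))
            return {
                "sensitivity": tp / (tp + fn),
                "positive_predictivity": tp / (tp + fp),
                "specificity": tn / (tn + fp),
                "error_rate": (fp + fn) / (tp + fp + fn + tn),
            }

        y_true = ["N", "N", "PVC", "LBBB", "PVC", "N"]      # hypothetical reference beats
        y_pred = ["N", "PVC", "PVC", "LBBB", "PVC", "N"]    # hypothetical classifier output
        print(beat_metrics(y_true, y_pred, "PVC"))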

  8. Multilayer perceptron, fuzzy sets, and classification

    Science.gov (United States)

    Pal, Sankar K.; Mitra, Sushmita

    1992-01-01

    A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.

  9. Concepts of Classification and Taxonomy Phylogenetic Classification

    Science.gov (United States)

    Fraix-Burnet, D.

    2016-05-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristics is to search for relationships between the objects or species in study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  10. 3-PRS serial-parallel machine tool error calibration and parameter identification

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jun-wei; DAI Jun; HUANG Jun-jie

    2009-01-01

    The 3-PRS serial-parallel machine tool consists of a 3-degree-of-freedom (DOF) implementation platform and a 2-DOF X-Y platform. Error modeling and parameter identification methods were derived for the 3-PRS serial-parallel machine tool. The machine tool was studied with respect to error analysis, error modeling, identification of error parameters, and the measurement equipment used for error measurement. In order to achieve geometric parameter calibration and error compensation of the serial-parallel machine tool, the nominal structural parameters in the controller were adjusted by identifying the structure of the machine tool. With the establishment of a vector dimension chain, error analysis, error modeling, error measurement and error compensation can be performed.

  11. Surface errors in the course of machining precision optics

    Science.gov (United States)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps of increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate with such zonal processes. This work addresses the formation of surface errors from grinding to polishing by analyzing the surfaces after each machining step with non-contact interferometric methods. The errors on the surface can be distinguished as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed such that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps such as grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technology, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools whose lateral size is below the wavelength of the error structure. The presented considerations may be used to develop proposals for handling surface errors.

  12. Evaluation of partial classification algorithms using ROC curves.

    Science.gov (United States)

    Tusch, G

    1995-01-01

    When using computer programs for decision support in clinical routine, an assessment or a comparison of the underlying classification algorithms is essential. In classical (forced) classification, the classification rule always selects exactly one alternative. A number of proven discriminant measures are available here, e.g. sensitivity and error rate. For probabilistic classification, a series of additional measures has been developed [1]. However, for many clinical applications, there are models where an observation is classified into several classes (partial classification), e.g., models from artificial intelligence, decision analysis, or fuzzy set theory. In partial classification, the discriminatory ability (Murphy) can be adjusted a priori to any level in most practical cases. Here the usual measures do not apply. We investigate the preconditions for assessment and comparison based on medical decision theory. We focus on problems in the medical domain and establish a methodological framework. When using partial classification procedures, a ROC analysis in the classical sense is no longer appropriate. In forced classification for two classes, the problem is to find a cutoff point on the ROC curve, while in partial classification two cutoff points have to be found; they characterize the elements classified as coming from both classes. This extends to several classes. We propose measures corresponding to the usual discriminant measures for forced classification (e.g., sensitivity and error rate) and demonstrate the effects using the ROC approach. For this purpose, we extend the existing method for forced classification in a mathematically sound manner. Algorithms for the construction of thresholds can easily be adapted. Two specific measurement models, based on parametric and non-parametric approaches, are introduced. The basic methodology is suitable for all partial classification problems, whereas the extended ROC analysis assumes a rank order of the

  13. Implementation of neural networks for classification of moss and lichen samples on the basis of gamma-ray spectrometric analysis.

    Science.gov (United States)

    Dragović, Snezana; Onjia, Antonije; Dragović, Ranko; Bacić, Goran

    2007-07-01

    Mosses and lichens play an important role in biomonitoring. The objective of this study is to develop a neural network model to classify these plants according to geographical origin. A three-layer feed-forward neural network was used. The activities of radionuclides ((226)Ra, (238)U, (235)U, (40)K, (232)Th, (134)Cs, (137)Cs and (7)Be) detected in plant samples by gamma-ray spectrometry were used as inputs to the neural network. Five different training algorithms with different numbers of samples in the training sets were tested and compared, in order to find the one with the minimum root mean square error. The best predictive power for the classification of plants from 12 regions was achieved using a network with 5 hidden-layer nodes and 3,000 training epochs, using the online back-propagation randomized training algorithm. Applying this model to the experimental data resulted in satisfactory classification of moss and lichen samples in terms of their geographical origin. The average classification rate obtained in this study was (90.7 +/- 4.8)%.
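
    The measured activities and the original online back-propagation implementation are not available here; the sketch below only mirrors the shape of the setup (eight radionuclide activities in, one hidden layer of five nodes, twelve region classes out) with scikit-learn's MLPClassifier and random stand-in data, so the printed accuracy is meaningless except as a smoke test.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(3)

        # Stand-in data: one row per sample, one column per radionuclide activity
        # (226Ra, 238U, 235U, 40K, 232Th, 134Cs, 137Cs, 7Be); 12 regions of origin.
        X = rng.gamma(shape=2.0, scale=50.0, size=(240, 8))
        y = rng.integers(0, 12, size=240)

        clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=3000, random_state=0)
        clf.fit(X, y)
        print("training accuracy on random stand-in data:", clf.score(X, y))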

  14. Perceptual Classification Images from Vernier Acuity Masked by Noise

    Science.gov (United States)

    Ahumada, A. J.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    Letting external noise rather than internal noise limit discrimination performance allows information to be extracted about the observer's stimulus classification rule. A perceptual classification image is the correlation over trials between the noise amplitude at a spatial location and the observer's responses. If, for example, the observer followed the rule of the ideal observer, the perceptual classification image would be an estimate of the ideal observer filter, the difference between the two unmasked images being discriminated. Perceptual classification images were estimated for a vernier discrimination task. The display screen had 48 pixels per degree horizontally and vertically. The no-offset image had a dark horizontal line of 4 pixels, a 1 pixel space, and 4 more dark pixels. Classification images were based on 1600 discrimination trials with the line contrast adjusted to keep the error rate near 25 percent. In the offset image, the second line was one pixel higher. Unlike the ideal observer filter (a horizontal dipole), the observer perceptual classification images are strongly oriented. Fourier transforms of the classification images had a peak amplitude near one cycle per degree and an orientation near 25 degrees. The spatial spread is much more than image blur predicts, and probably indicates the spatial position uncertainty in the task.

  15. An Efficient Audio Classification Approach Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lhoucine Bahatti

    2016-05-01

    Full Text Available In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper deals with a new audio classification method which improves feature extraction based on the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. This work also proposes an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

  16. Trends and concepts in fern classification

    Science.gov (United States)

    Christenhusz, Maarten J. M.; Chase, Mark W.

    2014-01-01

    Background and Aims Throughout the history of fern classification, familial and generic concepts have been highly labile. Many classifications and evolutionary schemes have been proposed during the last two centuries, reflecting different interpretations of the available evidence. Knowledge of fern structure and life histories has increased through time, providing more evidence on which to base ideas of possible relationships, and classification has changed accordingly. This paper reviews previous classifications of ferns and presents ideas on how to achieve a more stable consensus. Scope An historical overview is provided from the first to the most recent fern classifications, from which conclusions are drawn on past changes and future trends. The problematic concept of family in ferns is discussed, with a particular focus on how this has changed over time. The history of molecular studies and the most recent findings are also presented. Key Results Fern classification generally shows a trend from highly artificial, based on an interpretation of a few extrinsic characters, via natural classifications derived from a multitude of intrinsic characters, towards more evolutionary circumscriptions of groups that do not in general align well with the distribution of these previously used characters. It also shows a progression from a few broad family concepts to systems that recognized many more narrowly and highly controversially circumscribed families; currently, the number of families recognized is stabilizing somewhere between these extremes. Placement of many genera was uncertain until the arrival of molecular phylogenetics, which has rapidly been improving our understanding of fern relationships. As a collective category, the so-called ‘fern allies’ (e.g. Lycopodiales, Psilotaceae, Equisetaceae) were unsurprisingly found to be polyphyletic, and the term should be abandoned. Lycopodiaceae, Selaginellaceae and Isoëtaceae form a clade (the lycopods) that is

  17. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
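
    As a minimal sketch of the core ELM recipe (fixed random input weights, a nonlinear hidden layer, and output weights obtained in closed form by ridge-regularized least squares), the snippet below uses plain NumPy; the random sparsity mask is only a crude stand-in for the paper's random receptive-field patches, and the distortion and single-batch backpropagation refinements are not reproduced.

        import numpy as np

        def train_elm(X, y_onehot, n_hidden=1000, sparsity=0.9, ridge=1e-3, seed=0):
            """Single-hidden-layer ELM: random sparse input weights, least-squares output weights."""
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))
            W *= rng.random(W.shape) > sparsity        # zero out ~90% of the input weights
            H = np.tanh(X @ W)                         # hidden activations (fixed, never trained)
            # Ridge-regularized least squares for the output weights
            beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y_onehot)
            return W, beta

        def predict_elm(X, W, beta):
            return np.argmax(np.tanh(X @ W) @ beta, axis=1)

        # Usage sketch (X: n_samples x 784 pixel matrix, y: digit labels 0-9):
        # y_onehot = np.eye(10)[y]
        # W, beta = train_elm(X, y_onehot)
        # predictions = predict_elm(X, W, beta)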

  18. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practise for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.

  19. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    Full Text Available The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that have been evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered when using high-resolution images, while reliabilities are achieved that are better than those achieved with traditional methods.

  20. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    Schemes for the classification of dwellings according to different building performances have been proposed in the last years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms... exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.

  1. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E;

    2015-01-01

    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were...... HE, protein contact dermatitis/contact urticaria, hyperkeratotic endogenous eczema and vesicular endogenous eczema, respectively. An additional diagnosis was given if symptoms indicated that factors additional to the main diagnosis were of importance for the disease. RESULTS: Four hundred and twenty......%) could not be classified. 38% had one additional diagnosis and 26% had two or more additional diagnoses. Eczema on feet was found in 30% of the patients, statistically significantly more frequently associated with hyperkeratotic and vesicular endogenous eczema. CONCLUSION: We find that the classification...

  2. Cellular image classification

    CERN Document Server

    Xu, Xiang; Lin, Feng

    2017-01-01

    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  3. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between....... Descriptors, range of quality levels, number of quality classes, class intervals, denotations and descriptions vary across Europe. The diversity is an obstacle for exchange of experience about constructions fulfilling different classes, implying also trade barriers. Thus, a harmonized classification scheme...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  4. Supernova Photometric Classification Challenge

    CERN Document Server

    Kessler, Richard; Jha, Saurabh; Kuhlmann, Stephen

    2010-01-01

    We have publicly released a blinded mix of simulated SNe, with types (Ia, Ib, Ic, II) selected in proportion to their expected rate. The simulation is realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point spread function and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia type SNe are based on spectroscopically confirmed light curves that include unpublished non-Ia samples donated from the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). We challenge scientists to run their classification algorithms and report a type for each SN. A spectroscopically confirmed subset is provided for training. The goals of this challenge are to (1) learn the relative strengths and weaknesses of the different classification algorithms, (2) use the results to improve classification algorithms, and (3) understand what spectroscopically confirmed sub-...

  5. Information gathering for CLP classification.

    Science.gov (United States)

    Marcello, Ida; Giordano, Felice; Costamagna, Francesca Marina

    2011-01-01

    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP provides that harmonised classification will be performed for carcinogenic, mutagenic or toxic to reproduction (CMR) substances and for respiratory sensitisers category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  6. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong...... support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may...... be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural...

  7. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP provides that harmonised classification will be performed for carcinogenic, mutagenic or toxic to reproduction (CMR) substances and for respiratory sensitisers category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  8. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  9. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  10. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-02-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate to control the critical value of aging transition in coupled oscillator systems, which are composed of active oscillators and inactive oscillators in practice.

  11. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  12. Aging transition by random errors

    Science.gov (United States)

    Sun, Zhongkui; Ma, Ning; Xu, Wei

    2017-01-01

    In this paper, the effects of random errors on oscillating behaviors have been studied theoretically and numerically in a prototypical coupled nonlinear oscillator. Two kinds of noise have been employed to represent the measurement errors associated with the parameter specifying the distance from a Hopf bifurcation in the Stuart-Landau model. It has been demonstrated that when the random errors are uniform random noise, increasing the noise intensity can effectively increase the robustness of the system. When the random errors are normal random noise, increasing the variance can also enhance the robustness of the system, provided the probability that aging transition occurs reaches a certain threshold; the opposite conclusion is obtained when the probability is less than the threshold. These findings provide an alternative candidate to control the critical value of aging transition in coupled oscillator systems, which are composed of active oscillators and inactive oscillators in practice. PMID:28198430

  13. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific … parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions … classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers…

  14. Bosniak Classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2014-01-01

    … Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as the “Gold Standard” (GS), established … According to the calculated weighted κ, all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts categorized as Bosniak II, IIF, and III. These results show that radiologists who routinely evaluate complex renal cysts may apply the Bosniak classification…

  15. Classification of iconic images

    OpenAIRE

    Zrianina, Mariia; Kopf, Stephan

    2016-01-01

    Iconic images represent an abstract topic and use a presentation that is intuitively understood within a certain cultural context. For example, the abstract topic “global warming” may be represented by a polar bear standing alone on an ice floe. Such images are widely used in media and their automatic classification can help to identify high-level semantic concepts. This paper presents a system for the classification of iconic images. It uses a variation of the Bag of Visual Words approach wi...

  16. Sequence Classification: 890773 [

    Lifescience Database Archive (English)

    …oline as sole nitrogen source; deficiency of the human homolog causes HPII, an autosomal recessive inborn error of metabolism; Put2p || http://www.ncbi.nlm.nih.gov/protein/6321826 ...

  17. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Various types of errors occurring during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of the membrane components, liquid junction potential as well as sensor wiring, ambient light and temperature is presented.

  18. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number … of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems…

  19. Dominant modes via model error

    Science.gov (United States)

    Yousuff, A.; Breida, M.

    1992-01-01

    Obtaining a reduced model of a stable mechanical system with proportional damping is considered. Such systems can be conveniently represented in modal coordinates. Two popular schemes, the modal cost analysis and the balancing method, offer simple means of identifying dominant modes for retention in the reduced model. The dominance is measured via the modal costs in the case of modal cost analysis and via the singular values of the Gramian-product in the case of balancing. Though these measures do not exactly reflect the more appropriate model error, which is the H2 norm of the output-error between the full and the reduced models, they do lead to simple computations. Normally, the model error is computed after the reduced model is obtained, since it is believed that, in general, the model error cannot be easily computed a priori. The authors point out that the model error can also be calculated a priori, just as easily as the above measures. Hence, the model error itself can be used to determine the dominant modes. Moreover, the simplicity of the computations does not presume any special properties of the system, such as small damping, orthogonal symmetry, etc.

  20. Quantum error correction for beginners.

    Science.gov (United States)

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
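    As a concrete illustration of the kind of worked example such introductions rely on, here is a small state-vector simulation of the three-qubit bit-flip repetition code; it is an illustrative sketch, not code from the article, and the helper names and qubit ordering are assumptions. It encodes one logical qubit into three, injects a single X error, reads the two parity syndromes, and applies the indicated correction.

```python
import numpy as np

# Three-qubit bit-flip repetition code on a plain state vector (qubit 0 is the
# most significant bit of the basis-state index). Helper names are illustrative.

def x_on(state, qubit):
    """Apply a Pauli-X (bit flip) to one qubit of a 3-qubit state vector."""
    flipped = np.arange(8) ^ (1 << (2 - qubit))
    return state[flipped]

def syndrome(state):
    """Read the Z0Z1 and Z1Z2 parities (deterministic for at most one X error)."""
    idx = int(np.argmax(np.abs(state)))              # any basis state in the support
    bits = [(idx >> (2 - q)) & 1 for q in range(3)]
    return bits[0] ^ bits[1], bits[1] ^ bits[2]

# Encode a|0> + b|1>  ->  a|000> + b|111>
a, b = 0.6, 0.8
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = a, b

corrupted = x_on(encoded, 1)                         # single bit-flip error on qubit 1

# Syndrome lookup: (1,0) -> flip qubit 0, (1,1) -> qubit 1, (0,1) -> qubit 2
lookup = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
flip = lookup[syndrome(corrupted)]
recovered = corrupted if flip is None else x_on(corrupted, flip)

print("recovered == encoded:", np.allclose(recovered, encoded))
```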

  1. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), which is a familiar tool for quality control agents.

  2. Compensatory neurofuzzy model for discrete data classification in biomedical

    Science.gov (United States)

    Ceylan, Rahime

    2015-03-01

    Biomedical data is divided into two main categories: signals and discrete data. Studies in this area therefore concern either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. Likewise, many models in the literature classify discrete data taken as sample values, which can be the results of blood analysis or biopsy in the medical process. Not every algorithm achieves a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier in which the parameters of the fuzzy systems are updated by a backpropagation algorithm. The classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate on the breast cancer dataset and a 69.08% accuracy rate on the diabetes dataset with only 10 iterations.

  3. Key-phrase based classification of public health web pages.

    Science.gov (United States)

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key phrase extraction and matching. Easily extendible both to new classes and to new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public health related web pages created by double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired precision or recall can be achieved.
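    A toy sketch of the key-phrase matching idea follows; the class names, phrase lists and threshold are invented for illustration and are not the ones used in the paper.

```python
# Toy key-phrase classifier with a score threshold; the phrase lists and the
# threshold value are invented examples, not those used in the paper.
CLASS_PHRASES = {
    "nutrition": ["balanced diet", "vitamin", "calorie intake"],
    "vaccination": ["vaccine", "immunization schedule", "booster dose"],
}

def classify(text, threshold=1):
    """Return every class whose matched-phrase count reaches the threshold."""
    text = text.lower()
    scores = {label: sum(phrase in text for phrase in phrases)
              for label, phrases in CLASS_PHRASES.items()}
    return [label for label, score in scores.items() if score >= threshold]

print(classify("A balanced diet and adequate vitamin D support immune health."))
# -> ['nutrition']; raising the threshold trades recall for higher precision
```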

  4. Words semantic orientation classification based on HowNet

    Institute of Scientific and Technical Information of China (English)

    LI Dun; MA Yong-tao; GUO Jian-li

    2009-01-01

    Based on the text orientation classification, a new measurement approach to semantic orientation of words was proposed. According to the integrated and detailed definition of words in HowNet, seed sets including the words with intense orientations were built up. The orientation similarity between the seed words and the given word was then calculated using the sentiment weight priority to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the measurement approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.

  5. Development of a classification system for cup anemometers - CLASSCUP

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels

    2003-01-01

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show a significant and unacceptable difference. The European CLASSCUP research project posed the objectives to quantify the errors associated with the use of cup anemometers, to determine the requirements for an optimum design of a cup anemometer, and to develop a classification system for quantification of systematic errors of cup anemometers. The present report describes this proposed classification system. A classification method for cup anemometers has been developed, which proposes general external operational ranges to be used. A normal category range connected to ideal sites of the IEC power performance standard was made, and another extended category range for complex terrain…

  6. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper presents an automatic image classification and outlier identification method based on the density of super-pixel cluster centers. Pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because working at the pixel level dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that the method requires no human intervention, categorizes images faster than the plain density clustering algorithm, and achieves effective automated classification and outlier extraction.
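    The density-and-distance computation at the heart of this kind of density-peak clustering can be sketched as follows; it is an illustrative implementation over stand-in super-pixel features, and the cutoff distance and normalization are assumptions rather than the paper's exact rule.

```python
import numpy as np

# Density/distance computation used in density-peak clustering, applied to
# (row, column, gray value) features of super-pixel centers. The toy data and
# the cutoff-distance heuristic are illustrative assumptions.
rng = np.random.default_rng(1)
features = rng.uniform(0, 255, size=(200, 3))        # stand-in super-pixel features

dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
d_c = np.percentile(dist, 2)                          # cutoff distance (heuristic)

rho = (dist < d_c).sum(axis=1) - 1                    # local density of each point
delta = np.empty(len(rho))                            # distance to nearest denser point
for i in range(len(rho)):
    denser = np.where(rho > rho[i])[0]
    delta[i] = dist[i].max() if denser.size == 0 else dist[i, denser].min()

# High normalized rho*delta flags cluster-center candidates; low rho combined
# with high delta flags outlier candidates.
score = (rho / rho.max()) * (delta / delta.max())
print("candidate cluster centers:", np.argsort(score)[-5:])
```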

  7. NCLB: Achievement Robin Hood?

    Science.gov (United States)

    Bracey, Gerald W.

    2008-01-01

    In his "Wall Street Journal" op-ed on the 25th anniversary of "A Nation At Risk", former assistant secretary of education Chester E. Finn Jr. applauded the report for turning U.S. education away from equality and toward achievement. It was not surprising, then, that in mid-2008, Finn arranged a conference to examine the…

  8. Cognitive Processes and Achievement.

    Science.gov (United States)

    Hunt, Dennis; Randhawa, Bikkar S.

    For a group of 165 fourth- and fifth-grade students, four achievement test scores were correlated with success on nine tests designed to measure three cognitive functions: sustained attention, successive processing, and simultaneous processing. This experiment was designed in accordance with Luria's model of the three functional units of the…

  9. Classification of waste packages

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, H.P.; Sauer, M.; Rojahn, T. [Versuchsatomkraftwerk GmbH, Kahl am Main (Germany)

    2001-07-01

    A barrel gamma scanning unit has been in use at the VAK for the classification of radioactive waste materials since 1998. The unit provides the facility operator with the data required for classification of waste barrels. Once these data have been entered into the AVK data processing system, the radiological status of raw waste as well as pre-treated and processed waste can be tracked from the point of origin to the point at which the waste is delivered to final storage. Since the barrel gamma scanning unit was commissioned in 1998, approximately 900 barrels have been measured and the relevant data required for classification collected and analyzed. Based on positive experience with the mobile barrel gamma scanning unit, the VAK now offers the classification of barrels as a service to external users. Depending upon waste quantity accumulation, this measurement unit offers facility operators a reliable, time-saving and cost-effective means of identifying and documenting the radioactivity inventory of barrels scheduled for final storage. (orig.)

  10. Improving Student Question Classification

    Science.gov (United States)

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  11. Event Classification using Concepts

    NARCIS (Netherlands)

    Boer, M.H.T. de; Schutte, K.; Kraaij, W.

    2013-01-01

    The semantic gap is one of the challenges in the GOOSE project. In this paper a Semantic Event Classification (SEC) system is proposed as an initial step in tackling the semantic gap challenge in the GOOSE project. This system uses semantic text analysis, multiple feature detectors using the BoW mod

  12. Nearest convex hull classification

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); P.J.F. Groenen (Patrick); J.C. Bioch (Cor)

    2006-01-01

    Consider the classification task of assigning a test object to one of two or more possible groups, or classes. An intuitive way to proceed is to assign the object to that class, to which the distance is minimal. As a distance measure to a class, we propose here to use the distance to the

  13. Classification of myocardial infarction

    DEFF Research Database (Denmark)

    Saaby, Lotte; Poulsen, Tina Svenstrup; Hosbond, Susanne Elisabeth;

    2013-01-01

    The classification of myocardial infarction into 5 types was introduced in 2007 as an important component of the universal definition. In contrast to the plaque rupture-related type 1 myocardial infarction, type 2 myocardial infarction is considered to be caused by an imbalance between demand and...

  14. Recurrent neural collective classification.

    Science.gov (United States)

    Monner, Derek D; Reggia, James A

    2013-12-01

    With the recent surge in availability of data sets containing not only individual attributes but also relationships, classification techniques that take advantage of predictive relationship information have gained in popularity. The most popular existing collective classification techniques have a number of limitations: some of them generate arbitrary and potentially lossy summaries of the relationship data, whereas others ignore directionality and strength of relationships. Popular existing techniques make use of only direct neighbor relationships when classifying a given entity, ignoring potentially useful information contained in expanded neighborhoods of radius greater than one. We present a new technique that we call recurrent neural collective classification (RNCC), which avoids arbitrary summarization, uses information about relationship directionality and strength, and through recursive encoding, learns to leverage larger relational neighborhoods around each entity. Experiments with synthetic data sets show that RNCC can make effective use of relationship data for both direct and expanded neighborhoods. Further experiments demonstrate that our technique outperforms previously published results of several collective classification methods on a number of real-world data sets.

  15. Sandwich classification theorem

    Directory of Open Access Journals (Sweden)

    Alexey Stepanov

    2015-09-01

    The present note arises from the author's talk at the conference “Ischia Group Theory 2014”. For subgroups F ≤ N of a group G, denote by Lat(F,N) the set of all subgroups of N containing F. Let D be a subgroup of G. In this note we study the lattice LL = Lat(D,G) and the lattice LL′ of subgroups of G normalized by D. We say that LL satisfies the sandwich classification theorem if LL splits into a disjoint union of sandwiches Lat(F, N_G(F)) over all subgroups F such that the normal closure of D in F coincides with F. Here N_G(F) denotes the normalizer of F in G. A similar notion of sandwich classification is introduced for the lattice LL′. If D is perfect, i.e. coincides with its commutator subgroup, then it turns out that the sandwich classification theorems for LL and LL′ are equivalent. We also show how to find the basic subgroup F of the sandwiches for LL′ and review sandwich classification theorems in algebraic groups over rings.

  16. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    …as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics … in the process as well as modeling dependences between attributes…

  17. Classifications in popular music

    NARCIS (Netherlands)

    van Venrooij, A.; Schmutz, V.; Wright, J.D.

    2015-01-01

    The categorical system of popular music, such as genre categories, is a highly differentiated and dynamic classification system. In this article we present work that studies different aspects of these categorical systems in popular music. Following the work of Paul DiMaggio, we focus on four questio

  18. Shark Teeth Classification

    Science.gov (United States)

    Brown, Tom; Creel, Sally; Lee, Velda

    2009-01-01

    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  19. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider … genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models…

  20. Effects of Classroom Sociometric Status on Achievement Prediction.

    Science.gov (United States)

    Peper, John B.

    The purpose of the study was to determine the relative importance of: (1) generalized ability; (2) prior specific learning; (3) self concept; (4) peer esteem; and (5) teacher esteem for pupils on the prediction of arithmetic achievement. The study included proportional numbers of fifth grade students from four community classification strata…

  1. Harmless error analysis: How do judges respond to confession errors?

    Science.gov (United States)

    Wallace, D Brian; Kassin, Saul M

    2012-04-01

    In Arizona v. Fulminante (1991), the U.S. Supreme Court opened the door for appellate judges to conduct a harmless error analysis of erroneously admitted, coerced confessions. In this study, 132 judges from three states read a murder case summary, evaluated the defendant's guilt, assessed the voluntariness of his confession, and responded to implicit and explicit measures of harmless error. Results indicated that judges found a high-pressure confession to be coerced and hence improperly admitted into evidence. As in studies with mock jurors, however, the improper confession significantly increased their conviction rate in the absence of other evidence. On the harmless error measures, judges successfully overruled the confession when required to do so, indicating that they are capable of this analysis.

  2. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Directory of Open Access Journals (Sweden)

    Hongxia Li

    2013-08-01

    With the development of computer science and information technology, libraries are moving toward digitization and networking. The library digitization process converts books into digital information, whose high-quality preservation and management are achieved by computer technology together with text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, realizing independent component analysis for complex network text classification. Through ICA clustering of the independent components, clustering extraction of characteristic words for text classification is realized, and the visualization of text retrieval is improved. Finally, we make a comparative analysis of a collocation algorithm and the ICA clustering algorithm through text classification and keyword search experiments, reporting the clustering degree and accuracy of each algorithm. Through simulation analysis, we find that the ICA clustering algorithm improves the clustering degree of text classification by 1.2% and accuracy by up to 11.1%, improving the efficiency and accuracy of text classification retrieval. It also provides a theoretical reference for text retrieval classification of eBooks.
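    The paper does not spell out its pipeline in detail here, so the following is only a generic sketch of ICA-based text clustering with scikit-learn (TF-IDF features, FastICA components, then k-means); the tiny corpus and the component and cluster counts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

# Generic ICA-based text clustering sketch: TF-IDF -> FastICA -> k-means.
# The corpus and the component/cluster counts are invented for illustration.
docs = [
    "digital library catalogue search and retrieval",
    "book digitization and archive preservation",
    "support vector machines for text classification",
    "keyword extraction and document clustering",
]

tfidf = TfidfVectorizer().fit_transform(docs).toarray()
components = FastICA(n_components=3, random_state=0).fit_transform(tfidf)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)
print(labels)      # cluster assignment for each document
```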

  3. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust, B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Linguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  4. Pauli Exchange Errors in Quantum Computation

    CERN Document Server

    Ruskai, M B

    2000-01-01

    We argue that a physically reasonable model of fault-tolerant computation requires the ability to correct a type of two-qubit error which we call Pauli exchange errors as well as one qubit errors. We give an explicit 9-qubit code which can handle both Pauli exchange errors and all one-bit errors.

  5. Surgical options for correction of refractive error following cataract surgery.

    Science.gov (United States)

    Abdelghany, Ahmed A; Alio, Jorge L

    2014-01-01

    Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.

  6. Overview of Quantum Error Prevention and Leakage Elimination

    CERN Document Server

    Byrd, M S; Lidar, D A; Byrd, Mark S.; Wu, Lian-Ao; Lidar, Daniel A.

    2004-01-01

    Quantum error prevention strategies will be required to produce a scalable quantum computing device and are of central importance in this regard. Progress in this area has been quite rapid in the past few years. In order to provide an overview of the achievements in this area, we discuss the three major classes of error prevention strategies, the abilities of these methods and the shortcomings. We then discuss the combinations of these strategies which have recently been proposed in the literature. Finally we present recent results in reducing errors on encoded subspaces using decoupling controls. We show how to generally remove mixing of an encoded subspace with external states (termed leakage errors) using decoupling controls. Such controls are known as “leakage elimination operations” or “LEOs”.

  7. Efficient Fingercode Classification

    Science.gov (United States)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems e. g. systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against each of the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by firstly classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. The fingerprint classification algorithm is based on the fingercode representation which is an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research firstly investigates the various fast search algorithms in vector quantization (VQ) and the potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.

  8. Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring

    Science.gov (United States)

    Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat

    2015-01-01

    We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863

  9. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2008-11-01

    This paper presents a new vehicle classification method and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is based on measuring the dynamic strain caused by vehicles across pavement to obtain the corresponding vehicle parameters – wheelbase and number of axles – to then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. Thanks to the special arrangement of the sensors and the different times at which a vehicle arrives at the sensors, one can estimate the vehicle’s speed accurately, along with the corresponding vehicle wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is a lot of overlap between vehicle wheelbase patterns; therefore, directly setting up a fixed threshold for vehicle classification often leads to low-accuracy results. Machine learning pattern recognition methods are believed to be among the most effective tools for dealing with this problem. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors to automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments are introduced in this paper. Two support vector machine classification algorithms (one-against-all, one-against-one) are used to classify single sensor data and multiple sensor combination data. Comparison of the two classification methods' results shows that the classification accuracy is very similar using single or multiple sensor data. Our results indicate that multiclass SVM-based fusion of multiple sensor data significantly improves
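    The two multiclass SVM strategies compared above can be sketched with scikit-learn as follows; the synthetic wheelbase and axle-count features are placeholders, not the strain-sensor measurements used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

# One-against-one vs. one-against-all SVM classification on synthetic
# stand-ins for the (wheelbase, axle count) features; data are placeholders.
rng = np.random.default_rng(0)
n = 500
wheelbase = rng.uniform(2.0, 12.0, n)                    # metres
axles = rng.integers(2, 6, n).astype(float)
X = np.column_stack([wheelbase, axles])
y = np.digitize(wheelbase, [3.0, 5.0, 7.5, 9.5])         # five toy vehicle classes

ovo = SVC(kernel="rbf")                                  # SVC is one-vs-one internally
ovr = OneVsRestClassifier(SVC(kernel="rbf"))             # explicit one-vs-all wrapper

for name, clf in [("one-against-one", ovo), ("one-against-all", ovr)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```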

  10. Energy efficient error-correcting coding for wireless systems

    NARCIS (Netherlands)

    Shao, Xiaoying

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal does not only suffers multi-path fading but also noise and interference from other users of the wireless channel. That causes unreliable communications. To achieve high-quality communications, error correcting coding is required t

  11. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  12. On quantifying post-classification subpixel landcover changes

    Science.gov (United States)

    Silván-Cárdenas, Jose L.; Wang, Le

    2014-12-01

    The post-classification change matrix is well defined for hard classifications. However, for soft classifications, where partial membership of pixels to landcover classes is allowed, there is no single definition for such a matrix. In this paper, we argue that a natural definition of the post-classification change matrix for subpixel classifications can be done in terms of a constrained optimization problem, according to which the change matrix should allow an optimal prediction of the subpixel landcover fractions at the latest date from those of the earliest date. We first show that the traditional change matrix for crisp classification corresponds to the optimal solution of the unconstrained problem. Then, the formulation is generalized for subpixel classifications by incorporating certain constraints pertaining to desirable properties of a change matrix, thus resulting in a constrained least square (CLS) change matrix. In addition, based on intuitive criteria, a generalized product (GPROD) was parameterized in terms of an exponent parameter and used to derive another change matrix. It was shown that when the exponent parameter of the GPROD operator tends to infinity, one of the most commonly used methods for map comparison from subpixel fractions, namely the MINPROD composite operator, results. The three matrices (CLS, GPROD and MINPROD) were tested on both simulated and real subpixel changes derived from QuickBird and Landsat TM images. Results indicated that, for small exponent values (0-0.5), the GPROD matrix yielded the lowest errors of estimated landcover changes, whereas the MINPROD generally yielded the highest errors for the same estimations.

  13. A-posteriori error estimation for second order mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard

    2012-01-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  14. Error-thresholds for qudit-based topological quantum memories

    Science.gov (United States)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2014-03-01

    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.

  15. Using quadtree segmentation to support error modelling in categorical raster data

    NARCIS (Netherlands)

    Bruin, de S.; Wit, de A.J.W.; Oort, van P.A.J.

    2004-01-01

    This paper explores the use of quadtree segmentation of a land-cover map to improve error modelling by (1) accounting for variation in classification accuracy among differently sized homogeneous map regions and (2) improving the statistical properties of map realizations generated by sequential indi

  16. Updated Classification System for Proximal Humeral Fractures

    Science.gov (United States)

    Guix, José M. Mora; Pedrós, Juan Sala; Serrano, Alejandro Castaño

    2009-01-01

    Proximal humeral fractures can restrict daily activities and, therefore, deserve efficient diagnoses that minimize complications and sequels. For good diagnosis and treatment, patient characteristics, variability in the forms of the fractures presented, and the technical difficulties in achieving fair results with surgical treatment should all be taken into account. Current classification systems for these fractures are based on anatomical and pathological principles, and not on systematic image reading. These fractures can appear in many different forms, with many characteristics that must be identified. However, many current classification systems lack good reliability, both inter-observer and intra-observer for different image types. A new approach to image reading, following a well-designed set and sequence of variables to check, is needed. We previously reported such an image reading system. In the present study, we report a classification system based on this image reading system. Here we define 21 fracture characteristics and apply them along with classical Codman approaches to classify fractures. We base this novel classification system for classifying proximal humeral fractures on a review of scientific literature and improvements to our image reading protocol. Patient status, fracture characteristics and surgeon circumstances have been important issues in developing this system. PMID:19574487

  17. Classification of Sporting Activities Using Smartphone Accelerometers

    Directory of Open Access Journals (Sweden)

    Noel E. O'Connor

    2013-04-01

    In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT). Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today’s society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches: a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset, comprised of soccer and field-hockey activities. The average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
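    A sketch of the DWT feature extraction step followed by a standard classifier is given below (PyWavelets plus scikit-learn); the wavelet, window length, decomposition level and synthetic signals are stand-ins rather than the paper's settings.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# DWT feature extraction from accelerometer-like windows, then an SVM.
# Wavelet, window length, level and the synthetic signals are stand-ins.
rng = np.random.default_rng(0)

def window_features(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:                       # per-band energy and spread
        feats.extend([np.sum(band ** 2), np.std(band)])
    return np.array(feats)

def make_window(freq, n=256, fs=100.0):
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

# Two toy "activities" that differ in dominant movement frequency
X = np.array([window_features(make_window(f)) for f in [2.0] * 60 + [15.0] * 60])
y = np.array([0] * 60 + [1] * 60)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```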

  18. Multiclass Bayes error estimation by a feature space sampling technique

    Science.gov (United States)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
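    The paper computes the minimum error by analytical and numerical integration; a brute-force Monte Carlo estimate, sketched below with toy class statistics, provides an easy cross-check since it too needs only the class means, covariances and priors.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Brute-force Monte Carlo estimate of the multiclass Gaussian Bayes error.
# Only the class statistics are needed; the values below are toy inputs,
# and this is a cross-check rather than the paper's integration scheme.
rng = np.random.default_rng(0)

means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.5])]
covs = [np.eye(2), np.diag([1.5, 0.5]), 0.8 * np.eye(2)]
priors = np.array([0.5, 0.3, 0.2])
n = 100_000

# Draw labels from the priors, then features from the class-conditional Gaussians
labels = rng.choice(len(priors), size=n, p=priors)
x = np.empty((n, 2))
for k, (m, c) in enumerate(zip(means, covs)):
    mask = labels == k
    x[mask] = rng.multivariate_normal(m, c, size=int(mask.sum()))

# Bayes (minimum-error) rule: assign each point to the class maximizing prior * likelihood
post = np.column_stack([p * multivariate_normal(m, c).pdf(x)
                        for p, m, c in zip(priors, means, covs)])
print("estimated Bayes error:", np.mean(post.argmax(axis=1) != labels))
```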

  19. A Conceptual Framework to use Remediation of Errors Based on Multiple External Remediation Applied to Learning Objects

    Directory of Open Access Journals (Sweden)

    Maici Duarte Leite

    2014-09-01

    This paper presents the application of some concepts of Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses the remediation of errors with Multiple External Representations (MERs) in Learning Objects (LO). To this end, the development of an LO for teaching the Pythagorean Theorem through this framework is demonstrated. The study explored the error remediation process using a classification of mathematical errors, providing support for the use of MERs in error remediation. The main objective of the proposed framework is to assist the individual learner in recovering from a mistake made during interaction with the LO, whether through carelessness or lack of knowledge. Initially, we present the compilation of the classification of mathematical errors and their relationship with MERs. Then the concepts involved in the proposed conceptual framework are presented. Finally, an experiment with an LO developed with an authoring tool called FARMA, using the conceptual framework to teach the Pythagorean Theorem, is presented.

  20. Towards automatic classification of all WISE sources

    Science.gov (United States)

    Kurcz, A.; Bilicki, M.; Solarz, A.; Krupa, M.; Pollo, A.; Małek, K.

    2016-07-01

    Context. The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky. Classifying them reliably is, however, a challenging task owing to degeneracies in WISE multicolour space and low levels of detection in its two longest-wavelength bandpasses. Simple colour cuts are often not sufficient; for satisfactory levels of completeness and purity, more sophisticated classification methods are needed. Aims: Here we aim to obtain comprehensive and reliable star, galaxy, and quasar catalogues based on automatic source classification in full-sky WISE data. This means that the final classification will employ only parameters available from WISE itself, in particular those which are reliably measured for the majority of sources. Methods: For the automatic classification we applied a supervised machine learning algorithm, support vector machines (SVM). It requires a training sample with relevant classes already identified, and we chose to use the SDSS spectroscopic dataset (DR10) for that purpose. We tested the performance of two kernels used by the classifier, and determined the minimum number of sources in the training set required to achieve stable classification, as well as the minimum dimension of the parameter space. We also tested SVM classification accuracy as a function of extinction and apparent magnitude. Thus, the calibrated classifier was finally applied to all-sky WISE data, flux-limited to 16 mag (Vega) in the 3.4 μm channel. Results: By calibrating on the test data drawn from SDSS, we first established that a polynomial kernel is preferred over a radial one for this particular dataset. Next, using three classification parameters (W1 magnitude, W1-W2 colour, and a differential aperture magnitude) we obtained very good classification efficiency in all the tests. At the bright end, the completeness for stars and galaxies reaches ~95%, deteriorating to ~80% at W1 = 16 mag, while for quasars it stays at a level of
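    The kernel comparison step described above can be sketched as follows; the three synthetic features stand in for W1, W1-W2 and the differential aperture magnitude, and are not WISE or SDSS data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Compare a polynomial and a radial (RBF) SVM on three photometric parameters.
# The synthetic W1, W1-W2 and differential-aperture-magnitude values are
# stand-ins for the real WISE/SDSS training data.
rng = np.random.default_rng(0)
n = 300
centers = np.array([[13.0, 0.1, 0.0],    # "stars"
                    [15.0, 0.6, 0.4],    # "galaxies"
                    [15.5, 1.1, 0.1]])   # "quasars"
X = np.vstack([c + rng.normal(scale=0.3, size=(n, 3)) for c in centers])
y = np.repeat([0, 1, 2], n)

for kernel in ("poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, degree=3, C=1.0))
    print(kernel, cross_val_score(clf, X, y, cv=5).mean())
```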

  1. Achieving Organizational Excellence Through

    Directory of Open Access Journals (Sweden)

    Mehdi Abzari

    2009-04-01

    Today, in order to create motivation and desirable behavior in employees, to obtain organizational goals, to increase human resources productivity and finally to achieve organizational excellence, top managers of organizations apply new and effective strategies. One of these strategies to achieve organizational excellence is creating a desirable corporate culture. This research has been conducted to identify the path to reach organizational excellence by creating corporate culture according to the standards and criteria of organizational excellence. The result of this research is this paper, in which the researchers found twenty models and components of corporate culture and, based on the industry, organizational goals and the EFQM model, developed a model called "The Eskimo model of Culture-Excellence". The method of the research is survey and field study, and the questionnaires were distributed among 116 managers and employees. To assess the reliability of the questionnaires, Cronbach's alpha was measured to be 0.95 in the ideal situation and 0.97 in the current situation. Systematic sampling was done and in the pre-test stage 45 questionnaires were distributed. A comparison between the current and the ideal corporate culture based on the views of managers and employees was done, and finally it has been concluded that corporate culture is the main factor facilitating corporate excellence and success in order to achieve organizational effectiveness. The contribution of this paper is that it proposes a localized, applicable model of corporate excellence through reinforcing corporate culture.

  2. SAR images classification method based on Dempster-Shafer theory and kernel estimate

    Institute of Scientific and Technical Information of China (English)

    He Chu; Xia Guisong; Sun Hong

    2007-01-01

    To study the scene classification in the Synthetic Aperture Radar (SAR) image, a novel method based on kernel estimate, with the Markov context and Dempster-Shafer evidence theory is proposed. Initially, a nonparametric Probability Density Function (PDF) estimate method is introduced, to describe the scene of SAR images. And then under the Markov context, both the determinate PDF and the kernel estimate method are adopted respectively, to form a primary classification. Next, the primary classification results are fused using the evidence theory in an unsupervised way to get the scene classification. Finally, a regularization step is used, in which an iterated maximum selecting approach is introduced to control the fragments and modify the errors of the classification. Use of the kernel estimate and evidence theory can describe the complicated scenes with little prior knowledge and eliminate the ambiguities of the primary classification results. Experimental results on real SAR images illustrate a rather impressive performance.

  3. Reliable and reproducible classification system for scoliotic radiograph using image processing techniques.

    Science.gov (United States)

    Anitha, H; Prabhu, G K; Karunakar, A K

    2014-11-01

    Scoliosis classification is useful for guiding treatment and testing clinical outcomes. State-of-the-art classification procedures are inherently unreliable and non-reproducible due to technical and human judgmental error. In the current diagnostic system each examiner will have a diagrammatic summary of the classification procedure, the number of scoliosis curves, the apex level, etc. It is very difficult to define the required anatomical parameters in noisy radiographs. The classification task demands an automatic image understanding system. The proposed automated classification procedure extracts the anatomical features using image processing and applies classification procedures based on computer-assisted algorithms. The reliability and reproducibility of the proposed computerized image understanding system are compared with the manual and computer-assisted systems using Kappa values.

  4. The generalization ability of online SVM classification based on Markov sampling.

    Science.gov (United States)

    Xu, Jie; Yan Tang, Yuan; Zou, Bin; Xu, Zongben; Li, Luoqing; Lu, Yang

    2015-03-01

    In this paper, we consider online support vector machine (SVM) classification learning algorithms with uniformly ergodic Markov chain (u.e.M.c.) samples. We establish the bound on the misclassification error of an online SVM classification algorithm with u.e.M.c. samples based on reproducing kernel Hilbert spaces and obtain a satisfactory convergence rate. We also introduce a novel online SVM classification algorithm based on Markov sampling, and present the numerical studies on the learning ability of online SVM classification based on Markov sampling for benchmark repository. The numerical studies show that the learning performance of the online SVM classification algorithm based on Markov sampling is better than that of classical online SVM classification based on random sampling as the size of training samples is larger.

  5. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    Science.gov (United States)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  6. POSITION ERROR IN STATION-KEEPING SATELLITE

    Science.gov (United States)

    of an error in satellite orientation and the sun being in a plane other than the equatorial plane may result in errors in position determination. The nature of the errors involved is described and their magnitudes estimated.

  7. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  8. Redundant measurements for controlling errors

    Energy Technology Data Exchange (ETDEWEB)

    Ehinger, M. H.; Crawford, J. M.; Madeen, M. L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program.

  9. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  10. Automatic figure classification in bioscience literature.

    Science.gov (United States)

    Kim, Daehyun; Ramesh, Balaji Polepalli; Yu, Hong

    2011-10-01

    Millions of figures appear in biomedical articles, and it is important to develop an intelligent figure search engine to return relevant figures based on user entries. In this study we report a figure classifier that automatically classifies biomedical figures into five predefined figure types: Gel-image, Image-of-thing, Graph, Model, and Mix. The classifier explored rich image features and integrated them with text features. We performed feature selection and explored different classification models, including a rule-based figure classifier, a supervised machine-learning classifier, and a multi-model classifier, the latter of which integrated the first two classifiers. Our results show that feature selection improved figure classification and the novel image features we explored were the best among image features that we have examined. Our results also show that integrating text and image features achieved better performance than using either of them individually. The best system is a multi-model classifier which combines the rule-based hierarchical classifier and a support vector machine (SVM) based classifier, achieving a 76.7% F1-score for five-type classification. We demonstrated our system at http://figureclassification.askhermes.org/.
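    As a rough illustration of the multi-model idea described above, the sketch below lets a rule-based stage decide figures it can label confidently and falls back to an SVM over combined text and image features otherwise. The rules, feature names, and thresholds are hypothetical; the actual system's features and hierarchy are not reproduced here.

```python
# Hedged sketch of a multi-model figure classifier: rule-based stage first, SVM fallback.
# Rules, feature names and thresholds are illustrative assumptions.
from sklearn.svm import SVC

LABELS = ["Gel-image", "Image-of-thing", "Graph", "Model", "Mix"]

def rule_based(features):
    """Return a label if a simple rule fires, else None (illustrative rules only)."""
    if features.get("caption_contains_gel"):
        return "Gel-image"
    if features.get("axis_line_count", 0) >= 2:
        return "Graph"
    return None

class MultiModelFigureClassifier:
    def __init__(self):
        self.svm = SVC()

    def fit(self, X, y):                 # X: numeric text+image feature vectors
        self.svm.fit(X, y)
        return self

    def predict_one(self, features, x_vec):
        label = rule_based(features)     # try the rule-based stage first
        return label if label is not None else self.svm.predict([x_vec])[0]
```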

  11. Errors depending on costs in sample surveys

    OpenAIRE

    Marella, Daniela

    2007-01-01

    "This paper presents a total survey error model that simultaneously treats sampling error, nonresponse error and measurement error. The main aim for developing the model is to determine the optimal allocation of the available resources for the total survey error reduction. More precisely, the paper is concerned with obtaining the best possible accuracy in survey estimate through an overall economic balance between sampling and nonsampling error." (author's abstract)

  12. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  13. Error-tolerant Tree Matching

    CERN Document Server

    Oflazer, K

    1996-01-01

    This paper presents an efficient algorithm for retrieving from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error, in a matter of tenths of a second to few seconds.

  14. Most Used Rock Mass Classifications for Underground Opening

    Directory of Open Access Journals (Sweden)

    Al-Jbori A’ssim

    2010-01-01

    Full Text Available Problem statement: Rock mass characterization is an integral part of rock engineering practice. The empirical design methods based on rock mass classification systems provide quick assessments of the support requirements for underground excavations at any stage of a project, even if the available geotechnical data are limited. The underground excavation industry tends to lean on empirical approaches such as rock mass classification methods, which provide a rapid means of assessing rock mass quality and support requirements. Approach: There are several classification systems used in underground construction design. This study reviewed and summarized the most used classification methods in mining and tunneling. Results: The underground excavation classification methods were collected, together with the parameter calculation procedures for each one, in an attempt to identify the simplest, least costly and most efficient method. Conclusion: The study concluded with reference to errors that may arise in particular conditions; the choice of rock mass classification depends on the sensitivity of the project, its costs and its efficiency.

  15. Principal components null space analysis for image and video classification.

    Science.gov (United States)

    Vaswani, Namrata; Chellappa, Rama

    2006-07-01

    We present a new classification algorithm, principal component null space analysis (PCNSA), which is designed for classification problems like object recognition where different classes have unequal and nonwhite noise covariance matrices. PCNSA first obtains a principal components subspace (PCA space) for the entire data. In this PCA space, it finds for each class "i," an Mi-dimensional subspace along which the class' intraclass variance is the smallest. We call this subspace an approximate null space (ANS) since the lowest variance is usually "much smaller" than the highest. A query is classified into class "i" if its distance from the class' mean in the class' ANS is a minimum. We derive upper bounds on classification error probability of PCNSA and use these expressions to compare classification performance of PCNSA with that of subspace linear discriminant analysis (SLDA). We propose a practical modification of PCNSA called progressive-PCNSA that also detects "new" (untrained classes). Finally, we provide an experimental comparison of PCNSA and progressive PCNSA with SLDA and PCA and also with other classification algorithms-linear SVMs, kernel PCA, kernel discriminant analysis, and kernel SLDA, for object recognition and face recognition under large pose/expression variation. We also show applications of PCNSA to two classification problems in video--an action retrieval problem and abnormal activity detection.
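    The following is a compact sketch of the PCNSA procedure as described above (illustrative only, not the authors' implementation): project the data onto a global PCA subspace, find each class's approximate null space as the directions of smallest intra-class variance, and classify a query by its distance from the class mean along those directions. The dimensions n_pca and n_ans are illustrative parameters.

```python
# Rough PCNSA sketch: global PCA, per-class approximate null space, nearest-ANS-distance rule.
import numpy as np

def fit_pcnsa(X, y, n_pca=10, n_ans=3):
    mu = X.mean(axis=0)
    # Global PCA basis: top n_pca right singular vectors of the centred data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_pca]                              # (n_pca, d)
    model = {"mu": mu, "P": P, "classes": {}}
    for c in np.unique(y):
        Z = (X[y == c] - mu) @ P.T              # class samples in PCA space
        zc = Z.mean(axis=0)
        # Eigenvectors of the class covariance with the *smallest* eigenvalues
        # form the approximate null space (ANS).
        evals, V = np.linalg.eigh(np.cov(Z, rowvar=False))
        model["classes"][c] = (zc, V[:, :n_ans])
    return model

def predict_pcnsa(model, x):
    z = (x - model["mu"]) @ model["P"].T
    dists = {c: np.linalg.norm((z - zc) @ ans)  # distance along the class ANS
             for c, (zc, ans) in model["classes"].items()}
    return min(dists, key=dists.get)
```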

  16. Generalization performance of graph-based semisupervised classification

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Semi-supervised learning has been of growing interest over the past few years and many methods have been proposed. Although various algorithms are provided to implement semi-supervised learning, there are still gaps in our understanding of the dependence of generalization error on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relations between the generalization performance and the structural invariants of the data graph.

  17. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M;

    2008-01-01

    Summary Background Hand eczema is a long-lasting disease with a high prevalence in the background population. The disease has severe, negative effects on quality of life and sometimes on social status. Epidemiological studies have identified risk factors for onset and prognosis, but treatment of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system for hand eczema is needed. Objectives The present study attempts to characterize subdiagnoses of hand eczema with respect to basic demographics, medical history and morphology. Methods Clinical data from 416 patients with hand eczema from 10 European patch test clinics were assessed. Results...

  18. [New classification of vasculitis].

    Science.gov (United States)

    Anić, Branimir

    2014-01-01

    Vasculitis syndrome comprises a heterogeneous group of inflammatory rheumatic diseases whose common feature is inflammation in the blood vessel wall. Establishing the diagnosis of vasculitis is one of the greatest challenges in medicine. The clinical presentation of vasculitis depends on the extent to which an organ system is affected, as well as on the total number of affected organs. The great range of clinical presentations of vasculitis and the low incidence of the disease impede systematic clinical investigation of vasculitis. The needs of clinical routine and the need for conducting systematic clinical investigations require a clear distinction of individual clinical entities. Different classifications of vasculitis syndrome have been proposed: according to etiology, pathogenesis, histological findings in the affected vessels, and affection of individual organs and organ systems. This paper presents and comments on recent classifications and nomenclature of vasculitic entities proposed at the second Chapel Hill conference.

  19. Bosniak classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2016-01-01

    at MR and CEUS imaging and those at CT. PURPOSE: To compare diagnostic accuracy of MR, CEUS, and CT when categorizing complex renal cystic masses according to the Bosniak classification. MATERIAL AND METHODS: From February 2011 to June 2012, 46 complex renal cysts were prospectively evaluated by three...... readers. Each mass was categorized according to the Bosniak classification and CT was chosen as gold standard. Kappa was calculated for diagnostic accuracy and data was compared with pathological results. RESULTS: CT images found 27 BII, six BIIF, seven BIII, and six BIV. Forty-three cysts could...... one category lower. Pathologic correlation in six lesions revealed four malignant and two benign lesions. CONCLUSION: CEUS and MR both up- and downgraded renal cysts compared to CT, and until these non-radiation modalities have been refined and adjusted, CT should remain the gold standard...

  20. Errors in the radiological evaluation of the alimentary tract: part I.

    Science.gov (United States)

    Mandato, Ylenia; Reginelli, Alfonso; Galasso, Rosario; Iacobellis, Francesca; Berritto, Daniela; Cappabianca, Salvatore

    2012-08-01

    Physicians are subjected to an increasing number of medical malpractice claims, and radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial, deriving from poor technique, failures of perception, lack of knowledge, and misjudgments. Reducing errors will improve patient care, may reduce costs, and will improve the image of the hospital. The main reason for studying medical errors is to try to prevent them. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and highlights the malpractice issues in methods for functional alimentary tract examination: swallowing act study, 3-dimensional endoanal ultrasound, defecography, and defecography in magnetic resonance.

  1. Classification and disease prediction via mathematical programming

    Science.gov (United States)

    Lee, Eva K.; Wu, Tsung-Lin

    2007-11-01

    In this chapter, we present classification models based on mathematical programming approaches. We first provide an overview on various mathematical programming approaches, including linear programming, mixed integer programming, nonlinear programming and support vector machines. Next, we present our effort of novel optimization-based classification models that are general purpose and suitable for developing predictive rules for large heterogeneous biological and medical data sets. Our predictive model simultaneously incorporates (1) the ability to classify any number of distinct groups; (2) the ability to incorporate heterogeneous types of attributes as input; (3) a high-dimensional data transformation that eliminates noise and errors in biological data; (4) the ability to incorporate constraints to limit the rate of misclassification, and a reserved-judgment region that provides a safeguard against over-training (which tends to lead to high misclassification rates from the resulting predictive rule) and (5) successive multi-stage classification capability to handle data points placed in the reserved judgment region. To illustrate the power and flexibility of the classification model and solution engine, and its multigroup prediction capability, application of the predictive model to a broad class of biological and medical problems is described. Applications include: the differential diagnosis of the type of erythemato-squamous diseases; predicting presence/absence of heart disease; genomic analysis and prediction of aberrant CpG island methylation in human cancer; discriminant analysis of motility and morphology data in human lung carcinoma; prediction of ultrasonic cell disruption for drug delivery; identification of tumor shape and volume in treatment of sarcoma; multistage discriminant analysis of biomarkers for prediction of early atherosclerosis; fingerprinting of native and angiogenic microvascular networks for early diagnosis of diabetes, aging, macular

  2. Classification of nanopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Larena, A; Tur, A [Department of Chemical Industrial Engineering and Environment, Universidad Politecnica de Madrid, E.T.S. Ingenieros Industriales, C/ Jose Gutierrez Abascal, Madrid (Spain); Baranauskas, V [Faculdade de Engenharia Eletrica e Computacao, Departamento de Semicondutores, Instrumentos e Fotonica, Universidade Estadual de Campinas, UNICAMP, Av. Albert Einstein N.400, 13 083-852 Campinas SP Brasil (Brazil)], E-mail: alarena@etsii.upm.es

    2008-03-15

    Nanopolymers with different structures, shapes, and functional forms have recently been prepared using several techniques. Nanopolymers are the most promising basic building blocks for mounting complex and simple hierarchical nanosystems. The applications of nanopolymers are extremely broad and polymer-based nanotechnologies are fast emerging. We propose a nanopolymer classification scheme based on self-assembled structures, non self-assembled structures, and on the number of dimensions in the nanometer range (nD).

  3. Evolvement of Classification Society

    Institute of Scientific and Technical Information of China (English)

    Xu Hua

    2011-01-01

    As an independent industry, the classification society perhaps emerged at the earliest time from the mutual interests of shipowners, cargo owners and insurers. Today, as an indispensable link in the international maritime industry, the role of class has changed fundamentally. Starting from the demand of the insurers: seaborne trade, transport and insurance industries began to emerge successively in the 17th century, and the massive risk and benefit brought by seaborne transport posed a difficult problem to insurers.

  4. Classification and regression trees

    CERN Document Server

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  5. Subclinical naming errors in mild cognitive impairment: A semantic deficit?

    Directory of Open Access Journals (Sweden)

    Indra F. Willers

    Full Text Available Abstract Mild cognitive impairment (MCI) is the transitional stage between normal aging and Alzheimer's disease (AD). Impairments in semantic memory have been demonstrated to be a critical factor in early AD. The Boston Naming Test (BNT) is a straightforward method of examining semantic or visuo-perceptual processing and therefore represents a potential diagnostic tool. The objective of this study was to examine naming ability and identify error types in patients with amnestic mild cognitive impairment (aMCI). Methods: Twenty aMCI patients, twenty AD patients and twenty-one normal controls, matched by age, sex and education level were evaluated. As part of a further neuropsychological evaluation, all subjects performed the BNT. A comprehensive classification of error types was devised in order to compare performance and ascertain semantic or perceptual origin of errors. Results: AD patients obtained significantly lower total scores on the BNT than aMCI patients and controls. aMCI patients did not obtain significant differences in total scores, but showed significantly higher semantic errors compared to controls. Conclusion: This study reveals that semantic processing is impaired during confrontation naming in aMCI.

  6. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks

    CERN Document Server

    Julius,; T., Sumana; Adityakrishna, C S

    2016-01-01

    Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.

  7. Classification of Meteorological Drought

    Institute of Scientific and Technical Information of China (English)

    Zhang Qiang; Zou Xukai; Xiao Fengjin; Lu Houquan; Liu Haibo; Zhu Changhan; An Shunqing

    2011-01-01

    Background The national standard of the Classification of Meteorological Drought (GB/T 20481-2006) was developed by the National Climate Center in cooperation with the Chinese Academy of Meteorological Sciences, National Meteorological Centre and Department of Forecasting and Disaster Mitigation under the China Meteorological Administration (CMA), and was formally released and implemented in November 2006. In 2008, this Standard won the second prize of the China Standard Innovation and Contribution Awards issued by SAC. Developed through independent innovation, it is the first national standard published to monitor meteorological drought disaster and the first standard in China and around the world specifying the classification of drought. Since its release in 2006, the national standard of Classification of Meteorological Drought has been used by CMA as the operational index for drought monitoring and assessment, gradually adopted by provincial meteorological bureaus, and applied to the drought early warning release standard in the Methods of Release and Propagation of Meteorological Disaster Early Warning Signal.

  8. Short Text Classification: A Survey

    Directory of Open Access Journals (Sweden)

    Ge Song

    2014-05-01

    Full Text Available With the recent explosive growth of e-commerce and online communication, a new genre of text, short text, has been extensively applied in many areas, and much research has focused on short text mining. Classifying short text is a challenge owing to its natural characteristics, such as sparseness, large scale, immediacy and non-standardization. Traditional methods struggle with short text classification mainly because the limited number of words in a short text cannot represent the feature space and the relationships between words and documents. Several studies and reviews of text classification have appeared in recent years, but only a few focus on short text classification. This paper discusses the characteristics of short text and the difficulty of short text classification. We then introduce the existing popular work on short text classifiers and models, including short text classification using semantic analysis, semi-supervised short text classification, ensemble short text classification, and real-time classification. The evaluation of short text classification is also analyzed. Finally, we summarize the existing classification technology and discuss development trends in short text classification.

  9. Regional manifold learning for disease classification.

    Science.gov (United States)

    Ye, Dong Hye; Desjardins, Benoit; Hamm, Jihun; Litt, Harold; Pohl, Kilian M

    2014-06-01

    While manifold learning from images itself has become widely used in medical image analysis, the accuracy of existing implementations suffers from viewing each image as a single data point. To address this issue, we parcellate images into regions and then separately learn the manifold for each region. We use the regional manifolds as low-dimensional descriptors of high-dimensional morphological image features, which are then fed into a classifier to identify regions affected by disease. We produce a single ensemble decision for each scan by the weighted combination of these regional classification results. Each weight is determined by the regional accuracy of detecting the disease. When applied to cardiac magnetic resonance imaging of 50 normal controls and 50 patients with reconstructive surgery of Tetralogy of Fallot, our method achieves significantly better classification accuracy than approaches learning a single manifold across the entire image domain.
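    A hedged sketch of the regional-ensemble idea above: one classifier per image region, with each region's vote weighted by its own cross-validated accuracy. The SVC base classifier and the 5-fold weighting scheme are assumptions; the paper's regional manifold features are taken here as precomputed inputs.

```python
# Illustrative regional ensemble: per-region classifiers combined by accuracy-weighted voting.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fit_regional_ensemble(region_features, labels):
    """region_features: list of (n_samples, n_features) arrays, one per region."""
    models, weights = [], []
    for Xr in region_features:
        clf = SVC(probability=True)
        # Weight = cross-validated accuracy of this region on its own.
        weights.append(cross_val_score(clf, Xr, labels, cv=5).mean())
        models.append(clf.fit(Xr, labels))
    weights = np.array(weights) / np.sum(weights)
    return models, weights

def predict_regional_ensemble(models, weights, region_features_test):
    # Weighted average of per-region class probabilities -> single decision per scan.
    probs = sum(w * m.predict_proba(Xr)
                for m, w, Xr in zip(models, weights, region_features_test))
    return probs.argmax(axis=1)
```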

  10. Musical Instrument Timbres Classification with Spectral Features

    Directory of Open Access Journals (Sweden)

    Agostini Giulio

    2003-01-01

    Full Text Available A set of features is evaluated for recognition of musical instruments out of monophonic musical signals. Aiming to achieve a compact representation, the adopted features regard only spectral characteristics of sound and are limited in number. On top of these descriptors, various classification methods are implemented and tested. Over a dataset of 1007 tones from 27 musical instruments, support vector machines and quadratic discriminant analysis show comparable results with success rates close to 70% of successful classifications. Canonical discriminant analysis never had momentous results, while nearest neighbours performed on average among the employed classifiers. Strings have been the most misclassified instrument family, while very satisfactory results have been obtained with brass and woodwinds. The most relevant features are demonstrated to be the inharmonicity, the spectral centroid, and the energy contained in the first partial.

  11. Musical Instrument Timbres Classification with Spectral Features

    Science.gov (United States)

    Agostini, Giulio; Longari, Maurizio; Pollastri, Emanuele

    2003-12-01

    A set of features is evaluated for recognition of musical instruments out of monophonic musical signals. Aiming to achieve a compact representation, the adopted features regard only spectral characteristics of sound and are limited in number. On top of these descriptors, various classification methods are implemented and tested. Over a dataset of 1007 tones from 27 musical instruments, support vector machines and quadratic discriminant analysis show comparable results with success rates close to 70% of successful classifications. Canonical discriminant analysis never had momentous results, while nearest neighbours performed on average among the employed classifiers. Strings have been the most misclassified instrument family, while very satisfactory results have been obtained with brass and woodwinds. The most relevant features are demonstrated to be the inharmonicity, the spectral centroid, and the energy contained in the first partial.

  12. SQL based cardiovascular ultrasound image classification.

    Science.gov (United States)

    Nandagopalan, S; Suryanarayana, Adiga B; Sudarshan, T S B; Chandrashekar, Dhanalakshmi; Manjunath, C N

    2013-01-01

    This paper proposes a novel method to analyze and classify cardiovascular ultrasound echocardiographic images using a Naïve-Bayesian model via an OLAP-SQL database. Efficient data mining algorithms based on a tightly-coupled model are used to extract features. Three algorithms are proposed for classification, namely the Naïve-Bayesian Classifier for Discrete variables (NBCD) with SQL, NBCD with OLAP-SQL, and the Naïve-Bayesian Classifier for Continuous variables (NBCC) using OLAP-SQL. The proposed model is trained with 207 patient images containing normal and abnormal categories. Of the three proposed algorithms, the highest classification accuracy of 96.59% was achieved with NBCC, which is better than earlier methods.
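    As a minimal stand-in for the continuous-variable classifier (NBCC) described above, the snippet below uses scikit-learn's Gaussian Naive Bayes in place of the paper's OLAP-SQL implementation; the feature extraction from the echocardiographic images is assumed to have been done already.

```python
# Minimal stand-in for a continuous-variable Naive Bayes classifier (illustrative only).
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

clf = GaussianNB()
# X: (n_images, n_features) continuous features mined from the ultrasound images,
# y: "normal" / "abnormal" labels.
# print("CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```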

  13. Automated classification of Hipparcos unsolved variables

    CERN Document Server

    Rimoldini, L; Süveges, M; López, M; Sarro, L M; Blomme, J; De Ridder, J; Cuypers, J; Guy, L; Mowlavi, N; Lecoeur-Taïbi, I; Beck, M; Jan, A; Nienartowicz, K; Ordóñez-Blanco, D; Lebzelter, T; Eyer, L; 10.1111/j.1365-2966.2012.21752.x

    2013-01-01

    We present an automated classification of stars exhibiting periodic, non-periodic and irregular light variations. The Hipparcos catalogue of unsolved variables is employed to complement the training set of periodic variables of Dubath et al. with irregular and non-periodic representatives, leading to 3881 sources in total which describe 24 variability types. The attributes employed to characterize light-curve features are selected according to their relevance for classification. Classifier models are produced with random forests and a multistage methodology based on Bayesian networks, achieving overall misclassification rates under 12 per cent. Both classifiers are applied to predict variability types for 6051 Hipparcos variables associated with uncertain or missing types in the literature.

  14. Urdu Text Classification using Majority Voting

    Directory of Open Access Journals (Sweden)

    Muhammad Usman

    2016-08-01

    Full Text Available Text classification is a tool to assign the predefined categories to text documents using supervised machine learning algorithms. It has various practical applications like spam detection, sentiment detection, and detection of a natural language. Based on this idea we applied five well-known classification techniques to an Urdu language corpus and assigned a class to the documents using majority voting. The corpus contains 21769 news documents of seven categories (Business, Entertainment, Culture, Health, Sports, and Weird). The algorithms were not able to work directly on the data, so we applied preprocessing techniques like tokenization, stop words removal and a rule-based stemmer. After preprocessing, 93400 features are extracted from the data to apply machine learning algorithms. Furthermore, we achieved up to 94% precision and recall using majority voting.
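    A hedged sketch of the pipeline described above: TF-IDF features over the preprocessed text and a hard majority vote across five standard classifiers. The particular estimators and parameters are illustrative assumptions; the Urdu tokenizer, stop-word list and rule-based stemmer are not shown.

```python
# Illustrative majority-voting text classifier over TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

voter = VotingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", LinearSVC()),
        ("dt", DecisionTreeClassifier()),
        ("rf", RandomForestClassifier()),
    ],
    voting="hard",            # majority vote over the five predicted labels
)
model = make_pipeline(TfidfVectorizer(), voter)
# model.fit(train_texts, train_labels); preds = model.predict(test_texts)
```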

  15. Automated spectral classification using template matching

    Institute of Scientific and Technical Information of China (English)

    Fu-Qing Duan; Rong Liu; Ping Guo; Ming-Quan Zhou; Fu-Chao Wu

    2009-01-01

    An automated spectral classification technique for large sky surveys is proposed. We firstly perform spectral line matching to determine redshift candidates for an observed spectrum, and then estimate the spectral class by measuring the similarity between the observed spectrum and the shifted templates for each redshift candidate. As a byproduct of this approach, the spectral redshift can also be obtained with high accuracy. Compared with some approaches based on computerized learning methods in the literature, the proposed approach needs no training, which is time-consuming and sensitive to selection of the training set. Both simulated data and observed spectra are used to test the approach; the results show that the proposed method is efficient, and it can achieve a correct classification rate as high as 92.9%, 97.9% and 98.8% for stars, galaxies and quasars, respectively.
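    The following is an illustrative sketch of the template-matching idea above (not the authors' code): for each redshift candidate, shift each class template, resample it onto the observed wavelength grid, and score the similarity; the best-scoring template yields both the spectral class and the redshift. The similarity measure and the interface are assumptions.

```python
# Illustrative template matching over redshift candidates using normalised correlation.
import numpy as np

def classify_spectrum(wave, flux, templates, z_candidates):
    """templates: dict {class_name: (template_wave, template_flux)}."""
    best = (None, None, -np.inf)
    for z in z_candidates:
        for name, (tw, tf) in templates.items():
            # Shift the template to redshift z and resample onto the observed grid.
            shifted = np.interp(wave, tw * (1.0 + z), tf, left=np.nan, right=np.nan)
            m = ~np.isnan(shifted)
            if m.sum() < 10:
                continue
            # Correlation coefficient as the similarity measure.
            score = np.corrcoef(flux[m], shifted[m])[0, 1]
            if score > best[2]:
                best = (name, z, score)
    return best  # (class, redshift, similarity)
```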

  16. A brief history of error.

    Science.gov (United States)

    Murray, Andrew W

    2011-10-03

    The spindle checkpoint monitors chromosome alignment on the mitotic and meiotic spindle. When the checkpoint detects errors, it arrests progress of the cell cycle while it attempts to correct the mistakes. This perspective will present a brief history summarizing what we know about the checkpoint, and a list of questions we must answer before we understand it.

  17. Error processing in Huntington's disease.

    Directory of Open Access Journals (Sweden)

    Christian Beste

    Full Text Available BACKGROUND: Huntington's disease (HD is a genetic disorder expressed by a degeneration of the basal ganglia inter alia accompanied with dopaminergic alterations. These dopaminergic alterations are related to genetic factors i.e., CAG-repeat expansion. The error (related negativity (Ne/ERN, a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC and supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's Disease (PD. Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. METHODOLOGY/PRINCIPLE FINDINGS: We assessed the error negativity (Ne in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's Disease. As such the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  18. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function) or a function may show up in unexpected forms. In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some potential of SLA-oriented (non-error-based) tagging will possibly be made clearer.

  19. The error of our ways

    Science.gov (United States)

    Swartz, Clifford E.

    1999-10-01

    In Victorian literature it was usually some poor female who came to see the error of her ways. How prescient of her! How I wish that all writers of manuscripts for The Physics Teacher would come to similar recognition of this centerpiece of measurement. For, Brothers and Sisters, we all err.

  20. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents analysis of the errors made by ESP (English for specific purposes) users which have been considered as typical. They occur as a result of misuse of resources of English grammar and tend to resist. Their origin and places of occurrence have also been discussed.

  1. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  2. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  3. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  4. Amplify Errors to Minimize Them

    Science.gov (United States)

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  5. Research on new software compensation method of static and quasi-static errors for precision motion controller

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To reduce the mechanical vibrations induced by the compensation of large errors, a new software compensation method based on an improved digital differential analyzer (DDA) interpolator for static and quasi-static errors of machine tools is proposed. Based on the principle of the traditional DDA interpolator, a DDA interpolator is divided into a command generator and a command analyzer. Errors are divided into three types according to the relative positions of compensation points and interpolation segments. According to this classification, errors are distributed evenly in data processing and compensated within certain interpolation segments during machining. On-line implementation results show that the proposed approach greatly improves the positioning accuracy of computer numerical control (CNC) machine tools.
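    A toy sketch of the even-distribution idea (an assumption about the mechanism, not the paper's algorithm): rather than applying a large correction at a single compensation point, the correction is spread in small increments across the interpolation segments.

```python
# Toy sketch: distribute a static error compensation evenly over interpolation segments.
def dda_steps(start, end, n_steps, compensation=0.0):
    """Yield interpolated positions with the compensation spread evenly over the segments."""
    dx = (end - start) / n_steps
    dc = compensation / n_steps          # small per-segment correction instead of one jump
    pos = start
    for _ in range(n_steps):
        pos += dx + dc
        yield pos

# Example: interpolate from 0 to 10 mm in 100 steps while absorbing a 0.02 mm correction.
positions = list(dda_steps(0.0, 10.0, 100, compensation=0.02))
```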

  6. Error Locked Encoder and Decoder for Nanomemory Application

    Directory of Open Access Journals (Sweden)

    Y. Sharath

    2014-03-01

    Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^-18 upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^11 bit/cm with nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.

  7. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  8. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  9. Achieving form in autobiography

    Directory of Open Access Journals (Sweden)

    Nicholas (Nick Meihuizen

    2014-06-01

    Full Text Available This article argues that, unlike biographies which tend to follow patterns based on conventional expectations, salient autobiographies achieve forms unique to themselves. The article draws on ideas from contemporary formalists such as Peter McDonald and Angela Leighton but also considers ideas on significant form stemming from earlier writers and critics such as P.N. Furbank and Willa Cather. In extracting from these writers the elements of what they consider comprise achieved form, the article does not seek to provide a rigid means of objectively testing the formal attributes of a piece of writing. It rather offers qualitative reminders of the need to be alert to the importance of form, even if the precise nature of this importance is not possible to define. Form is involved in meaning, and this continuously opens up possibilities regarding the reader's relationship with the work in question. French genetic critic Debray Genette distinguishes between 'semantic effect' (the direct telling involved in writing) and 'semiological effect' (the indirect signification involved). It is the latter, the article argues in summation, which gives a work its singular nature, producing a form that is not predictable but suggestive, imaginative.

  10. Methods for data classification

    Science.gov (United States)

    Garrity, George; Lilburn, Timothy G.

    2011-10-11

    The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

  11. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
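    For illustration, a bidirectional recurrent classifier over padded light-curve sequences might look like the sketch below; this is not the released code at the linked repository, and the input shape, layer sizes and training settings are assumptions.

```python
# Hedged sketch of a bidirectional LSTM light-curve classifier (illustrative shapes/settings).
import tensorflow as tf

def build_lightcurve_rnn(max_len=100, n_features=5, n_classes=3):
    # n_features could be the observation time plus one flux value per filter.
    inputs = tf.keras.Input(shape=(max_len, n_features))
    x = tf.keras.layers.Masking(mask_value=0.0)(inputs)     # ignore zero padding
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32))(x)
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_lightcurve_rnn(); model.fit(X_padded, y, epochs=20, validation_split=0.1)
```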

  12. Assessing the Accuracy of Prediction Algorithms for Classification

    DEFF Research Database (Denmark)

    Baldi, P.; Brunak, Søren; Chauvin, Y.;

    2000-01-01

    We provide a unified overview of methods that currently are widely used to assess the accuracy of prediction algorithms, from raw percentages, quadratic error measures and other distances, and correlation coefficients, to information theoretic measures such as relative entropy and mutual information. We briefly discuss the advantages and disadvantages of each approach. For classification tasks, we derive new learning algorithms for the design of prediction systems by directly optimising the correlation coefficient. We observe and prove several results relating sensitivity and specificity

  13. Lacie phase 1 Classification and Mensuration Subsystem (CAMS) rework experiment

    Science.gov (United States)

    Chhikara, R. S.; Hsu, E. M.; Liszcz, C. J.

    1976-01-01

    An experiment was designed to test the ability of the Classification and Mensuration Subsystem rework operations to improve wheat proportion estimates for segments that had been processed previously. Sites selected for the experiment included three in Kansas and three in Texas, with the remaining five distributed in Montana and North and South Dakota. The acquisition dates were selected to be representative of imagery available in actual operations. No more than one acquisition per biophase was used, and biophases were determined by actual crop calendars. All sites were worked by each of four Analyst-Interpreter/Data Processing Analyst Teams who reviewed the initial processing of each segment and accepted or reworked it for an estimate of the proportion of small grains in the segment. Classification results, acquisitions and classification errors, and performance results between CAMS regular and ITS rework are tabulated.

  14. Achieving diagnosis by consensus

    LENUS (Irish Health Repository)

    Kane, Bridget

    2009-08-01

    This paper provides an analysis of the collaborative work conducted at a multidisciplinary medical team meeting, where a patient’s definitive diagnosis is agreed by consensus. The features that distinguish this process of diagnostic work by consensus are examined in depth. The current use of technology to support this collaborative activity is described, and experienced deficiencies are identified. Emphasis is placed on the visual and perceptual difficulty for individual specialities in making interpretations, and on how, through collaboration in discussion, definitive diagnosis is actually achieved. The challenge for providing adequate support for the multidisciplinary team at their meeting is outlined, given the multifaceted nature of the setting, i.e. patient management, educational, organizational and social functions, that need to be satisfied.

  15. Recognizing outstanding achievements

    Science.gov (United States)

    Speiss, Fred

    One function of any professional society is to provide an objective, informed means for recognizing outstanding achievements in its field. In AGU's Ocean Sciences section we have a variety of means for carrying out this duty. They include recognition of outstanding student presentations at our meetings, dedication of special sessions, nomination of individuals to be fellows of the Union, invitations to present Sverdrup lectures, and recommendations for Macelwane Medals, the Ocean Sciences Award, and the Ewing Medal.Since the decision to bestow these awards requires initiative and judgement by members of our section in addition to a deserving individual, it seems appropriate to review the selection process for each and to urge you to identify those deserving of recognition.

  16. Achieving English Spoken Fluency

    Institute of Scientific and Technical Information of China (English)

    王鲜杰

    2000-01-01

    Language is first and foremost oral, spoken language, and speaking is the most important of the four skills (listening, speaking, reading, writing) as well as the most difficult of the four. To have an all-round command of a language one must be able to speak and to understand the spoken language; it is not enough for a language learner only to have good reading and writing skills. As English language teachers, we need to focus on improving learners' English speaking skill to meet the needs of our society and our country, and to provide learners with some useful techniques for achieving spoken fluency in English. This paper focuses on how to improve learners' speaking skill.

  17. Automated laser trimming for ultralow error function GFF

    Science.gov (United States)

    Bernard, Pierre; Gregoire, Nathalie; Lafrance, Ghislain

    2003-04-01

    Gain flatness of optical amplifiers over the communication bandwidth is a key requirement of high performance optical wavelength division multiplexing (WDM) communication systems. Most often, a gain flattening filter (GFF) with a spectral response matching the inverse gain profile is incorporated within the amplifier. The chirped fiber Bragg grating (CFBG) is an attractive technology to produce GFFs, especially in cases where very low error functions are required. Error functions smaller than or equal to +/-0.1 dB for the full operating temperature range are now possible. Moreover, the systematic errors from cascaded filters are much smaller than for thin-film GFF, a factor of importance in a long chain of amplifiers. To achieve this performance level, the high-frequency ripples normally associated with CFBG-GFF have been reduced by combining state-of-the-art holographic phase masks and advanced UV-writing techniques. Lastly, to eliminate the residual low-frequency ripples and localized errors, we developed a laser annealing-trimming station. This fully automated station combines both the aging process and final trimming of the GFF refractive index profile to exactly match the required transmission spectra. The use of self-adjusting algorithms assures quick convergence of the error function within a very tight error band. The capital expenditure necessary to implement this new tool is small in relation to the gain in precision, reliability and manufacturing cycle time.

  18. Error bound results for convex inequality systems via conjugate duality

    CERN Document Server

    Bot, Radu Ioan

    2010-01-01

    The aim of this paper is to implement some new techniques, based on conjugate duality in convex optimization, for proving the existence of global error bounds for convex inequality systems. We deal first of all with systems described via one convex inequality and extend the achieved results, by making use of a celebrated scalarization function, to convex inequality systems expressed by means of a general vector function. We also propose a second approach for guaranteeing the existence of global error bounds of the latter, which meanwhile sharpens the classical result of Robinson.

  19. A novel TOA estimation method with effective NLOS error reduction

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yi-heng; CUI Qi-mei; LI Yu-xiang; ZHANG Ping

    2008-01-01

    It is well known that non-line-of-sight (NLOS) error has been the major factor impeding the enhancement of accuracy for time of arrival (TOA) estimation and wireless positioning. This article proposes a novel method of TOA estimation that effectively reduces the NLOS error by 60% compared with the traditional timing and synchronization method. By constructing orthogonal training sequences, this method converts the traditional TOA estimation into the detection of the first arrival path (FAP) in the NLOS multipath environment, and then estimates the TOA by the round-trip transmission (RTT) technology. Both theoretical analysis and numerical simulations prove that the method proposed in this article achieves better performance than the traditional methods.

  20. Achieving closure at Fernald

    Energy Technology Data Exchange (ETDEWEB)

    Bradburne, John; Patton, Tisha C.

    2001-02-25

    When Fluor Fernald took over the management of the Fernald Environmental Management Project in 1992, the estimated closure date of the site was more than 25 years into the future. Fluor Fernald, in conjunction with DOE-Fernald, introduced the Accelerated Cleanup Plan, which was designed to substantially shorten that schedule and save taxpayers more than $3 billion. The management of Fluor Fernald believes there are three fundamental concerns that must be addressed by any contractor hoping to achieve closure of a site within the DOE complex. They are relationship management, resource management and contract management. Relationship management refers to the interaction between the site and local residents, regulators, union leadership, the workforce at large, the media, and any other interested stakeholder groups. Resource management is of course related to the effective administration of the site knowledge base and the skills of the workforce, the attraction and retention of qualified and competent technical personnel, and the best recognition and use of appropriate new technologies. Perhaps most importantly, resource management must also include a plan for survival in a flat-funding environment. Lastly, creative and disciplined contract management will be essential to effecting the closure of any DOE site. Fluor Fernald, together with DOE-Fernald, is breaking new ground in the closure arena, and "business as usual" has become a thing of the past. How Fluor Fernald has managed its work at the site over the last eight years, and how it will manage the new site closure contract in the future, will be an integral part of achieving successful closure at Fernald.

  1. Ensemble polarimetric SAR image classification based on contextual sparse representation

    Science.gov (United States)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective technique of image classification is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like an ordinary classifier, it is not perfect in every respect. So ensemble learning is introduced to address this issue: it trains a plurality of different learners and obtains the integrated result by combining the individual learners, yielding more accurate learning results. Therefore, this paper presents a polarimetric SAR image classification method based on ensemble learning of sparse representations to achieve optimal classification.

  2. Classification of melanoma lesions using sparse coded features and random forests

    Science.gov (United States)

    Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré

    2016-03-01

    Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, so errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the SIFT sparse-coded feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, for a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3% respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
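    The pipeline above (sparse coding of local descriptors followed by a Random Forest) can be sketched as below. This is an illustration only, with random stand-in data instead of SIFT/colour descriptors from the PH2 images; the dictionary size and sparsity level echo the paper's settings but nothing else is reproduced.

```python
# Sparse-code local descriptors, pool the codes per image, classify with a Random Forest.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, descriptors_per_image, dim = 50, 10, 64
X_desc = rng.normal(size=(n_images * descriptors_per_image, dim))  # stand-in local descriptors
y = rng.integers(0, 2, size=n_images)                               # melanoma vs. non-melanoma labels

# Learn a dictionary and encode each descriptor with a sparsity level of 2 (OMP)
dico = DictionaryLearning(n_components=100, transform_algorithm="omp",
                          transform_n_nonzero_coefs=2, max_iter=10, random_state=0)
codes = dico.fit_transform(X_desc)

# Max-pool the descriptors of each image into a single feature vector
pooled = codes.reshape(n_images, descriptors_per_image, -1).max(axis=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("10-fold CV accuracy:", cross_val_score(clf, pooled, y, cv=10).mean())
```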

  3. Error Correction in Oral Classroom English Teaching

    Science.gov (United States)

    Jing, Huang; Xiaodong, Hao; Yu, Liu

    2016-01-01

    As is known to all, errors are inevitable in the process of language learning for Chinese students. Should we ignore students' errors in learning English? As with other questions, different people hold different opinions. All teachers agree that errors students make in written English are not allowed. For the errors students make in oral…

  4. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

    In this paper we study structured backward errors for some structured KKT systems.Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.
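    For orientation, the standard (unstructured) normwise backward error of an approximate solution can be written as below; the structured variant studied in the paper additionally requires the perturbations to preserve the KKT block structure, which is why it can be much larger. This is background notation, not the paper's formulae.

```latex
\eta(\hat{x}) \;=\; \min\bigl\{\, \epsilon \;:\; (A+\Delta A)\hat{x} = b+\Delta b,\;
    \|\Delta A\| \le \epsilon \|A\|,\; \|\Delta b\| \le \epsilon \|b\| \,\bigr\}
```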

  5. Correction of errors in power measurements

    DEFF Research Database (Denmark)

    Pedersen, Knud Ole Helgesen

    1998-01-01

    Small errors in voltage and current measuring transformers cause inaccuracies in power measurements. In this report correction factors are derived to compensate for such errors.

  6. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed using the results of the above error analysis.

  7. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
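    A small numerical illustration of this trade-off (an assumed example, not taken from the article): solving y' = y, y(0) = 1 in single precision, so that the effect of rounding becomes visible once the step size is small enough.

```python
# Halving h reduces discretization error, but very small h increases the number
# of steps and lets accumulated rounding error dominate the total error at t = 1.
import numpy as np

def euler_error(h):
    n = int(round(1.0 / h))
    y = np.float32(1.0)                 # single precision makes rounding visible sooner
    for _ in range(n):
        y = y + np.float32(h) * y       # Euler step for y' = y
    return abs(float(y) - np.e)         # error against the exact solution e

for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    print(f"h = {h:.0e}   error at t=1: {euler_error(h):.3e}")
```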

  8. 5 CFR 1601.34 - Error correction.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Error correction. 1601.34 Section 1601.34... Contribution Allocations and Interfund Transfer Requests § 1601.34 Error correction. Errors in processing... in the wrong investment fund, will be corrected in accordance with the error correction...

  9. PERFORMANCE EVALUATION OF DISTANCE MEASURES IN PROPOSED FUZZY TEXTURE MODEL FOR LAND COVER CLASSIFICATION OF REMOTELY SENSED IMAGE

    Directory of Open Access Journals (Sweden)

    S. Jenicka

    2014-04-01

    Full Text Available Land cover classification is a vital application area in the satellite image processing domain. Texture is a useful feature in land cover classification. The classification accuracy obtained always depends on the effectiveness of the texture model, the distance measure and the classification algorithm used. In this work, texture features are extracted using the proposed multivariate descriptor, MFTM/MVAR, which uses the Multivariate Fuzzy Texture Model (MFTM) supplemented with Multivariate Variance (MVAR). The K-Nearest Neighbour (KNN) algorithm is used for classification due to its simplicity coupled with efficiency. The distance measures Log likelihood, Manhattan, Chi squared, Kullback Leibler and Bhattacharyya were used, and the experiments were conducted on IRS P6 LISS-IV data. The classified images were evaluated based on error matrix, classification accuracy and Kappa statistics. From the experiments, it is found that the log likelihood distance with the MFTM/MVAR descriptor and the KNN classifier gives 95.29% classification accuracy.
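    A hedged sketch of the comparison described above: k-NN classification of texture feature histograms under several of the listed distance measures. The MFTM/MVAR feature extraction is not reproduced; random normalised histograms stand in for real texture features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def chi_squared(p, q, eps=1e-12):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def kullback_leibler(p, q, eps=1e-12):
    return np.sum(p * np.log((p + eps) / (q + eps)))

def bhattacharyya(p, q, eps=1e-12):
    return -np.log(np.sum(np.sqrt(p * q)) + eps)

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(32), size=200)   # 200 normalised texture histograms (stand-ins)
y = rng.integers(0, 4, size=200)           # 4 land-cover classes

for name, metric in [("Manhattan", "manhattan"), ("Chi squared", chi_squared),
                     ("Kullback Leibler", kullback_leibler), ("Bhattacharyya", bhattacharyya)]:
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric, algorithm="brute")
    knn.fit(X[:150], y[:150])
    print(name, "accuracy:", knn.score(X[150:], y[150:]))
```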

  10. On the rate of convergence for multi-category classification based on convex losses

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multi-category classification algorithms play an important role in both the theory and practice of machine learning. In this paper, we consider an approach to multi-category classification based on minimizing a convex surrogate of the nonstandard misclassification loss. We bound the excess misclassification error by the excess convex risk. We construct an adaptive procedure to search for the classifier and furthermore obtain its rate of convergence to the Bayes rule.
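    As background for the kind of bound invoked above (stated here generically, not as the paper's exact theorem): for a suitable convex surrogate loss $\phi$ with transform $\psi$, the excess misclassification risk is controlled by the excess $\phi$-risk.

```latex
R(\hat{f}) - R^{*} \;\le\; \psi^{-1}\!\bigl( R_{\phi}(\hat{f}) - R_{\phi}^{*} \bigr)
```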

  11. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
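    A minimal illustration of evaluating a one-gene rule with leave-one-out cross-validation, using a decision stump in place of the rough-set "depended degree" rule induction (which is not reproduced here); the data are simulated.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 1000
X = rng.normal(size=(n_samples, n_genes))
y = rng.integers(0, 2, size=n_samples)
X[y == 1, 42] += 2.0                          # pretend gene 42 separates the classes

# Rank genes by a simple class-mean difference and keep the single best one
# (in practice the selection should be repeated inside each CV fold to avoid bias)
scores = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
best_gene = int(np.argmax(scores))

stump = DecisionTreeClassifier(max_depth=1)   # one threshold on one gene
acc = cross_val_score(stump, X[:, [best_gene]], y, cv=LeaveOneOut()).mean()
print("selected gene:", best_gene, "LOOCV accuracy:", acc)
```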

  12. Urban Tree Classification Using Full-Waveform Airborne Laser Scanning

    Science.gov (United States)

    Koma, Zs.; Koenig, K.; Höfle, B.

    2016-06-01

    Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which makes it possible to classify single trees into different taxa. Until now, research dealing with tree classification focused on forest environments. This study investigates the object-based classification of urban trees at taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and show its limitations due to anthropogenic influences at the same time.
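    A schematic of the experiment summarised above, with simulated per-tree feature values standing in for the laser-scanning derived features; only the comparison of radiometric-only versus combined feature sets is mirrored.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_trees = 300
geometric   = rng.normal(size=(n_trees, 5))   # e.g. crown shape, point-density features
radiometric = rng.normal(size=(n_trees, 4))   # e.g. echo amplitude statistics
labels = rng.integers(0, 5, size=n_trees)     # 5 taxonomic families

for name, X in [("radiometric only", radiometric),
                ("geometric + radiometric", np.hstack([geometric, radiometric]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(name, "accuracy:", rf.score(X_te, y_te))
```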

  13. URBAN TREE CLASSIFICATION USING FULL-WAVEFORM AIRBORNE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    Zs. Koma

    2016-06-01

    Full Text Available Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which makes it possible to classify single trees into different taxa. Until now, research dealing with tree classification focused on forest environments. This study investigates the object-based classification of urban trees at taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars, which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and show its limitations due to anthropogenic influences at the same time.

  14. Achievement Goals and Achievement Emotions: A Meta-Analysis

    Science.gov (United States)

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  15. Capacitor Mismatch Error Cancellation Technique for a Successive Approximation A/D Converter

    DEFF Research Database (Denmark)

    Zheng, Zhiliang; Moon, Un-Ku; Steensgaard-Madsen, Jesper;

    1999-01-01

    An error cancellation technique is described for suppressing capacitor mismatch in a successive approximation A/D converter. At the cost of a 50% increase in conversion time, the first-order capacitor mismatch error is cancelled. Methods for achieving top-plate parasitic insensitive operation are...

  16. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  17. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael G.

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  18. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... to segment breast tissue and pectoral muscle area from the background in mammogram. The second focus is the choice of metric and its influence on the feasibility of a classifier, especially on the k-nearest neighbors (k-NN) algorithm, with medical applications on breast cancer prediction and calcification...

  19. SPORT FOOD ADDITIVE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    I. P. Prokopenko

    2015-01-01

    Full Text Available Correctly organized nutritional and pharmacological support is an important component of an athlete's preparation for competitions, maintenance of optimal shape, and fast recovery and rehabilitation after traumas and fatigue. Special products of enhanced biological value (BAS) for athletes' nutrition are used for this purpose. Easy-to-use energy sources, building materials and biologically active substances, which regulate and activate metabolic reactions that proceed with difficulty during certain physical training, are administered into the athlete's organism. The article presents a sport supplement classification covering supplements that can be used before warm-up and training, after training, and in breaks between competitions.

  20. [Classification of headache disorders].

    Science.gov (United States)

    Heinze, A; Heinze-Kuhn, K; Göbel, H

    2007-06-01

    In 2003 the International Headache Society (IHS) published the second edition of the International Classification of Headache Disorders. Diagnostic criteria for no fewer than 206 separate headache diagnoses are presented in the parts (I) primary headaches, (II) secondary headaches and (III) cranial neuralgia, central and primary facial pain. The headaches are classified according to etiology in the case of the secondary headaches and according to phenomenology in the case of the primary headaches. It is the task of the headache specialist to identify the correct headache diagnosis with the smallest effort possible. Both the differentiation between secondary and primary headaches and the differentiation between the various primary headaches are of equal importance.

  1. Adaptive codebook selection schemes for image classification in correlated channels

    Science.gov (United States)

    Hu, Chia Chang; Liu, Xiang Lian; Liu, Kuan-Fu

    2015-09-01

    The multiple-input multiple-output (MIMO) system with transmit and receive antenna arrays achieves diversity and array gains via transmit beamforming. In the absence of full channel state information (CSI) at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent back to the transmitter over a low-rate feedback channel, called limited feedback beamforming. One of the key issues in Vector Quantization (VQ) is how to generate a good codebook such that the distortion between the original image and the reconstructed image is minimized. In this paper, a novel adaptive codebook selection scheme for image classification is proposed, taking into consideration both the spatial and the temporal correlation inherent in the channel. The new codebook selection algorithm selects two codebooks from among the discrete Fourier transform (DFT) codebook, the generalized Lloyd algorithm (GLA) codebook and the Grassmannian codebook, to be combined and used as candidates for transmission of the original image and the reconstructed image. The channel is estimated and divided into four regions based on the spatial and temporal correlation of the channel, and an appropriate codebook is assigned adaptively to each region. The proposed method can efficiently reduce the required feedback information under spatially and temporally correlated channels. Simulation results show that in the case of temporally and spatially correlated channels, the bit-error-rate (BER) performance can be improved substantially by the proposed algorithm compared to the one with only a single codebook.
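    A toy sketch of the limited-feedback idea referred to above (not the paper's adaptive two-codebook scheme): the receiver picks, from a small codebook, the beamforming vector best matched to its channel estimate and feeds back only the index. The codebook here is random and merely stands in for DFT/GLA/Grassmannian codebooks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, codebook_size = 4, 16

# Random unit-norm codewords as a stand-in codebook
codebook = rng.normal(size=(codebook_size, n_tx)) + 1j * rng.normal(size=(codebook_size, n_tx))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

h = rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)     # MISO channel estimate
gains = np.abs(codebook.conj() @ h) ** 2                    # effective gain per codeword
best = int(np.argmax(gains))
print("feedback index:", best,
      "quantised gain:", round(float(gains[best]), 3),
      "unquantised bound:", round(float(np.linalg.norm(h) ** 2), 3))
```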

  2. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  3. A long lifetime, low error rate RRAM design with self-repair module

    Science.gov (United States)

    Zhiqiang, You; Fei, Hu; Liming, Huang; Peng, Liu; Jishun, Kuang; Shiying, Li

    2016-11-01

    Resistive random access memory (RRAM) is one of the promising candidates for a future universal memory. However, it suffers from serious error-rate and endurance problems. Therefore, a technical solution that enhances endurance and reduces the error rate is greatly in demand. In this paper, we propose a reliable RRAM architecture that includes two reliability modules: an error correction code (ECC) module and a self-repair module. The ECC module is used to detect errors and decrease the error rate. The self-repair module, which is proposed for the first time for RRAM, obtains the positions of error bits and repairs worn-out cells with a repair voltage. Simulation results show that the proposed architecture achieves the lowest error rate and the longest lifetime compared to previous reliable designs. Project supported by the New Century Excellent Talents in University (No. NCET-12-0165) and the National Natural Science Foundation of China (Nos. 61472123, 61272396).

  4. Offset Error Compensation in Roundness Measurement

    Institute of Scientific and Technical Information of China (English)

    朱喜林; 史俊; 李晓梅

    2004-01-01

    This paper analyses three causes of offset error in roundness measurement and presents corresponding compensation methods. The causes of offset error include excursion error resulting from the deflection of the sensor's line of measurement from the rotational center in measurement (datum center), eccentricity error resulting from the variance between the workpiece's geometrical center and the rotational center, and tilt error resulting from the tilt between the workpiece's geometrical axes and the rotational centerline.

  5. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Full Text Available Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) is proposed to be more robust to noise than LBP, this weakness may appear with the LTP as well as with the LBP. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated Completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves impressive classification accuracy compared to the CLBP and CLBC descriptors.
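    A rough sketch of the basic Local Ternary Pattern coding that CLTP builds on (the completed variant's sign, magnitude and centre components are not reproduced here): each pixel's 3x3 neighbourhood is coded as -1/0/+1 against a tolerance t and split into an upper and a lower binary pattern whose histograms form the texture descriptor.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Upper/lower LTP code maps for a greyscale image (interior pixels only)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1].astype(int)
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
        upper |= (neigh >= centre + t).astype(np.uint8) << bit   # +1 component
        lower |= (neigh <= centre - t).astype(np.uint8) << bit   # -1 component
    return upper, lower

img = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(np.uint8)
up, lo = ltp_codes(img)
hist = np.concatenate([np.bincount(up.ravel(), minlength=256),
                       np.bincount(lo.ravel(), minlength=256)])
print("descriptor length:", hist.size)
```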

  6. Error-resilient DNA computation

    Energy Technology Data Exchange (ETDEWEB)

    Karp, R.M.; Kenyon, C.; Waarts, O. [Univ. of California, Berkeley, CA (United States)

    1996-12-31

    The DNA model of computation, with test tubes of DNA molecules encoding bit sequences, is based on three primitives, Extract-A-Bit, which splits a test tube into two test tubes according to the value of a particular bit x, Merge-Two-Tubes and Detect-Emptiness. Perfect operations can test the satisfiability of any boolean formula in linear time. However, in reality the Extract operation is faulty; it misclassifies a certain proportion of the strands. We consider the following problem: given an algorithm based on perfect Extract, Merge and Detect operations, convert it to one that works correctly with high probability when the Extract operation is faulty. The fundamental problem in such a conversion is to construct a sequence of faulty Extracts and perfect Merges that simulates a highly reliable Extract operation. We first determine (up to a small constant factor) the minimum number of faulty Extract operations inherently required to simulate a highly reliable Extract operation. We then go on to derive a general method for converting any algorithm based on error-free operations to an error-resilient one, and give optimal error-resilient algorithms for realizing simple n-variable boolean functions such as Conjunction, Disjunction and Parity.

  7. FAKTOR PENYEBAB MEDICATION ERROR DI INSTALASI RAWAT DARURAT FACTORS AFFECTING MEDICATION ERRORS AT EMERGENCY UNIT

    OpenAIRE

    2014-01-01

    Background: The incidence of medication errors is an important indicator of patient safety, and medication errors are the most common medical errors. However, most medication errors can be prevented and efforts to reduce such errors are available. Due to the high number of medication errors in the emergency unit, understanding their causes is important for designing successful interventions. This research aims to identify the types and causes of medication errors. Method: A qualitative study was used and data were col...

  8. Classification of smooth Fano polytopes

    DEFF Research Database (Denmark)

    Øbro, Mikkel

    Fano polytopes up to isomorphism. A smooth Fano -polytope can have at most vertices. In case of vertices an explicit classification is known. The thesis contains the classification in case of vertices. Classifications of smooth Fano -polytopes for fixed exist only for . In the thesis an algorithm...... for the classification of smooth Fano -polytopes for any given is presented. The algorithm has been implemented and used to obtain the complete classification for .......A simplicial lattice polytope containing the origin in the interior is called a smooth Fano polytope, if the vertices of every facet is a basis of the lattice. The study of smooth Fano polytopes is motivated by their connection to toric varieties. The thesis concerns the classification of smooth...

  9. Vietnam: achievements and challenges.

    Science.gov (United States)

    Tran Tien Duc

    1999-01-01

    The Vietnamese Government's successful development of the National Population and Family Planning Program has contributed to raising people's awareness of population issues and changing their attitudes and behavior regarding fostering small families. It has also been found to be very effective in substantially decreasing the fertility level. In addition, the economic levels of many households have greatly improved since the adoption of the renovation policy. The advancement of welfare, accompanied by the provision of better basic social services, including health services, has boosted people's health. Several factors behind the achievements of the National Population and Family Planning Program include: 1) Strengthening of the political commitment of national and local leaders; 2) Nationwide mobilization of mass organizations and NGOs; 3) A strong advocacy and information, education and communication program; 4) Provision of various kinds of contraceptives; 5) Effective management of the program by priority; and 6) Support of the international community. Despite such successes, Vietnam is facing a number of new issues such as enlargement of the work force, shifting migration patterns and accelerating urbanization, aging of the population, and changes in household structure. Nevertheless, the Government of Vietnam is preparing a New Population Strategy aimed at addressing these issues.

  10. Logistic Regression for Evolving Data Streams Classification

    Institute of Scientific and Technical Information of China (English)

    YIN Zhi-wu; HUANG Shang-teng; XUE Gui-rong

    2007-01-01

    Logistic regression is a fast classifier and can achieve higher accuracy on small training data. Moreover, it can work on both discrete and continuous attributes with nonlinear patterns. Based on these properties of logistic regression, this paper proposes an algorithm, called the evolutionary logistic regression classifier (ELRClass), to solve the classification of evolving data streams. This algorithm applies logistic regression repeatedly to a sliding window of samples in order to update the existing classifier, to keep the existing classifier if its apparent performance drop is merely caused by bursts of noise, or to construct a new classifier if a major concept drift is detected. Intensive experimental results demonstrate the effectiveness of this algorithm.
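    A conceptual sketch of the sliding-window strategy described above (not the ELRClass implementation): refit a logistic regression on the most recent window and keep whichever of the old and new models scores better on that window, which gives some robustness to noise bursts while still tracking concept drift. The data generator and window size are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def stream(n, drift_at):
    """Synthetic labelled stream whose true concept flips at `drift_at`."""
    for i in range(n):
        x = rng.normal(size=2)
        w = np.array([2.0, -1.0]) if i < drift_at else np.array([-2.0, 1.0])
        yield x, int(x @ w + 0.3 * rng.normal() > 0)

window, labels, model = [], [], None
for x, y in stream(2000, drift_at=1000):
    window.append(x); labels.append(y)
    if len(window) == 200:                                   # window full: refit
        Xw = np.array(window)
        candidate = LogisticRegression().fit(Xw, labels)
        if model is None or candidate.score(Xw, labels) >= model.score(Xw, labels):
            model = candidate                                # keep the better model
        window, labels = [], []
print("final coefficients:", model.coef_.round(2))
```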

  11. Adaptive multiclass classification for brain computer interfaces.

    Science.gov (United States)

    Llera, A; Gómez, V; Kappen, H J

    2014-06-01

    We consider the problem of multiclass adaptive classification for brain-computer interfaces and propose the use of multiclass pooled mean linear discriminant analysis (MPMLDA), a multiclass generalization of the adaptation rule introduced by Vidaurre, Kawanabe, von Bünau, Blankertz, and Müller (2010) for the binary class setting. Using publicly available EEG data sets and tangent space mapping (Barachant, Bonnet, Congedo, & Jutten, 2012) as a feature extractor, we demonstrate that MPMLDA can significantly outperform state-of-the-art multiclass static and adaptive methods. Furthermore, efficient learning rates can be achieved using data from different subjects.
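    For orientation, a hedged sketch of the kind of pooled-mean adaptation rule referred to above, written here for the binary case with a fixed LDA weight vector; the multiclass MPMLDA generalization and the tangent-space features are not reproduced, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)        # LDA weight vector trained offline (assumed given)
b = 0.0                       # bias, re-centred online
mu = np.zeros(8)              # pooled mean of the feature distribution, adapted online
eta = 0.05                    # adaptation rate

for _ in range(200):
    x = rng.normal(size=8) + 0.5          # incoming trial features (non-stationary in practice)
    y_hat = int(w @ x + b > 0)            # classify the current trial
    mu = (1.0 - eta) * mu + eta * x       # exponentially weighted pooled-mean update
    b = -float(w @ mu)                    # shift the decision boundary with the adapted mean
print("adapted pooled mean (first 3 dims):", mu[:3].round(3), "bias:", round(b, 3))
```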

  12. Combinatorial Approach of Associative Classification

    OpenAIRE

    P. R. Pal; R.C. Jain

    2010-01-01

    Association rule mining and classification are two important data mining techniques in the knowledge discovery process. Integration of the two has produced class association rule mining, or associative classification, techniques, which in many cases have shown better classification accuracy than conventional classifiers. Motivated by this, we explore and apply combinatorial mathematics to class association rule mining in this paper. Our algorithm is based on producing co...

  13. The future of general classification

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2013-01-01

    Discusses problems related to accessing multiple collections using a single retrieval language. Surveys the concepts of interoperability and switching language. Finds that mapping between multiple indexing languages will always be an approximation. Surveys the issues related to general classification...... and contrasts that with special classifications. Argues for the use of general classifications to provide access to collections nationally and internationally. © 2003 by The Haworth Press, Inc. All rights reserved....

  14. A Classification Leveraged Object Detector

    OpenAIRE

    Sun, Miao; Han, Tony X.; He, Zhihai

    2016-01-01

    Currently, the state-of-the-art image classification algorithms outperform the best available object detector by a big margin in terms of average precision. We, therefore, propose a simple yet principled approach that allows us to leverage object detection through image classification on supporting regions specified by a preliminary object detector. Using a simple bag-of-words model based image classification algorithm, we leveraged the performance of the deformable model object detector from 35.9%...

  15. Efficient Protocols for Distributed Classification and Optimization

    CERN Document Server

    Daume, Hal; Saha, Avishek; Venkatasubramanian, Suresh

    2012-01-01

    In distributed learning, the goal is to perform a learning task over data distributed across multiple nodes with minimal (expensive) communication. Prior work (Daume III et al., 2012) proposes a general model that bounds the communication required for learning classifiers while allowing for $\epsilon$ training error on linearly separable data adversarially distributed across nodes. In this work, we develop key improvements and extensions to this basic model. Our first result is a two-party multiplicative-weight-update based protocol that uses $O(d^2 \log(1/\epsilon))$ words of communication to classify distributed data in arbitrary dimension $d$, $\epsilon$-optimally. This readily extends to classification over $k$ nodes with $O(kd^2 \log(1/\epsilon))$ words of communication. Our proposed protocol is simple to implement and is considerably more efficient than baselines compared, as demonstrated by our empirical results. In addition, we illustrate general algorithm design paradigms for doing efficient learning over distribute...

  16. CLASSIFICATION OF CRIMINAL GROUPS

    Directory of Open Access Journals (Sweden)

    Natalia Romanova

    2013-06-01

    Full Text Available New types of criminal groups are emerging in modern society. These types have their own special criminal subculture. The research objective is to develop new parameters for the classification of modern criminal groups, create a new typology of criminal groups and identify some features of their subculture. The research methodology is based on the systems approach, which includes the method of analysis of documentary sources (materials of a criminal case), the method of conversations with the members of the criminal group, the method of testing the members of the criminal group and the method of observation. As a result of the conducted research, we have created a new classification of criminal groups. The first type is a group that is lawful in its form and criminal in its content (i.e., its target is criminal enrichment). The second type is a criminal organization which is run by so-called "white-collars" that "remain in the shadow". The third type is traditional criminal groups. The fourth type is the criminal group which openly demonstrates its criminal activity.

  17. Supply chain planning classification

    Science.gov (United States)

    Hvolby, Hans-Henrik; Trienekens, Jacques; Bonde, Hans

    2001-10-01

    Industry experiences a need to shift focus from internal production planning towards planning in the supply network. In this respect, customer-oriented thinking is becoming almost a common good amongst companies in the supply network. An increase in the use of information technology is needed to enable companies to better tune their production planning with customers and suppliers. Information technology opportunities and supply chain planning systems enable companies to monitor and control their supplier network. In spite of these developments, most links in today's supply chains make individual plans, because the real demand information is not available throughout the chain. The current systems and processes of the supply chains are not designed to meet the requirements now placed upon them. For long-term relationships with suppliers and customers, an integrated decision-making process is needed in order to obtain a satisfactory result for all parties, especially when customized production and short lead times are in focus. An effective value chain makes inventory available and visible among the value chain members, minimizes response time and optimizes the total inventory value held throughout the chain. In this paper a supply chain planning classification grid is presented, based on current manufacturing classifications and supply chain planning initiatives.

  18. PSC: protein surface classification.

    Science.gov (United States)

    Tseng, Yan Yuan; Li, Wen-Hsiung

    2012-07-01

    We recently proposed to classify proteins by their functional surfaces. Using the structural attributes of functional surfaces, we inferred the pairwise relationships of proteins and constructed an expandable database of protein surface classification (PSC). As the functional surface(s) of a protein is the local region where the protein performs its function, our classification may reflect the functional relationships among proteins. Currently, PSC contains a library of 1974 surface types that include 25,857 functional surfaces identified from 24,170 bound structures. The search tool in PSC empowers users to explore related surfaces that share similar local structures and core functions. Each functional surface is characterized by structural attributes, which are geometric, physicochemical or evolutionary features. The attributes have been normalized as descriptors and integrated to produce a profile for each functional surface in PSC. In addition, binding ligands are recorded for comparisons among homologs. PSC allows users to exploit related binding surfaces to reveal the changes in functionally important residues on homologs that have led to functional divergence during evolution. The substitutions at the key residues of a spatial pattern may determine the functional evolution of a protein. In PSC (http://pocket.uchicago.edu/psc/), a pool of changes in residues on similar functional surfaces is provided.

  19. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM) we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
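    A simplified sketch of the statistical pipeline outlined above, namely PCA over stacked landmark coordinates followed by an SVM; the FESM/FACS annotation and real landmark data are not reproduced, so synthetic coordinates and labels stand in.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_faces, n_landmarks = 240, 122
X = rng.normal(size=(n_faces, n_landmarks * 2))   # flattened (x, y) per landmark
y = rng.integers(0, 6, size=n_faces)              # six illustrative expression classes

model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```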

  20. Emotion of Physiological Signals Classification Based on TS Feature Selection

    Institute of Scientific and Technical Information of China (English)

    Wang Yujing; Mo Jianlin

    2015-01-01

    This paper proposes a TS-MLP method for emotion recognition from physiological signals. It recognizes emotion by using tabu search to select features of the emotional physiological signals and a multilayer perceptron to classify the emotions. Simulations show that the method achieves good emotion classification performance.
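    A loose sketch of that combination under invented data and a simplified tabu search: each move flips one feature in or out of the current subset, recently flipped features are tabu for a few moves, and every candidate subset is scored by cross-validating a small multilayer perceptron.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = (X[:, :3].sum(axis=1) > 0).astype(int)        # only the first 3 features matter

def score(mask):
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

mask = rng.random(20) < 0.5                        # random initial feature subset
best_mask, best = mask.copy(), score(mask)
tabu = []
for _ in range(10):                                # a few tabu-search iterations
    candidates = []
    for j in range(20):
        if j in tabu:
            continue
        m = mask.copy(); m[j] = ~m[j]              # flip feature j in or out
        candidates.append((score(m), j, m))
    s, j, mask = max(candidates)                   # best non-tabu neighbour
    tabu = (tabu + [j])[-5:]                       # tabu tenure of 5 moves
    if s > best:
        best, best_mask = s, mask.copy()
print("selected features:", np.nonzero(best_mask)[0], "CV accuracy:", round(best, 3))
```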