WorldWideScience

Sample records for achieved classification error

  1. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
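
    As a rough illustration of the MEE criterion described above (a sketch under assumed settings, not taken from the book): training can be viewed as minimizing Renyi's quadratic entropy of the classification errors, estimated with a Gaussian Parzen window. The kernel bandwidth and the toy error samples below are assumptions.

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.5):
    """Estimate Renyi's quadratic entropy of error samples with a Gaussian
    Parzen window; MEE training drives this quantity down."""
    e = np.asarray(errors, dtype=float).reshape(-1, 1)
    diffs = e - e.T                                    # all pairwise error differences
    # Information potential V(e): mean Gaussian kernel over all pairs (bandwidth sigma*sqrt(2))
    kernel = np.exp(-diffs**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    return -np.log(kernel.mean())                      # H2(e) = -log V(e)

# Errors concentrated near zero yield lower entropy than widely spread errors
rng = np.random.default_rng(0)
print(renyi_quadratic_entropy(rng.normal(0, 0.1, 200)))
print(renyi_quadratic_entropy(rng.normal(0, 1.0, 200)))
```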

  2. A classification of prescription errors.

    OpenAIRE

    Neville, R G; Robertson, F; Livingstone, S.; Crombie, I K

    1989-01-01

    Three independent methods of study of prescription errors led to the development of a classification of errors based on the potential effects and inconvenience to patients, pharmacists and doctors. Four types of error are described: type A (potentially serious to patient); type B (major nuisance - pharmacist/doctor contact required); type C (minor nuisance - pharmacist must use professional judgement); and type D (trivial). The types and frequency of errors are detailed for a group of eight pr...

  3. Human error classification and data collection

    International Nuclear Information System (INIS)

    Analysis of human error data requires human error classification. As the human factors/reliability subject has developed so too has the topic of human error classification. The classifications vary considerably depending on whether it has been developed from a theoretical psychological approach to understanding human behavior or error, or whether it has been based on an empirical practical approach. This latter approach is often adopted by nuclear power plants that need to make practical improvements as soon as possible. This document will review aspects of human error classification and data collection in order to show where potential improvements could be made. It will attempt to show why there are problems with human error classification and data collection schemes and that these problems will not be easy to resolve. The Annex of this document contains the papers presented at the meeting. A separate abstract was prepared for each of these 12 papers. Refs, figs and tabs

  4. Analysis of thematic map classification error matrices.

    Science.gov (United States)

    Rosenfield, G.H.

    1986-01-01

    The classification error matrix expresses the counts of agreement and disagreement between the classified categories and their verification. Thematic mapping experiments compare variables such as multiple photointerpretation or scales of mapping, and produce one or more classification error matrices. This paper presents a tutorial to implement a typical problem of a remotely sensed data experiment for solution by the linear model method.-from Author

  5. Classification error of the thresholded independence rule

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored to a...
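
    For context, a sketch of what the thresholded independence rule looks like in the two-group Gaussian setting (illustrative notation and threshold; the paper's exact rule and bound may differ):

```latex
% Classify a new observation x into group 1 when
\[
\sum_{j=1}^{p} w_j \,
\frac{\bar{x}_{1j}-\bar{x}_{2j}}{\hat{\sigma}_j^{2}}
\left( x_j - \frac{\bar{x}_{1j}+\bar{x}_{2j}}{2} \right) > 0,
\qquad
w_j = \mathbf{1}\!\left\{ \frac{|\bar{x}_{1j}-\bar{x}_{2j}|}{\hat{\sigma}_j} > \tau \right\},
\]
% i.e. the independence (diagonal covariance) rule restricted to the variables
% whose standardized mean difference exceeds the threshold \tau, which is how
% the "large p small n" variable count is counterbalanced.
```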

  6. HMM-FRAME: accurate protein domain classification for metagenomic sequences containing frameshift errors

    Directory of Open Access Journals (Sweden)

    Sun Yanni

    2011-05-01

    Full Text Available Abstract Background Protein domain classification is an important step in metagenomic annotation. The state-of-the-art method for protein domain classification is profile HMM-based alignment. However, the relatively high rates of insertions and deletions in homopolymer regions of pyrosequencing reads create frameshifts, causing conventional profile HMM alignment tools to generate alignments with marginal scores. This makes error-containing gene fragments unclassifiable with conventional tools. Thus, there is a need for an accurate domain classification tool that can detect and correct sequencing errors. Results We introduce HMM-FRAME, a protein domain classification tool based on an augmented Viterbi algorithm that can incorporate error models from different sequencing platforms. HMM-FRAME corrects sequencing errors and classifies putative gene fragments into domain families. It achieved high error detection sensitivity and specificity in a data set with annotated errors. We applied HMM-FRAME in Targeted Metagenomics and a published metagenomic data set. The results showed that our tool can correct frameshifts in error-containing sequences, generate much longer alignments with significantly smaller E-values, and classify more sequences into their native families. Conclusions HMM-FRAME provides a complementary protein domain classification tool to conventional profile HMM-based methods for data sets containing frameshifts. Its current implementation is best used for small-scale metagenomic data sets. The source code of HMM-FRAME can be downloaded at http://www.cse.msu.edu/~zhangy72/hmmframe/ and at https://sourceforge.net/projects/hmm-frame/.

  7. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, like connectivity failures which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant/useful from the users' point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve arrangement of data transfer operations and to enhance decision making of data transfer schedulers.

  8. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Directory of Open Access Journals (Sweden)

    Muhsin Hassan

    2013-08-01

    Full Text Available The aim of this paper is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. The datasets were corrupted with additive white Gaussian noise in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM and a hybrid of Kalman Filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combination in a single system produces higher accuracy compared to the other two techniques.
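
    A toy sketch of the Kalman Filter + SVM idea on synthetic signals (the random-walk filter model, the signal shapes and the noise level are assumptions; the paper works with pipeline corrosion data in MATLAB):

```python
import numpy as np
from sklearn.svm import SVC

def kalman_smooth(z, process_var=1e-4, meas_var=0.05):
    """Scalar Kalman filter with a random-walk state model, used as a denoiser."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + process_var            # predict
        gain = p / (p + meas_var)      # Kalman gain
        x = x + gain * (zk - x)        # update with the measurement
        p = (1 - gain) * p
        out[k] = x
    return out

# Hypothetical toy data: two "defect depth" classes as noisy oscillating signals
rng = np.random.default_rng(0)
def make_signal(cls):
    t = np.linspace(0, 1, 100)
    clean = np.sin(2 * np.pi * (3 + 2 * cls) * t)
    return clean + rng.normal(0, 0.4, t.size)      # additive white Gaussian noise

X_raw = np.array([make_signal(c) for c in (0, 1) for _ in range(50)])
y = np.array([c for c in (0, 1) for _ in range(50)])
X_kf = np.array([kalman_smooth(x) for x in X_raw])

# Compare SVM accuracy on noisy versus Kalman-filtered signals
for name, X in (("raw SVM", X_raw), ("Kalman + SVM", X_kf)):
    clf = SVC(kernel="rbf").fit(X[::2], y[::2])     # train on even rows
    print(name, clf.score(X[1::2], y[1::2]))        # test on odd rows
```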

  9. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate with an added benefit of robustness in tackling noise.

  10. Detection and Classification of Measurement Errors in Bioimpedance Spectroscopy.

    Science.gov (United States)

    Ayllón, David; Gil-Pita, Roberto; Seoane, Fernando

    2016-01-01

    Bioimpedance spectroscopy (BIS) measurement errors may be caused by parasitic stray capacitance, impedance mismatch, cross-talking or their very likely combination. An accurate detection and identification is of extreme importance for further analysis because in some cases and for some applications, certain measurement artifacts can be corrected, minimized or even avoided. In this paper we present a robust method to detect the presence of measurement artifacts and identify what kind of measurement error is present in BIS measurements. The method is based on supervised machine learning and uses a novel set of generalist features for measurement characterization in different immittance planes. Experimental validation has been carried out using a database of complex spectra BIS measurements obtained from different BIS applications and containing six different types of errors, as well as error-free measurements. The method obtained a low classification error (0.33%) and has shown good generalization. Since both the features and the classification schema are relatively simple, the implementation of this pre-processing task in the current hardware of bioimpedance spectrometers is possible. PMID:27362862

  11. CLASSIFICATION OF CRYOSOLS: SIGNIFICANCE, ACHIEVEMENTS AND CHALLENGES

    Institute of Scientific and Technical Information of China (English)

    CHEN Jie; GONG Zi-tong; CHEN Zhi-cheng; TAN Man-zhi

    2003-01-01

    International concerns about the effects of global change on permafrost-affected soils and responses of permafrost terrestrial landscapes to such change have been increasing in the last two decades. To achieve a variety of goals, including determining soil carbon stocks and dynamics in the Northern Hemisphere, understanding soil degradation and finding the best ways to protect the fragile ecosystems in permafrost environments, further development of Cryosol classification is in great demand. In this paper the existing Cryosol classifications contained in three representative soil taxonomies are introduced, and the problems in the practical application of the defining criteria used for category differentiation in these taxonomic systems are discussed. Meanwhile, the resumption and reconstruction of Chinese Cryosol classification within a taxonomic frame is proposed. In dealing with Cryosol classification, the advantages that Chinese pedologists have and the challenges that they have to face are analyzed. Finally, several suggestions for the further development of a taxonomic framework for Cryosol classification are put forward.

  12. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable the factors to be targeted to improve the level of patient safety management.

  13. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Science.gov (United States)

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  14. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    Science.gov (United States)

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
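
    For reference, the RMS metric discussed above can be written directly in terms of the moments the paper derives (generic notation assumed here, not the paper's exact symbols):

```latex
% RMS of the error estimator in terms of its moments (generic notation):
\[
\mathrm{RMS}(\hat{\varepsilon})
  = \sqrt{\mathbb{E}\!\left[(\hat{\varepsilon}-\varepsilon)^{2}\right]}
  = \sqrt{\mathbb{E}[\hat{\varepsilon}^{2}]
          - 2\,\mathbb{E}[\hat{\varepsilon}\,\varepsilon]
          + \mathbb{E}[\varepsilon^{2}]}
\]
% so the first and second moments of the Bayesian MMSE error estimator and its
% cross moment with the true LDA error determine the RMS, and hence the sample
% size needed to reach a desired accuracy.
```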

  15. Uncertainty in hydromorphological and ecological modelling of lowland river floodplains resulting from land cover classification errors

    NARCIS (Netherlands)

    Straatsma, M.W.; van der Perk, M.; Schipper, A.M.; de Nooij, R.J.W.; Leuven, R.S.E.W.; Huthoff, F.; Middelkoop, H.

    2013-01-01

    Land cover maps provide essential input data for various hydromorphological and ecological models, but the effect of land cover classification errors on these models has not been quantified systematically. This paper presents the uncertainty in hydromorphological and ecological model output for a la

  16. Uncertainty in hydromorphological and ecological modelling of lowland river floodplains resulting from land cover classification errors

    NARCIS (Netherlands)

    Straatsma, M.W.; Perk, M. van der; Schipper, A.M.; Nooij, R.J.W. de; Leuven, R.S.E.W.; Huthoff, F.; Middelkoop, H.

    2012-01-01

    Land cover maps provide essential input data for various hydromorphological and ecological models, but the effect of land cover classification errors on these models has not been quantified systematically. This paper presents the uncertainty in hydromorphological and ecological model output for a la

  17. Evaluating Method Engineer Performance: an error classification and preliminary empirical study

    Directory of Open Access Journals (Sweden)

    Steven Kelly

    1998-11-01

    Full Text Available We describe an approach to empirically test the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as a means for presenting the methods. These different paradigms may have their own effects on how easily and well users can model methods. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers using either diagrams or matrices. The tentative results from this pilot study confirm the usefulness of the classification, and show some interesting differences between the paradigms.

  18. Modified Minimum Classification Error Learning and Its Application to Neural Networks

    OpenAIRE

    Shimodaira, Hiroshi; Rokui, Jun; Nakai, Mitsuru

    1998-01-01

    A novel method to improve the generalization performance of the Minimum Classification Error (MCE) / Generalized Probabilistic Descent (GPD) learning is proposed. The MCE/GPD learning proposed by Juang and Katagiri in 1992 results in better recognition performance than the maximum-likelihood (ML) based learning in various areas of pattern recognition. Despite its superiority in recognition performance, like other learning algorithms it still suffers from the problem of "over-fitting...

  19. Classification of error in anatomic pathology: a proposal for an evidence-based standard.

    Science.gov (United States)

    Foucar, Elliott

    2005-05-01

    Error in anatomic pathology (EAP) is an appropriate problem to consider using the disease model with which all pathologists are familiar. In analogy to medical diseases, diagnostic errors represent a complex constellation of often-baffling deviations from the "normal" condition. Ideally, one would wish to approach such "diseases of diagnosis" with effective treatments or preventative measures, but interventions in the absence of a clear understanding of pathogenesis are often ineffective or even harmful. Medical therapy has its history of "bleeding and purging," and error-prevention has a history of "blaming and shaming." The urge to take action in dealing with either medical illnesses or diagnostic failings is, of course, admirable. However, the principle of primum non nocere should guide one's action in both circumstances. The first step in using the disease model to address EAP is the development of a valid taxonomy to allow for grouping together of abnormalities that have a similar pathogenesis. It is apparent that disease categories such as "tumor" are not valuable until they are further refined by precise and accurate classification. Likewise, "error" is an impossibly broad concept that must be parsed into meaningful subcategories before it can be understood with sufficient clarity to be prevented. One important EAP subtype that has been particularly difficult to understand and classify is knowledge-based interpretative (KBI) error. Not only is the latter sometimes confused with distinctly different error types such as human lapses, but there is danger of mistaking system-wide problems (eg, imprecise or inaccurate diagnostic criteria) for the KBI errors of individual pathologists. This paper presents a theoretically-sound taxonomic system for classification of error that can be used for evidence-based categorization of individual cases. Any taxonomy of error in medicine must distinguish between the various factors that may produce mistakes, and importantly

  20. Classification errors in contingency tables analyzed with hierarchical log-linear models. Technical report No. 20

    Energy Technology Data Exchange (ETDEWEB)

    Korn, E L

    1978-08-01

    This thesis is concerned with the effect of classification error on contingency tables being analyzed with hierarchical log-linear models (independence in an I x J table is a particular hierarchical log-linear model). Hierarchical log-linear models provide a concise way of describing independence and partial independences between the different dimensions of a contingency table. The structure of classification errors on contingency tables that will be used throughout is defined. This structure is a generalization of Bross' model, but here attention is paid to the different possible ways a contingency table can be sampled. Hierarchical log-linear models and the effect of misclassification on them are described. Some models, such as independence in an I x J table, are preserved by misclassification, i.e., the presence of classification error will not change the fact that a specific table belongs to that model. Other models are not preserved by misclassification; this implies that the usual tests to see if a sampled table belongs to that model will not be of the right significance level. A simple criterion will be given to determine which hierarchical log-linear models are preserved by misclassification. Maximum likelihood theory is used to perform log-linear model analysis in the presence of known misclassification probabilities. It will be shown that the Pitman asymptotic power of tests between different hierarchical log-linear models is reduced because of the misclassification. A general expression will be given for the increase in sample size necessary to compensate for this loss of power and some specific cases will be examined.
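
    A schematic of the misclassification structure being described, assuming independent misclassification of the row and column variables (illustrative notation; the thesis' generalization of Bross' model may differ in detail):

```latex
% Generic misclassification structure for an I x J table (illustrative):
\[
\pi^{*}_{ij} \;=\; \sum_{k=1}^{I}\sum_{l=1}^{J} a_{ik}\, b_{jl}\, \pi_{kl},
\qquad
a_{ik} = \Pr(\text{observed row } i \mid \text{true row } k),
\quad
b_{jl} = \Pr(\text{observed column } j \mid \text{true column } l),
\]
% where \pi_{kl} are the true cell probabilities and \pi^{*}_{ij} those of the
% misclassified table; a log-linear model is "preserved" by misclassification
% when \pi^{*} satisfies the model whenever \pi does.
```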

  1. Reversible watermarking based on invariant image classification and dynamical error histogram shifting.

    Science.gov (United States)

    Pan, W; Coatrieux, G; Cuppens, N; Cuppens, F; Roux, Ch

    2011-01-01

    In this article, we present a novel reversible watermarking scheme. Its originality stands in identifying parts of the image that can be watermarked additively with the most adapted lossless modulation: Pixel Histogram Shifting (PHS) or Dynamical Error Histogram Shifting (DEHS). This classification process makes use of a reference image derived from the image itself, a prediction of it, which has the property of being invariant to the watermark addition. In that way, the watermark embedder and reader remain synchronized through this reference image. DEHS is also an original contribution of this work. It shifts prediction errors between the image and its reference image while taking care of the local specificities of the image, thus dynamically. Experiments conducted on different medical image test sets issued from different modalities and on some natural images show that our method can insert more data with lower distortion than the most recent and efficient methods in the literature.

  2. Software platform for managing the classification of error- related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers), contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, k-nearest neighbour. Next the classifier can be used for classifying any EP curve that has been inputted to the database.

  3. Factors that affect large subunit ribosomal DNA amplicon sequencing studies of fungal communities: classification method, primer choice, and error.

    Directory of Open Access Journals (Sweden)

    Teresita M Porter

    Full Text Available Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: (1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); (2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and (3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50-100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys.

  4. Block-Based Motion Estimation Using the Pixelwise Classification of the Motion Compensation Error

    Directory of Open Access Journals (Sweden)

    Jun-Yong Kim

    2012-11-01

    Full Text Available In this paper, we propose block-based motion estimation (ME) algorithms based on the pixelwise classification of two different motion compensation (MC) errors: (1) the displaced frame difference (DFD) and (2) the brightness constancy constraint term (BCCT). Block-based ME has drawbacks such as unreliable motion vectors (MVs) and blocking artifacts, especially at object boundaries. The proposed block matching algorithm (BMA)-based methods attempt to reduce artifacts in object-boundary blocks caused by the incorrect assumption of a single rigid (translational) motion. They yield more appropriate MVs in boundary blocks under the assumption that there exist up to three nonoverlapping regions with different motions. The proposed algorithms also reduce the blocking artifact of the conventional BMA, in which overlapped-block motion compensation (OBMC) is employed especially in the selected regions to prevent the degradation of details. Experimental results with several test sequences show the effectiveness of the proposed algorithms.
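
    For orientation, a minimal exhaustive-search block matching sketch built on the DFD-style matching error that such algorithms start from (the 8x8 block size, +/-7 search range and toy frames are assumptions; the pixelwise classification and OBMC refinements of the paper are not included):

```python
import numpy as np

def block_matching(ref, cur, block=8, search=7):
    """Exhaustive-search block matching: for each block of the current frame,
    find the displacement in the reference frame minimizing the sum of
    absolute differences (the DFD-style matching error)."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue                       # candidate outside the frame
                    sad = np.abs(cur_blk - ref[y:y + block, x:x + block]).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

# Toy usage: a frame shifted by (2, 3) pixels should yield motion vector [2 3]
ref = np.random.rand(32, 32)
cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))
print(block_matching(ref, cur)[1, 1])                  # interior block, expect [2 3]
```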

  5. Errors

    International Nuclear Information System (INIS)

    Data indicate that about one half of all errors are skill based. Yet, most of the emphasis is focused on correcting rule- and knowledge-based errors, leading to more programs, supervision, and training. None of this corrective action applies to the 'mental lapse' error. Skill-based errors are usually committed in performing a routine and familiar task. Workers went to the wrong unit or component, or got something else wrong. Too often some of these errors result in reactor scrams, turbine trips, or other unwanted actuations. The workers do not need more programs, supervision, or training. They need to know when they are vulnerable and they need to know how to think. Self-check can prevent errors, but only if it is practiced intellectually, and with commitment. Skill-based errors are usually the result of using habits and senses instead of using our intellect. Even human factors can play a role in the cause of an error on a routine task. Personal injury also is usually an error. Sometimes they are called accidents, but most accidents are the result of inappropriate actions. Whether we can explain it or not, cause and effect were there. A proper attitude toward risk, and a proper attitude toward danger, is requisite to avoiding injury. Many personal injuries can be avoided just by attitude. Errors, based on personal experience and interviews, examines the reasons for the 'mental lapse' errors, and why some of us become injured. The paper offers corrective action without more programs, supervision, and training. It does ask you to think differently. (author)

  6. Further results on fault-tolerant distributed classification using error-correcting codes

    Science.gov (United States)

    Wang, Tsang-Yi; Han, Yunghsiang S.; Varshney, Pramod K.

    2004-04-01

    In this paper, we consider the distributed classification problem in wireless sensor networks. The DCFECC-SD approach employing the binary code matrix has recently been proposed to cope with the errors caused by both sensor faults and the effect of fading channels. The DCFECC-SD approach extends the DCFECC approach by using soft decision decoding to combat channel fading. However, the performance of a system employing the binary code matrix could be degraded if the distance between different hypotheses cannot be kept large. This situation could happen when the number of sensors is small or the number of hypotheses is large. In this paper, we design the DCFECC-SD approach employing the D-ary code matrix, where D>2. Simulation results show that the performance of the DCFECC-SD approach employing the D-ary code matrix is better than that of the DCFECC-SD approach employing the binary code matrix. Performance evaluation of DCFECC-SD using different numbers of bits of local decision information is also provided when the total channel energy output from each sensor node is fixed.

  7. New classification of operators' human errors at overseas nuclear power plants and preparation of easy-to-use case sheets

    International Nuclear Information System (INIS)

    At nuclear power plants, plant operators examine other human error cases, including those that occurred at other plants, so that they can learn from such experiences and avoid making similar errors again. Although there is little data available on errors made at domestic plants, nuclear operators in foreign countries are reporting even minor irregularities and signs of faults, and a large amount of data on human errors at overseas plants could be collected and examined. However, these overseas data have not been used effectively because most of them are poorly organized or not properly classified and are often hard to understand. Accordingly, we carried out a study on the cases of human errors at overseas power plants in order to help plant personnel clearly understand overseas experiences and avoid repeating similar errors. The study produced the following results, which were put to use at nuclear power plants and other facilities. (1) 'One-Point-Advice' refers to a practice where a leader gives pieces of advice to his team of operators in order to prevent human errors before starting work. Based on this practice and those used in the aviation industry, we have developed a new method of classifying human errors that consists of four basic actions and three applied actions. (2) We used this new classification method to classify human errors made by operators at overseas nuclear power plants. The results show that the most frequent errors were caused not by operators themselves but by insufficient team monitoring, for which superiors and/or their colleagues were responsible. We therefore analyzed and classified possible factors contributing to insufficient team monitoring, and demonstrated that such frequent errors have also occurred at domestic power plants. (3) Using the new classification scheme, we prepared human error case sheets that are easy for plant personnel to understand. The sheets are designed to make data more understandable and easier to remember.

  8. Systematic classification of unseeded batch crystallization systems for achievable shape and size analysis

    Science.gov (United States)

    Acevedo, David; Nagy, Zoltan K.

    2014-05-01

    The purpose of the current work is to develop a systematic classification scheme for crystallization systems considering simultaneous size and shape variations, and to study the effect of temperature profiles on the achievable final shape of crystals for various crystallization systems. A classification method is proposed based on the simultaneous consideration of the effect of temperature profiles on nucleation and growth rates of two different characteristic crystal dimensions. Hence the approach provides a direct indication of the extent to which crystal shape may be controlled for a particular system class by manipulating the supersaturation. A multidimensional population balance model (PBM) was implemented for unseeded crystallization processes of four different compounds. The effect of the interplay between the nucleation and growth mechanisms on the final aspect ratio (AR) was investigated, and it was shown that for nucleation-dominated systems the AR is independent of the supersaturation profile. The simulation results, confirmed experimentally, also show that most crystallization systems tend to achieve an equilibrium shape; hence the variation in the aspect ratio that can be achieved by manipulating the supersaturation is limited, in particular when nucleation is also taken into account as a competing phenomenon.

  9. Stochastic analysis of multiple-passband spectral classifications systems affected by observation errors

    Science.gov (United States)

    Tsokos, C. P.

    1980-01-01

    The classification of targets viewed by a pushbroom type multiple band spectral scanner by algorithms suitable for implementation in high speed online digital circuits is considered. A class of algorithms suitable for use with a pipelined classifier is investigated through simulations based on observed data from agricultural targets. It is shown that time distribution of target types is an important determining factor in classification efficiency.

  10. Medication errors in outpatient setting of a tertiary care hospital: classification and root cause analysis

    Directory of Open Access Journals (Sweden)

    Sunil Basukala

    2015-12-01

    Conclusions: Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Hence, a focus on easy-to-use and inexpensive techniques for medication error reduction should be adopted to have the greatest impact. [Int J Basic Clin Pharmacol 2015; 4(6): 1235-1240]

  11. Time Series Analysis of Temporal Data by Classification using Mean Absolute Error

    Directory of Open Access Journals (Sweden)

    Swati Soni

    2012-09-01

    Full Text Available There has been a lot of research on the application of data mining and knowledge discovery technologies to the financial market prediction area. However, most of the existing research has focused on mining structured or numeric data such as financial reports, historical quotes, etc. Another kind of data source – unstructured data such as financial news articles, comments on financial markets by experts, etc., which is usually of much higher availability – seems to be neglected due to the inconvenience of representing it as numeric feature vectors for further application of data mining algorithms. A new hybrid system has been developed for this purpose. It retrieves financial news articles from the internet periodically and uses classification mining techniques to categorize those articles into different categories according to their expected effects on market behaviors; the results are then compared with the real market data. This classification, with a 10-fold cross validation combination of algorithms, can be applied to financial market prediction in the future.

  12. Noise in remote-sensing systems - The effect on classification error

    Science.gov (United States)

    Landgrebe, D. A.; Malaret, E.

    1986-01-01

    Several types of noise in remote-sensing systems are treated. The purpose is to provide enhanced understanding of the relationship of noise sources to both analysis results and sensor design. The context of optical sensors and spectral pattern recognition analysis methods is used to enable tractability for quantitative results. First, the concept of multispectral classification is reviewed. Next, stochastic models are discussed for both signals and noise, including thermal, shot and quantization noise along with atmospheric effects. A model enabling the study of the combined effect of these sources is presented, and a system performance index is defined. Theoretical results showing the interrelated effects of the noise sources on system performance are given. Results of simulations using the system model are presented for several values of system parameters, using some noise parameters of the Thematic Mapper scanner as an illustration. Results show the relative importance of each of the noise sources on system performance, including how sensor noise interacts with atmospheric effects to degrade accuracy.

  13. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  14. Medication errors: classification of seriousness, type, and of medications involved in the reports from a university teaching hospital

    Directory of Open Access Journals (Sweden)

    Gabriella Rejane dos Santos Dalmolin

    2013-12-01

    Full Text Available Medication errors can be frequent in hospitals; these errors are multidisciplinary and occur at various stages of the drug therapy. The present study evaluated the seriousness, the type and the drugs involved in medication errors reported at the Hospital de Clínicas de Porto Alegre. We analyzed written error reports for 2010-2011. The sample consisted of 165 reports. The errors identified were classified according to seriousness, type and pharmacological class. 114 reports were categorized as actual errors (medication errors and 51 reports were categorized as potential errors. There were more medication error reports in 2011 compared to 2010, but there was no significant change in the seriousness of the reports. The most common type of error was prescribing error (48.25%. Errors that occurred during the process of drug therapy sometimes generated additional medication errors. In 114 reports of medication errors identified, 122 drugs were cited. The reflection on medication errors, the possibility of harm resulting from these errors, and the methods for error identification and evaluation should include a broad perspective of the aspects involved in the occurrence of errors. Patient safety depends on the process of communication involving errors, on the proper recording of information, and on the monitoring itself.

  15. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic one, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
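
    A minimal sketch of the harmonic product spectrum pitch estimator mentioned above (the sampling rate, number of harmonics and test tone are assumptions, not details from the paper):

```python
import numpy as np

def hps_pitch(signal, fs, n_harmonics=3):
    """Estimate pitch via the harmonic product spectrum: multiply the magnitude
    spectrum with its copies downsampled by 2, 3, ... and pick the peak bin."""
    spectrum = np.abs(np.fft.rfft(signal))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        downsampled = spectrum[::h]              # spectrum compressed by factor h
        hps[:downsampled.size] *= downsampled
    peak_bin = int(np.argmax(hps[1:]) + 1)       # skip the DC bin
    return peak_bin * fs / len(signal)

# Toy usage: a 220 Hz tone with a few harmonics should come out near 220.0
fs = 16000
t = np.arange(fs) / fs                            # one second of samples
x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in (1, 2, 3))
print(hps_pitch(x, fs))
```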

  16. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Science.gov (United States)

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  17. Use of Total Precipitable Water Classification of A Priori Error and Quality Control in Atmospheric Temperature and Water Vapor Sounding Retrieval

    Institute of Scientific and Technical Information of China (English)

    Eun-Han KWON; Jun LI; Jinlong LI; B. J. SOHN; Elisabeth WEISZ

    2012-01-01

    This study investigates the use of dynamic a priori error information according to atmospheric moistness and the use of quality controls in temperature and water vapor profile retrievals from hyperspectral infrared (IR) sounders. Temperature and water vapor profiles are retrieved from Atmospheric InfraRed Sounder (AIRS) radiance measurements by applying a physical iterative method using regression retrieval as the first guess. Based on the dependency of first-guess errors on the degree of atmospheric moistness, the a priori first-guess errors classified by total precipitable water (TPW) are applied in the AIRS physical retrieval procedure. Compared to the retrieval results from a fixed a priori error, boundary layer moisture retrievals appear to be improved via TPW classification of a priori first-guess errors. Six quality control (QC) tests, which check non-converged or bad retrievals, large residuals, high terrain and desert areas, and large temperature and moisture deviations from the first-guess regression retrieval, are also applied in the AIRS physical retrievals. Significantly large errors are found for the retrievals rejected by these six QCs, and the retrieval errors are substantially reduced via QC over land, which suggests the usefulness and high impact of the QCs, especially over land. In conclusion, the use of dynamic a priori error information according to atmospheric moistness, and the use of appropriate QCs dealing with the geographical information and the deviation from the first guess as well as the conventional inverse performance, are suggested to improve temperature and moisture retrievals and their applications.

  18. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
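
    A generic way to write the regularized objective described above, assuming a linear classifier with a squared-norm complexity term (illustrative notation; the paper's exact formulation and entropy estimator may differ):

```latex
% Illustrative regularized objective for a linear classifier with weights w:
\[
\min_{\mathbf{w}} \;
\underbrace{\sum_{i=1}^{n} \ell\!\left(y_i, \mathbf{w}^{\top}\mathbf{x}_i\right)}_{\text{classification error}}
\;+\; \lambda_1 \underbrace{\|\mathbf{w}\|_2^{2}}_{\text{complexity}}
\;-\; \lambda_2 \underbrace{I\!\left(y;\, \mathbf{w}^{\top}\mathbf{x}\right)}_{\text{mutual information}}
\]
% The mutual information term is maximized (it enters with a negative sign) and,
% as in the abstract, I(y; w^T x) is estimated via entropy estimation.
```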

  19. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi `SWARS'

    Science.gov (United States)

    Kumar, Somesh; Pratap Singh, Manu; Goel, Rajkumar; Lavania, Rajesh

    2013-12-01

    In this work, the performance of a feedforward neural network with a descent gradient of distributed error and the genetic algorithm (GA) is evaluated for the recognition of handwritten 'SWARS' of Hindi curve script. The performance index for the feedforward multilayer neural networks is considered here with distributed instantaneous unknown error, i.e. a different error for each layer. The objective of the GA is to make the search process more efficient in determining the optimal weight vectors from the population. The GA is applied with the distributed error. The fitness function of the GA is considered as the mean of the squared distributed error that is different for each layer. Hence convergence is obtained only when the minimum of the different errors is determined. It has been analysed that the proposed method of a descent gradient of distributed error with the GA, known as the hybrid distributed evolutionary technique for the multilayer feedforward neural network, performs better in terms of accuracy, epochs and the number of optimal solutions for the given training and test pattern sets of the pattern recognition problem.

  20. An Incremental Learning Vector Quantization Algorithm Based on Pattern Density and Classification Error Ratio

    Institute of Scientific and Technical Information of China (English)

    李娟; 王宇平

    2015-01-01

    As a simple and mature classification method, the K nearest neighbor algorithm (KNN) has been widely applied to many fields such as data mining, pattern recognition, etc. However, it faces serious challenges such as a huge computation load, high memory consumption and an intolerable runtime burden when the processed dataset is large. To deal with the above problems, based on the single-layer competitive learning of the incremental learning vector quantization (ILVQ) network, we propose a new incremental learning vector quantization method that merges together pattern density and classification error rate. By adopting a series of new competitive learning strategies, the proposed method can obtain an incremental prototype set from the original training set quickly by learning, inserting, merging, splitting and deleting these representative points adaptively. The proposed method can achieve a higher reduction efficiency while simultaneously guaranteeing a higher classification accuracy for large-scale datasets. In addition, we improve the classical nearest neighbor classification algorithm by absorbing the pattern density and classification error ratio of the final prototype neighborhood set into the classification decision criteria. The proposed method can generate an effective representative prototype set after learning the training dataset by a single-pass scan, and hence has strong generality. Experimental results show that the method not only can maintain and even improve the classification accuracy and reduction ratio, but also has the advantage of rapid prototype acquisition and classification over its counterpart algorithms.
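
    A much-simplified, assumption-heavy sketch of the prototype-acquisition idea (insert a prototype on misclassification, attract the winning prototype otherwise, classify by the nearest prototype); it omits the pattern-density and classification-error-ratio criteria as well as the merging, splitting and deletion steps of the actual method:

```python
import numpy as np

class IncrementalPrototypeClassifier:
    """Toy ILVQ-flavoured learner: single pass over the data, inserting a
    misclassified sample as a new prototype and nudging the winner otherwise.
    Classification is 1-nearest-prototype."""

    def __init__(self, lr=0.1):
        self.lr = lr
        self.protos, self.labels = [], []

    def _nearest(self, x):
        return int(np.argmin([np.linalg.norm(x - p) for p in self.protos]))

    def partial_fit(self, x, y):
        if not self.protos:
            self.protos.append(np.array(x, dtype=float))
            self.labels.append(y)
            return
        i = self._nearest(x)
        if self.labels[i] == y:
            self.protos[i] += self.lr * (x - self.protos[i])   # attract the winner
        else:
            self.protos.append(np.array(x, dtype=float))       # insert a new prototype
            self.labels.append(y)

    def predict(self, x):
        return self.labels[self._nearest(x)]

# Toy usage on two Gaussian blobs: far fewer prototypes than training samples
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = IncrementalPrototypeClassifier()
for xi, yi in zip(X, y):
    clf.partial_fit(xi, yi)
acc = sum(clf.predict(xi) == yi for xi, yi in zip(X, y)) / len(y)
print(len(clf.protos), acc)
```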

  1. Supervised, Multivariate, Whole-brain Reduction Did Not Help to Achieve High Classification Performance in Schizophrenia Research

    Directory of Open Access Journals (Sweden)

    Eva Janousova

    2016-08-01

    Full Text Available We examined how penalized linear discriminant analysis with resampling, which is a supervised, multivariate, whole-brain reduction technique, can help schizophrenia diagnostics and research. In an experiment with magnetic resonance brain images of 52 first-episode schizophrenia patients and 52 healthy controls, this method allowed us to select brain areas relevant to schizophrenia, such as the left prefrontal cortex, the anterior cingulum, the right anterior insula, the thalamus and the hippocampus. Nevertheless, the classification performance based on such reduced data was not significantly better than the classification of data reduced by mass univariate selection using a t-test or unsupervised multivariate reduction using principal component analysis. Moreover, we found no important influence of the type of imaging features, namely local deformations or grey matter volumes, or of the classification method, specifically linear discriminant analysis or linear support vector machines, on the classification results. However, we ascertained a significant effect of the cross-validation setting on classification performance, as classification results were overestimated even though the resampling was performed during the selection of brain imaging features. Therefore, it is critically important to perform cross-validation in all steps of the analysis (not only during classification) when there is no external validation set, in order to avoid optimistically biasing the results of classification studies.

  2. Comparison of maintenance workers' human error events at United States and domestic nuclear power plants: proposal of a classification method for insufficient knowledge and experience and results of its application

    International Nuclear Information System (INIS)

    Human errors by maintenance workers in U.S. nuclear power plants were compared with those in Japanese nuclear power plants for the same period in order to identify the characteristics of such errors. As for U.S. events, cases which occurred during 2006 were selected from the Nuclear Information Database of the Institute of Nuclear Safety System, while Japanese cases that occurred during the same period were extracted from the Nuclear Information Archives (NUCIA) owned by JANTI. The most common cause of human errors was 'insufficient knowledge or experience', accounting for about 40% of U.S. cases and 50% or more of cases in Japan. To break down 'insufficient knowledge', we classified the contents of knowledge into five categories, 'method', 'nature', 'reason', 'scope' and 'goal', and classified the level of knowledge into four categories: 'known', 'comprehended', 'applied' and 'analytic'. By using this classification, the patterns of combination of each item of the content and the level of knowledge were compared. In the U.S. cases, errors due to insufficient knowledge of 'nature' and insufficient knowledge of 'method' were prevalent, while the three other items, 'reason', 'scope' and 'goal', which involve work conditions among the contents of knowledge, rarely occurred. In Japan, errors arising from 'nature' not being comprehended were rather prevalent, while other cases were distributed evenly over all categories including the work conditions. For addressing 'insufficient knowledge or experience', we consider that the following approaches are valid: according to the knowledge level required for the work, the reflection of knowledge in procedures or education materials, training and confirmation of understanding level, virtual practice and instruction from experience should be implemented. As for the knowledge on the work conditions, it is necessary to enter the work conditions in the procedures and education materials while conducting training or education. (author)

  3. Collection and classification of human error and human reliability data from Indian nuclear power plants for use in PSA

    International Nuclear Information System (INIS)

    Complex systems such as NPPs involve a large number of Human Interactions (HIs) in every phase of plant operations. Human Reliability Analysis (HRA), in the context of a PSA, attempts to model the HIs and evaluate/predict their impact on safety and reliability using human error/human reliability data. A large number of HRA techniques have been developed for modelling and integrating HIs into PSA, but there is a significant lack of HRA data. In the face of insufficient data, human reliability analysts have had to resort to expert judgement methods in order to extend the insufficient data sets. In this situation, the generation of data from plant operating experience assumes importance. The development of an HRA data bank for Indian nuclear power plants was therefore initiated as part of the programme of work on HRA. Later, with the establishment of the coordinated research programme (CRP) on collection of human reliability data and use in PSA by IAEA in 1994-95, the development was carried out under the aegis of the IAEA research contract No. 8239/RB. The work described in this report covers the activities of development of a data taxonomy and a human error reporting form (HERF) based on it, data structuring, review and analysis of plant event reports, collection of data on human errors, analysis of the data and calculation of human error probabilities (HEPs). Analysis of plant operating experience does yield a good amount of qualitative data, but obtaining quantitative data on human reliability in the form of HEPs is seen to be more difficult. The difficulties have been highlighted and some ways to bring about improvements in the data situation have been discussed. The implementation of a data system for HRA is described and useful features that can be incorporated in future systems are also discussed. (author)

  4. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    Science.gov (United States)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  5. Discriminative Structured Dictionary Learning for Image Classification

    Institute of Scientific and Technical Information of China (English)

    王萍; 兰俊花; 臧玉卫; 宋占杰

    2016-01-01

    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary’s discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages the samples from the same class to have similar sparse codes, and the samples from different classes to have dissimilar sparse codes. The solution to the objective function is achieved by employing a feature-sign search algorithm and Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.
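    The abstract does not reproduce the exact objective function, but a schematic form of such a jointly learned dictionary-plus-classifier objective may help fix ideas. The symbols below are generic, not the authors' notation (Y training samples, D dictionary, X sparse codes, W linear classifier, H label matrix), and the third term only indicates an "inhomogeneous representation" penalty on codes that use atoms associated with other classes; its exact form in the paper is not shown here.

```latex
\min_{D,\,W,\,X}\;
\underbrace{\lVert Y - DX \rVert_F^2}_{\text{reconstruction error}}
\;+\; \alpha\, \underbrace{\lVert H - WX \rVert_F^2}_{\text{classification error}}
\;+\; \beta\, \mathcal{E}_{\mathrm{inh}}(X)
\;+\; \lambda \lVert X \rVert_1
```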

  6. Sparse group lasso and high dimensional multinomial classification

    DEFF Research Database (Denmark)

    Vincent, Martin; Hansen, N.R.

    2014-01-01

    group lasso classifier. On three different real data examples the multinomial group lasso clearly outperforms multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. An implementation of the multinomial sparse group lasso...

  7. Sampling method for monitoring classification of cultivated land in county area based on Kriging estimation error%基于Kriging估计误差的县域耕地等级监测布样方法

    Institute of Scientific and Technical Information of China (English)

    杨建宇; 汤赛; 郧文聚; 张超; 朱德海; 陈彦清

    2013-01-01

    China, an agricultural country, has a large population but not enough cultivated land. As of 2011, the cultivated land per capita was 1.38 mu (0.09 ha), only 40% of the world average, and the situation is worsening with industrialization and urbanization. The next task for the Ministry of Land and Resources is dynamic monitoring of cultivated land classification, in which a number of counties are sampled; in each county, a sample-based monitoring network is established that reflects the distribution of cultivated land classification in the county, its trend, and estimates for non-sampled locations. Due to the correlation among samples, traditional methods such as simple random sampling, stratified sampling, and systematic sampling are insufficient to achieve this goal. Therefore, in this paper we introduce a spatial sampling method based on the Kriging estimation error. In our case, natural classifications of cultivated land identified in the last Land Resource Survey and Cultivated Land Evaluation are regarded as the true values, and classifications of non-sampled cultivated land are predicted by interpolating the sample data. Finally, the RMSE (root-mean-square error) of the Kriging interpolation is redefined to measure the performance of the network. Specifically, five steps are needed to build the monitoring network. First, the optimal sample size is determined by analyzing how accuracy varies with the number of samples. Second, the basic monitoring network is set up using square grids; a suitable grid size can be chosen by comparing candidate grid sizes and the corresponding RMSEs from Kriging interpolation of the sample data. Because some grid centers do not overlap the cultivated land area, the third step is to add points near the grid centers to create the global monitoring network. These points are selected from centroids of cultivated land spots which are closest to the centers and inside the searching circles around the
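    The grid-size selection step described above can be sketched as follows. This is a minimal illustration only: scikit-learn's GaussianProcessRegressor stands in for ordinary Kriging, and the coordinates, land grades and candidate grid sizes are hypothetical rather than the paper's data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# Hypothetical cultivated-land centroids (x, y) in metres and their class grades.
coords = rng.uniform(0, 10_000, size=(500, 2))
grade = np.sin(coords[:, 0] / 2_000) + np.cos(coords[:, 1] / 3_000)

def grid_sample(coords, spacing):
    """Pick one observed point per square grid cell (the one nearest the cell centre)."""
    cells = {}
    for i, (x, y) in enumerate(coords):
        cx, cy = (x // spacing + 0.5) * spacing, (y // spacing + 0.5) * spacing
        d = (x - cx) ** 2 + (y - cy) ** 2
        key = (x // spacing, y // spacing)
        if key not in cells or d < cells[key][0]:
            cells[key] = (d, i)
    return np.array([i for _, i in cells.values()])

for spacing in (500, 1_000, 2_000):          # candidate grid sizes (m)
    idx = grid_sample(coords, spacing)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1_000.0), alpha=1e-6)
    gp.fit(coords[idx], grade[idx])
    rest = np.setdiff1d(np.arange(len(coords)), idx)
    rmse = np.sqrt(np.mean((gp.predict(coords[rest]) - grade[rest]) ** 2))
    print(f"grid {spacing:>5} m: {len(idx):3d} samples, hold-out RMSE {rmse:.3f}")
```

    The spacing with the lowest hold-out RMSE for an acceptable number of samples would be the one carried forward into the later steps.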

  8. Rhythm Analysis by Heartbeat Classification in the Electrocardiogram (Review article of the research achievements of the members of the Centre of Biomedical Engineering, Bulgarian Academy of Sciences)

    Directory of Open Access Journals (Sweden)

    Irena Jekova

    2009-08-01

    The morphological and rhythm analysis of the electrocardiogram (ECG) is based on ventricular beat detection, measurement of wave parameters such as amplitudes, widths, polarities, intervals and the relations between them, and a subsequent classification supporting the diagnostic process. A number of algorithms for detection and classification of QRS complexes have been developed by researchers at the Centre of Biomedical Engineering - Bulgarian Academy of Sciences, and are reviewed here. Combined criteria have been introduced dealing with QRS areas and amplitudes, waveshapes evaluated by steep slopes and sharp peaks, vectorcardiographic (VCG) loop descriptors, and RR-interval irregularities. Algorithms have been designed for application on a single ECG lead, on a synthesized lead derived from multichannel synchronous recordings, or for simultaneous multilead analysis. Some approaches are based on template matching or cross-correlation, or rely on continuous updating of adaptive thresholds. Various beat classification methods have been designed involving discriminant analysis, K-nearest neighbors, fuzzy sets, genetic algorithms, neural networks, etc. The efficiency of the developed methods has been assessed using internationally recognized arrhythmia ECG databases with annotated beats and rhythm disturbances. In general, values of specificity and sensitivity competitive with those reported in the literature have been achieved.

  9. Classification with High-Dimensional Sparse Samples

    CERN Document Server

    Huang, Dayu

    2012-01-01

    The task of the binary classification problem is to determine which of two distributions has generated a length-$n$ test sequence. The two distributions are unknown; however two training sequences of length $N$, one from each distribution, are observed. The distributions share an alphabet of size $m$, which is significantly larger than $n$ and $N$. How does $N,n,m$ affect the probability of classification error? We characterize the achievable error rate in a high-dimensional setting in which $N,n,m$ all tend to infinity and $\max\{n,N\}=o(m)$. The results are: * There exists an asymptotically consistent classifier if and only if $m=o(\min\{N^2,Nn\})$. * The best achievable probability of classification error decays as $-\log(P_e)=J \min\{N^2, Nn\}(1+o(1))/m$ with $J>0$ (shown by achievability and converse results). * A weighted coincidence-based classifier has a non-zero generalized error exponent $J$. * The $\ell_2$-norm based classifier has a zero generalized error exponent.

  10. Achieving the "triple aim" for inborn errors of metabolism: a review of challenges to outcomes research and presentation of a new practice-based evidence framework.

    Science.gov (United States)

    Potter, Beth K; Chakraborty, Pranesh; Kronick, Jonathan B; Wilson, Kumanan; Coyle, Doug; Feigenbaum, Annette; Geraghty, Michael T; Karaceper, Maria D; Little, Julian; Mhanni, Aizeddin; Mitchell, John J; Siriwardena, Komudi; Wilson, Brenda J; Syrowatka, Ania

    2013-06-01

    Across all areas of health care, decision makers are in pursuit of what Berwick and colleagues have called the "triple aim": improving patient experiences with care, improving health outcomes, and managing health system impacts. This is challenging in a rare disease context, as exemplified by inborn errors of metabolism. There is a need for evaluative outcomes research to support effective and appropriate care for inborn errors of metabolism. We suggest that such research should consider interventions at both the level of the health system (e.g., early detection through newborn screening, programs to provide access to treatments) and the level of individual patient care (e.g., orphan drugs, medical foods). We have developed a practice-based evidence framework to guide outcomes research for inborn errors of metabolism. Focusing on outcomes across the triple aim, this framework integrates three priority themes: tailoring care in the context of clinical heterogeneity; a shift from "urgent care" to "opportunity for improvement"; and the need to evaluate the comparative effectiveness of emerging and established therapies. Guided by the framework, a new Canadian research network has been established to generate knowledge that will inform the design and delivery of health services for patients with inborn errors of metabolism and other rare diseases. PMID:23222662

  11. Research on Controller Errors Classification and Analysis Model Based on Information Processing Theory%基于信息加工的管制人误分类分析模型研究

    Institute of Scientific and Technical Information of China (English)

    罗晓利; 秦凤姣; 孟斌; 李海龙

    2015-01-01

    On the basis of a comparison and analysis of existing human error identification models, and in the light of controllers' task characteristics, this paper constructs an air traffic controller (ATCo) error classification and analysis system model that integrates the advantages of different human error analysis models and is grounded in cognitive psychology. The model takes into account the controller-related conditions during task execution, and controller errors can be analyzed in terms of "cause and human error type identification", "information processing", and "internal and psychological mechanism". An unsafe ATC incident case is then investigated using the model. The results show that the model can be used to recognize, analyze and prevent controller errors.%在比较分析已有人误因素识别模型的基础上,根据管制员任务特点,以认知心理学为基础,融合不同人误分析模型的优势成分,构建了基于信息加工的管制人误分类分析系统模型。该模型考虑了管制员任务执行过程中的相关条件因素,从"诱因及人误类型辨识"、"信息加工过程"、"内部和心理机制"进行管制员人误致因的辨识分析。最后,采用该模型对一起空管不安全事件进行了分析,结果表明,本模型可用于管制人误的有效识别、分类分析和预防。

  12. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    Science.gov (United States)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing—such as model selection or attribute selection—play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.

  13. Análisis y Clasificación de Errores Cometidos por Alumnos de Secundaria en los Procesos de Sustitución Formal, Generalización y Modelización en Álgebra (Secondary Students' Error Analysis and Classification in Formal Substitution, Generalization and Modelling Process in Algebra)

    Directory of Open Access Journals (Sweden)

    Raquel M. Ruano

    2008-01-01

    Presentamos un estudio con alumnos de educación secundaria sobre tres procesos específicos del lenguaje algebraico: la sustitución formal, la generalización y la modelización. A partir de las respuestas a un cuestionario, realizamos una clasificación de los errores cometidos y se analizan sus posibles orígenes. Finalmente, formulamos algunas consecuencias didácticas que se derivan de estos resultados. We present a study with secondary students about three specific processes of algebraic language: formal substitution, generalization, and modelling. Using a test, we develop a classification of the students' errors and analyze their possible origins. Finally, we present some didactical conclusions drawn from the results.

  14. Hyperspectral image classification based on spatial and spectral features and sparse representation

    Institute of Scientific and Technical Information of China (English)

    Yang Jing-Hui; Wang Li-Guo; Qian Jin-Xi

    2014-01-01

    To address the low classification accuracy and poor utilization of spatial information in traditional hyperspectral image classification methods, we propose a new hyperspectral image classification method based on Gabor spatial texture features, nonparametric weighted spectral features, and the sparse representation classification method (Gabor–NWSF and SRC), abbreviated GNWSF–SRC. The proposed GNWSF–SRC method first combines the Gabor spatial features and nonparametric weighted spectral features to describe the hyperspectral image, and then applies the sparse representation method. Finally, the classification is obtained by analyzing the reconstruction error. We use the proposed method to process two typical hyperspectral data sets with different percentages of training samples. Theoretical analysis and simulation demonstrate that the proposed method improves the classification accuracy and Kappa coefficient compared with traditional classification methods and achieves better classification performance.
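    The Gabor and nonparametric weighted spectral features are specific to the paper, but the final step (classification by class-wise reconstruction error over a sparse code) follows the standard sparse representation classification (SRC) pattern. A minimal sketch with hypothetical feature vectors, using orthogonal matching pursuit for the sparse coding step:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(x, D, labels, n_nonzero=10):
    """Classify x by the class whose training atoms best reconstruct it."""
    # Sparse code of x over the dictionary of all training samples (columns of D).
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, x)
    alpha = omp.coef_
    best_cls, best_err = None, np.inf
    for cls in np.unique(labels):
        mask = labels == cls
        residual = x - D[:, mask] @ alpha[mask]   # keep only this class's coefficients
        err = np.linalg.norm(residual)
        if err < best_err:
            best_cls, best_err = cls, err
    return best_cls

# Hypothetical data: D has one (normalized) training feature vector per column.
rng = np.random.default_rng(1)
D = rng.normal(size=(60, 200))
D /= np.linalg.norm(D, axis=0)
labels = rng.integers(0, 4, size=200)
x = D[:, 5] + 0.05 * rng.normal(size=60)        # noisy copy of a training sample
print(src_predict(x, D, labels))
```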

  15. Error Analysis in Mathematics Education.

    Science.gov (United States)

    Radatz, Hendrik

    1979-01-01

    Five types of errors in an information-processing classification are discussed: language difficulties; difficulties in obtaining spatial information; deficient mastery of prerequisite skills, facts, and concepts; incorrect associations; and application of irrelevant rules. (MP)

  16. The Research and Application of the Multi-classification Algorithm of Error-Correcting Codes Based on Support Vector Machine%基于SVM的纠错编码多分类算法的研究与应用

    Institute of Scientific and Technical Information of China (English)

    祖文超; 苑津莎; 王峰; 刘磊

    2012-01-01

    In order to improve the accuracy of transformer fault diagnosis, a multiclass classification algorithm that combines error-correcting codes with SVM is proposed. A mathematical model of transformer fault diagnosis is set up according to support vector machine theory. First, the error-correcting code matrix is used to construct a number of mutually independent sub-SVMs, so that the accuracy of the classification model can be improved. Finally, the dissolved gases in the transformer oil (DGA) are used as the training and testing samples of the error-correcting-code SVM to realize transformer fault diagnosis, and the algorithm is also checked against UCI data. The multiclass classification algorithm has been verified with VS2008 combined with Libsvm, and the results show that the method achieves high classification accuracy.%为了提高变压器故障诊断的准确率,提出了一种基于纠错编码和支持向量机相结合的多分类算法,根据SVM理论建立变压器故障诊断数学模型,首先基于纠错编码矩阵构造出若干个互不相关的子支持向量机,以提高分类模型的分类准确率。最后把变压器油中溶解气体(DGA)作为纠错编码支持向量机的训练以及测试样本,实现变压器的故障诊断,同时用UCI数据对该算法进行验证。通过VS2008和Libsvm相结合对其进行验证,结果表明该方法具有很高的分类精度。
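    The dissolved-gas data and the exact code matrix used in the paper are not reproduced here, but the overall scheme (an error-correcting output code over binary SVMs) can be sketched with scikit-learn's OutputCodeClassifier; the DGA feature vectors and fault classes below are hypothetical placeholders.

```python
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical DGA features (e.g. H2, CH4, C2H6, C2H4, C2H2 concentrations) for 4 fault classes.
X = rng.normal(size=(400, 5)) + np.repeat(np.arange(4), 100)[:, None]
y = np.repeat(np.arange(4), 100)                 # 0: normal, 1: overheating, 2: PD, 3: arcing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Each class gets a codeword; one binary SVM is trained per code bit.
ecoc = OutputCodeClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("test accuracy:", ecoc.score(X_te, y_te))
```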

  17. Modulation classification based on spectrogram

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The aim of modulation classification (MC) is to identify the modulation type of a communication signal. It plays an important role in many cooperative or noncooperative communication applications. Three spectrogram-based modulation classification methods are proposed. Their recognition scope and performance are investigated and evaluated by theoretical analysis and extensive simulation studies. The method based on moment-like features is robust to frequency offset, while the other two, which make use of principal component analysis (PCA) with different transformation inputs, can achieve satisfactory accuracy even at low SNR (as low as 2 dB). Due to the properties of the spectrogram, the statistical pattern recognition techniques, and the image preprocessing steps, all of our methods are insensitive to unknown phase and frequency offsets, timing errors, and the arrival sequence of symbols.

  18. Pitch Based Sound Classification

    OpenAIRE

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U.

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classif...
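    The harmonic product spectrum step can be sketched as follows; this is a minimal version assuming a mono signal at sample rate fs, and the paper's actual feature set and pitch-error measure are not reproduced.

```python
import numpy as np

def hps_pitch(x, fs, n_harmonics=4, fmin=50.0, fmax=2000.0):
    """Estimate pitch (Hz) with the harmonic product spectrum."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        # Multiply by the spectrum downsampled by h so harmonics line up at f0.
        hps[: len(spectrum) // h] *= spectrum[::h][: len(spectrum) // h]
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(hps[band])]

fs = 16_000
t = np.arange(0, 0.064, 1.0 / fs)
tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 6))  # 220 Hz plus harmonics
print(round(hps_pitch(tone, fs), 1))
```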

  19. An Analysis of Classification of Psychological Verb Errors of Thai Students Learning Chinese%泰国留学生汉语心理动词偏误类型分析

    Institute of Scientific and Technical Information of China (English)

    张勇

    2015-01-01

    在收集到大量的偏误语料基础上,通过定性和定量的方法,对泰国留学生学习汉语心理动词出现的偏误类型进行研究,通过分析笔者发现主要存在两类偏误情况,一类是词语偏误,一类是搭配偏误,本文主要是研究第一类词语偏误,主要是心理动词的遗漏、误加和误代三种类型。%This paper presents the author's qualitative and quantitative research on the classification of errors in psychological verbs made by Thai students learning Chinese in China, based on a large corpus of collected examples. According to the author, there are two categories of errors: 1) lexical errors and 2) collocation errors. This paper focuses on the former, i.e. the omission, redundant addition and wrong substitution of psychological verbs.

  20. Earthquake classification, location, and error analysis in a volcanic environment: implications for the magmatic system of the 1989-1990 eruptions at Redoubt Volcano, Alaska

    Science.gov (United States)

    Lahr, J.C.; Chouet, B.A.; Stephens, C.D.; Power, J.A.; Page, R.A.

    1994-01-01

    Determination of the precise locations of seismic events associated with the 1989-1990 eruptions of Redoubt Volcano posed a number of problems, including poorly known crustal velocities, a sparse station distribution, and an abundance of events with emergent phase onsets. In addition, the high relief of the volcano could not be incorporated into the hypoellipse earthquake location algorithm. This algorithm was modified to allow hypocenters to be located above the elevation of the seismic stations. The velocity model was calibrated on the basis of a posteruptive seismic survey, in which four chemical explosions were recorded by eight stations of the permanent network supplemented with 20 temporary seismographs deployed on and around the volcanic edifice. The model consists of a stack of homogeneous horizontal layers; setting the top of the model at the summit allows events to be located anywhere within the volcanic edifice. Detailed analysis of hypocentral errors shows that the long-period (LP) events constituting the vigorous 23-hour swarm that preceded the initial eruption on December 14 could have originated from a point 1.4 km below the crater floor. A similar analysis of LP events in the swarm preceding the major eruption on January 2 shows they also could have originated from a point, the location of which is shifted 0.8 km northwest and 0.7 km deeper than the source of the initial swarm. We suggest this shift in LP activity reflects a northward jump in the pathway for magmatic gases caused by the sealing of the initial pathway by magma extrusion during the last half of December. Volcano-tectonic (VT) earthquakes did not occur until after the initial 23-hour-long swarm. They began slowly just below the LP source and their rate of occurrence increased after the eruption of 01:52 AST on December 15, when they shifted to depths of 6 to 10 km. After January 2 the VT activity migrated gradually northward; this migration suggests northward propagating withdrawal of

  1. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  2. Error estimation for pattern recognition

    CERN Document Server

    Braga Neto, U

    2015-01-01

    This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include: * The latest results on the accuracy of error estimation * Performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches * Highly interactive computer-based exercises and end-of-chapter problems
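    As a small illustration of two of the error estimators the book analyzes, the following sketch (hypothetical data; any classifier could be substituted) contrasts the optimistically biased resubstitution estimate with a cross-validation estimate:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

resub_error = 1.0 - clf.score(X, y)                       # resubstitution: test on training data
cv_error = 1.0 - cross_val_score(DecisionTreeClassifier(random_state=0),
                                 X, y, cv=10).mean()      # 10-fold cross-validation
print(f"resubstitution error: {resub_error:.3f}")         # typically near 0 (optimistic bias)
print(f"cross-validation error: {cv_error:.3f}")
```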

  3. Research on Software Error Behavior Classification Based on Software Failure Chain%基于软件失效链的软件错误行为分类研究

    Institute of Scientific and Technical Information of China (English)

    刘义颖; 江建慧

    2015-01-01

    目前软件应用广泛,对软件可靠性要求越来越高,研究软件的缺陷—错误—失效过程,提前预防失效的发生,减小软件失效带来的损失是十分必要的。研究描述软件错误行为的属性有助于独一无二地描述不同的错误行为,为建立软件故障模式库、软件故障预测和软件故障注入提供依据。文中基于软件失效链的理论,分析软件缺陷、软件错误和软件失效构成的因果链,由缺陷—错误—失效链之间的因果关系,进一步分析描述各个阶段异常的属性集合之间的联系。以现有的IEEE软件异常分类标准研究成果为基础,通过缺陷属性集合和失效属性集合来推导出错误属性集合,给出一种软件错误行为的分类方法,并给出属性集合以及参考值,选取基于最小相关和最大依赖度准则的属性约简算法进行实验,验证属性的合理性。%Software is now used very widely, and the reliability required of it keeps increasing. It is therefore necessary to study the software defect-error-failure process in order to prevent failures in advance and reduce the losses they cause. Describing the attributes of software error behavior helps to characterize different error behaviors uniquely, supports communication among developers, and provides a basis for building software fault pattern libraries, software fault prediction and software fault injection. Based on software failure chain theory, the causal chain formed by software defects, errors and failures is analyzed, and, from the causal relations in the defect-error-failure chain, the relationships between the attribute sets describing the anomalies of each stage are examined further. Building on the existing IEEE software anomaly classification standard, an error attribute set is derived from the defect and failure attribute sets, a classification method for software error behaviors is given, and the attribute sets and reference values are provided. The rationality of the attributes is verified experimentally with an attribute reduction algorithm based on the minimal-correlation and maximal-dependency criterion.

  4. Nominal classification

    OpenAIRE

    Senft, G.

    2007-01-01

    This handbook chapter summarizes some of the problems of nominal classification in language, presents and illustrates the various systems or techniques of nominal classification, and points out why nominal classification is one of the most interesting topics in Cognitive Linguistics.

  5. Signal Classification for Acoustic Neutrino Detection

    CERN Document Server

    Neff, M; Enzenhöfer, A; Graf, K; Hößl, J; Katz, U; Lahmann, R; Richardt, C

    2011-01-01

    This article focuses on signal classification for deep-sea acoustic neutrino detection. In the deep sea, the background of transient signals is very diverse. Approaches like matched filtering are not sufficient to distinguish between neutrino-like signals and other transient signals with a similar signature, which form the acoustic background for neutrino detection in the deep-sea environment. A classification system based on machine learning algorithms is analysed with the goal of finding a robust and effective way to perform this task. For a well-trained model, a testing error on the level of one percent is achieved for strong classifiers like Random Forest and Boosting Trees, using the extracted features of the signal as input and utilising dense clusters of sensors instead of single sensors.

  6. Medication Errors

    Science.gov (United States)


  7. On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data

    CERN Document Server

    Richards, Joseph W; Butler, Nathaniel R; Bloom, Joshua S; Brewer, John M; Crellin-Quick, Arien; Higgins, Justin; Kennedy, Rachel; Rischard, Maxime

    2011-01-01

    With the coming data deluge from synoptic surveys, there is a growing need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly-observed variables based on a small number of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics ("features"), detail methods to robustly estimate periodic light-curve features, introduce tree-ensemble methods for accurate variable star classification, and show how to rigorously evaluate the classification results using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% overall classification error using the random forest classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying sam...

  8. Coding design for error correcting output codes based on perceptron

    Science.gov (United States)

    Zhou, Jin-Deng; Wang, Xiao-Dan; Zhou, Hong-Jian; Cui, Yong-Hua; Jing, Sun

    2012-05-01

    It is known that error-correcting output codes (ECOC) are a common way to model multiclass classification problems, and research on data-driven encoding is attracting more and more attention. We propose a method for learning ECOC with the help of a single-layer perceptron neural network. To achieve this goal, the code elements of the ECOC are mapped to the weights of the network for the given decoding strategy, and an objective function with constrained weights is used as the cost function of the network. After training, we obtain a coding matrix containing many subgroups of classes. Experimental results on artificial data and University of California Irvine (UCI) data sets, with a logistic linear classifier and a support vector machine as the binary learners, show that our scheme provides better classification performance with a shorter coding matrix than other state-of-the-art encoding strategies.

  9. Error detection and reduction in blood banking.

    Science.gov (United States)

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as active quality monitoring. To assure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keeps employees practiced and confident, and diminishes fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition for reportable errors. Reportable errors should include those errors with potentially harmful outcomes as well as those errors that are "upstream," and thus further away from the outcome. A well-written error report consists of who, what, when, where, why/how, and follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility a functional error classification method and a quality system-based classification have been useful. An active method to search for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest for improving processes, an error management program is itself a process that needs improvement, and we must strive to always close the circle

  10. Bayesian Classification in Medicine: The Transferability Question *

    OpenAIRE

    Zagoria, Ronald J.; Reggia, James A.; Price, Thomas R.; Banko, Maryann

    1981-01-01

    Using probabilities derived from a geographically distant patient population, we applied Bayesian classification to categorize stroke patients by etiology. Performance was assessed both by error rate and with a new linear accuracy coefficient. This approach to patient classification was found to be surprisingly accurate when compared to classification by two neurologists and to classification by the Bayesian method using “low cost” local and subjective probabilities. We conclude that for some...

  11. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, for core game design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that applying Human Error to game design has a positive effect on the player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  12. Rademacher Complexity in Neyman-Pearson Classification

    Institute of Scientific and Technical Information of China (English)

    Min HAN; Di Rong CHEN; Zhao Xu SUN

    2009-01-01

    The Neyman-Pearson (NP) criterion is one of the most important approaches to hypothesis testing, and it is also a criterion for classification. This paper addresses the problem of bounding the estimation error of NP classification in terms of Rademacher averages. We investigate the behavior of the global and local Rademacher averages, present new NP classification error bounds based on the localized averages, and indicate how the estimation error can be estimated without a priori knowledge of the class at hand.
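    The paper's bounds are not reproduced in the abstract; a standard bound of this general type (for a class $\mathcal{F}$ of classifiers with 0-1 loss, true risk $R$, empirical risk $\hat{R}_n$ on $n$ samples, and Rademacher average $\mathfrak{R}_n(\mathcal{F})$) states that, with probability at least $1-\delta$,

```latex
\sup_{f \in \mathcal{F}} \bigl( R(f) - \hat{R}_n(f) \bigr)
\;\le\; 2\,\mathfrak{R}_n(\mathcal{F}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}} .
```

    In the Neyman-Pearson setting an additional constraint on the false-alarm error is imposed, and localized Rademacher averages are used to sharpen the complexity term.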

  13. Improve mask inspection capacity with Automatic Defect Classification (ADC)

    Science.gov (United States)

    Wang, Crystal; Ho, Steven; Guo, Eric; Wang, Kechang; Lakkapragada, Suresh; Yu, Jiao; Hu, Peter; Tolani, Vikram; Pang, Linyong

    2013-09-01

    As optical lithography continues to extend into the low-k1 regime, the resolution of mask patterns continues to diminish. The adoption of RET techniques like aggressive OPC and sub-resolution assist features, combined with the requirement to detect even smaller defects on masks due to increasing MEEF, poses considerable challenges for mask inspection operators and engineers. Therefore a comprehensive approach is required for handling defects post-inspection, by correctly identifying and classifying the real killer defects impacting printability on the wafer, and ignoring nuisance and false defects caused by inspection systems. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at the SMIC mask shop for the 40nm technology node. Traditionally, each defect is manually examined and classified by the inspection operator based on a set of predefined rules and human judgment. At the SMIC mask shop, due to the significant total number of detected defects, manual classification is not cost-effective: it increases inspection cycle time and constrains mask inspection capacity, since the review has to be performed while the mask stays on the inspection system. The Luminescent Technologies Automated Defect Classification (ADC) product offers a complete and systematic approach for offline defect disposition and classification, resulting in improved utilization of the current mask inspection capability. Based on results from the implementation of ADC in the SMIC mask production flow, there was around a 20% improvement in inspection capacity compared to the traditional flow. This approach of computationally reviewing defects post mask-inspection ensures no yield loss by qualifying reticles without the errors associated with operator mis-classification or human error. The ADC engine retrieves the high-resolution inspection images and uses a decision-tree flow to classify a given defect. Some identification mechanisms adopted by ADC to

  14. [Medical device use errors].

    Science.gov (United States)

    Friesdorf, Wolfgang; Marsolek, Ingo

    2008-01-01

    Medical devices define our everyday patient treatment processes. But despite their beneficial effect, every use can also lead to harm. Use errors are thus often explained by human failure. But human errors can never be completely eliminated, especially in complex work processes such as those in medicine, which often involve time pressure. Therefore we need error-tolerant work systems in which potential problems are identified and solved as early as possible. In this context human engineering uses the TOP principle: technological before organisational and then person-related solutions. But especially in everyday medical work we realise that error-prone usability concepts can often only be counterbalanced by organisational or person-related measures. Thus human failure is pre-programmed. In addition, many medical workplaces represent a somewhat chaotic accumulation of individual devices with totally different user interaction concepts. There is not only a lack of holistic workplace concepts, but of holistic process and system concepts as well. However, this can only be achieved through the co-operation of producers, healthcare providers and clinical users, by systematically analyzing and iteratively optimizing the underlying treatment processes from both a technological and organizational perspective. What we need is a joint platform like medilab V of the TU Berlin, in which the entire medical treatment chain can be simulated in order to discuss, experiment and model - a key to a safe and efficient healthcare system of the future. PMID:19213452

  15. A deep learning approach to the classification of 3D CAD models

    Institute of Scientific and Technical Information of China (English)

    Fei-wei QIN; Lu-ye LI; Shu-ming GAO; Xiao-ling YANG; Xiang CHEN

    2014-01-01

    Model classification is essential to the management and reuse of 3D CAD models. Manual model classification is laborious and error-prone. At the same time, automatic classification methods are scarce due to the intrinsic complexity of 3D CAD models. In this paper, we propose an automatic 3D CAD model classification approach based on deep neural networks. According to prior knowledge of the CAD domain, features are first selected and extracted from 3D CAD models, and then pre-processed as high-dimensional input vectors for category recognition. By analogy with the thinking process of engineers, a deep neural network classifier for 3D CAD models is constructed with the aid of deep learning techniques. To obtain an optimal solution, multiple strategies are appropriately chosen and applied in the training phase, which makes our classifier achieve better performance. We demonstrate the efficiency and effectiveness of our approach through experiments on 3D CAD model datasets.

  16. Automated valve condition classification of a reciprocating compressor with seeded faults: experimentation and validation of classification strategy

    Science.gov (United States)

    Lin, Yih-Hwang; Liu, Huai-Sheng; Wu, Chung-Yung

    2009-09-01

    This paper deals with automatic valve condition classification of a reciprocating compressor with seeded faults. The seeded faults are considered based on observation of valve faults in practice. They include the misplacement of valve and spring plates, incorrect tightness of the bolts for the valve cover or valve seat, softening of the spring plate, and cracked or broken spring plates or valve plates. The seeded faults represent various stages of machine health, and it is crucial to be able to correctly classify the conditions so that preventative maintenance can be performed before catastrophic breakdown of the compressor occurs. Considering the non-stationary characteristics of the system, time-frequency analysis techniques are applied to obtain the vibration spectrum as time develops. A data reduction algorithm is subsequently employed to extract the fault features from the formidable amount of time-frequency data, and finally the probabilistic neural network is utilized to automate the classification process without the intervention of human experts. This study shows that the use of modification indices, as opposed to the original indices, greatly reduces the classification error, from about 80% down to about 20% misclassification for the 15 fault cases. Correct condition classification can be further enhanced if the use of similar fault cases is avoided. It is shown that 6.67% classification error is achievable when using the short-time Fourier transform and the mean variation method for the case of seven seeded faults with 10 training samples. A stunning 100% correct classification can even be realized when the neural network is well trained with 30 training samples.

  17. Medication errors: prescribing faults and prescription errors

    OpenAIRE

    Velo, Giampaolo P; Minuz, Pietro

    2009-01-01

    Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. Inadequate knowledge or competence and ...

  18. Hubble Classification

    Science.gov (United States)

    Murdin, P.

    2000-11-01

    A classification scheme for galaxies, devised in its original form in 1925 by Edwin P Hubble (1889-1953), and still widely used today. The Hubble classification recognizes four principal types of galaxy—elliptical, spiral, barred spiral and irregular—and arranges these in a sequence that is called the tuning-fork diagram....

  19. Network error correction with unequal link capacities

    OpenAIRE

    Kim, Sukwon; Ho, Tracey; Effros, Michelle; Avestimehr, Amir Salman

    2010-01-01

    We study network error correction with unequal link capacities. Previous results on network error correction assume unit link capacities. We consider network error correction codes that can correct arbitrary errors occurring on up to z links. We find the capacity of a network consisting of parallel links, and a generalized Singleton outer bound for any arbitrary network. We show by example that linear coding is insufficient for achieving capacity in general. In our exampl...

  20. Using convolutional neural networks for human activity classification on micro-Doppler radar spectrograms

    Science.gov (United States)

    Jordan, Tyler S.

    2016-05-01

    This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. An emphasis on activities involving potential security threats such as holding a gun are explored. An automotive 24 GHz radar on chip was used to collect the data and a CNN (normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65 % on classifying running vs. walking, 17.3 % error on armed walking vs. unarmed walking, and 22 % on classifying six different actions.
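    A minimal PyTorch sketch of this kind of network is shown below; the input size (64x64 single-channel spectrograms), layer sizes and six-class setup are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Small CNN for single-channel micro-Doppler spectrograms (assumed 64x64)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SpectrogramCNN(n_classes=6)
dummy = torch.randn(8, 1, 64, 64)              # batch of 8 hypothetical spectrograms
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 6, (8,)))
loss.backward()                                # one training step would follow an optimizer update
print(logits.shape)                            # torch.Size([8, 6])
```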

  1. Audio Classification from Time-Frequency Texture

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.

  2. Classification problem in CBIR

    OpenAIRE

    Tatiana Jaworska

    2013-01-01

    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  3. Expected energy-based restricted Boltzmann machine for classification.

    Science.gov (United States)

    Elfwing, S; Uchibe, E; Doya, K

    2015-04-01

    In classification tasks, restricted Boltzmann machines (RBMs) have predominantly been used in the first stage, either as feature extractors or to provide initialization of neural networks. In this study, we propose a discriminative learning approach to provide a self-contained RBM method for classification, inspired by free-energy based function approximation (FE-RBM), originally proposed for reinforcement learning. For classification, the FE-RBM method computes the output for an input vector and a class vector by the negative free energy of an RBM. Learning is achieved by stochastic gradient-descent using a mean-squared error training objective. In an earlier study, we demonstrated that the performance and the robustness of FE-RBM function approximation can be improved by scaling the free energy by a constant that is related to the size of network. In this study, we propose that the learning performance of RBM function approximation can be further improved by computing the output by the negative expected energy (EE-RBM), instead of the negative free energy. To create a deep learning architecture, we stack several RBMs on top of each other. We also connect the class nodes to all hidden layers to try to improve the performance even further. We validate the classification performance of EE-RBM using the MNIST data set and the NORB data set, achieving competitive performance compared with other classifiers such as standard neural networks, deep belief networks, classification RBMs, and support vector machines. The purpose of using the NORB data set is to demonstrate that EE-RBM with binary input nodes can achieve high performance in the continuous input domain. PMID:25318375
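    A minimal sketch of the scoring step (training omitted), under one plausible reading of the description: the visible layer concatenates the input with a one-hot class vector, and each candidate class is scored either by the negative free energy (FE-RBM) or by the negative expected energy (EE-RBM). The weights and data below are hypothetical.

```python
import numpy as np

def scores(x, W, b, c, n_classes):
    """Score each class by the negative free energy (FE) and negative expected
    energy (EE) of an RBM whose visible layer is [input features, one-hot class]."""
    fe, ee = [], []
    for k in range(n_classes):
        y = np.zeros(n_classes)
        y[k] = 1.0
        v = np.concatenate([x, y])
        z = c + W @ v                      # hidden pre-activations
        p = 1.0 / (1.0 + np.exp(-z))       # hidden activation probabilities
        fe.append(b @ v + np.sum(np.log1p(np.exp(z))))   # -F(v): softplus terms
        ee.append(b @ v + np.sum(p * z))                  # -E[energy | v]
    return np.array(fe), np.array(ee)

# Hypothetical RBM with 10 input units, 3 classes, 20 hidden units.
rng = np.random.default_rng(0)
n_in, n_cls, n_hid = 10, 3, 20
W = rng.normal(scale=0.1, size=(n_hid, n_in + n_cls))
b = rng.normal(scale=0.1, size=n_in + n_cls)     # visible biases
c = np.zeros(n_hid)                              # hidden biases
fe, ee = scores(rng.normal(size=n_in), W, b, c, n_cls)
print("FE-RBM prediction:", fe.argmax(), " EE-RBM prediction:", ee.argmax())
```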

  4. Classification problem in CBIR

    Directory of Open Access Journals (Sweden)

    Tatiana Jaworska

    2013-04-01

    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine that takes data mining into account.

  5. Harmonization of description and classification of fetal observations: Achievements and problems still unresolved. Report of the 7th Workshop on the Terminology in Developmental Toxicology Berlin, 4-6 May 2011

    NARCIS (Netherlands)

    Solecki, R.; Barbellion, S.; Bergmann, B.; Bürgin, H.; Buschmann, J.; Clark, R.; Comotto, L.; Fuchs, A.; Faqi, A.S.; Gerspach, R.; Grote, K.; Hakansson, H.; Heinrich, V.; Heinrich-Hirsch, B.; Hofmann, T.; Hübel, U.; Inazaki, T.H.; Khalil, S.; Knudsen, T.B.; Kudicke, S.; Lingk, W.; Makris, S.; Müller, S.; Paumgartten, F.; Pfeil, R.; Rama, E.M.; Schneider, S.; Shiota, K.; Tamborini, E.; Tegelenbosch, M.; Ulbrich, B.; Duijnhoven, E.A.J. van; Wise, D.; Chahoud, I.

    2013-01-01

    This article summarizes the 7th Workshop on the Terminology in Developmental Toxicology held in Berlin, May 4-6, 2011. The series of Berlin Workshops has been mainly concerned with the harmonization of terminology and classification of fetal anomalies in developmental toxicity studies. The main topi

  6. ACCUWIND - Methods for classification of cup anemometers

    Energy Technology Data Exchange (ETDEWEB)

    Dahlberg, J.Aa.; Friis Pedersen, T.; Busche, P.

    2006-05-15

    Errors associated with the measurement of wind speed are the major sources of uncertainty in power performance testing of wind turbines. Field comparisons of well-calibrated anemometers show significant and unacceptable differences. The European CLASSCUP project set the objectives of quantifying the errors associated with the use of cup anemometers and of developing a classification system for quantification of systematic errors of cup anemometers. This classification system has now been implemented in the IEC 61400-12-1 standard on power performance measurements, in annexes I and J. The classification of cup anemometers requires general external climatic operational ranges to be applied for the analysis of systematic errors. A Class A category classification is connected to reasonably flat sites, and another Class B category is connected to complex terrain. General classification indices are the result of assessment of systematic deviations. The present report focuses on methods that can be applied for assessment of such systematic deviations. A new alternative method for torque coefficient measurements at inclined flow has been developed, which has then been applied and compared to the existing methods developed in the CLASSCUP project and earlier. A number of approaches, including the use of two cup anemometer models, two methods of torque coefficient measurement, two angular response measurements, and inclusion and exclusion of the influence of friction, have been implemented in the classification process in order to assess the robustness of the methods. The results of the analysis are presented as classification indices, which are compared and discussed. (au)

  7. A qualitative description of human error

    International Nuclear Information System (INIS)

    Human error makes an important contribution to the risk of reactor operation. Insight into human error and an analytical model of it are the main parts of human reliability analysis, covering the concept of human error, its nature, its mechanism of generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event is characterized by the erroneous action and its unfavourable result. With respect to the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and an erroneous human action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed.

  8. High-Performance Neural Networks for Visual Object Classification

    CERN Document Server

    Cireşan, Dan C; Masci, Jonathan; Gambardella, Luca M; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better than more shallow ones. Learning is surprisingly rapid. NORB is completely trained within five epochs. Test error rates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.

  9. Does an awareness of differing types of spreadsheet errors aid end-users in identifying spreadsheet errors?

    CERN Document Server

    Purser, Michael

    2008-01-01

    The research presented in this paper establishes a valid and simplified revision of previous spreadsheet error classifications. This investigation is concerned with the results of a web survey and two web-based, gender- and domain-knowledge-free spreadsheet error identification exercises. The participants of the survey and exercises were a test group of professionals (all of whom regularly use spreadsheets) and a control group of students from the University of Greenwich (UK). The findings show that over 85% of users are also the spreadsheet's developer, supporting the revised spreadsheet error classification. The findings also show that spreadsheet error identification ability is directly affected both by spreadsheet experience and by error-type awareness. In particular, spreadsheet error-type awareness significantly improves the user's ability to identify the more surreptitious, qualitative errors.

  10. ON MACHINE-LEARNED CLASSIFICATION OF VARIABLE STARS WITH SPARSE AND NOISY TIME-SERIES DATA

    International Nuclear Information System (INIS)

    With the coming data deluge from synoptic surveys, there is a need for frameworks that can quickly and automatically produce calibrated classification probabilities for newly observed variables based on small numbers of time-series measurements. In this paper, we introduce a methodology for variable-star classification, drawing from modern machine-learning techniques. We describe how to homogenize the information gleaned from light curves by selection and computation of real-numbered metrics (features), detail methods to robustly estimate periodic features, introduce tree-ensemble methods for accurate variable-star classification, and show how to rigorously evaluate a classifier using cross validation. On a 25-class data set of 1542 well-studied variable stars, we achieve a 22.8% error rate using the random forest (RF) classifier; this represents a 24% improvement over the best previous classifier on these data. This methodology is effective for identifying samples of specific science classes: for pulsational variables used in Milky Way tomography we obtain a discovery efficiency of 98.2% and for eclipsing systems we find an efficiency of 99.1%, both at 95% purity. The RF classifier is superior to other methods in terms of accuracy, speed, and relative immunity to irrelevant features; the RF can also be used to estimate the importance of each feature in classification. Additionally, we present the first astronomical use of hierarchical classification methods to incorporate a known class taxonomy in the classifier, which reduces the catastrophic error rate from 8% to 7.8%. Excluding low-amplitude sources, the overall error rate improves to 14%, with a catastrophic error rate of 3.5%.
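    A minimal sketch of the core classification step described here, with a synthetic stand-in for the table of per-star light-curve features: a random forest evaluated by cross validation, with per-feature importances as mentioned in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for a table of per-star light-curve features (period, amplitude, ...).
X, y = make_classification(n_samples=1500, n_features=30, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
error = 1.0 - cross_val_score(rf, X, y, cv=10).mean()      # overall classification error
print(f"10-fold CV error: {error:.3f}")

rf.fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]        # most informative features
print("top features:", top)
```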

  11. Improvement of the classification accuracy in discriminating diabetic retinopathy by multifocal electroretinogram analysis

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The multifocal electroretinogram (mfERG) is a newly developed electrophysiological technique. In this paper, a classification method is proposed for early diagnosis of the diabetic retinopathy using mfERG data. MfERG records were obtained from eyes of healthy individuals and patients with diabetes at different stages. For each mfERG record, 103 local responses were extracted. Amplitude value of each point on all the mfERG local responses was looked as one potential feature to classify the experimental subjects. Feature subsets were selected from the feature space by comparing the inter-intra distance. Based on the selected feature subset, Fisher's linear classifiers were trained. And the final classification decision of the record was made by voting all the classifiers' outputs. Applying the method to classify all experimental subjects, very low error rates were achieved. Some crucial properties of the diabetic retinopathy classification method are also discussed.

  12. Tissue Classification

    DEFF Research Database (Denmark)

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  13. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  14. Refined Error Bounds for Several Learning Algorithms

    OpenAIRE

    Hanneke, Steve

    2015-01-01

    This article studies the achievable guarantees on the error rates of certain learning algorithms, with particular focus on refining logarithmic factors. Many of the results are based on a general technique for obtaining bounds on the error rates of sample-consistent classifiers with monotonic error regions, in the realizable case. We prove bounds of this type expressed in terms of either the VC dimension or the sample compression size. This general technique also enables us to derive several ...

  15. Transporter Classification Database (TCDB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  16. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Directory of Open Access Journals (Sweden)

    Lev V. Utkin

    2012-01-01

    A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the model are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  17. Neuromuscular disease classification system.

    Science.gov (United States)

    Sáez, Aurora; Acha, Begoña; Montero-Sánchez, Adoración; Rivas, Eloy; Escudero, Luis M; Serrano, Carmen

    2013-06-01

    Diagnosis of neuromuscular diseases is based on the subjective visual assessment of patient biopsies by a specialist pathologist. A system for the objective analysis and classification of muscular dystrophies and neurogenic atrophies from fluorescence microscopy images of muscle biopsies is presented. The procedure starts with an accurate segmentation of the muscle fibers using mathematical morphology and a watershed transform. A feature extraction step is carried out in two parts: 24 features that pathologists take into account to diagnose the diseases, and 58 structural features that the human eye cannot see, based on modelling the biopsy as a graph in which each fiber is a node and two nodes are connected if the corresponding fibers are adjacent. A feature selection using sequential forward selection and sequential backward selection methods, a classification using a Fuzzy ARTMAP neural network, and a study of grading the severity are performed on these two sets of features. A database of 91 images was used: 71 images for training and 20 for testing. A classification error of 0% was obtained. It is concluded that the addition of features undetectable by human visual inspection improves the categorization of atrophic patterns. PMID:23804164

  18. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machines error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of such concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machines error modeling.

  19. Robust Model Selection for Classification of Microarrays

    Directory of Open Access Journals (Sweden)

    Ikumi Suzuki

    2009-01-01

    Full Text Available Recently, microarray-based cancer diagnosis systems have been increasingly investigated. However, cost reduction and reliability assurance of such diagnosis systems remain open problems in real clinical settings. To reduce the cost, we need a supervised classifier involving the smallest number of genes, as long as the classifier is sufficiently reliable. To achieve a reliable classifier, we should assess candidate classifiers and select the best one. In the selection process of the best classifier, however, the assessment criterion involves large variance because of the limited number of samples and non-negligible observation noise. Therefore, even if a classifier with a very small number of genes exhibited the smallest leave-one-out cross-validation (LOO) error rate, it would not necessarily be reliable, because classifiers based on a small number of genes tend to show large variance. We propose a robust model selection criterion, the min-max criterion, based on a resampling bootstrap simulation to assess the variance of the estimated classification error rates. We applied our assessment framework to four published real gene expression datasets and one synthetic dataset. We found that a state-of-the-art procedure, weighted voting classifiers with the LOO criterion, had a non-negligible risk of selecting extremely poor classifiers and, on the other hand, that the new min-max criterion could eliminate that risk. These findings suggest that our criterion provides a safer procedure for designing a practical cancer diagnosis system.
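
    The min-max idea can be sketched generically: estimate the spread of error rates with bootstrap resampling and prefer the candidate whose worst resampled error is smallest. The snippet below is an assumed, simplified reading of that idea; the paper's exact resampling scheme, weighting and classifiers are not reproduced, and the k-NN candidates and synthetic data are placeholders.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def bootstrap_error_rates(clf, X, y, n_boot=50, seed=0):
            """Out-of-bag error rate for each bootstrap resample."""
            rng = np.random.default_rng(seed)
            n = len(y)
            errs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)                  # bootstrap sample indices
                oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag indices
                if oob.size == 0:
                    continue
                clf.fit(X[idx], y[idx])
                errs.append(np.mean(clf.predict(X[oob]) != y[oob]))
            return np.array(errs)

        # pick the candidate whose *worst* bootstrap error is smallest (min-max idea)
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 20))
        y = (X[:, 0] + 0.5 * rng.normal(size=60) > 0).astype(int)
        candidates = {k: KNeighborsClassifier(n_neighbors=k) for k in (1, 3, 5)}
        scores = {k: bootstrap_error_rates(c, X, y).max() for k, c in candidates.items()}
        print(min(scores, key=scores.get), scores)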

  20. Detection and Classification of Whale Acoustic Signals

    Science.gov (United States)

    Xian, Yin

    This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification. In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector, and derived the likelihood ratio of the proposed Short Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it includes phase information. In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial phase signals, we can represent the whale calls by their polynomial phase coefficients. In this dissertation, we used the Weyl transform to capture chirp rate information, and used a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel Frequency Cepstral Coefficients) when applied to our collected data. Since whale vocalizations can be represented by polynomial phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data is nonlinear. We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, and from this, one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve high classification rate on the whale
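
    As a toy illustration of the linear-versus-nonlinear mapping comparison mentioned above (not the whale data or the dissertation's features), one can compare PCA and Isomap embeddings on a synthetic curved manifold:

        import numpy as np
        from sklearn.datasets import make_s_curve
        from sklearn.decomposition import PCA
        from sklearn.manifold import Isomap

        # Synthetic nonlinear data standing in for high-dimensional call features.
        X, t = make_s_curve(n_samples=500, random_state=0)

        emb_lin = PCA(n_components=2).fit_transform(X)       # linear mapping
        emb_nl = Isomap(n_components=2).fit_transform(X)     # nonlinear mapping

        # Crude check: correlation between the embedding's first axis and the
        # underlying manifold coordinate t (higher = structure better recovered).
        print(abs(np.corrcoef(emb_lin[:, 0], t)[0, 1]),
              abs(np.corrcoef(emb_nl[:, 0], t)[0, 1]))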

  1. Field error lottery

    Science.gov (United States)

    James Elliott, C.; McVey, Brian D.; Quimby, David C.

    1991-07-01

    The level of field errors in a free electron laser (FEL) is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is use of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond convenient mechanical tolerances of ± 25 μm, and amelioration of these may occur by a procedure using direct measurement of the magnetic fields at assembly time.
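
    The stochastic-seed aspect can be illustrated with a generic sketch: evaluate a figure of merit at several error levels, repeating each level with different random seeds to expose the seed-to-seed spread. The toy model below is a deliberately trivial stand-in for FELEX, not the code itself.

        import numpy as np

        def relative_performance(error_level, rng):
            """Toy stand-in for a simulated figure of merit degraded by random field errors."""
            perturbation = error_level * rng.normal(size=100)
            return 1.0 / (1.0 + np.mean(perturbation ** 2))   # 1.0 means no degradation

        error_levels = np.linspace(0.0, 0.5, 6)
        for level in error_levels:
            # same error level, several random seeds: spread shows the stochastic variation
            values = [relative_performance(level, np.random.default_rng(seed)) for seed in range(5)]
            print(f"error level {level:.2f}: mean {np.mean(values):.3f}, spread {np.ptp(values):.3f}")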

  2. Inborn errors of metabolism

    Science.gov (United States)

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman's Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 205. Rezvani I, Rezvani G. An ...

  3. Error And Error Analysis In Language Study

    OpenAIRE

    Zakaria, Teuku Azhari

    2015-01-01

    Students make mistakes during their language learning, whether in speaking, writing, listening or reading comprehension. Making mistakes is inevitable and considered natural in one's interlanguage process. Believed to be part of the learning process, making errors and mistakes is not a bad thing, as everybody experiences the same. Both students and teachers will benefit from the event, as both will learn what has been done well and what needs to be reviewed and rehearsed. Understanding error and th...

  4. The Error in Total Error Reduction

    OpenAIRE

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons i...

  5. Multilingual documentation and classification.

    Science.gov (United States)

    Donnelly, Kevin

    2008-01-01

    Health care providers around the world have used classification systems for decades as a basis for documentation, communications, statistical reporting, reimbursement and research. In more recent years machine-readable medical terminologies have taken on greater importance with the adoption of electronic health records and the need for greater granularity of data in clinical systems. Use of a clinical terminology harmonised with classifications, implemented within a clinical information system, will enable the delivery of many patient health benefits including electronic clinical decision support, disease screening and enhanced patient safety. In order to be usable these systems must be translated into the language of use, without losing meaning. It is evident that today one system cannot meet all requirements which call for collaboration and harmonisation in order to achieve true interoperability on a multilingual basis.

  6. Medical errors in neurosurgery

    Directory of Open Access Journals (Sweden)

    John D Rolston

    2014-01-01

    23.7-27.8% were technical, related to the execution of the surgery itself, highlighting the importance of systems-level approaches to protecting patients and reducing errors. Conclusions: Overall, the magnitude of medical errors in neurosurgery and the lack of focused research emphasize the need for prospective categorization of morbidity with judicious attribution. Ultimately, we must raise awareness of the impact of medical errors in neurosurgery, reduce the occurrence of medical errors, and mitigate their detrimental effects.

  7. Network error correction with unequal link capacities

    CERN Document Server

    Kim, Sukwon; Effros, Michelle; Avestimehr, Amir Salman

    2010-01-01

    This paper studies the capacity of single-source single-sink noiseless networks under adversarial or arbitrary errors on no more than z edges. Unlike prior papers, which assume equal capacities on all links, arbitrary link capacities are considered. Results include new upper bounds, network error correction coding strategies, and examples of network families where our bounds are tight. An example is provided of a network where the capacity is 50% greater than the best rate that can be achieved with linear coding. While coding at the source and sink suffices in networks with equal link capacities, in networks with unequal link capacities, it is shown that intermediate nodes may have to do coding, nonlinear error detection, or error correction in order to achieve the network error correction capacity.

  8. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2016-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...... to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems....

  9. Achieving Standardization

    DEFF Research Database (Denmark)

    Henningsson, Stefan

    2014-01-01

    competitive, national customs and regional economic organizations are seeking to establish a standardized solution for digital reporting of customs data. However, standardization has proven hard to achieve in the socio-technical e-Customs solution. In this chapter, the authors identify and describe what has...... to be harmonized in order for a global company to perceive e-Customs as standardized. In doing so, they contribute an explanation of the challenges associated with using a standardization mechanism for harmonizing socio-technical information systems....

  10. Extreme Entropy Machines: Robust information theoretic classification

    OpenAIRE

    Czarnecki, Wojciech Marian; Tabor, Jacek

    2015-01-01

    Most of the existing classification methods are aimed at minimization of empirical risk (through some simple point-based error measured with loss function) with added regularization. We propose to approach this problem in a more information theoretic way by investigating applicability of entropy measures as a classification model objective function. We focus on quadratic Renyi's entropy and connected Cauchy-Schwarz Divergence which leads to the construction of Extreme Entropy Machines (EEM). ...

  11. Unsupervised classification of operator workload from brain signals

    Science.gov (United States)

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin

    2016-06-01

    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
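
    Of the three spatial filtering methods, CSP has a compact closed form: a generalized eigendecomposition involving the two class covariance matrices. A minimal sketch follows, with synthetic trials standing in for EEG; SPoC and cSPoC are not shown.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_filters=4):
            """Common spatial patterns from two sets of trials, each (n_trials, n_channels, n_samples).
            Returns spatial filters maximizing variance for one class and minimizing it for the other."""
            def mean_cov(trials):
                covs = [np.cov(tr) for tr in trials]          # channel covariance per trial
                return np.mean(covs, axis=0)
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            # generalized eigendecomposition: Ca w = lambda (Ca + Cb) w
            vals, vecs = eigh(Ca, Ca + Cb)
            order = np.argsort(vals)
            pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]  # both ends of spectrum
            return vecs[:, pick]

        # toy usage: 20 trials, 8 channels, 200 samples per class
        rng = np.random.default_rng(0)
        a = rng.normal(size=(20, 8, 200))
        b = rng.normal(size=(20, 8, 200)) * 1.5
        W = csp_filters(a, b)
        print(W.shape)   # (8, 4): one column per spatial filter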

  12. Distributed Maintenance Error Information, Investigation and Intervention

    OpenAIRE

    Zolla, George; Boex, Tony; Flanders, Pat; Nelson, Doug; Tufts, Scott; Schmidt, John K.

    2001-01-01

    This paper describes a safety information management system designed to capture maintenance factors that contribute to aircraft mishaps. The Human Factors Analysis and Classification System-Maintenance Extension taxonomy (HFACS-ME), an effective framework for classifying and analyzing the presence of maintenance errors that lead to mishaps, incidents, and personal injuries, is the theoretical foundation. An existing desktop mishap application is updated, a prototype we...

  13. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  14. Error-prone signalling.

    Science.gov (United States)

    Johnstone, R A; Grafen, A

    1992-06-22

    The handicap principle of Zahavi is potentially of great importance to the study of biological communication. Existing models of the handicap principle, however, make the unrealistic assumption that communication is error free. It seems possible, therefore, that Zahavi's arguments do not apply to real signalling systems, in which some degree of error is inevitable. Here, we present a general evolutionarily stable strategy (ESS) model of the handicap principle which incorporates perceptual error. We show that, for a wide range of error functions, error-prone signalling systems must be honest at equilibrium. Perceptual error is thus unlikely to threaten the validity of the handicap principle. Our model represents a step towards greater realism, and also opens up new possibilities for biological signalling theory. Concurrent displays, direct perception of quality, and the evolution of 'amplifiers' and 'attenuators' are all probable features of real signalling systems, yet handicap models based on the assumption of error-free communication cannot accommodate these possibilities. PMID:1354361

  15. Harmonization of description and classification of fetal observations: achievements and problems still unresolved: report of the 7th Workshop on the Terminology in Developmental Toxicology Berlin, 4-6 May 2011.

    Science.gov (United States)

    Solecki, Roland; Barbellion, Stephane; Bergmann, Brigitte; Bürgin, Heinrich; Buschmann, Jochen; Clark, Ruth; Comotto, Laura; Fuchs, Antje; Faqi, Ali Said; Gerspach, Ralph; Grote, Konstanze; Hakansson, Helen; Heinrich, Verena; Heinrich-Hirsch, Barbara; Hofmann, Thomas; Hübel, Ulrich; Inazaki, Thelma Helena; Khalil, Samia; Knudsen, Thomas B; Kudicke, Sabine; Lingk, Wolfgang; Makris, Susan; Müller, Simone; Paumgartten, Francisco; Pfeil, Rudolf; Rama, Elkiane Macedo; Schneider, Steffen; Shiota, Kohei; Tamborini, Eva; Tegelenbosch, Mariska; Ulbrich, Beate; van Duijnhoven, E A J; Wise, David; Chahoud, Ibrahim

    2013-01-01

    This article summarizes the 7th Workshop on the Terminology in Developmental Toxicology held in Berlin, May 4-6, 2011. The series of Berlin Workshops has been mainly concerned with the harmonization of terminology and classification of fetal anomalies in developmental toxicity studies. The main topics of the 7th Workshop were knowledge on the fate of anomalies after birth, use of Version 2 terminology for maternal-fetal observations and non-routinely used species, reclassification of "grey zone" anomalies and categorization of fetal observations for human health risk assessment. The paucity of data on health consequences of the postnatal permanence of fetal anomalies is relevant and further studies are needed. The Version 2 terminology is an important step forward and the terms listed in this glossary are considered also to be appropriate for most observations in non-routinely used species. Continuation of the Berlin Workshops was recommended. Topics suggested for the next Workshop were grouping of fetal observations for reporting and statistical analysis. PMID:22781580

  16. 28 CFR 524.73 - Classification procedures.

    Science.gov (United States)

    2010-07-01

    ... of Prisons from state or territorial jurisdictions. All state prisoners while solely in service of... classification may be made at any level to achieve the immediate effect of requiring prior clearance for...

  17. Volumetric magnetic resonance imaging classification for Alzheimer's disease based on kernel density estimation of local features

    Institute of Scientific and Technical Information of China (English)

    YAN Hao; WANG Hu; WANG Yong-hui; ZHANG Yu-mei

    2013-01-01

    Background: The classification of Alzheimer's disease (AD) from magnetic resonance imaging (MRI) has been challenged by the lack of effective and reliable biomarkers due to inter-subject variability. This article presents a classification method for AD based on kernel density estimation (KDE) of local features. Methods: First, a large number of local features were extracted from stable image blobs to represent various anatomical patterns as potential effective biomarkers. Based on distinctive descriptors and locations, the local features were robustly clustered to identify correspondences of the same underlying patterns. Then, KDE was used to estimate distribution parameters of the correspondences by weighting contributions according to their distances. Thus, biomarkers could be reliably quantified by reducing the effects of more distant correspondences, which were more likely to be noise from inter-subject variability. Finally, a Bayes classifier was applied to the distribution parameters for the classification of AD. Results: Experiments were performed on different divisions of a publicly available database to investigate the accuracy and the effects of age and AD severity. Our method achieved an equal error classification rate of 0.85 for subjects aged 60-80 years exhibiting mild AD and outperformed a recent local feature-based work regardless of both effects. Conclusions: We proposed a volumetric brain MRI classification method for neurodegenerative disease based on statistics of local features using KDE. The method may be potentially useful for computer-aided diagnosis in clinical settings.
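
    The record's pipeline ends with a Bayes classifier applied to KDE-based quantities. A generic sketch of class-conditional Gaussian KDE followed by a maximum-posterior decision is shown below; the paper's distance-weighted estimation over matched local features is not reproduced, and the 2-D Gaussian data are placeholders.

        import numpy as np
        from scipy.stats import gaussian_kde

        def kde_bayes_classify(X_train, y_train, X_test):
            """Class-conditional Gaussian KDE + Bayes rule (maximum posterior) for low-dimensional features."""
            classes = np.unique(y_train)
            kdes, priors = {}, {}
            for c in classes:
                Xc = X_train[y_train == c]
                kdes[c] = gaussian_kde(Xc.T)                 # density estimate for class c
                priors[c] = len(Xc) / len(X_train)
            # posterior up to a constant: prior * likelihood
            scores = np.array([priors[c] * kdes[c](X_test.T) for c in classes])
            return classes[np.argmax(scores, axis=0)]

        rng = np.random.default_rng(0)
        X0 = rng.normal(0, 1, size=(50, 2))
        X1 = rng.normal(2, 1, size=(50, 2))
        X = np.vstack([X0, X1])
        y = np.r_[np.zeros(50, int), np.ones(50, int)]
        print(kde_bayes_classify(X, y, np.array([[0.0, 0.0], [2.0, 2.0]])))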

  18. Habitat Classification of Temperate Marine Macroalgal Communities Using Bathymetric LiDAR

    Directory of Open Access Journals (Sweden)

    Richard Zavalas

    2014-03-01

    Full Text Available Here, we evaluated the potential of using bathymetric Light Detection and Ranging (LiDAR) to characterise shallow water (<30 m) benthic habitats of high energy subtidal coastal environments. Habitat classification, quantifying benthic substrata and macroalgal communities, was achieved in this study by applying LiDAR and underwater video ground-truth data with automated classification techniques. Bathymetry and reflectance datasets were used to produce secondary terrain derivative surfaces (e.g., rugosity, aspect) that were assumed to influence the benthic patterns observed. An automated decision tree classification approach using the Quick Unbiased Efficient Statistical Tree (QUEST) was applied to produce substrata, biological and canopy structure habitat maps of the study area. Error assessment indicated that the habitat maps produced were largely accurate (>70%), with varying results for individual habitat classes; for instance, producer accuracy for mixed brown algae and sediment substrata was 74% and 93%, respectively. LiDAR was also successful for differentiating the canopy structure of macroalgal communities (i.e., canopy structure classification, such as canopy-forming kelp versus erect fine branching algae). In conclusion, habitat characterisation using bathymetric LiDAR provides a unique potential to collect baseline information about biological assemblages and, hence, potential reef connectivity over large areas beyond the range of direct observation. This research contributes a new perspective for assessing the structure of subtidal coastal ecosystems, providing a novel tool for the research and management of such highly dynamic marine environments.
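
    Producer and user accuracy of this kind are read off a classification error (confusion) matrix. A small sketch with invented counts (not the study's matrix) illustrates the calculation:

        import numpy as np

        # rows = reference (ground-truth) classes, columns = mapped classes (made-up counts)
        labels = ["mixed brown algae", "sediment", "kelp"]
        cm = np.array([[74, 20,  6],
                       [ 5, 93,  2],
                       [10,  8, 82]])

        overall = np.trace(cm) / cm.sum()
        producer = np.diag(cm) / cm.sum(axis=1)   # omission-error view: per reference class
        user = np.diag(cm) / cm.sum(axis=0)       # commission-error view: per mapped class

        for name, p, u in zip(labels, producer, user):
            print(f"{name}: producer {p:.2f}, user {u:.2f}")
        print(f"overall accuracy {overall:.2f}")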

  19. Classification in Australia.

    Science.gov (United States)

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  20. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  1. Multi-borders classification

    OpenAIRE

    Mills, Peter

    2014-01-01

    The number of possible methods of generalizing binary classification to multi-class classification increases exponentially with the number of class labels. Often, the best method of doing so will be highly problem dependent. Here we present classification software in which the partitioning of multi-class classification problems into binary classification problems is specified using a recursive control language.

  2. Analysis of the influencing factors of nursing-related medication errors based on the conceptual framework of the International Classification of Patient Safety (ICPS)

    Institute of Scientific and Technical Information of China (English)

    朱晓萍; 田梅梅; 施雁; 孙晓; 龚美芳; 毛雅芬

    2014-01-01

    Objective: To identify the influencing factors of nursing-related medication errors and to put forward effective prevention and control measures. Methods: 1343 cases of medication errors from 15 tertiary hospitals registered with the Shanghai Nursing Quality Control Center were selected. The influencing factors were analyzed with a research tool constructed from the levels of influencing factors in the International Classification of Patient Safety (ICPS), using the method of content analysis. Results: Medication errors occurred most frequently (62.84%) between 8:00 and 16:00, the period with the most therapeutic nursing activity. Nursing-related medication errors happened most often in elderly patients over 70 years old (32.45%), suggesting that the self-care and communication abilities of elderly patients are weak and that these patients carry the highest risk of medication errors. The outcomes of patient safety events in the ICPS were divided into five levels: none, mild, moderate, severe and death. Of the 1343 cases, the proportions of medication errors with these outcomes were 91.88%, 3.35%, 2.76%, 2.01% and 0%, respectively. The influencing factors of nursing-related medication errors in the 1343 cases occurred 3185 times; in descending order of frequency they were routine violations, "negligence" and "fault" among technical mistakes, "misapplication of good rules" among rule-based errors, knowledge-based mistakes, communication, and illusion. Conclusions: Applying the ICPS influencing factors helps nursing managers discriminate system and process defects from the perspective of human error, and can improve the management of patient safety.

  3. Uncorrected refractive errors

    OpenAIRE

    Naidoo, Kovin S; Jyoti Jaggernath

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error S...

  4. A gender-based analysis of Iranian EFL learners' types of written errors

    Directory of Open Access Journals (Sweden)

    Faezeh Boroomand

    2013-05-01

    Full Text Available Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male and 50 female), presents different classifications and subdivisions of errors and analyses these errors. Identifying the most frequently committed errors in each classification, the findings reveal significant differences between the error frequencies of the male and female groups (with a higher error frequency in the female written productions).
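
    The record reports frequency differences between the two groups. One common way to test such differences is a chi-square test on a contingency table of error types by group; the counts below are invented for illustration and are not the study's data.

        import numpy as np
        from scipy.stats import chi2_contingency

        # invented counts of error types (columns) per group (rows: male, female)
        counts = np.array([[120,  80,  45],    # e.g. grammatical, lexical, mechanical
                           [150,  95,  70]])

        chi2, p, dof, expected = chi2_contingency(counts)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")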

  5. Error coding simulations

    Science.gov (United States)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
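
    The shape of such an error-control simulation can be sketched with far simpler code than the CCSDS Reed-Solomon/convolutional chain described above: a Monte Carlo bit-error-rate estimate for a 3x repetition code over a binary symmetric channel.

        import numpy as np

        def simulate_ber(p_channel, n_bits=200_000, seed=0):
            """Bit error rate of a 3x repetition code over a binary symmetric channel."""
            rng = np.random.default_rng(seed)
            data = rng.integers(0, 2, n_bits)
            coded = np.repeat(data, 3)                              # rate-1/3 repetition encoder
            flips = rng.random(coded.size) < p_channel              # channel bit flips
            received = coded ^ flips
            decoded = received.reshape(-1, 3).sum(axis=1) >= 2      # majority-vote decoder
            return np.mean(decoded != data)

        for p in (0.01, 0.05, 0.1):
            print(f"channel BER {p}: decoded BER {simulate_ber(p):.5f}")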

  6. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  7. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards of a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and the associated risk factors. The drug administration error rate was calculated with and without wrong time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten of them occurring simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.
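
    The two reported rates follow directly from the counts in the abstract; a short worked calculation makes them explicit:

        opportunities = 1501
        admins_with_error = 415       # administrations with at least one error
        errors_without_wrong_time = 113

        rate_with_wrong_time = admins_with_error / opportunities
        rate_without_wrong_time = errors_without_wrong_time / opportunities
        print(f"{rate_with_wrong_time:.1%}")      # ~27.6%
        print(f"{rate_without_wrong_time:.1%}")   # ~7.5%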

  8. We need to talk about error: causes and types of error in veterinary practice.

    Science.gov (United States)

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence based system for their classification. Causes of error were identified from retrospective record review of 678 claims to the profession's leading indemnity insurer and nine focus groups (average N per group=8) with vets, nurses and support staff were performed using critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error. PMID:26489997

  9. We need to talk about error: causes and types of error in veterinary practice.

    Science.gov (United States)

    Oxtoby, C; Ferguson, E; White, K; Mossop, L

    2015-10-31

    Patient safety research in human medicine has identified the causes and common types of medical error and subsequently informed the development of interventions which mitigate harm, such as the WHO's safe surgery checklist. There is no such evidence available to the veterinary profession. This study therefore aims to identify the causes and types of errors in veterinary practice, and presents an evidence based system for their classification. Causes of error were identified from retrospective record review of 678 claims to the profession's leading indemnity insurer and nine focus groups (average N per group=8) with vets, nurses and support staff were performed using critical incident technique. Reason's (2000) Swiss cheese model of error was used to inform the interpretation of the data. Types of error were extracted from 2978 claims records reported between the years 2009 and 2013. The major classes of error causation were identified with mistakes involving surgery the most common type of error. The results were triangulated with findings from the medical literature and highlight the importance of cognitive limitations, deficiencies in non-technical skills and a systems approach to veterinary error.

  10. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    International Nuclear Information System (INIS)

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.
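
    Calibration of class posterior probabilities, as emphasized in the record, is commonly checked with a reliability curve. The sketch below uses a generic scikit-learn classifier on synthetic data; it is not the MACC feature set or pipeline.

        import numpy as np
        from sklearn.calibration import calibration_curve
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        prob = clf.predict_proba(X_te)[:, 1]

        # fraction of positives vs mean predicted probability per bin; a calibrated
        # classifier lies close to the diagonal
        frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
        for f, m in zip(frac_pos, mean_pred):
            print(f"predicted {m:.2f} -> observed {f:.2f}")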

  11. CONSTRUCTION OF A CALIBRATED PROBABILISTIC CLASSIFICATION CATALOG: APPLICATION TO 50k VARIABLE SOURCES IN THE ALL-SKY AUTOMATED SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Richards, Joseph W.; Starr, Dan L.; Miller, Adam A.; Bloom, Joshua S.; Brink, Henrik; Crellin-Quick, Arien [Astronomy Department, University of California, Berkeley, CA 94720-3411 (United States); Butler, Nathaniel R., E-mail: jwrichar@stat.berkeley.edu [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States)

    2012-12-15

    With growing data volumes from synoptic surveys, astronomers necessarily must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of classification purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All-Sky Automated Survey (ASAS), and release the Machine-learned ASAS Classification Catalog (MACC), a 28 class probabilistic classification catalog of 50,124 ASAS sources in the ASAS Catalog of Variable Stars. We estimate that MACC achieves a sub-20% classification error rate and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes.

  12. Remote Sensing Information Classification

    Science.gov (United States)

    Rickman, Douglas L.

    2008-01-01

    This viewgraph presentation reviews the classification of Remote Sensing data in relation to epidemiology. Classification is a way to reduce the dimensionality and precision to something a human can understand. Classification changes SCALAR data into NOMINAL data.

  13. AR-based Method for ECG Classification and Patient Recognition

    Directory of Open Access Journals (Sweden)

    Branislav Vuksanovic

    2013-09-01

    Full Text Available The electrocardiogram (ECG) is the recording of heart activity obtained by measuring the signals from electrical contacts placed on the skin of the patient. By analyzing the ECG, it is possible to detect the rate and consistency of heartbeats and identify possible irregularities in heart operation. This paper describes a set of techniques employed to pre-process the ECG signals and extract a set of features – autoregressive (AR) signal parameters – used to characterise the ECG signal. The extracted parameters are used in this work to accomplish two tasks. Firstly, the AR features belonging to each ECG signal are classified into groups corresponding to three different heart conditions – normal, arrhythmia and ventricular arrhythmia. The obtained classification results indicate accurate, zero-error classification of patients according to their heart condition using the proposed method. The sets of extracted AR coefficients are then extended by adding an additional parameter – the power of the AR modelling error – and the suitability of the developed technique for individual patient identification is investigated. Individual feature sets for each group of detected QRS sections are classified into p clusters, where p represents the number of patients in each group. The developed system has been tested using ECG signals available in the MIT/BIH and Politecnico di Milano VCG/ECG databases. The achieved recognition rates indicate that patient identification using ECG signals could be considered a possible approach in some applications using the system developed in this work. Pre-processing stages, the applied parameter extraction techniques and some intermediate and final classification results are described and presented in this paper.
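
    A minimal sketch of the AR feature idea, assuming a plain least-squares AR fit per signal segment; the paper's pre-processing, QRS detection and classifier are not reproduced here.

        import numpy as np

        def ar_coefficients(x, order=4):
            """Least-squares AR(p) fit: x[n] ~ sum_k a_k * x[n-k]; returns (a_1..a_p, residual power)."""
            X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
            y = x[order:]
            a, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ a
            return np.r_[a, np.mean(resid ** 2)]      # AR coefficients plus modelling-error power

        # toy usage: two synthetic "beats" with different dynamics give different feature vectors
        rng = np.random.default_rng(0)
        t = np.arange(300)
        beat1 = np.sin(0.2 * t) + 0.05 * rng.normal(size=t.size)
        beat2 = np.sin(0.05 * t) + 0.05 * rng.normal(size=t.size)
        print(ar_coefficients(beat1).round(3))
        print(ar_coefficients(beat2).round(3))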

  14. Medical error and disclosure.

    Science.gov (United States)

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. PMID:24182370

  15. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    , which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal, it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...

  16. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  17. HYBRID INTERNET TRAFFIC CLASSIFICATION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    Li Jun; Zhang Shunyi; Lu Yanqing; Yan Junrong

    2009-01-01

    Accurate and real-time classification of network traffic is significant for network operation and management, such as QoS differentiation, traffic shaping and security surveillance. However, with many newly emerged P2P applications using dynamic port numbers, masquerading techniques and payload encryption to avoid detection, traditional classification approaches turn out to be ineffective. In this paper, we present a layered hybrid system to classify current Internet traffic, motivated by the variety of network activities and their requirements of traffic classification. The proposed method achieves fast and accurate traffic classification with low overheads and the robustness to accommodate both known and unknown/encrypted applications. Furthermore, it is feasible for use in the context of real-time traffic classification. Our experimental results show the distinct advantages of the proposed classification system compared with a one-step Machine Learning (ML) approach.

  18. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.

  19. Adaptive Error Resilience for Video Streaming

    Directory of Open Access Journals (Sweden)

    Lakshmi R. Siruvuri

    2009-01-01

    Full Text Available Compressed video sequences are vulnerable to channel errors, to the extent that minor errors and/or small losses can result in substantial degradation. Thus, protecting compressed data against channel errors is imperative. The use of channel coding schemes can be effective in reducing the impact of channel errors, although this requires extra parity bits to be transmitted, thus utilizing more bandwidth. However, this can be ameliorated if the transmitter can tailor the parity data rate based on its knowledge of current channel conditions. This can be achieved via feedback from the receiver to the transmitter. This paper describes a channel emulation system comprised of a server/proxy/client combination that utilizes feedback from the client to adapt the number of Reed-Solomon parity symbols used to protect compressed video sequences against channel errors.
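
    The feedback-driven adaptation can be sketched as a simple control loop over the number of parity symbols; the thresholds, step sizes and loss reports below are invented, and no actual Reed-Solomon coding is performed.

        def adapt_parity(current_parity, reported_loss, target_loss=0.02,
                         min_parity=2, max_parity=32):
            """Increase parity when the receiver reports losses above target, back off when below."""
            if reported_loss > target_loss:
                current_parity = min(max_parity, current_parity * 2)   # react quickly to bad channels
            elif reported_loss < target_loss / 2:
                current_parity = max(min_parity, current_parity - 1)   # reclaim bandwidth slowly
            return current_parity

        parity = 4
        for loss in (0.00, 0.05, 0.08, 0.01, 0.005, 0.004):   # invented feedback reports
            parity = adapt_parity(parity, loss)
            print(f"reported loss {loss:.3f} -> parity symbols {parity}")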

  20. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  1. SHIP CLASSIFICATION FROM MULTISPECTRAL VIDEOS

    Directory of Open Access Journals (Sweden)

    Frederique Robert-Inacio

    2012-05-01

    Full Text Available Surveillance of a seaport can be achieved by different means: radar, sonar, cameras, radio communications and so on. Such surveillance aims, on the one hand, to manage cargo and tanker traffic and, on the other hand, to prevent terrorist attacks in sensitive areas. In this paper an application of video-surveillance to a seaport entrance is presented and, more particularly, the different steps enabling the classification of mobile shapes. This classification is based on a parameter measuring the degree of similarity between the shape under study and a set of reference shapes. The classification result describes the considered mobile in terms of shape and speed.

  2. Systematic error mitigation in multiple field astrometry

    CERN Document Server

    Gai, Mario

    2011-01-01

    Combination of more than two fields provides constraints on the systematic error of simultaneous observations. The concept is investigated in the context of the Gravitation Astrometric Measurement Experiment (GAME), which aims at measurement of the PPN parameter $\\gamma$ at the $10^{-7}-10^{-8}$ level. Robust self-calibration and control of systematic error is crucial to the achievement of the precision goal. The present work is focused on the concept investigation and practical implementation strategy of systematic error control over four simultaneously observed fields, implementing a "double differential" measurement technique. Some basic requirements on geometry, observing and calibration strategy are derived, discussing the fundamental characteristics of the proposed concept.

  3. Unsupervised Hybrid Classification for Texture Analysis Using Fixed and Optimal Window Size

    Directory of Open Access Journals (Sweden)

    S.S SREEJA MOLE

    2010-12-01

    Full Text Available For achieving better classification results in texture analysis, it is beneficial to combine different classification methods. Existing methods that use a fixed window size suffer from a lack of classification accuracy; to improve the classification accuracy, the window size must be increased. Moreover, selecting the optimal window size is also important for improving the classification output. In addition, since some classification techniques are suited to micro-textured structures and others to large-scale textured images, it is better to integrate different classification methods to achieve a higher classification rate. This paper presents a new classification technique, named unsupervised hybrid classification for texture analysis (UHCTA), that combines the properties of different methods to achieve a higher classification rate. Comparison with the existing methods confirms the merits of the proposed unsupervised hybrid classification for texture analysis method in terms of accuracy under various image conditions.

  4. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    Science.gov (United States)

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  5. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  6. A Literature Review of Research on Error Analysis Abroad

    Institute of Scientific and Technical Information of China (English)

    肖倩

    2014-01-01

    Error constitutes an important part of interlanguage. Error analysis is an approach that emerged as a reaction to behaviorism and is grounded in cognitive theory. The aim of error analysis is to explore the errors made by second language learners and the mental processes underlying learners' second language acquisition, which is of great importance to both learners and teachers. However, as a research tool, error analysis has its limitations. In order to better understand and make the best use of error analysis, its background, definition, basic assumptions, classification, procedure, explanation and implications, as well as its application, will be illustrated. Its limitations will be analyzed from the perspectives of its nature and definitional categories. This review of the literature abroad sheds light on implications for second language teaching.

  7. Errors generated with the use of rectangular collimation

    Energy Technology Data Exchange (ETDEWEB)

    Parks, E.T. (Department of Allied Health, Western Kentucky University, Bowling Green (USA))

    1991-04-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques.

  8. Classification of the web

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  9. Minimax Optimal Rates of Convergence for Multicategory Classifications

    Institute of Scientific and Technical Information of China (English)

    Di Rong CHEN; Xu YOU

    2007-01-01

    In the problem of classification (or pattern recognition), given a set of n samples, we attempt to construct a classifier gn with a small misclassification error. It is important to study the convergence rates of the misclassification error as n tends to infinity. It is known that such a rate cannot exist for the set of all distributions. In this paper we obtain the optimal convergence rates for a class of distributions D(λ,ω) in multicategory classification and nonstandard binary classification.

  10. Thermodynamics of Error Correction

    Science.gov (United States)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  11. Contextualizing Object Detection and Classification.

    Science.gov (United States)

    Chen, Qiang; Song, Zheng; Dong, Jian; Huang, Zhongyang; Hua, Yang; Yan, Shuicheng

    2015-01-01

    We investigate how to iteratively and mutually boost object classification and detection performance by taking the outputs from one task as the context of the other one. While context models have been quite popular, previous works mainly concentrate on co-occurrence relationship within classes and few of them focus on contextualization from a top-down perspective, i.e. high-level task context. In this paper, our system adopts a new method for adaptive context modeling and iterative boosting. First, the contextualized support vector machine (Context-SVM) is proposed, where the context takes the role of dynamically adjusting the classification score based on the sample ambiguity, and thus the context-adaptive classifier is achieved. Then, an iterative training procedure is presented. In each step, Context-SVM, associated with the output context from one task (object classification or detection), is instantiated to boost the performance for the other task, whose augmented outputs are then further used to improve the former task by Context-SVM. The proposed solution is evaluated on the object classification and detection tasks of PASCAL Visual Object Classes Challenge (VOC) 2007, 2010 and SUN09 data sets, and achieves the state-of-the-art performance.

  12. Integrating TM and Ancillary Geographical Data with Classification Trees for Land Cover Classification of Marsh Area

    Institute of Scientific and Technical Information of China (English)

    NA Xiaodong; ZHANG Shuqing; ZHANG Huaiqing; LI Xiaofeng; YU Huan; LIU Chunyue

    2009-01-01

    The main objective of this research is to determine the capacity of land cover classification combining spectral and textural features of Landsat TM imagery with ancillary geographical data in wetlands of the Sanjiang Plain, Heilongjiang Province, China. Semi-variograms and Z-test values were calculated to assess the separability of grey-level co-occurrence texture measures and to maximize the difference between land cover types. The degree of spatial autocorrelation showed that window sizes of 3×3 pixels and 11×11 pixels were most appropriate for Landsat TM image texture calculations. The texture analysis showed that co-occurrence entropy, dissimilarity, and variance texture measures, derived from the Landsat TM spectral bands and vegetation indices, provided the most significant statistical differentiation between land cover types. Subsequently, a Classification and Regression Tree (CART) algorithm was applied to three different combinations of predictors: 1) TM imagery alone (TM-only); 2) TM imagery plus image texture (TM+TXT model); and 3) all predictors including TM imagery, image texture and additional ancillary GIS information (TM+TXT+GIS model). Compared with traditional Maximum Likelihood Classification (MLC) supervised classification, the three classification-tree predictive models reduced the overall error rate significantly. Image texture measures and ancillary geographical variables suppressed speckle noise effectively and noticeably reduced the classification error rate for marsh. For the classification-tree model making use of all available predictors, the omission error rate for marsh was 12.90% and the commission error rate was 10.99%. The developed method is portable, relatively easy to implement and should be applicable in other settings and over larger extents.
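
    A minimal sketch of the predictor-set comparison described above, assuming the training pixels have already been sampled into a table; the file name, column names and tree settings are hypothetical, and scikit-learn's CART implementation stands in for the authors' classification-tree software.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    samples = pd.read_csv("training_pixels.csv")       # hypothetical sample table
    bands   = ["tm1", "tm2", "tm3", "tm4", "tm5", "tm7"]
    texture = ["entropy_3x3", "dissimilarity_11x11", "variance_11x11"]
    gis     = ["dem", "dist_to_river", "soil_class"]   # ancillary layers; soil_class assumed numeric-coded

    predictor_sets = {"TM-only": bands,
                      "TM+TXT": bands + texture,
                      "TM+TXT+GIS": bands + texture + gis}

    for name, cols in predictor_sets.items():
        tree = DecisionTreeClassifier(min_samples_leaf=20, random_state=0)
        acc = cross_val_score(tree, samples[cols], samples["land_cover"], cv=10).mean()
        print(f"{name}: overall accuracy ~ {acc:.3f}")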

  13. Effect of dose ascertainment errors on observed risk

    International Nuclear Information System (INIS)

    Inaccuracies in dose assignments can lead to misclassification in epidemiological studies. The extent of this misclassification is examined for different error functions, classification intervals, and actual dose distributions. The error function model is one which results in a truncated lognormal distribution of the assigned dose for each actual dose. The error function may vary as the actual dose changes. The effect of misclassification on the conclusions about dose effect relationships is examined for the linear and quadratic dose effect models. 10 references, 9 figures, 8 tables
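
    The abstract gives no formulas, so the following is only a toy simulation of the general idea: assigned doses are drawn around each actual dose with lognormal error, truncated, and binned into classification intervals to show how misclassification between dose categories arises. All numbers below are illustrative assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)
    actual = rng.uniform(0.0, 2.5, size=10_000)           # actual doses (Gy), hypothetical
    gsd = 1.5                                             # assumed geometric standard deviation of the error
    assigned = actual * rng.lognormal(mean=0.0, sigma=np.log(gsd), size=actual.size)
    assigned = np.clip(assigned, 0.0, 4.0)                # crude truncation of extreme assignments

    bins = np.array([0.0, 0.5, 1.0, 1.5, 2.0, np.inf])    # classification intervals
    true_cls = np.digitize(actual, bins) - 1
    obs_cls = np.digitize(assigned, bins) - 1

    # Misclassification matrix: rows = actual interval, columns = assigned interval.
    matrix = np.zeros((len(bins) - 1, len(bins) - 1))
    for t, o in zip(true_cls, obs_cls):
        matrix[t, o] += 1
    matrix /= matrix.sum(axis=1, keepdims=True)
    print(np.round(matrix, 3))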

  14. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue.…

  15. Error Reduction for Visible Watermarking in Still Images

    Institute of Scientific and Technical Information of China (English)

    LjubisaRadunovic; 王朔中; 等

    2002-01-01

    Different digital watermarking techniques and their applications are briefly reviewed.Solution to a practical problem with visible image marking is presented,together with experimental results and discussion.Main focusis on reduction of error caused by the mark addition and subtraction.Image classification based on its mean gray level and adjustment of out-of-range gray levels are implemented.

  16. Lexical Errors in Second Language Scientific Writing: Some Conceptual Implications

    Science.gov (United States)

    Carrió Pastor, María Luisa; Mestre-Mestre, Eva María

    2014-01-01

    Nowadays, scientific writers are required not only a thorough knowledge of their subject field, but also a sound command of English as a lingua franca. In this paper, the lexical errors produced in scientific texts written in English by non-native researchers are identified to propose a classification of the categories they contain. This study…

  17. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a foreign-language classroom, based on empirical data of a large scale. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or a direct only correction. The former choice indicates that students would be happy to take either so long as the correction gets done.Most students didn't mind peer correcting provided it is conducted in a constructive way. More than halfofthe students would feel uncomfortable ifthe same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to cncourage peer correcting, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study would contribute to the effectiveness of error correction in a Chinese language classroom and it may also have a wider implication on other languages.

  18. Quadratic dynamical decoupling with nonuniform error suppression

    Energy Technology Data Exchange (ETDEWEB)

    Quiroz, Gregory; Lidar, Daniel A. [Department of Physics and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States); Departments of Electrical Engineering, Chemistry, and Physics, and Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, California 90089 (United States)

    2011-10-15

    We analyze numerically the performance of the near-optimal quadratic dynamical decoupling (QDD) single-qubit decoherence errors suppression method [J. West et al., Phys. Rev. Lett. 104, 130501 (2010)]. The QDD sequence is formed by nesting two optimal Uhrig dynamical decoupling sequences for two orthogonal axes, comprising N{sub 1} and N{sub 2} pulses, respectively. Varying these numbers, we study the decoherence suppression properties of QDD directly by isolating the errors associated with each system basis operator present in the system-bath interaction Hamiltonian. Each individual error scales with the lowest order of the Dyson series, therefore immediately yielding the order of decoherence suppression. We show that the error suppression properties of QDD are dependent upon the parities of N{sub 1} and N{sub 2}, and near-optimal performance is achieved for general single-qubit interactions when N{sub 1}=N{sub 2}.

  19. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  20. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
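
    A rough sketch of the propagation idea, not the paper's method: measured wind speeds are shifted by a ±10% speed error and pushed through a turbine power curve to see the resulting spread in annual energy. The cubic-ramp power curve and the Weibull-sampled speeds below are stand-ins, not the 28 Lagrange-fitted curves or real measurements.

    import numpy as np

    def power_curve(v, rated_kw=2000.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
        # Simplified power curve: cubic ramp between cut-in and rated speed, zero elsewhere.
        p = np.where((v >= cut_in) & (v < rated_v),
                     rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
        p = np.where((v >= rated_v) & (v <= cut_out), rated_kw, p)
        return p

    v = np.random.default_rng(1).weibull(2.0, 8760) * 8.0   # hypothetical hourly wind speeds (m/s)
    speed_error = 0.10                                       # 10% measurement error on speed

    base = power_curve(v).sum()
    low = power_curve(v * (1 - speed_error)).sum()
    high = power_curve(v * (1 + speed_error)).sum()
    print(f"annual energy error ~ -{(base - low) / base:.1%} / +{(high - base) / base:.1%}")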

  1. Tanks for liquids: calibration and errors assessment

    International Nuclear Information System (INIS)

    After a brief reference to some of the problems raised by tanks calibration, two methods, theoretical and experimental are presented, so as to achieve it taking into account measurement errors. The method is applied to the transfer of liquid from one tank to another. Further, a practical example is developed. (author)

  2. Measuring Test Measurement Error: A General Approach

    Science.gov (United States)

    Boyd, Donald; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2013-01-01

    Test-based accountability as well as value-added asessments and much experimental and quasi-experimental research in education rely on achievement tests to measure student skills and knowledge. Yet, we know little regarding fundamental properties of these tests, an important example being the extent of measurement error and its implications for…

  3. Automatic web services classification based on rough set theory

    Institute of Scientific and Technical Information of China (English)

    陈立; 张英; 宋自林; 苗壮

    2013-01-01

    With the development of web services technology, the number of existing services on the internet is growing day by day. In order to achieve automatic and accurate services classification, which can be beneficial for service-related tasks, a rough set theory based method for services classification was proposed. First, the service descriptions were preprocessed and represented as vectors. Inspired by discernibility-matrix-based attribute reduction in rough set theory, and taking into account the characteristics of the decision table for services classification, a method based on continuous discernibility matrices was proposed for dimensionality reduction. Finally, services classification was performed automatically. In the experiment, the proposed method achieves satisfactory classification results in all five testing categories. The experimental results show that the proposed method is accurate and could be used in practical web services classification.
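
    The continuous-discernibility-matrix method itself is not reproduced here; the sketch below only illustrates the classical discernibility-matrix idea it builds on, applied to a toy discrete decision table, with a greedy set cover approximating an attribute reduct. The table values and category labels are made up.

    from itertools import combinations

    # rows: (attribute values ..., decision class) -- toy data, not real service descriptions
    table = [
        (1, 0, 2, "billing"),
        (1, 1, 2, "billing"),
        (0, 0, 1, "weather"),
        (0, 1, 0, "weather"),
        (1, 1, 0, "travel"),
    ]
    n_attr = 3

    # Discernibility matrix: for each pair of objects with different decisions,
    # record the set of attributes that tell them apart.
    entries = []
    for a, b in combinations(table, 2):
        if a[-1] != b[-1]:
            entries.append({k for k in range(n_attr) if a[k] != b[k]})

    # Greedy set cover: repeatedly pick the attribute appearing in most uncovered entries.
    reduct, uncovered = set(), [e for e in entries if e]
    while uncovered:
        best = max(range(n_attr), key=lambda k: sum(k in e for e in uncovered))
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    print("approximate reduct:", sorted(reduct))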

  4. Decomposing model systematic error

    Science.gov (United States)

    Keenlyside, Noel; Shen, Mao-Lin

    2014-05-01

    Seasonal forecasts made with a single model are generally overconfident. The standard approach to improve forecast reliability is to account for structural uncertainties through a multi-model ensemble (i.e., an ensemble of opportunity). Here we analyse a multi-model set of seasonal forecasts available through ENSEMBLES and DEMETER EU projects. We partition forecast uncertainties into initial value and structural uncertainties, as function of lead-time and region. Statistical analysis is used to investigate sources of initial condition uncertainty, and which regions and variables lead to the largest forecast error. Similar analysis is then performed to identify common elements of model error. Results of this analysis will be used to discuss possibilities to reduce forecast uncertainty and improve models. In particular, better understanding of error growth will be useful for the design of interactive multi-model ensembles.

  5. Random errors revisited

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    2000-01-01

    It is well known that the random errors of sound intensity estimates can be much larger than the theoretical minimum value determined by the BT-product, in particular under reverberant conditions and when there are several sources present. More than ten years ago it was shown that one can predict the random errors of estimates of the sound intensity in, say, one-third octave bands from the power and cross power spectra of the signals from an intensity probe determined with a dual channel FFT analyser. This is not very practical, though. In this paper it is demonstrated that one can predict the random errors from the power and cross power spectra determined with the same spectral resolution as the sound intensity itself.

  6. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy. Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that induce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy Nina (ceNtro di FormazIone e SimulazioNe NeonAtale is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA. Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  7. Boosting accuracy of automated classification of fluorescence microscope images for location proteomics

    Directory of Open Access Journals (Sweden)

    Huang Kai

    2004-06-01

    Full Text Available Abstract Background Detailed knowledge of the subcellular location of each expressed protein is critical to a full understanding of its function. Fluorescence microscopy, in combination with methods for fluorescent tagging, is the most suitable current method for proteome-wide determination of subcellular location. Previous work has shown that neural network classifiers can distinguish all major protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Building on these results, we evaluate here new classifiers and features to improve the recognition of protein subcellular location patterns in both 2D and 3D fluorescence microscope images. Results We report here a thorough comparison of the performance on this problem of eight different state-of-the-art classification methods, including neural networks, support vector machines with linear, polynomial, radial basis, and exponential radial basis kernel functions, and ensemble methods such as AdaBoost, Bagging, and Mixtures-of-Experts. Ten-fold cross validation was used to evaluate each classifier with various parameters on different Subcellular Location Feature sets representing both 2D and 3D fluorescence microscope images, including new feature sets incorporating features derived from Gabor and Daubechies wavelet transforms. After optimal parameters were chosen for each of the eight classifiers, optimal majority-voting ensemble classifiers were formed for each feature set. Comparison of results for each image for all eight classifiers permits estimation of the lower bound classification error rate for each subcellular pattern, which we interpret to reflect the fraction of cells whose patterns are distorted by mitosis, cell death or acquisition errors. Overall, we obtained statistically significant improvements in classification accuracy over the best previously published results, with the overall error rate being reduced by one-third to one-half and with the average
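
    A compact sketch of the evaluation setup described above (several classifiers assessed by 10-fold cross-validation, then combined by majority vote); the feature files, classifier settings and member list below are assumptions, not the study's exact configuration.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
    from sklearn.model_selection import cross_val_score

    X = np.load("slf_features.npy")       # hypothetical precomputed Subcellular Location Features
    y = np.load("locations.npy")          # hypothetical subcellular location labels

    members = [("svm_rbf", SVC(kernel="rbf", gamma="scale")),
               ("mlp", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000)),
               ("ada", AdaBoostClassifier()),
               ("bag", BaggingClassifier())]

    for name, clf in members:
        print(name, cross_val_score(clf, X, y, cv=10).mean())

    ensemble = VotingClassifier(members, voting="hard")   # majority vote over the members
    print("ensemble", cross_val_score(ensemble, X, y, cv=10).mean())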

  8. Realizing Low-Energy Classification Systems by Implementing Matrix Multiplication Directly Within an ADC.

    Science.gov (United States)

    Wang, Zhuo; Zhang, Jintao; Verma, Naveen

    2015-12-01

    In wearable and implantable medical-sensor applications, low-energy classification systems are of importance for deriving high-quality inferences locally within the device. Given that sensor instrumentation is typically followed by A-D conversion, this paper presents a system implementation wherein the majority of the computations required for classification are implemented within the ADC. To achieve this, first an algorithmic formulation is presented that combines linear feature extraction and classification into a single matrix transformation. Second, a matrix-multiplying ADC (MMADC) is presented that enables multiplication between an analog input sample and a digital multiplier, with negligible additional energy beyond that required for A-D conversion. Two systems mapped to the MMADC are demonstrated: (1) an ECG-based cardiac arrhythmia detector; and (2) an image-pixel-based facial gender detector. The RMS error over all multiplications performed, normalized to the RMS of the ideal multiplication results, is 0.018. Further, compared to idealized versions of conventional systems, the energy savings obtained are estimated to be 13× and 29×, respectively, while achieving a similar level of performance. PMID:26849205
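
    The ADC circuit itself is hardware, but the algorithmic step of folding linear feature extraction and a linear classifier into a single matrix can be shown in a few lines; the dimensions, matrices and class labels below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    d_raw, d_feat = 256, 16
    F = rng.standard_normal((d_feat, d_raw))   # hypothetical linear feature extraction matrix
    w = rng.standard_normal(d_feat)            # hypothetical linear classifier weights
    b = 0.1

    x = rng.standard_normal(d_raw)             # one raw sensor frame

    score_two_stage = w @ (F @ x) + b          # extract features, then classify
    W = w @ F                                  # fold both stages into one matrix-vector product
    score_fused = W @ x + b

    assert np.isclose(score_two_stage, score_fused)
    print("decision:", "arrhythmia" if score_fused > 0 else "normal")   # label names hypothetical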

  9. Classification systems for natural resource management

    Science.gov (United States)

    Kleckner, Richard L.

    1981-01-01

    Resource managers employ various types of resource classification systems in their management activities such as inventory, mapping, and data analysis. Classification is the ordering or arranging of objects into groups or sets on the basis of their relationships, and as such, classifications provide resource managers with a structure for organizing the information they need. In addition to conforming to certain logical principles, resource classifications should be flexible, widely applicable to a variety of environmental conditions, and usable with minimal training. The process of classification may be approached from the bottom up (aggregation) or the top down (subdivision) or a combination of both, depending on the purpose of the classification. Most resource classification systems in use today focus on a single resource and are used for a single, limited purpose. However, resource managers now must employ the concept of multiple use in their management activities. What they need is an integrated, ecologically based approach to resource classification which would fulfill multiple-use mandates. In an effort to achieve resource-data compatibility and data sharing among Federal agencies, an interagency agreement has been signed by five Federal agencies to coordinate and cooperate in the area of resource classification and inventory.

  10. Hand eczema classification

    DEFF Research Database (Denmark)

    Diepgen, T L; Andersen, Klaus Ejner; Brandao, F M;

    2008-01-01

    of the disease is rarely evidence based, and a classification system for different subdiagnoses of hand eczema is not agreed upon. Randomized controlled trials investigating the treatment of hand eczema are called for. For this, as well as for clinical purposes, a generally accepted classification system is needed. A classification system for hand eczema is proposed. Conclusions: It is suggested that this classification be used in clinical work and in clinical trials.

  11. Classification of articulators.

    Science.gov (United States)

    Rihani, A

    1980-03-01

    A simple classification in familiar terms with definite, clear characteristics can be adopted. This classification system is based on the number of records used and the adjustments necessary for the articulator to accept these records. The classification divides the articulators into nonadjustable, semiadjustable, and fully adjustable articulators (Table I). PMID:6928204

  12. Decentralized estimation of sensor systematic error and target state vector

    Institute of Scientific and Technical Information of China (English)

    贺明科; 王正明; 朱炬波

    2003-01-01

    An accurate estimation of the sensor systematic error is significant for improving the performance of a target tracking system. Existing methods usually append the bias states directly to the variable states to form augmented state vectors and utilize the conventional Kalman estimator to obtain state vector estimates. Doing so is computationally expensive, and much work has been devoted to decoupling variable states and systematic error, but decentralized estimation of systematic errors, reduction of the amount of computation, and decentralized track fusion are far from being realized. This paper addresses the distributed track fusion problem in a multi-sensor tracking system in the presence of sensor bias. By this method, variable states and systematic error are decoupled, and decentralized systematic error estimation and track fusion are achieved. Simulation results verify that this method can obtain accurate estimates of the systematic error and the state vector.

  13. Control by model error estimation

    Science.gov (United States)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  14. Automated classification of patients with chronic lymphocytic leukemia and immunocytoma from flow cytometric three-color immunophenotypes.

    Science.gov (United States)

    Valet, G K; Höffkes, H G

    1997-12-15

    The goal of this study was the discrimination between chronic lymphocytic leukemia (B-CLL), clinically more aggressive lymphoplasmocytoid immunocytoma (LP-IC) and other low-grade non-Hodgkin's lymphomas (NHL) of the B-cell type by automated analysis of flow cytometric immunophenotypes CD45/14/20, CD4/8/3, kappa/CD19/5, lambda/CD19/5 and CD10/23/19 from peripheral blood and bone marrow aspirate leukocytes using the multiparameter classification program CLASSIF1. The immunophenotype list mode files were exhaustively evaluated by combined lymphocyte, monocyte, and granulocyte (LMG) analysis. The results were introduced into databases and automatically classified in a standardized way. The resulting triple matrix classifiers are laboratory and instrument independent, error tolerant, and robust in the classification of unknown test samples. Practically 100% correct individual patient classification was achievable, and most manually unclassifiable patients were unambiguously classified. It is of interest that the single lambda/CD19/5 antibody triplet provided practically the same information as the full set of the five antibody triplets. This demonstrates that standardized classification can be used to optimize immunophenotype panels. On-line classification of test samples is accessible on the Internet: http://www.biochem.mpg.de/valet/leukaem1.html Immunophenotype panels are usually devised for the detection of the frequency of abnormal cell populations. As shown by computer classification, most of the highly discriminant information is, however, not contained in percentage frequency values of cell populations, but rather in total antibody binding, antibody binding ratios, and relative antibody surface density parameters of various lymphocyte, monocyte, and granulocyte cell populations. PMID:9440819

  15. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  16. Manson's triple error.

    Science.gov (United States)

    F, Delaporte

    2008-09-01

    The author discusses the significance, implications and limitations of Manson's work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error. PMID:18814729

  17. Stellar classification from single-band imaging using machine learning

    Science.gov (United States)

    Kuntzer, T.; Tewes, M.; Courbin, F.

    2016-06-01

    Information on the spectral types of stars is of great interest in view of the exploitation of space-based imaging surveys. In this article, we investigate the classification of stars into spectral types using only the shape of their diffraction pattern in a single broad-band image. We propose a supervised machine learning approach to this endeavour, based on principal component analysis (PCA) for dimensionality reduction, followed by artificial neural networks (ANNs) estimating the spectral type. Our analysis is performed with image simulations mimicking the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) in the F606W and F814W bands, as well as the Euclid VIS imager. We first demonstrate this classification in a simple context, assuming perfect knowledge of the point spread function (PSF) model and the possibility of accurately generating mock training data for the machine learning. We then analyse its performance in a fully data-driven situation, in which the training would be performed with a limited subset of bright stars from a survey, and an unknown PSF with spatial variations across the detector. We use simulations of main-sequence stars with flat distributions in spectral type and in signal-to-noise ratio, and classify these stars into 13 spectral subclasses, from O5 to M5. Under these conditions, the algorithm achieves a high success rate both for Euclid and HST images, with typical errors of half a spectral class. Although more detailed simulations would be needed to assess the performance of the algorithm on a specific survey, this shows that stellar classification from single-band images is well possible.
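
    A minimal sketch of the described pipeline (PCA for dimensionality reduction followed by a small neural network over 13 spectral subclasses); the input files, network size and train/test split are assumptions rather than the authors' simulation setup.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    cutouts = np.load("star_cutouts.npy")        # (n_stars, h, w) single-band stamps, hypothetical
    labels = np.load("spectral_subclass.npy")    # integers 0..12, O5 ... M5, hypothetical

    X = cutouts.reshape(len(cutouts), -1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

    model = make_pipeline(PCA(n_components=30),
                          MLPClassifier(hidden_layer_sizes=(40,), max_iter=3000, random_state=0))
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("accuracy:", (pred == y_te).mean())
    print("mean |error| in subclasses:", np.abs(pred - y_te).mean())   # the 'half a class' metric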

  18. A framework to classify error in animal-borne technologies

    Directory of Open Access Journals (Sweden)

    Zackory eBurns

    2015-05-01

    Full Text Available The deployment of novel, innovative, and increasingly miniaturized devices on fauna, especially otherwise difficult to observe taxa, to collect data has steadily increased. Yet, every animal-borne technology has its shortcomings, such as limitations in its precision or accuracy. These shortcomings, here labelled as ‘error’, are not yet studied systematically and a framework to identify and classify error does not exist. Here, we propose a classification scheme to synthesize error across technologies, discussing basic physical properties used by a technology to collect data, conversion of raw data into useful variables, and subjectivity in the parameters chosen. In addition, we outline a four-step framework to quantify error in animal-borne devices: to know, to identify, to evaluate, and to store. Both the classification scheme and framework are theoretical in nature. However, since mitigating error is essential to answer many biological questions, we believe they will be operationalized and facilitate future work to determine and quantify error in animal-borne technologies. Moreover, increasing the transparency of error will ensure the technique used to collect data moderates the biological questions and conclusions.

  19. Error Analysis and Its Implication

    Institute of Scientific and Technical Information of China (English)

    崔蕾

    2007-01-01

    Error analysis is an important theory and approach for exploring the mental processes of language learners in SLA. Its major contribution is pointing out that intralingual errors are the main source of errors during language learning. Researchers' exploration and description of these errors will not only promote the bidirectional study of error analysis as both theory and approach, but also offer implications for second language learning.

  20. Progression in nuclear classification

    International Nuclear Information System (INIS)

    This book summarizes the author's achievements over the latest 30 years in classifying nuclei by a new method, in which a new foundational law of nuclear layer structure in the material world is reported. It is explained with a hypothesis that a nucleus is made up of two nucleon clusters, a deuteron and a triton. Its concrete content is: to advance a new method that analyzes data of nuclei with natural abundance using relationships between the numbers of protons and neutrons; the relationships for each nucleus increase to four sets: S+H=Z, H+Z=N, Z+N=A and S-H=K; and to extend the similarity between proton and neutron to the similarity among p, n, deuteron, triton, and He-5 clusters. According to the distribution law of nuclei of the same kind, it is obtained that the upper limits of the stable region should both be '44s'. The new foundational law of the nuclear system is 1, 2, 4, 8, 16, 8, 4, 2, 1. In order to explain the new law, a hypothesis that the nucleus is made up of a deuteron and a triton is developed and a whole-number nuclear field is built up. It is also argued, concerning the unity of matter motion, that the most foundational form of the atomic nuclear system is similar to the chromosome numbers of mankind. These achievements challenge the foundations of traditional nuclear science, supply new tasks for developing nuclear theory, and question the ground on which the magic numbers are taken as the basis of nuclear science. The book opens up a new field of foundational research. It will supply new knowledge for researchers, teachers and students in universities and polytechnic schools, and can be read by scientific workers engaged in research and technical development. It can be stored in libraries and laboratories of societies and universities. In a time when the nation prospers through science and education, the book is readable for scientific and technical workers and amateurs of natural science.

  1. Texture Image Classification Based on Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    DENG Wei-bing; LI Hai-fei; SHI Ya-li; YANG Xiao-hui

    2014-01-01

    For a texture image, recognizing the class of every pixel allows the image to be partitioned into disjoint regions of uniform texture. This paper proposes a texture image classification algorithm based on Gabor wavelets. In this algorithm, the characteristics of the image are obtained from every pixel and its neighborhood, and the algorithm can transfer information between neighborhoods of different sizes. Experiments on the standard Brodatz texture image dataset show that the proposed algorithm achieves good classification rates.
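
    A rough sketch of per-pixel Gabor features, assuming scikit-image is available; the filter bank and the example texture are arbitrary choices, and the resulting per-pixel feature vectors could be fed to any pixel-wise classifier or clustering step.

    import numpy as np
    from skimage import data
    from skimage.filters import gabor

    image = data.brick().astype(float)            # stand-in for a Brodatz-style texture
    frequencies = (0.1, 0.2, 0.3)
    thetas = (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)

    features = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            features.append(np.sqrt(real ** 2 + imag ** 2))   # magnitude response per pixel

    X = np.stack(features, axis=-1).reshape(-1, len(features))  # one feature vector per pixel
    print("pixel feature matrix:", X.shape)
    # X can now be clustered or classified to assign a texture class to every pixel.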

  2. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  3. Classification rules for Indian Rice diseases

    Directory of Open Access Journals (Sweden)

    A. Nithya

    2011-01-01

    Full Text Available Many techniques have been developed for learning rules and relationships automatically from diverse data sets, to simplify the often tedious and error-prone process of acquiring knowledge from empirical data. The decision tree is one learning algorithm that possesses certain advantages making it suitable for discovering classification rules for data mining applications. Decision trees are a widely used learning method that requires no prior knowledge of the data distribution and works well on noisy data; here they are applied to classify rice diseases based on symptoms. This paper aims to discover classification rules for Indian rice diseases using the C4.5 decision tree algorithm. Expert systems have been used in agriculture since the early 1980s, and several systems have been developed in different countries, including the USA, Europe, and Egypt, for plant-disorder diagnosis, management and other production aspects. This paper explores what classification rules can do in the agricultural domain.
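
    A hedged sketch of rule discovery with a decision tree: scikit-learn's CART (with the entropy criterion) stands in for C4.5, and the symptom table, column names and tree settings are hypothetical. Each root-to-leaf path of the fitted tree is one IF-THEN classification rule.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = pd.read_csv("rice_symptoms.csv")        # hypothetical symptom table
    features = ["leaf_spot_colour", "lesion_shape", "stem_damage", "humidity_level"]
    X = pd.get_dummies(data[features])             # one-hot encode categorical symptoms
    y = data["disease"]

    tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=5, random_state=0)
    tree.fit(X, y)

    # Print the learned tree; each path reads as an IF-THEN classification rule.
    print(export_text(tree, feature_names=list(X.columns)))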

  4. Graded Achievement, Tested Achievement, and Validity

    Science.gov (United States)

    Brookhart, Susan M.

    2015-01-01

    Twenty-eight studies of grades, over a century, were reviewed using the argument-based approach to validity suggested by Kane as a theoretical framework. The review draws conclusions about the meaning of graded achievement, its relation to tested achievement, and changes in the construct of graded achievement over time. "Graded…

  5. Model and Algorithm of Backward Error Recovery of Distributed Software

    Institute of Scientific and Technical Information of China (English)

    刘纯生; 文传源

    1989-01-01

    Backward error recovery is one of the important techniques of software fault tolerance. Because of error propagation, recovery in distributed software needs cooperation between processes to achieve a consistent recovery. However, existing techniques for achieving this suffer from either a decrease in concurrency level or the domino effect. Based on a formal model of the distributed system, a backward recovery protocol without these two drawbacks is specified in this paper. The algorithm of the protocol is proven strictly and its implementation is proposed.

  6. Medical error and systems of signaling: conceptual and linguistic definition.

    Science.gov (United States)

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco

    2014-09-01

    In recent years the issue of patient safety has been the subject of detailed investigations, particularly as a result of the increasing attention from the patients and the public on the problem of medical error. The purpose of this work is firstly to define the classification of medical errors, which are distinguished between two perspectives: those that are personal, and those that are caused by the system. Furthermore we will briefly review some of the main methods used by healthcare organizations to identify and analyze errors. During this discussion it has been determined that, in order to constitute a practical, coordinated and shared action to counteract the error, it is necessary to promote an analysis that considers all elements (human, technological and organizational) that contribute to the occurrence of a critical event. Therefore, it is essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to error. In conclusion we have thus underlined that in health it is essential to affirm a system discussion that considers the error as a learning source, and as a result of the interaction between the individual and the organization. In this way, one should encourage a non-guilt bearing discussion on evident errors and on those which are not immediately identifiable, in order to create the conditions that recognize and corrects the error even before it produces negative consequences. PMID:25034521

  7. Sparse Partial Least Squares Classification for High Dimensional Data*

    OpenAIRE

    Chung, Dongjun; Keles, Sunduz

    2010-01-01

    Partial least squares (PLS) is a well known dimension reduction method which has been recently adapted for high dimensional classification problems in genome biology. We develop sparse versions of the recently proposed two PLS-based classification methods using sparse partial least squares (SPLS). These sparse versions aim to achieve variable selection and dimension reduction simultaneously. We consider both binary and multicategory classification. We provide analytical and simulation-based i...

  8. Accurate molecular classification of cancer using simple rules

    OpenAIRE

    Gotoh Osamu; Wang Xiaosheng

    2009-01-01

    Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  9. Recursive heuristic classification

    Science.gov (United States)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  10. Security classification of information

    Energy Technology Data Exchange (ETDEWEB)

    Quist, A.S.

    1993-04-01

    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  11. Classification of ASKAP Vast Radio Light Curves

    Science.gov (United States)

    Rebbapragada, Umaa; Lo, Kitty; Wagstaff, Kiri L.; Reed, Colorado; Murphy, Tara; Thompson, David R.

    2012-01-01

    The VAST survey is a wide-field survey that observes with unprecedented instrument sensitivity (0.5 mJy or lower) and repeat cadence (a goal of 5 seconds) that will enable novel scientific discoveries related to known and unknown classes of radio transients and variables. Given the unprecedented observing characteristics of VAST, it is important to estimate source classification performance, and determine best practices prior to the launch of ASKAP's BETA in 2012. The goal of this study is to identify light curve characterization and classification algorithms that are best suited for archival VAST light curve classification. We perform our experiments on light curve simulations of eight source types and achieve best case performance of approximately 90% accuracy. We note that classification performance is most influenced by light curve characterization rather than classifier algorithm.

  12. Fingerprint Gender Classification using Wavelet Transform and Singular Value Decomposition

    CERN Document Server

    Gnanasivam, P

    2012-01-01

    A novel method of gender classification from fingerprints is proposed based on the discrete wavelet transform (DWT) and singular value decomposition (SVD). The classification is achieved by extracting the energy computed from all the sub-bands of the DWT, combined with the spatial features of non-zero singular values obtained from the SVD of fingerprint images. A k-nearest neighbor (KNN) classifier is used. The method was tested on an internal database of 3570 fingerprints, of which 1980 were male and 1590 were female. Finger-wise gender classification reaches 94.32% for the left-hand little fingers of female subjects and 95.46% for the left-hand index fingers of male subjects. Gender classification over any tested finger is 91.67% for male subjects and 84.69% for female subjects, respectively. An overall classification rate of 88.28% has been achieved.
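
    An illustrative sketch of the feature construction described above, assuming PyWavelets and scikit-learn: sub-band energies from a 2-D DWT are concatenated with leading singular values of the fingerprint image and fed to a k-NN classifier. The wavelet, decomposition level, number of singular values and k are assumptions, not the paper's settings.

    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def gender_features(img, wavelet="db1", level=3, n_sv=10):
        # Energies of all DWT sub-bands plus leading singular values of the image.
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        energies = [np.sum(coeffs[0] ** 2)]
        for detail in coeffs[1:]:
            energies.extend(np.sum(band ** 2) for band in detail)   # (cH, cV, cD) per level
        sv = np.linalg.svd(img, compute_uv=False)[:n_sv]            # leading singular values
        return np.concatenate([energies, sv])

    # Usage idea (fingerprint_images and gender_labels are hypothetical arrays):
    # X = np.array([gender_features(im) for im in fingerprint_images])
    # knn = KNeighborsClassifier(n_neighbors=5).fit(X, gender_labels)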

  13. Cost-sensitive classification for rare events: an application to the credit rating model validation for SMEs

    OpenAIRE

    Raffaella Calabrese

    2011-01-01

    Receiver Operating Characteristic (ROC) curve is used to assess the discriminatory power of credit rating models. To identify the optimal threshold on the ROC curve, the iso-performance lines are used. The ROC curve and the iso-performance line assume equal classification error costs and that the two classification groups are relatively balanced. These assumptions are unrealistic in the application to credit risk. In order to remove these hypotheses, the curve of Classification Error Costs is...

  14. Error Model and Accuracy Calibration of 5-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Fangyu Pan

    2013-08-01

    Full Text Available To improve the machining precision and reduce the geometric errors of a 5-axis machine tool, an error model and a calibration procedure are presented in this paper. The error model is built using multi-body system theory and characteristic matrices, which establish the relationship between the cutting tool and the workpiece in theory. Accuracy calibration is difficult to achieve, but with a laser approach, using a laser interferometer and a laser tracker, the errors can be displayed accurately, which is beneficial for later compensation.

  15. New methodology in biomedical science: methodological errors in classical science.

    Science.gov (United States)

    Skurvydas, Albertas

    2005-01-01

    The following methodological errors are observed in biomedical sciences: paradigmatic ones; those of exaggerated search for certainty; science dehumanisation; those of determinism and linearity; those of making conclusions; errors of reductionism or quality decomposition as well as exaggerated enlargement; errors connected with discarding odd, unexpected or awkward facts; those of exaggerated mathematization; isolation of science; the error of "common sense"; the Ceteris Paribus ("other things being equal") law error; "youth" and common sense; inflexibility of criteria of the truth; errors of restricting the sources of truth and ways of searching for truth; the error connected with wisdom gained post factum; the errors of wrong interpretation of the research mission; "laziness" to repeat the experiment; as well as the errors of coordination of errors. One of the basic aims for present-day scholars of biomedicine is, therefore, mastering the new non-linear, holistic, complex way of thinking that will, undoubtedly, enable one to make fewer errors doing research. The aim of "scientific travelling" will be achieved with greater probability if the "travelling" itself is performed with great probability. PMID:15687745

  16. Classification of titanium dioxide

    International Nuclear Information System (INIS)

    In this work the X-ray diffraction (XRD), scanning electron microscopy (SEM) and X-ray dispersive energy spectroscopy techniques are used with the purpose of achieving a complete identification of phases and mixtures of phases of a crystalline material such as titanium dioxide. The problem to be solved consists of being able to distinguish a sample of titanium dioxide from a titanium dioxide pigment. A standard sample of titanium dioxide with a NIST certificate is used, which indicates a purity of 99.74% for the TiO2. The following procedure is recommended: a) to make an analysis by means of the X-ray diffraction technique on the sample of titanium dioxide pigment and on the titanium dioxide standard, expecting not to find differences; b) to make a chemical analysis by X-ray dispersive energy spectroscopy in a microscope, taking advantage of the high vacuum, since it is oxygen that is analysed; if it is concluded that aluminium oxide appears in a proportion greater than 1%, it is established that the sample is a titanium dioxide pigment, but if it is lower, then it is only titanium dioxide. This type of analysis is an application of nuclear techniques useful for the tariff classification of merchandise considered to be of difficult recognition. (Author)

  17. Dimensional jump in quantum error correction

    Science.gov (United States)

    Bombín, Héctor

    2016-04-01

    Topological stabilizer codes with different spatial dimensions have complementary properties. Here I show that the spatial dimension can be switched using gauge fixing. Combining 2D and 3D gauge color codes in a 3D qubit lattice, fault-tolerant quantum computation can be achieved with constant time overhead on the number of logical gates, up to efficient global classical computation, using only local quantum operations. Single-shot error correction plays a crucial role.

  18. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 to 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
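
    A hedged sketch of the fitting idea: an inverse power law accuracy(n) = a - b * n^(-c) is fitted to a few early learning-curve points with weighted nonlinear least squares (SciPy) and then extrapolated to larger annotation sizes. The observed points, weighting scheme and starting values are illustrative, not the paper's data.

    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power_law(n, a, b, c):
        return a - b * np.power(n, -c)

    n_obs = np.array([50, 100, 200, 400, 800])            # annotated sample sizes tried so far
    acc_obs = np.array([0.62, 0.70, 0.76, 0.80, 0.83])    # observed classifier accuracy (illustrative)
    weights = np.sqrt(n_obs)                              # one possible weighting: trust later points more

    params, _ = curve_fit(inverse_power_law, n_obs, acc_obs,
                          p0=(0.9, 1.0, 0.5), sigma=1.0 / weights, maxfev=10_000)

    # Extrapolate the fitted curve to larger annotation budgets.
    for n_future in (2_000, 5_000, 10_000):
        print(n_future, round(inverse_power_law(n_future, *params), 3))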

  19. Carotid and Jugular Classification in ARTSENS.

    Science.gov (United States)

    Sahani, Ashish Kumar; Shah, Malay Ilesh; Joseph, Jayaraj; Sivaprakasam, Mohanasankar

    2016-03-01

    Over the past few years our group has been working on the development of a low-cost device, ARTSENS, for measurement of local arterial stiffness (AS) of the common carotid artery (CCA). It uses a single-element ultrasound transducer to obtain A-mode frames from the CCA. It is designed to be fully automatic in its operation, such that a general medical practitioner can use the device without any prior knowledge of the ultrasound modality. Placement of the probe over the CCA and identification of echo positions corresponding to its two walls are critical steps in the measurement of AS. We had reported an algorithm to locate the CCA walls based on their characteristic motion. Unfortunately, in the supine position, the internal jugular vein (IJV) expands in the carotid triangle and pulsates in a manner that confounds the existing algorithm and leads to wrong measurements of the AS. The jugular venous pulse (JVP), in its own right, is a very important physiological signal for diagnosis of morbidities of the right side of the heart, and there is a lack of noninvasive methods for its accurate estimation. We integrated an ECG device into the existing hardware of ARTSENS and developed a method based on the physiology of the vessels, which now enables us to segregate the CCA pulse (CCP) and the JVP. The false identification rate is less than 4%. To retain the capability of ARTSENS to operate without ECG, we designed another method in which the classification can be achieved without an ECG, albeit with slightly higher errors. These improvements enable ARTSENS to perform automatic measurement of AS even in the supine position and make it a unique and handy tool for JVP analysis. PMID:25700474

  20. Fast Wavelet-Based Visual Classification

    CERN Document Server

    Yu, Guoshen

    2008-01-01

    We investigate a biologically motivated approach to fast visual classification, directly inspired by the recent work of Serre et al. Specifically, trading-off biological accuracy for computational efficiency, we explore using wavelet and grouplet-like transforms to parallel the tuning of visual cortex V1 and V2 cells, alternated with max operations to achieve scale and translation invariance. A feature selection procedure is applied during learning to accelerate recognition. We introduce a simple attention-like feedback mechanism, significantly improving recognition and robustness in multiple-object scenes. In experiments, the proposed algorithm achieves or exceeds state-of-the-art success rate on object recognition, texture and satellite image classification, language identification and sound classification.

  1. Emotion Classification from Noisy Speech - A Deep Learning Approach

    OpenAIRE

    Rana, Rajib

    2016-01-01

    This paper investigates the performance of Deep Learning for speech emotion classification when the speech is compounded with noise. It reports on the classification accuracy and concludes with the future directions for achieving greater robustness for emotion recognition from noisy speech.

  2. Error Detection in ESL Teaching

    OpenAIRE

    Rogoveanu Raluca

    2011-01-01

    This study investigates the role of error correction in the larger paradigm of ESL teaching and learning. It conceptualizes error as an inevitable variable in the process of learning and as a frequently occurring element in written and oral discourses of ESL learners. It also identifies specific strategies in which error can be detected and corrected and makes reference to various theoretical trends and their approach to error correction, as well as to the relation between language instructor...

  3. On the Arithmetic of Errors

    OpenAIRE

    Markov, Svetoslav; Hayes, Nathan

    2010-01-01

    An approximate number is an ordered pair consisting of a (real) number and an error bound, briefly error, which is a (real) non-negative number. To compute with approximate numbers the arithmetic operations on errors should be well-known. To model computations with errors one should suitably define and study arithmetic operations and order relations over the set of non-negative numbers. In this work we discuss the algebraic properties of non-negative numbers starting from fa...

  4. Cough event classification by pretrained deep neural network

    Science.gov (United States)

    2015-01-01

    Background Cough is an essential symptom in respiratory diseases. For the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease community. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. Method The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step then uses back propagation to tune the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made by the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. Results The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient dependent (PD) and patient independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average and micro average) are used to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal 14% and 11% error reduction in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity with maximal 14% error reduction on both PD and PI. Conclusions In this paper, we

  5. Ontologies vs. Classification Systems

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2009-01-01

    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  6. Classification of Itch.

    Science.gov (United States)

    Ständer, Sonja

    2016-01-01

    Chronic pruritus has diverse forms of presentation and can appear not only on normal skin [International Forum for the Study of Itch (IFSI) classification group II], but also in the company of dermatoses (IFSI classification group I). Scratching, a natural reflex, begins in response to itch. Enough damage can be done to the skin by scratching to cause changes in the primary clinical picture, often leading to a clinical picture predominated by the development of chronic scratch lesions (IFSI classification group III). An internationally recognized, standardized classification system was created by the IFSI to not only aid in clarifying terms and definitions, but also to harmonize the global nomenclature for itch. PMID:27578063

  7. Sensitivity analysis of DOA estimation algorithms to sensor errors

    Science.gov (United States)

    Li, Fu; Vaccaro, Richard J.

    1992-07-01

    A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.
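
    The estimators analysed here are standard; as a toy illustration of the MUSIC step only (not of the perturbation analysis itself), the sketch below forms a sample covariance from array snapshots, extracts the noise subspace, and scans the pseudospectrum for a uniform linear array with invented parameters:

```python
# Sketch: MUSIC pseudospectrum for a uniform linear array (illustrative parameters only).
import numpy as np

M, d, n_snap = 8, 0.5, 200                     # sensors, spacing in wavelengths, snapshots
true_doas = np.deg2rad([-20.0, 15.0])
rng = np.random.default_rng(0)

def steering(theta):
    # M x len(theta) matrix of array steering vectors.
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doas)                        # M x K steering matrix
S = rng.standard_normal((len(true_doas), n_snap)) + 1j * rng.standard_normal((len(true_doas), n_snap))
N = rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap))
X = A @ S + 0.1 * N                            # noisy snapshots

R = X @ X.conj().T / n_snap                    # sample covariance
eigval, eigvec = np.linalg.eigh(R)             # eigenvalues in ascending order
En = eigvec[:, : M - len(true_doas)]           # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 721))
a = steering(grid)
p_music = 1.0 / np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)

# Crude peak picking: local maxima, keep the two largest.
is_peak = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
peak_idx = np.where(is_peak)[0] + 1
top2 = peak_idx[np.argsort(p_music[peak_idx])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))
```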

  8. Error bounds for set inclusions

    Institute of Scientific and Technical Information of China (English)

    ZHENG; Xiyin(郑喜印)

    2003-01-01

    A variant of Robinson-Ursescu Theorem is given in normed spaces. Several error bound theorems for convex inclusions are proved and in particular a positive answer to Li and Singer's conjecture is given under weaker assumption than the assumption required in their conjecture. Perturbation error bounds are also studied. As applications, we study error bounds for convex inequality systems.

  9. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  10. Classifications for Proliferative Vitreoretinopathy (PVR): An Analysis of Their Use in Publications over the Last 15 Years

    Directory of Open Access Journals (Sweden)

    Salvatore Di Lauro

    2016-01-01

    Full Text Available Purpose. To evaluate the current and suitable use of current proliferative vitreoretinopathy (PVR) classifications in clinical publications related to treatment. Methods. A PubMed search was undertaken using the term “proliferative vitreoretinopathy therapy”. Outcome parameters were the reported PVR classification and PVR grades. The way the classifications were used in comparison to the original description was analyzed. Classification errors were also included. It was also noted whether classifications were used for comparison before and after pharmacological or surgical treatment. Results. 138 papers were included. 35 of them (25.4%) presented no classification reference or did not use any one. 103 publications (74.6%) used a standardized classification. The updated Retina Society Classification, the first Retina Society Classification, and the Silicone Study Classification were cited in 56.3%, 33.9%, and 3.8% papers, respectively. Furthermore, 3 authors (2.9%) used modified-customized classifications and 4 (3.8%) classification errors were identified. When the updated Retina Society Classification was used, only 10.4% of authors used a full C grade description. Finally, only 2 authors reported PVR grade before and after treatment. Conclusions. Our findings suggest that current classifications are of limited value in clinical practice due to the inconsistent and limited use and that it may be of benefit to produce a revised classification.

  11. Automatic classification of background EEG activity in healthy and sick neonates

    Science.gov (United States)

    Löfhede, Johan; Thordstein, Magnus; Löfgren, Nils; Flisberg, Anders; Rosa-Zurera, Manuel; Kjellmer, Ingemar; Lindecrantz, Kaj

    2010-02-01

    The overall aim of our research is to develop methods for a monitoring system to be used at neonatal intensive care units. When monitoring a baby, a range of different types of background activity needs to be considered. In this work, we have developed a scheme for automatic classification of background EEG activity in newborn babies. EEG from six full-term babies who were displaying a burst suppression pattern while suffering from the after-effects of asphyxia during birth was included along with EEG from 20 full-term healthy newborn babies. The signals from the healthy babies were divided into four behavioural states: active awake, quiet awake, active sleep and quiet sleep. By using a number of features extracted from the EEG together with Fisher's linear discriminant classifier we have managed to achieve 100% correct classification when separating burst suppression EEG from all four healthy EEG types and 93% true positive classification when separating quiet sleep from the other types. The other three sleep stages could not be classified. When the pathological burst suppression pattern was detected, the analysis was taken one step further and the signal was segmented into burst and suppression, allowing clinically relevant parameters such as suppression length and burst suppression ratio to be calculated. The segmentation of the burst suppression EEG works well, with a probability of error around 4%.
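
    A minimal sketch of the classification step only, using scikit-learn's linear discriminant analysis on placeholder feature vectors; the actual EEG features and the burst/suppression segmentation described above are outside this sketch:

```python
# Sketch: Fisher-style linear discriminant on precomputed EEG feature vectors (placeholders).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Placeholder features: rows are EEG epochs, columns are features (e.g. band powers).
X_bs     = rng.normal(loc=2.0, size=(60, 10))   # burst-suppression epochs
X_normal = rng.normal(loc=0.0, size=(240, 10))  # healthy epochs (all four states pooled)
X = np.vstack([X_bs, X_normal])
y = np.array([1] * len(X_bs) + [0] * len(X_normal))

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```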

  12. Beta systems error analysis

    Science.gov (United States)

    1984-01-01

    The atmospheric backscatter coefficient, beta, measured with an airborne CO Laser Doppler Velocimeter (LDV) system operating in a continuous-wave, focused mode is discussed. The Single Particle Mode (SPM) algorithm was developed from concept through analysis of an extensive amount of data obtained with the system on board a NASA aircraft. The SPM algorithm is intended to be employed in situations where one particle at a time appears in the sensitive volume of the LDV. In addition to giving the backscatter coefficient, the SPM algorithm also produces as intermediate results the aerosol density and the aerosol backscatter cross section distribution. A second method, which measures only the atmospheric backscatter coefficient, is called the Volume Mode (VM) and was simultaneously employed. The results of these two methods differed by slightly less than an order of magnitude. The measurement uncertainties or other errors in the results of the two methods are examined.

  13. Firewall Configuration Errors Revisited

    CERN Document Server

    Wool, Avishai

    2009-01-01

    The first quantitative evaluation of the quality of corporate firewall configurations appeared in 2004, based on Check Point FireWall-1 rule-sets. In general that survey indicated that corporate firewalls were often enforcing poorly written rule-sets, containing many mistakes. The goal of this work is to revisit the first survey. The current study is much larger. Moreover, for the first time, the study includes configurations from two major vendors. The study also introduces a novel "Firewall Complexity" (FC) measure, which applies to both types of firewalls. The findings of the current study indeed validate the 2004 study's main observations: firewalls are (still) poorly configured, and a rule-set's complexity is (still) positively correlated with the number of detected risk items. Thus we can conclude that, for well-configured firewalls, "small is (still) beautiful". However, unlike the 2004 study, we see no significant indication that later software versions have fewer errors (for both vendors).

  14. Catalytic quantum error correction

    CERN Document Server

    Brun, T; Hsieh, M H; Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-01-01

    We develop the theory of entanglement-assisted quantum error correcting (EAQEC) codes, a generalization of the stabilizer formalism to the setting in which the sender and receiver have access to pre-shared entanglement. Conventional stabilizer codes are equivalent to dual-containing symplectic codes. In contrast, EAQEC codes do not require the dual-containing condition, which greatly simplifies their construction. We show how any quaternary classical code can be made into an EAQEC code. In particular, efficient modern codes, like LDPC codes, which attain the Shannon capacity, can be made into EAQEC codes attaining the hashing bound. In a quantum computation setting, EAQEC codes give rise to catalytic quantum codes which maintain a region of inherited noiseless qubits. We also give an alternative construction of EAQEC codes by making classical entanglement assisted codes coherent.

  15. Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification

    Directory of Open Access Journals (Sweden)

    R. Sathya

    2013-02-01

    Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-based learning algorithms, and in the present study we found that, although the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model also offers an efficient solution and classification.

  16. Support vector classification algorithm based on variable parameter linear programming

    Institute of Scientific and Technical Information of China (English)

    Xiao Jianhua; Lin Jian

    2007-01-01

    To solve the problems of SVM in dealing with large sample sizes and asymmetrically distributed samples, a support vector classification algorithm based on variable parameter linear programming is proposed. In the proposed algorithm, linear programming is employed to solve the optimization problem of classification, decreasing the computation time and reducing the complexity compared with the original model. The adjusted punishment parameter greatly reduces the classification error resulting from asymmetrically distributed samples, and the detailed procedure of the proposed algorithm is given. An experiment is conducted to verify whether the proposed algorithm is suitable for asymmetrically distributed samples.

  17. Concepts of Classification and Taxonomy. Phylogenetic Classification

    CERN Document Server

    Fraix-Burnet, Didier

    2016-01-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical since it was intended to distinguish between edible and toxic foods, or kind and dangerous animals. Simple resemblance was used and has been used for centuries. Basically, until the XVIIIth...

  18. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.

  19. Active Dictionary Learning in Sparse Representation Based Classification

    OpenAIRE

    Xu, Jin; He, Haibo; Man, Hong

    2014-01-01

    Sparse representation, which uses dictionary atoms to reconstruct input vectors, has been studied intensively in recent years. A proper dictionary is a key for the success of sparse representation. In this paper, an active dictionary learning (ADL) method is introduced, in which classification error and reconstruction error are considered as the active learning criteria in the selection of the atoms for dictionary construction. The learned dictionaries are calculated in sparse representation based...
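
    A compact sketch of the sparse-representation-based classification that such dictionary learning feeds into: each test vector is sparsely coded over a labelled dictionary and assigned to the class with the smallest class-wise reconstruction error. The ADL atom-selection criteria themselves are not reproduced here, and the dictionary and test vector are invented:

```python
# Sketch: sparse representation based classification via orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_dim, n_atoms_per_class, classes = 20, 15, [0, 1]

# Hypothetical dictionary: columns are training atoms, grouped by class.
D = np.hstack([rng.normal(loc=c, size=(n_dim, n_atoms_per_class)) for c in classes])
atom_labels = np.repeat(classes, n_atoms_per_class)
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

def src_predict(x, n_nonzero=5):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False).fit(D, x)
    coef = omp.coef_
    errors = []
    for c in classes:
        coef_c = np.where(atom_labels == c, coef, 0.0)   # keep only class-c atoms
        errors.append(np.linalg.norm(x - D @ coef_c))    # class-wise reconstruction error
    return classes[int(np.argmin(errors))]

x_test = rng.normal(loc=1, size=n_dim)         # placeholder test vector
print("predicted class:", src_predict(x_test))
```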

  20. Error-backpropagation in temporally encoded networks of spiking neurons

    OpenAIRE

    Bohte, Sander; La Poutré, Han; Kok, Joost

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike-times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perf...

  1. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts that had previously been revised. In carts not revised, 70.83% of the errors were mainly caused when setting up the unidosis carts. The rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be rechecked and that a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  2. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard;

    2011-01-01

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  3. An SMP soft classification algorithm for remote sensing

    Science.gov (United States)

    Phillips, Rhonda D.; Watson, Layne T.; Easterling, David R.; Wynne, Randolph H.

    2014-07-01

    This work introduces a symmetric multiprocessing (SMP) version of the continuous iterative guided spectral class rejection (CIGSCR) algorithm, a semiautomated classification algorithm for remote sensing (multispectral) images. The algorithm uses soft data clusters to produce a soft classification containing inherently more information than a comparable hard classification at an increased computational cost. Previous work suggests that similar algorithms achieve good parallel scalability, motivating the parallel algorithm development work here. Experimental results of applying parallel CIGSCR to an image with approximately 10^8 pixels and six bands demonstrate superlinear speedup. A soft two-class classification is generated in just over 4 min using 32 processors.

  4. Library Classification 2020

    Science.gov (United States)

    Harris, Christopher

    2013-01-01

    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  5. Musings on galaxy classification

    International Nuclear Information System (INIS)

    Classification schemes and their utility are discussed with a number of examples, particularly for cD galaxies. Data suggest that primordial turbulence rather than tidal torques is responsible for most of the presently observed angular momentum of galaxies. Finally, some of the limitations on present-day schemes for galaxy classification are pointed out. 54 references, 4 figures, 3 tables

  6. On Network-Error Correcting Convolutional Codes under the BSC Edge Error Model

    CERN Document Server

    Prasad, K

    2010-01-01

    Convolutional network-error correcting codes (CNECCs) are known to provide error correcting capability in acyclic instantaneous networks within the network coding paradigm under small field size conditions. In this work, we investigate the performance of CNECCs under the error model of the network where the edges are assumed to be statistically independent binary symmetric channels, each with the same probability of error $p_e$($0\\leq p_e<0.5$). We obtain bounds on the performance of such CNECCs based on a modified generating function (the transfer function) of the CNECCs. For a given network, we derive a mathematical condition on how small $p_e$ should be so that only single edge network-errors need to be accounted for, thus reducing the complexity of evaluating the probability of error of any CNECC. Simulations indicate that convolutional codes are required to possess different properties to achieve good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime, convolutional codes with g...

  7. Neural Networks for Emotion Classification

    CERN Document Server

    Sun, Yafei

    2011-01-01

    It is argued that for the computer to be able to interact with humans, it needs to have the communication skills of humans. One of these skills is the ability to understand the emotional state of the person. This thesis describes a neural network-based approach for emotion classification. We learn a classifier that can recognize six basic emotions with an average accuracy of 77% over the Cohn-Kanade database. The novelty of this work is that instead of empirically selecting the parameters of the neural network, i.e. the learning rate, activation function parameter, momentum number, the number of nodes in one layer, etc., we developed a strategy that can automatically select a comparatively better combination of these parameters. We also introduce another way to perform back propagation. Instead of using the partial derivatives of the error function, we use an optimization algorithm, namely Powell's direction set method, to minimize the error function. We were also interested in constructing an authentic emotion database. This...

  8. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, the determination of the candidates for microcalcifications is performed. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether they are candidates for microcalcifications or not. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on the average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positive per image.
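
    A toy sketch of the first stage only, with a much simpler predictor than the paper's: each pixel is predicted from three causal neighbours with least-squares weights, and pixels with a large prediction error are flagged as candidates. The image, filter support and threshold below are invented placeholders:

```python
# Sketch: flag candidate microcalcification pixels where a 2-D linear prediction error is large.
import numpy as np

def prediction_error(img):
    # Predict each pixel from its causal neighbours (west, north, north-west)
    # with weights fitted over the whole image by least squares.
    west   = img[1:, :-1].ravel()
    north  = img[:-1, 1:].ravel()
    nwest  = img[:-1, :-1].ravel()
    target = img[1:, 1:].ravel()
    A = np.stack([west, north, nwest], axis=1)
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    err = np.zeros_like(img)
    err[1:, 1:] = (target - A @ w).reshape(img.shape[0] - 1, img.shape[1] - 1)
    return err

rng = np.random.default_rng(0)
mammo = rng.normal(size=(128, 128))          # placeholder image
mammo[60:62, 60:62] += 6.0                   # bright speck standing in for a microcalcification
err = prediction_error(mammo)
candidates = np.argwhere(np.abs(err) > 4 * err.std())
print(len(candidates), "candidate pixels")
```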

  9. The 13 errors.

    Science.gov (United States)

    Flower, J

    1998-01-01

    The reality is that most change efforts fail. McKinsey & Company carried out a fascinating research project on change to "crack the code" on creating and managing change in large organizations. One of the questions they asked--and answered--is why most organizations fail in their efforts to manage change. They found that 80 percent of these failures could be traced to 13 common errors. They are: (1) No winning strategy; (2) failure to make a compelling and urgent case for change; (3) failure to distinguish between decision-driven and behavior-dependent change; (4) over-reliance on structure and systems to change behavior; (5) lack of skills and resources; (6) failure to experiment; (7) leaders' inability or unwillingness to confront how they and their roles must change; (8) failure to mobilize and engage pivotal groups; (9) failure to understand and shape the informal organization; (10) inability to integrate and align all the initiatives; (11) no performance focus; (12) excessively open-ended process; and (13) failure to make the whole process transparent and meaningful to individuals. PMID:10351717

  10. PREVENTABLE ERRORS: NEVER EVENTS

    Directory of Open Access Journals (Sweden)

    Narra Gopal

    2014-07-01

    Full Text Available Operation or any invasive procedure is a stressful event involving risks and complications. We should be able to offer a guarantee that the right procedure will be done on the right person in the right place on their body. "Never events" are definable; these are avoidable and preventable events. The people affected by the consequences of surgical mistakes ranged from temporary injury in 60%, permanent injury in 33% and death in 7%. The World Health Organization (WHO) [1] has earlier said that over seven million people across the globe suffer from preventable surgical injuries every year, a million of them even dying during or immediately after the surgery. The UN body quantified the number of surgeries taking place every year globally at 234 million. It said surgeries had become common, with one in every 25 people undergoing one at any given time. 50% of never events are preventable. Evidence suggests up to one in ten hospital admissions results in an adverse incident. This incident rate would not be acceptable in other industries. In order to move towards a more acceptable level of safety, we need to understand how and why things go wrong and have to build a reliable system of working. With this system, even though complete prevention may not be possible, we can reduce the error percentage [2]. To change the present concept towards the patient, we first have to replace the word patient with medical customer. Then our outlook also changes, and we will be more careful towards our customers.

  11. Information Classification on University Websites

    DEFF Research Database (Denmark)

    Nawaz, Ather; Clemmensen, Torkil; Hertzum, Morten

    2011-01-01

    classification of 14 Danish and 14 Pakistani students and compares it with the information classification of their university website. Brainstorming, card sorting and task exploration activities were used to discover similarities and differences in the participating students’ classification of website...

  13. ERRORS AND DIFFICULTIES IN TRANSLATING LEGAL TEXTS

    Directory of Open Access Journals (Sweden)

    Camelia, CHIRILA

    2014-11-01

    Full Text Available Nowadays the accurate translation of legal texts has become highly important as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts to other languages faces many difficulties and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian) that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method that I have used to achieve the objectives of the paper was the content analysis of various Romanian and foreign authors' works.

  14. Estimating achievement from fame

    OpenAIRE

    Simkin, M. V.; Roychowdhury, V. P.

    2009-01-01

    We report a method for estimating people's achievement based on their fame. Earlier we discovered (cond-mat/0310049) that fame of fighter pilot aces (measured as number of Google hits) grows exponentially with their achievement (number of victories). We hypothesize that the same functional relation between achievement and fame holds for other professions. This allows us to estimate achievement for professions where an unquestionable and universally accepted measure of achievement does not exi...

  15. Support Vector classifiers for Land Cover Classification

    CERN Document Server

    Pal, Mahesh

    2008-01-01

    Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports results on multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.
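
    An indicative sketch of this kind of comparison, using scikit-learn on invented band values rather than the ETM+/DAIS scenes; a nearest-neighbour baseline stands in for the maximum likelihood and neural classifiers used in the paper:

```python
# Sketch: multi-class SVM vs. a simple baseline on placeholder multispectral pixels.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_per_class, n_bands = 100, 6                     # small training sets, six spectral bands
X = np.vstack([rng.normal(loc=c, scale=1.5, size=(n_per_class, n_bands)) for c in range(4)])
y = np.repeat(np.arange(4), n_per_class)          # four land cover classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
knn = KNeighborsClassifier(5).fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("kNN accuracy:", accuracy_score(y_te, knn.predict(X_te)))
```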

  16. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases ... the classifier is trained on each cluster having reduced dimensionality and a smaller number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. Our model also outperforms the Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.

  17. Nested Quantum Error Correction Codes

    CERN Document Server

    Wang, Zhuo; Fan, Hen; Vedral, Vlatko

    2009-01-01

    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available in constructing new quantum error correction codes from old codes. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching several short length quantum codes with certain properties. Our method works for all length and all distance codes, and is quite efficient to construct optimal or near optimal codes. Two main known methods in constructing new codes from old codes in quantum error-correction theory, the concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  18. [Error factors in spirometry].

    Science.gov (United States)

    Quadrelli, S A; Montiel, G C; Roncoroni, A J

    1994-01-01

    Spirometry is the method most frequently used to estimate pulmonary function in the clinical laboratory. It is important to comply with technical requisites in order to approximate the real values sought, as well as to interpret the results adequately. Recommendations are made to establish: 1) quality control; 2) definition of abnormality; 3) classification of the change from normal and its degree; 4) definition of reversibility. In relation to quality control, several criteria are pointed out, such as end of the test, back-extrapolation and extrapolated volume, in order to delineate the most common errors. Daily calibration is advised. Inspection of graphical records of the test is mandatory. The limitations of the common use of 80% of predicted values to establish abnormality are stressed, and the reasons for employing 95% confidence limits are detailed. It is important to select the reference values equation (in view of the differences in predicted values), and it is advisable to validate the selection with local population normal values. In relation to the definition of the defect as restrictive or obstructive, the limitations of vital capacity (VC) to establish restriction when obstruction is also present are defined, as are the limitations of maximal mid-expiratory flow 25-75 (FMF 25-75) as an isolated marker of obstruction. Finally, the qualities of forced expiratory volume in 1 sec (VEF1) and the difficulties with other indicators (CVF, FMF 25-75, VEF1/CVF) for estimating reversibility after bronchodilators are evaluated, and the value of different methods used to define reversibility (% of change from the initial value, absolute change, or % of predicted) is commented upon. Clinical spirometric studies, in order to be valuable, should be performed with the same technical rigour as any other more complex studies. PMID:7990690

  19. Discriminant analysis with errors in variables

    CERN Document Server

    Loustau, Sébastien

    2012-01-01

    The effect of measurement error in discriminant analysis is investigated. Given observations $Z=X+\\epsilon$, where $\\epsilon$ denotes a random noise, the goal is to predict the density of $X$ among two possible candidates $f$ and $g$. We suppose that we have at our disposal two learning samples. The aim is to approach the best possible decision rule $G^*$ defined as a minimizer of the Bayes risk. In the free-noise case $(\\epsilon=0)$, minimax fast rates of convergence are well-known under the margin assumption in discriminant analysis (see \\cite{mammen}) or in the more general classification framework (see \\cite{tsybakov2004,AT}). In this paper we intend to establish similar results in the noisy case, i.e. when dealing with errors in variables. In particular, we discuss two possible complexity assumptions that can be set on the problem, which may alternatively concern the regularity of $f-g$ or the boundary of $G^*$. We prove minimax lower bounds for these both problems and explain how can these rates be atta...

  20. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  1. Evaluation of the contribution of LiDAR data and postclassification procedures to object-based classification accuracy

    Science.gov (United States)

    Styers, Diane M.; Moskal, L. Monika; Richardson, Jeffrey J.; Halabisky, Meghan A.

    2014-01-01

    Object-based image analysis (OBIA) is becoming an increasingly common method for producing land use/land cover (LULC) classifications in urban areas. In order to produce the most accurate LULC map, LiDAR data and postclassification procedures are often employed, but their relative contributions to accuracy are unclear. We examined the contribution of LiDAR data and postclassification procedures to increase classification accuracies over using imagery alone and assessed sources of error along an ecologically complex urban-to-rural gradient in Olympia, Washington. Overall classification accuracy and user's and producer's accuracies for individual classes were evaluated. The addition of LiDAR data to the OBIA classification resulted in an 8.34% increase in overall accuracy, while manual postclassification to the imagery+LiDAR classification improved accuracy by only an additional 1%. Sources of error in this classification were largely due to edge effects, from which multiple different types of errors result.
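
    The accuracies quoted here come from a classification error matrix; the sketch below shows how overall, user's and producer's accuracies are computed from such a matrix, with invented counts:

```python
# Sketch: overall, user's and producer's accuracies from a classification error matrix.
import numpy as np

# Rows = mapped (classified) class, columns = reference class; made-up counts.
cm = np.array([[120,  10,   5],
               [  8, 140,  12],
               [  4,   9,  92]])

overall   = np.trace(cm) / cm.sum()
users     = np.diag(cm) / cm.sum(axis=1)   # correct / all pixels mapped to the class
producers = np.diag(cm) / cm.sum(axis=0)   # correct / all reference pixels of the class

print("overall accuracy:", round(overall, 3))
for k, (u, p) in enumerate(zip(users, producers)):
    print(f"class {k}: user's={u:.3f} producer's={p:.3f}")
```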

  2. Semi-Supervised Classification based on Gaussian Mixture Model for remote imagery

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Semi-Supervised Classification (SSC), which makes use of both labeled and unlabeled data to determine classification borders in feature space, has great advantages in extracting classification information from mass data. In this paper, a novel SSC method based on the Gaussian Mixture Model (GMM) is proposed, in which each class's feature space is described by one GMM. Experiments show the proposed method can achieve high classification accuracy with a small amount of labeled data. However, for the same accuracy, supervised classification methods such as Support Vector Machine, Object Oriented Classification, etc. should be provided with much more labeled data.
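
    A minimal sketch of per-class GMM classification from a small labelled set; note that the published method also exploits the unlabelled data when fitting the mixtures, which this sketch omits, and the two-band "pixels" below are invented:

```python
# Sketch: per-class Gaussian mixture classification from a small labelled subset.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0, 0], scale=1.0, size=(500, 2))   # class 0 pixels
X1 = rng.normal(loc=[3, 3], scale=1.5, size=(500, 2))   # class 1 pixels
labelled0, labelled1 = X0[:20], X1[:20]                  # only a few labelled samples per class
unlabelled = np.vstack([X0[20:], X1[20:]])

gmm0 = GaussianMixture(n_components=2, random_state=0).fit(labelled0)
gmm1 = GaussianMixture(n_components=2, random_state=0).fit(labelled1)

# Classify unlabelled pixels by the higher per-class log-likelihood.
scores = np.column_stack([gmm0.score_samples(unlabelled), gmm1.score_samples(unlabelled)])
pred = scores.argmax(axis=1)
truth = np.array([0] * 480 + [1] * 480)
print("accuracy on unlabelled pixels:", (pred == truth).mean().round(3))
```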

  3. [Classifications in forensic medicine and their logical basis].

    Science.gov (United States)

    Kovalev, A V; Shmarov, L A; Ten'kov, A A

    2014-01-01

    The objective of the present study was to characterize the main requirements for the correct construction of classifications used in forensic medicine, with special reference to the errors that occur in the relevant text-books, guidelines, and manuals and the ways to avoid them. This publication continues the series of thematic articles of the authors devoted to the logical errors in the expert conclusions. The preparation of further publications is underway to report the results of the in-depth analysis of the logical errors encountered in expert conclusions, text-books, guidelines, and manuals.

  4. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E;

    2015-01-01

    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were ...%) could not be classified. 38% had one additional diagnosis and 26% had two or more additional diagnoses. Eczema on feet was found in 30% of the patients, statistically significantly more frequently associated with hyperkeratotic and vesicular endogenous eczema. CONCLUSION: We find that the classification...

  5. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  6. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  7. Using History to Teach Scientific Method: The Role of Errors

    Science.gov (United States)

    Giunta, Carmen J.

    2001-05-01

    Including tales of error along with tales of discovery is desirable in any use of history of science to teach about science. Tales of error, particularly when they involve justly well-regarded historical figures, serve to avoid two pitfalls to which use of historical material in science teaching is otherwise susceptible. Acknowledging the false steps of great scientists avoids putting those scientists on a pedestal and illustrates that there is no automatic or mechanical scientific method. This paper lists five kinds of error with examples of each from the development of chemistry in the 18th and 19th centuries: erroneous theories (such as phlogiston), seeing a new phenomenon everywhere one seeks it (e.g., Lavoisier and the decomposition of water), theories erroneous in detail but nonetheless fruitful (e.g., Dalton's atomic theory), rejection of correct theories (e.g., Avogadro's hypothesis), and incoherent insights (e.g., J. A. R. Newlands' classification of the elements).

  8. Developing control charts to review and monitor medication errors.

    Science.gov (United States)

    Ciminera, J L; Lease, M P

    1992-03-01

    There is a need to monitor reported medication errors in a hospital setting. Because the quantity of errors vary due to external reporting, quantifying the data is extremely difficult. Typically, these errors are reviewed using classification systems that often have wide variations in the numbers per class per month. The authors recommend the use of control charts to review historical data and to monitor future data. The procedure they have adopted is a modification of schemes using absolute (i.e., positive) values of successive differences to estimate the standard deviation when only single incidence values are available in time rather than sample averages, and when many successive differences may be zero. PMID:10116719
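
    A small sketch of the underlying individuals-chart arithmetic: the standard deviation is estimated from the mean absolute successive difference (moving range) of monthly counts and three-sigma limits are set around the mean. The authors' specific modification for runs of zero differences is not reproduced, and the counts below are invented:

```python
# Sketch: individuals control chart for monthly reported medication error counts.
import numpy as np

counts = np.array([12, 9, 15, 11, 8, 14, 10, 13, 9, 16, 12, 11])  # made-up monthly counts

moving_range = np.abs(np.diff(counts))           # absolute successive differences
sigma_hat = moving_range.mean() / 1.128          # d2 constant for subgroups of size 2
center = counts.mean()
ucl = center + 3 * sigma_hat
lcl = max(center - 3 * sigma_hat, 0.0)           # counts cannot be negative

print(f"center={center:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
print("out-of-control months:", np.where((counts > ucl) | (counts < lcl))[0])
```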

  9. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian;

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We examined all reviews approved and published by the Cochrane Heart Group in the 2012 Cochrane Library that included at least one meta-analysis with 5 or more randomized trials. We used trial sequential analysis to classify statistically significant meta-analyses as true positives if their pooled sample size ... and/or their cumulative Z-curve crossed the O'Brien-Fleming monitoring boundaries for detecting a RRR of at least 25%. We classified meta-analyses that did not achieve statistical significance as true negatives if their pooled sample size was sufficient to reject a RRR of 25%. RESULTS: Twenty three...

  10. A Two Step Data Mining Approach for Amharic Text Classification

    Directory of Open Access Journals (Sweden)

    Seffi Gebeyehu

    2016-08-01

    Full Text Available Traditionally, text classifiers are built from labeled training examples (supervised). Labeling is usually done manually by human experts (or the users), which is a labor-intensive and time-consuming process. In the past few years, researchers have investigated various forms of semi-supervised learning to reduce the burden of manual labeling. This paper aims to show that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. In this paper, we implement an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and two classifiers: Naive Bayes (NB) and locally weighted learning (LWL). NB first trains a classifier using the available labeled documents and probabilistically labels the unlabeled documents, while LWL uses a class of function approximation to build a model around the current point of interest. An experiment conducted on a mixture of labeled and unlabeled Amharic text documents showed that the new method achieved a significant performance in comparison with that of a supervised LWL and NB. The result also pointed out that the use of unlabeled data with EM reduces the classification absolute error by 27.6%. In general, since unlabeled documents are much less expensive and easier to collect than labeled documents, this method will be useful for text categorization tasks including online data sources such as web pages, e-mails and news group postings. If one uses this method, building text categorization systems will be significantly faster and less expensive than with the supervised learning approach.
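
    A compact sketch of the EM-with-Naive-Bayes half of this approach; the locally weighted learning component and any Amharic-specific preprocessing are omitted, and the documents and labels below are toy placeholders. NB is trained on the labelled documents, then the procedure alternates between probabilistically labelling the unlabelled pool and refitting:

```python
# Sketch: semi-supervised text classification with EM over a multinomial Naive Bayes model.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

labelled_docs = ["price of grain rises", "football match won", "market trade falls", "team scores goal"]
labels        = np.array([0, 1, 0, 1])                    # 0 = economy, 1 = sport (toy labels)
unlabelled_docs = ["grain market report", "goal in final match", "trade prices drop", "club wins league"]

vec = CountVectorizer()
X_l = vec.fit_transform(labelled_docs)
X_u = vec.transform(unlabelled_docs)

nb = MultinomialNB().fit(X_l, labels)                     # initial model from labelled data only
for _ in range(5):                                        # EM iterations
    # E-step: soft class memberships for the unlabelled documents.
    resp = nb.predict_proba(X_u)
    # M-step: refit on labelled docs plus class-weighted copies of the unlabelled docs.
    X_all = np.vstack([X_l.toarray(), X_u.toarray(), X_u.toarray()])
    y_all = np.concatenate([labels, np.zeros(X_u.shape[0]), np.ones(X_u.shape[0])])
    w_all = np.concatenate([np.ones(len(labels)), resp[:, 0], resp[:, 1]])
    nb = MultinomialNB().fit(X_all, y_all, sample_weight=w_all)

print(nb.predict(vec.transform(["market prices of grain"])))
```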

  11. CLASSIFICATIONS OF EEG SIGNALS FOR MENTAL TASKS USING ADAPTIVE RBF NETWORK

    Institute of Scientific and Technical Information of China (English)

    薛建中; 郑崇勋; 闫相国

    2004-01-01

    Objective This paper presents classifications of mental tasks based on EEG signals using an adaptive Radial Basis Function (RBF) network with optimal centers and widths for Brain-Computer Interface (BCI) schemes. Methods Initial centers and widths of the network are selected by a cluster estimation method based on the distribution of the training set. Using a conjugate gradient descent method, they are optimized during the training phase according to a regularized error function that considers the influence of their changes on the output values. Results The optimizing process improves the performance of the RBF network, and its best recognition rate over three task pairs and four subjects reaches 87.0%. Moreover, the network runs fast due to the smaller number of hidden-layer neurons. Conclusion The adaptive RBF network with optimal centers and widths has a high recognition rate and runs fast. It may be a promising classifier for on-line BCI schemes.

  12. Classification system for reporting events involving human malfunctions

    International Nuclear Information System (INIS)

    The report describes a set of categories for reporting industrial incidents and events involving human malfunction. The classification system aims at ensuring information adequate for improvement of human work situations and man-machine interface systems and for attempts to quantify "human error" rates. The classification system has a multifaceted, non-hierarchical structure and its compatibility with Ispra's ERDS classification is described. The collection of the information in general and for quantification purposes is discussed. 24 categories, 12 of which are human factors-oriented, are listed with their respective subcategories, and comments are given. Underlying models of the human data process and its typical malfunctions and of a human decision sequence are described. The work reported is a joint contribution to the CSNI Group of Experts on Human Error Data and Assessment

  13. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... process in March 2012 (77 FR 5379). When verified by a futures classification, Smith-Doxey data serves as... Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed... for the addition of an optional cotton futures classification procedure--identified and known...

  14. Uncertainty in 2D hydrodynamic models from errors in roughness parameterization based on aerial images

    Science.gov (United States)

    Straatsma, Menno; Huthoff, Fredrik

    2011-01-01

    In The Netherlands, 2D-hydrodynamic simulations are used to evaluate the effect of potential safety measures against river floods. In the investigated scenarios, the floodplains are completely inundated, thus requiring realistic representations of hydraulic roughness of floodplain vegetation. The current study aims at providing better insight into the uncertainty of flood water levels due to uncertain floodplain roughness parameterization. The study focuses on three key elements in the uncertainty of floodplain roughness: (1) classification error of the landcover map, (2), within class variation of vegetation structural characteristics, and (3) mapping scale. To assess the effect of the first error source, new realizations of ecotope maps were made based on the current floodplain ecotope map and an error matrix of the classification. For the second error source, field measurements of vegetation structure were used to obtain uncertainty ranges for each vegetation structural type. The scale error was investigated by reassigning roughness codes on a smaller spatial scale. It is shown that classification accuracy of 69% leads to an uncertainty range of predicted water levels in the order of decimeters. The other error sources are less relevant. The quantification of the uncertainty in water levels can help to make better decisions on suitable flood protection measures. Moreover, the relation between uncertain floodplain roughness and the error bands in water levels may serve as a guideline for the desired accuracy of floodplain characteristics in hydrodynamic models.
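
    A toy sketch of how the first error source can be propagated: alternative land cover maps are drawn by resampling each mapped class according to the rows of a classification error matrix. The matrix and map below are invented, and the hydrodynamic runs fed by such an ensemble are of course not shown:

```python
# Sketch: drawing alternative land-cover realizations from a classification error matrix.
import numpy as np

rng = np.random.default_rng(0)
# P[i, j] = probability that a pixel mapped as class i is truly class j (rows sum to 1).
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.75, 0.15],
              [0.05, 0.20, 0.75]])

mapped = rng.integers(0, 3, size=(50, 50))       # placeholder ecotope/roughness-class map

def realization(mapped_map):
    out = np.empty_like(mapped_map)
    for c in range(P.shape[0]):
        mask = mapped_map == c
        out[mask] = rng.choice(P.shape[1], size=mask.sum(), p=P[c])
    return out

maps = [realization(mapped) for _ in range(10)]  # ensemble of alternative maps
changed = np.mean([np.mean(m != mapped) for m in maps])
print("average fraction of reassigned pixels:", round(changed, 3))
```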

  15. A Fuzzy Logic Based Sentiment Classification

    Directory of Open Access Journals (Sweden)

    J.I.Sheeba

    2014-07-01

    Full Text Available Sentiment classification aims to detect information such as opinions and explicit and implicit feelings expressed in text. Most existing approaches are able to detect either explicit or implicit expressions of sentiment in the text, but not both. The proposed framework detects both implicit and explicit expressions available in meeting transcripts. It classifies positive, negative and neutral words and also identifies the topic of the particular meeting transcript using fuzzy logic. This paper aims to add some additional features for improving the classification method. The quality of the sentiment classification is improved using the proposed fuzzy logic framework, which includes features such as fuzzy rules and the fuzzy C-means algorithm. The quality of the output is evaluated using parameters such as precision, recall and f-measure, while the fuzzy C-means clustering is measured in terms of purity and entropy. The data set was validated using the 10-fold cross-validation method, and a 95% confidence interval was observed for the accuracy values. Finally, the proposed fuzzy logic method produced more than 85% accurate results, and its error rate is much lower than that of existing sentiment classification techniques.
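    A compact numpy sketch of the standard fuzzy C-means updates (alternating membership and center updates) that the framework relies on for clustering; the toy 2-D data, cluster count and fuzzifier value are illustrative assumptions only.

```python
# Minimal fuzzy C-means sketch (standard alternating updates of memberships U and
# centers). Data, cluster count and fuzzifier m are illustrative placeholders.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(1).normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fuzzy_c_means(X)
print("cluster centers:\n", centers.round(2))
```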

  16. Deep Reconstruction Models for Image Set Classification.

    Science.gov (United States)

    Hayat, Munawar; Bennamoun, Mohammed; An, Senjian

    2015-04-01

    Image set classification finds its applications in a number of real-life scenarios such as classification from surveillance videos, multi-view camera networks and personal albums. Compared with single-image-based classification, it offers more promise and has therefore attracted significant research attention in recent years. Unlike many existing methods which assume images of a set to lie on a certain geometric surface, this paper introduces a deep learning framework which makes no such prior assumptions and can automatically discover the underlying geometric structure. Specifically, a Template Deep Reconstruction Model (TDRM) is defined whose parameters are initialized by performing unsupervised pre-training in a layer-wise fashion using Gaussian Restricted Boltzmann Machines (GRBMs). The initialized TDRM is then separately trained for images of each class and class-specific DRMs are learnt. Based on the minimum reconstruction errors from the learnt class-specific models, three different voting strategies are devised for classification. Extensive experiments are performed to demonstrate the efficacy of the proposed framework for the tasks of face and object recognition from image sets. Experimental results show that the proposed method consistently outperforms the existing state of the art methods. PMID:26353289
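    A hedged sketch of the minimum-reconstruction-error idea: one reconstruction model is fitted per class, and a test sample is assigned to the class whose model reconstructs it best. Per-class PCA and the scikit-learn digits data stand in for the paper's GRBM-initialized deep reconstruction models and image sets; both substitutions are assumptions for illustration.

```python
# Sketch of classification by minimum reconstruction error: one reconstruction
# model per class, test samples assigned to the class that reconstructs them best.
# Per-class PCA stands in for the paper's GRBM-initialized deep models, and the
# digits dataset stands in for image sets; both are simplifying assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {}
for c in np.unique(y_tr):
    models[c] = PCA(n_components=15).fit(X_tr[y_tr == c])   # class-specific model

def predict(X):
    errs = []
    for c, pca in models.items():
        recon = pca.inverse_transform(pca.transform(X))
        errs.append(((X - recon) ** 2).sum(axis=1))          # reconstruction error
    classes = np.array(list(models.keys()))
    return classes[np.argmin(np.stack(errs, axis=1), axis=1)]

print("accuracy:", (predict(X_te) == y_te).mean())
```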

  17. Classification Algorithms for Determining Handwritten Digit

    Directory of Open Access Journals (Sweden)

    Hayder Naser Khraibet AL-Behadili

    2016-06-01

    Full Text Available Data-intensive science is a critical science paradigm that intersects with all other sciences. Data mining (DM) is a powerful and widely used technology that focuses on important, meaningful patterns and discovers new knowledge from a collected dataset. Any predictive task in DM uses some attributes to classify an unknown class. Classification algorithms are a class of prominent mathematical techniques in DM, and constructing a model is the core aspect of such algorithms. However, their performance highly depends on the algorithm's behavior when manipulating data. Focusing on binarization as a preprocessing approach, this paper analyzes and evaluates different classification algorithms when constructing a model, based on accuracy in the classification task. The Modified National Institute of Standards and Technology (MNIST) handwritten digits dataset provided by Yann LeCun has been used in the evaluation. The paper focuses on machine learning approaches for handwritten digit detection. Machine learning provides classification methods such as K-Nearest Neighbors (KNN), Decision Trees (DT), and Neural Networks (NN). Results showed that the knowledge-based method, i.e. the NN algorithm, is more accurate in determining the digits as it reduces the error rate. The implication of this evaluation is to provide essential insights for computer scientists and practitioners in choosing the suitable DM technique that fits their data.
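    A short scikit-learn sketch of the comparison described above: binarize the pixel values, then score KNN, decision-tree and neural-network classifiers with cross-validation. The small scikit-learn digits set stands in for MNIST, and the threshold and model settings are illustrative choices, not the paper's.

```python
# Sketch of the comparison described above: binarize pixel values, then score
# KNN, decision-tree and neural-network classifiers. scikit-learn's small digits
# set stands in for MNIST here, and the thresholds/settings are illustrative.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_bin = (X > 7).astype(int)          # simple binarization pre-processing step

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Neural network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                    random_state=0),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X_bin, y, cv=5)
    print(f"{name:15s} accuracy = {scores.mean():.3f} "
          f"(error rate = {1 - scores.mean():.3f})")
```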

  18. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Directory of Open Access Journals (Sweden)

    Muhammad Faisal Siddiqui

    Full Text Available A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. For improving the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on the feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities

  19. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    Science.gov (United States)

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. For improving the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% on the feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from the
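    A hedged sketch of the DWT → PCA → RBF-kernel SVM pipeline outlined above, with a stratified k-fold search over the kernel hyper-parameters. Random arrays stand in for the T1/T2 MRI slices, a plain SVC replaces LS-SVM, and all sizes and grids are assumptions for illustration.

```python
# Sketch of the DWT -> PCA -> RBF-kernel SVM pipeline described above, with
# stratified k-fold search over the kernel hyper-parameters. Random arrays stand
# in for MRI slices; a plain SVC replaces LS-SVM; sizes/grids are illustrative.
import numpy as np
import pywt                                        # PyWavelets
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.normal(size=(120, 64, 64))            # placeholder "MRI slices"
labels = rng.integers(0, 2, size=120)              # 0 = normal, 1 = abnormal (toy)

def dwt_features(img, wavelet="haar", level=2):
    """Keep the level-2 approximation coefficients as a compact feature vector."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    return coeffs[0].ravel()

X = np.array([dwt_features(im) for im in images])

pipe = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
grid = GridSearchCV(pipe,
                    {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]},
                    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
grid.fit(X, labels)
print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```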

  20. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition......, a good metric is required to measure distance or similarity between feature points so that the classification becomes feasible. Furthermore, in order to build a successful classifier, one needs to deeply understand how classifiers work. This thesis focuses on these three aspects of classification...... and explores these challenging areas. The first focus of the thesis is to properly combine different local feature experts and prior information to design an effective classifier. The preliminary classification results, provided by the experts, are fused in order to develop an automatic segmentation method...

  1. Learning Apache Mahout classification

    CERN Document Server

    Gupta, Ashish

    2015-01-01

    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  2. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and even belong to the same material (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary, and that incorporating the majority voting scheme generally achieves improved prediction performance.
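    A tiny sketch of the second phase only (the contextual majority vote): each pixel's predicted label is replaced by the most frequent label in its 3x3 neighbourhood. The noisy toy label image stands in for the per-pixel output of the sparse-coding classifier.

```python
# Sketch of the contextual majority-voting step: each pixel's label is replaced by
# the most frequent label in its 3x3 neighbourhood. The per-pixel labels would come
# from the sparse-coding classifier; here a noisy toy label image is used instead.
import numpy as np
from scipy.ndimage import generic_filter

rng = np.random.default_rng(0)
labels = np.zeros((40, 40), dtype=int)
labels[:, 20:] = 1                                   # two "materials"
noisy = np.where(rng.random(labels.shape) < 0.15,    # 15% salt-and-pepper label noise
                 rng.integers(0, 2, labels.shape), labels)

def vote(window):
    # most frequent label among the 9 neighbourhood values
    return np.bincount(window.astype(int)).argmax()

smoothed = generic_filter(noisy, vote, size=3, mode="nearest")
print("errors before voting:", int((noisy != labels).sum()),
      "after voting:", int((smoothed != labels).sum()))
```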

  3. Text Classification Using Sentential Frequent Itemsets

    Institute of Scientific and Technical Information of China (English)

    Shi-Zhu Liu; He-Ping Hu

    2007-01-01

    Text classification techniques mostly rely on single-term analysis of the document data set, while more concepts, especially specific ones, are usually conveyed by sets of terms. To achieve a more accurate text classifier, more informative features, including frequently co-occurring words in the same sentence and their weights, are particularly important in such scenarios. In this paper, we propose a novel approach using sentential frequent itemsets, a concept from association rule mining, for text classification; it views a sentence rather than a document as a transaction, and uses a variable precision rough set based method to evaluate each sentential frequent itemset's contribution to the classification. Experiments over the Reuters and newsgroup corpora are carried out, which validate the practicability of the proposed system.
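    A minimal Python sketch of the core representation: each sentence is treated as a transaction and frequent word itemsets (only pairs here, for brevity) are counted across a toy corpus; the rough-set weighting used for classification is not reproduced.

```python
# Sketch of the core representation: treat each sentence as a transaction and count
# frequent word itemsets (only pairs, for brevity) across a toy corpus. The
# rough-set weighting used by the paper for classification is not reproduced here.
from collections import Counter
from itertools import combinations

corpus = [
    "interest rates rise as the central bank tightens policy",
    "the central bank keeps interest rates unchanged",
    "the team wins the championship after extra time",
]
min_support = 2                                      # minimum number of sentences

pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.lower().split()))    # one transaction per sentence
    pair_counts.update(combinations(words, 2))

frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)
```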

  4. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the Naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  5. Classification of Sleep Disorders

    OpenAIRE

    Michael J. Thorpy

    2012-01-01

    The classification of sleep disorders is necessary to discriminate between disorders and to facilitate an understanding of symptoms, etiology, and pathophysiology that allows for appropriate treatment. The earliest classification systems, largely organized according to major symptoms (insomnia, excessive sleepiness, and abnormal events that occur during sleep), were unable to be based on pathophysiology because the cause of most sleep disorders was unknown. These 3 symptom-based categories ar...

  6. Inhibition in multiclass classification

    OpenAIRE

    Huerta, Ramón; Vembu, Shankar; Amigó, José M.; Nowotny, Thomas; Elkan, Charles

    2012-01-01

    The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions, that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a classification function that embodies unselective inhibition and ...

  7. Classification of Dams

    OpenAIRE

    Berg, Johan; Linder, Maria

    2013-01-01

    In a comparative survey, this thesis investigates classification systems for dams in Sweden, Norway, Finland, Switzerland, Canada and the USA. The investigation aims at an understanding of how the potential consequences of a dam failure are taken into account when classifying dams. Furthermore, the significance of the classification, regarding the requirements on the dam owner and surveillance authorities concerning dam safety, is considered and reviewed. The thesis points out similarities and ...

  8. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences in Hamadan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an MSc educational level (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  9. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  10. Operational Interventions to Maintenance Error

    Science.gov (United States)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research on flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the 'team' concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  11. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  12. Human Error: A Concept Analysis

    Science.gov (United States)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.

  13. Sandbox Learning: Try without error?

    OpenAIRE

    Müller-Schloer, Christian

    2009-01-01

    Adaptivity is enabled by learning. Natural systems learn differently from technical systems. In particular, technical systems must not make errors. On the other hand, learning seems to be impossible without occasional errors. We propose a 3-level architecture for learning in adaptive technical systems and show its applicability in the domains of traffic control and communication network control.

  14. Error Analysis in English Language Learning

    Institute of Scientific and Technical Information of China (English)

    杜文婷

    2009-01-01

    Errors in English language learning are usually classified into interlingual errors and intralingual errors; having a clear knowledge of the causes of these errors will help students learn better English.

  15. Error localization in RHIC by fitting difference orbits

    Energy Technology Data Exchange (ETDEWEB)

    Liu C.; Minty, M.; Ptitsyn, V.

    2012-05-20

    The presence of realistic errors in an accelerator, or in the model used to describe the accelerator, is such that a measurement of the beam trajectory may deviate from prediction. Comparison of measurements to the model can be used to detect such errors. To do so, the initial conditions (phase space parameters at any point) must be determined, which can be achieved by fitting the difference orbit against the model prediction using only a few beam position measurements. Using these initial conditions, the fitted orbit can be propagated along the beam line based on the optics model. Measurement and model will agree up to the point of an error. The error source can be better localized by additionally fitting the difference orbit using downstream BPMs and back-propagating the solution. If one dominating error source exists in the machine, the fitted orbit will deviate from the difference orbit at the same point.
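    A toy numpy sketch of the fitting idea, under strong simplifying assumptions (1-D motion, pure drift transfer matrices, a single artificial kick): the initial conditions (x0, x0') are fitted by least squares from a few upstream BPM readings, propagated through the model, and the first large measurement-model deviation marks the error location.

```python
# Toy sketch: solve for the initial phase-space coordinates (x0, x0') from a few
# BPM readings via least squares, propagate them through the optics model, and
# look for the point where model and measurement diverge. Pure drift transfer
# matrices and a fake kick error are illustrative assumptions.
import numpy as np

s = np.linspace(0.0, 90.0, 10)                        # BPM positions [m] (toy lattice)
M = np.array([[[1.0, si], [0.0, 1.0]] for si in s])   # drift transfer matrices to each BPM

true_x0 = np.array([1.0e-3, 0.05e-3])                 # "real" initial orbit (x, x')
meas = np.array([m @ true_x0 for m in M])[:, 0]
meas[s > 50.0] += 0.4e-3                              # unmodelled kick somewhere after s = 50 m

# fit (x0, x0') using only the first few BPMs, upstream of the suspected error
A = M[:4, 0, :]                                       # rows map (x0, x0') -> x at BPM i
fit_x0, *_ = np.linalg.lstsq(A, meas[:4], rcond=None)

predicted = np.array([m @ fit_x0 for m in M])[:, 0]
residual = meas - predicted
print("first BPM where |measurement - model| exceeds 0.1 mm:",
      int(np.argmax(np.abs(residual) > 0.1e-3)))
```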

  16. Introduction to error correcting codes in quantum computers

    CERN Document Server

    Salas, P J

    2006-01-01

    The goal of this paper is to review the theoretical basis for achieving faithful quantum information transmission and processing in the presence of noise. Initially, encoding and decoding, implementing gates and quantum error correction will be considered error free. Finally, we will relax this unrealistic assumption, introducing the quantum fault-tolerance concept. The existence of an error threshold permits us to conclude that there is no physical law preventing a quantum computer from being built. An error model based on the depolarizing channel provides a simple estimate of the storage or memory computation error threshold: < 5.2 x 10^-5. The encoding is made by means of the [[7,1,3

  17. On Metrics for Error Correction in Network Coding

    CERN Document Server

    Silva, Danilo

    2008-01-01

    The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a modified subspace metric, which is closely related to, but different than, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the modified metric is shown to correct more errors than a minimum subspace distance decoder.

  18. Reflection error correction of gas turbine blade temperature

    Science.gov (United States)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

    Accurate measurement of gas turbine blade temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers by the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumes that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8, respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.

  19. Provider error prevention: online total parenteral nutrition calculator.

    OpenAIRE

    Lehmann, Christoph U.; Conner, Kim G.; Cox, Jeanne M.

    2002-01-01

    OBJECTIVE: 1. To reduce errors in the ordering of total parenteral nutrition (TPN) in the Newborn Intensive Care Unit (NICU) at the Johns Hopkins Hospital (JHH). 2. To develop a pragmatic low-cost medical information system to achieve this goal. METHODS: We designed an online total parenteral nutrition order entry system (TPNCalculator) using Internet technologies. Total development time was three weeks. Utilization, impact on medical errors and user satisfaction were evaluated. RESULTS: Duri...

  20. Onorbit IMU alignment error budget

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    Errors from the Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) of a complex navigation system with a multitude of error sources were combined. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.

  1. Hierarchical classification of social groups

    OpenAIRE

    Витковская, Мария

    2001-01-01

    Classification problems are important for every science, including sociology. Social phenomena can be examined more deeply when approached through the classification of social groups. At present, no single common classification of groups exists. This article offers a hierarchical classification of social groups.

  2. Error resilient image transmission based on virtual SPIHT

    Science.gov (United States)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors: a single bit error could potentially lead to decoder derailment. In this paper, we integrate new error resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across subbands is preserved, a high source coding efficiency can be achieved. The scheme is essentially tree-based coding, so error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions: when the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise, we increase the number of trees to guarantee error resilience. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, a self-truncation mechanism based on the self-constraint of set partition trees is introduced. The decoding of any sub-tree halts in case a violation of the self-constraint relationship occurs in the tree, so the bits impacted by error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, so some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme achieves substantial benefits in error resilience.

  3. Determination of diametral error using finite elements and experimental method

    Directory of Open Access Journals (Sweden)

    A. Karabulut

    2010-01-01

    Full Text Available This study concerns experimental and numerical analysis of a workpiece clamped at one end on a lathe. The cutting force deflects the workpiece during the turning process. The deflection is estimated with a non-contact Laser Distance Sensor (LDS). Diametral values are also measured on different sides of the workpiece after each turning operation. It is observed that the diametral error varies with the amount of deflection: the diametral error reaches a peak where the deflection reaches a peak. The finite element model is verified by the experimental results, and the factors which cause diametral error are determined.

  4. Photometric Supernova Classification with Machine Learning

    Science.gov (United States)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
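    A hedged scikit-learn sketch of the pipeline shape described above: extract simple per-curve features, train boosted decision trees, and score with the AUC metric. The synthetic light curves and summary-statistic features are stand-ins for DES light curves and for the SALT2/parametric/wavelet feature sets.

```python
# Sketch of the pipeline shape: extract simple features from (synthetic) light
# curves, train boosted decision trees, and score with the AUC metric used above.
# The synthetic curves and summary-statistic features are illustrative stand-ins
# for DES light curves and for SALT2/parametric/wavelet features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 40)

def light_curve(is_type_ia):
    peak = 30 if is_type_ia else 20                   # toy class-dependent shape
    width = 12 if is_type_ia else 25
    flux = np.exp(-((t - 40) ** 2) / (2 * width ** 2)) * peak
    return flux + rng.normal(scale=2.0, size=t.size)

y = rng.integers(0, 2, size=500)
curves = np.array([light_curve(label) for label in y])

# simple per-curve summary features: peak flux, time of peak, rise/fall ratio, spread
features = np.column_stack([
    curves.max(axis=1),
    t[curves.argmax(axis=1)],
    curves[:, :20].mean(axis=1) / (curves[:, 20:].mean(axis=1) + 1e-9),
    curves.std(axis=1),
])

X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
bdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1]), 3))
```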

  5. Hyperspectral Data Classification Using Factor Graphs

    Science.gov (United States)

    Makarau, A.; Müller, R.; Palubinskas, G.; Reinartz, P.

    2012-07-01

    Accurate classification of hyperspectral data is still a challenging task, and new classification methods are being developed to achieve the desired uses of hyperspectral data. The objective of this paper is to develop a new method for hyperspectral data classification that ensures classification model properties like transferability, generalization and probabilistic interpretation. While factor graphs (undirected graphical models) are unfortunately not widely employed in remote sensing tasks, these models possess important properties such as the representation of complex systems for modeling estimation/decision-making tasks. In this paper we present a new method for hyperspectral data classification using factor graphs. A factor graph (a bipartite graph consisting of variable and factor vertices) allows factorization of a more complex function, leading to the definition of variables (employed to store the input data), latent variables (which allow an abstract class to be bridged to the data), and factors (defining prior probabilities for spectral features and abstract classes; mapping the input data to a mixture of spectral features and further bridging the mixture to an abstract class). Latent variables play an important role by defining a two-level mapping of the input spectral features to a class. Configuration (learning) of the model on training data allows a parameter set to be calculated that bridges the input data to a class. The classification algorithm is as follows. Spectral bands are separately pre-processed (unsupervised clustering is used) to be defined on a finite domain (alphabet), leading to a representation of the data on a multinomial distribution. The represented hyperspectral data are used as input evidence (the evidence vector is selected pixelwise) in a configured factor graph, and inference is run, resulting in the posterior probability. Variational inference (mean field) allows plausible results to be obtained with a low calculation time. Calculating the posterior probability for each class

  6. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses using Structural Plasticity

    OpenAIRE

    Shaista Hussain; Arindam Basu

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  7. Multiclass Classification by Adaptive Network of Dendritic Neurons with Binary Synapses Using Structural Plasticity

    OpenAIRE

    Hussain, Shaista; Basu, Arindam

    2016-01-01

    The development of power-efficient neuromorphic devices presents the challenge of designing spike pattern classification algorithms which can be implemented on low-precision hardware and can also achieve state-of-the-art performance. In our pursuit of meeting this challenge, we present a pattern classification model which uses a sparse connection matrix and exploits the mechanism of nonlinear dendritic processing to achieve high classification accuracy. A rate-based structural learning rule f...

  8. DEVELOPMENT OF NEURAL NETWORK MODEL FOR CLASSIFICATION OF CAVITATION SIGNALS

    Directory of Open Access Journals (Sweden)

    KALYANASUNDARAM PERUMAL

    2011-10-01

    Full Text Available This paper deals with the early detection of cavitation by classifying cavitation signals into no-cavitation, incipient-cavitation and developed-cavitation classes using an artificial neural network model. The ANN model diagnoses the cavitation signal based on the amplitude of the rms vibration signal acquired from an accelerometer, in order to identify the different stages of cavitation. The classification results show that a feed-forward network employing the resilient backpropagation algorithm was effective at distinguishing between the classes, given a good selection of input files for training the network. The proposed ANN model with the resilient algorithm gives better performance and classification rate. The classification rate was 72.96% for the training sets and 75.57% for the test data sets. It is concluded that the neural network performs well irrespective of zones, and that the errors are very small. The paper also discusses future research directions.

  9. Product Classification in Supply Chain

    OpenAIRE

    Xing, Lihong; Xu, Yaoxuan

    2010-01-01

    Oriflame is a famous international direct-sale cosmetics company with a complicated supply chain operation, but it lacks a product classification system. It is vital to design a product classification method in order to support Oriflame's global supply planning and improve supply chain performance. This article aims to investigate and design multi-criteria product classification, propose a classification model, suggest application areas for the product classification results and intro...

  10. Concepts of Classification and Taxonomy Phylogenetic Classification

    Science.gov (United States)

    Fraix-Burnet, D.

    2016-05-01

    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited to any kind of evolving objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  11. Controlling errors in unidosis carts

    OpenAIRE

    Inmaculada Díaz Fernández; Clara Fernández-Shaw Toda; David García Marco

    2010-01-01

    Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service monitored medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in previously revised carts. In carts not revised, the error rate is 70.83%, mainly arising when setting up the unidosis carts. The rest are due to a lack of stock or una...

  12. Evaluating bias due to data linkage error in electronic healthcare records.

    OpenAIRE

    Harron, K.; WADE, A.; Gilbert, R.; Muller-Pebody, B; Goldstein, H.

    2014-01-01

    Background Linkage of electronic healthcare records is becoming increasingly important for research purposes. However, linkage error due to mis-recorded or missing identifiers can lead to biased results. We evaluated the impact of linkage error on estimated infection rates using two different methods for classifying links: highest-weight (HW) classification using probabilistic match weights and prior-informed imputation (PII) using match probabilities. Methods A gold-standard dataset was crea...

  13. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    Science.gov (United States)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

    Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as those for surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized in order to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. In order to calculate the error, the reference value of the heart rate was measured using a classic measurement system through direct contact.
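    A minimal numpy sketch of MUSIC-style frequency estimation on a toy record: a noisy sinusoid stands in for the heart-beat component, and the window length, signal-subspace dimension and search grid are illustrative choices rather than the optimized settings discussed in the paper.

```python
# Minimal MUSIC sketch for estimating one dominant oscillation frequency (a stand-in
# for the heart-beat component) from a short noisy record. The signal, window length
# and signal-subspace dimension (2 for one real sinusoid) are illustrative choices.
import numpy as np

fs = 50.0                                            # sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)                         # 10 s observation window
f_heart = 1.2                                        # "true" rate: 1.2 Hz = 72 bpm
x = np.sin(2 * np.pi * f_heart * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

m = 40                                               # covariance / steering-vector length
segments = np.array([x[i:i + m] for i in range(len(x) - m)])
R = segments.T @ segments / len(segments)            # sample covariance matrix

eigvals, eigvecs = np.linalg.eigh(R)                 # ascending eigenvalues
noise_subspace = eigvecs[:, :-2]                     # drop 2 largest (one real sinusoid)

freqs = np.linspace(0.5, 3.0, 500)                   # search grid [Hz]
pseudo = []
for f in freqs:
    steering = np.exp(-2j * np.pi * f / fs * np.arange(m))
    proj = noise_subspace.conj().T @ steering        # projection onto noise subspace
    pseudo.append(1.0 / np.real(proj.conj() @ proj)) # pseudospectrum peaks at f_heart
estimate = freqs[int(np.argmax(pseudo))]
print(f"estimated heart rate: {estimate * 60:.1f} bpm")
```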

  14. R-Peak Detection using Daubechies Wavelet and ECG Signal Classification using Radial Basis Function Neural Network

    Science.gov (United States)

    Rai, H. M.; Trivedi, A.; Chatterjee, K.; Shukla, S.

    2014-01-01

    This paper employed the Daubechies wavelet transform (WT) for R-peak detection and a radial basis function neural network (RBFNN) to classify electrocardiogram (ECG) signals. Five types of ECG beats were classified: normal beat, paced beat, left bundle branch block (LBBB) beat, right bundle branch block (RBBB) beat and premature ventricular contraction (PVC). 500 QRS complexes were arbitrarily extracted from 26 records in the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database, which are available on the Physionet website. Each QRS complex was represented by 21 points, from p1 to p21, and the QRS complexes of each record were categorized according to beat type. The system performance was computed using four evaluation metrics: sensitivity, positive predictivity, specificity and classification error rate. The experimental result shows that the average values of sensitivity, positive predictivity, specificity and classification error rate are 99.8%, 99.60%, 99.90% and 0.12%, respectively, with the RBFNN classifier. The overall accuracies achieved for back propagation neural network (BPNN), multilayered perceptron (MLP), support vector machine (SVM) and RBFNN classifiers are 97.2%, 98.8%, 99% and 99.6%, respectively. The accuracy level and processing time of the RBFNN are higher than or comparable with those of the BPNN, MLP and SVM classifiers.
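    A hedged sketch of the R-peak detection step: decompose a synthetic ECG-like signal with a Daubechies wavelet using PyWavelets, keep only mid-scale detail coefficients, and pick peaks on the reconstruction with scipy. The signal, retained scales and thresholds are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of the R-peak detection step: decompose the signal with a Daubechies
# wavelet, keep only mid-scale detail coefficients (which carry most QRS energy
# here), and pick peaks on the reconstruction. The synthetic "ECG", retained
# scales and thresholds are illustrative assumptions.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 360                                            # MIT-BIH sampling rate [Hz]
t = np.arange(0, 10, 1 / fs)
beat_times = np.arange(0.5, 10, 0.8)                # ~75 bpm synthetic rhythm
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
ecg += 0.1 * np.sin(2 * np.pi * 0.3 * t)            # baseline wander
ecg += 0.02 * np.random.default_rng(0).normal(size=t.size)

coeffs = pywt.wavedec(ecg, "db4", level=6)
# zero everything except the mid-scale details (indices 2-4 in this decomposition)
kept = [np.zeros_like(c) for c in coeffs]
for idx in (2, 3, 4):
    kept[idx] = coeffs[idx]
qrs_band = pywt.waverec(kept, "db4")[: len(ecg)]

peaks, _ = find_peaks(np.abs(qrs_band),
                      height=0.4 * np.abs(qrs_band).max(),
                      distance=int(0.25 * fs))      # refractory period ~250 ms
print("detected R-peaks:", len(peaks), "expected:", len(beat_times))
```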

  15. Model and Algorithm of Backward Error Recovery of Distributed Software

    Institute of Scientific and Technical Information of China (English)

    刘纯生; 文传源

    1989-01-01

    Backward error recovery is one of the important techniques of software fault tolerance. Because of error propagation, its recovery in distributed software needs cooperation between processes to achieve consistent recovery. However, the techniques for achieving this suffer from either a decrease in the level of concurrency or the domino effect. Based on a formal model of the distributed system, a backward recovery protocol without these two drawbacks is specified in this paper. The algorithm of the protocol is proven strictly and its implementation is proposed.

  16. Acquisition of the orthographic system: proficiency in written expression and classification of orthographic errors

    Directory of Open Access Journals (Sweden)

    Clarice Costa Rosa

    2012-02-01

    Full Text Available PURPOSE: to analyze proficiency in written expression and to classify the orthographic errors produced during the first four grades of elementary school, identifying the most frequent orthographic errors, describing their evolution, and comparing them by grade and gender. METHOD: a cross-sectional study was conducted with students from the 1st to 4th grade of a state school in the city of Porto Alegre. 214 subjects were assessed by means of the word dictation subtest of the School Performance Test. RESULTS: higher levels of sufficiency in written expression were observed in the initial grades; 4th-grade subjects showed difficulty mastering accentuation rules. Analysis of the dictation showed that errors of multiple representations (14.76%) were the most frequent in this population. When the different types of orthographic errors observed across the four grades were compared, a significant difference was found among the grades (P

  17. 3-PRS serial-parallel machine tool error calibration and parameter identification

    Institute of Scientific and Technical Information of China (English)

    ZHAO Jun-wei; DAI Jun; HUANG Jun-jie

    2009-01-01

    The 3-PRS serial-parallel machine tool consists of a 3-degree-of-freedom (DOF) implementation platform and a 2-DOF X-Y platform. Error modeling and parameter identification methods were deduced for the 3-PRS serial-parallel machine tool. The machine tool was studied, covering error analysis, error modeling, identification of error parameters and the measurement equipment used for mechanism error measurement. In order to achieve geometric parameter calibration and error compensation of the serial-parallel machine tool, the nominal structural parameters in the controller were adjusted by identifying the structure of the machine tool. With the establishment of a vector dimension chain, error analysis, error modeling, error measurement and error compensation can be carried out.

  18. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  19. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  20. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP specifies that harmonised classification will be performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances) and for respiratory sensitisers category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  1. The paradox of atheoretical classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2016-01-01

    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong...... support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may...... be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural...

  2. Errors in Chemical Sensor Measurements

    Directory of Open Access Journals (Sweden)

    Artur Dybko

    2001-06-01

    Full Text Available Various types of errors during measurements with ion-selective electrodes, ion-sensitive field effect transistors, and fibre optic chemical sensors are described. The errors were divided according to their nature and place of origin into chemical, instrumental and non-chemical. The influence of interfering ions, leakage of membrane components, liquid junction potential, as well as sensor wiring, ambient light and temperature, is presented.

  3. Job Mobility and Measurement Error

    OpenAIRE

    Bergin, Adele

    2011-01-01

    This thesis consists of essays investigating job mobility and measurement error. Job mobility, captured here as a change of employer, is a striking feature of the labour market. In empirical work on job mobility, researchers often depend on self-reported tenure data to identify job changes. There may be measurement error in these responses and consequently observations may be misclassified as job changes when truly no change has taken place and vice versa. These observations serve as a starti...

  4. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between....... Descriptors, range of quality levels, number of quality classes, class intervals, denotations and descriptions vary across Europe. The diversity is an obstacle for exchange of experience about constructions fulfilling different classes, implying also trade barriers. Thus, a harmonized classification scheme...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  5. Bosniak Classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2014-01-01

    Background: The Bosniak classification is a diagnostic tool for the differentiation of cystic changes in the kidney. The process of categorizing renal cysts may be challenging, involving a series of decisions that may affect the final diagnosis and clinical outcome such as surgical management....... Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as “Gold Standard” (GS), established...... to the calculated weighted κ all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts catagorized as Bosniak II, IIF, and III. These results show that radiologists who evaluate complex renal cysts routinely may apply the Bosniak classification...

  6. Bosniak classification system

    DEFF Research Database (Denmark)

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;

    2016-01-01

    BACKGROUND: The Bosniak classification was originally based on computed tomographic (CT) findings. Magnetic resonance (MR) and contrast-enhanced ultrasonography (CEUS) imaging may demonstrate findings that are not depicted at CT, and there may not always be a clear correlation between the findings...... at MR and CEUS imaging and those at CT. PURPOSE: To compare diagnostic accuracy of MR, CEUS, and CT when categorizing complex renal cystic masses according to the Bosniak classification. MATERIAL AND METHODS: From February 2011 to June 2012, 46 complex renal cysts were prospectively evaluated by three...... readers. Each mass was categorized according to the Bosniak classification and CT was chosen as gold standard. Kappa was calculated for diagnostic accuracy and data was compared with pathological results. RESULTS: CT images found 27 BII, six BIIF, seven BIII, and six BIV. Forty-three cysts could...

  7. Vertebral fracture classification

    Science.gov (United States)

    de Bruijne, Marleen; Pettersen, Paola C.; Tankó, László B.; Nielsen, Mads

    2007-03-01

    A novel method for classification and quantification of vertebral fractures from X-ray images is presented. Using pairwise conditional shape models trained on a set of healthy spines, the most likely unfractured shape is estimated for each of the vertebrae in the image. The difference between the true shape and the reconstructed normal shape is an indicator for the shape abnormality. A statistical classification scheme with the two shapes as features is applied to detect, classify, and grade various types of deformities. In contrast with the current (semi-)quantitative grading strategies this method takes the full shape into account, it uses a patient-specific reference by combining population-based information on biological variation in vertebra shape and vertebra interrelations, and it provides a continuous measure of deformity. Good agreement with manual classification and grading is demonstrated on 204 lateral spine radiographs with in total 89 fractures.

  8. Quantum learning: asymptotically optimal classification of qubit states

    International Nuclear Information System (INIS)

    Pattern recognition is a central topic in learning theory, with numerous applications such as voice and text recognition, image analysis and computer diagnosis. The statistical setup in classification is the following: we are given an i.i.d. training set (X1, Y1), ..., (Xn, Yn), where Xi represents a feature and Yi ∈ {0, 1} is a label attached to that feature. The underlying joint distribution of (X, Y) is unknown, but we can learn about it from the training set, and we aim at devising low error classifiers f: X → Y used to predict the label of new incoming features. In this paper, we solve a quantum analogue of this problem, namely the classification of two arbitrary unknown mixed qubit states. Given a number of 'training' copies from each of the states, we would like to 'learn' about them by performing a measurement on the training set. The outcome is then used to design measurements for the classification of future systems with unknown labels. We found the asymptotically optimal classification strategy and show that typically it performs strictly better than a plug-in strategy, which consists of estimating the states separately and then discriminating between them using the Helstrom measurement. The figure of merit is given by the excess risk, equal to the difference between the probability of error and the probability of error of the optimal measurement for known states. We show that the excess risk scales as 1/n and compute the exact constant of the rate.

  9. Error image aware content restoration

    Science.gov (United States)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defect in TV contents. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.

  10. Measuring verification device error rates

    International Nuclear Information System (INIS)

    A verification device generates a Type I (II) error when it recommends to reject (accept) a valid (false) identity claim. For a given identity, the rates or probabilities of these errors quantify random variations of the device from claim to claim. These are intra-identity variations. To some degree, these rates depend on the particular identity being challenged, and there exists a distribution of error rates characterizing inter-identity variations. However, for most security system applications we only need to know averages of this distribution. These averages are called the pooled error rates. In this paper the authors present the statistical underpinnings for the measurement of pooled Type I and Type II error rates. The authors consider a conceptual experiment, 'a crate of biased coins'. This model illustrates the effects of sampling both within trials of the same individual and among trials from different individuals. Application of this simple model to verification devices yields pooled error rate estimates and confidence limits for these estimates. A sample certification procedure for verification devices is given in the appendix.
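
    The following minimal Python sketch illustrates the pooled-error-rate idea described above: per-identity error counts are combined into a single pooled rate, here with a simple normal-approximation confidence interval. The function name, the example counts, and the interval construction are illustrative assumptions and are simpler than the biased-coin treatment in the record.

    import numpy as np
    from scipy import stats

    def pooled_error_rate(errors_per_identity, trials_per_identity, alpha=0.05):
        """Pooled (average) error rate across identities with a simple
        normal-approximation confidence interval."""
        errors = np.asarray(errors_per_identity, dtype=float)
        trials = np.asarray(trials_per_identity, dtype=float)
        # Pooled rate: total errors over total trials across all identities.
        p_hat = errors.sum() / trials.sum()
        # Normal-approximation confidence limits on the pooled rate.
        se = np.sqrt(p_hat * (1.0 - p_hat) / trials.sum())
        z = stats.norm.ppf(1.0 - alpha / 2.0)
        return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

    # Example: Type I errors (false rejections of valid claims) for five identities.
    rate, ci = pooled_error_rate([1, 0, 2, 1, 0], [50, 40, 60, 55, 45])
    print("pooled Type I error rate = %.3f, 95%% CI = (%.3f, %.3f)" % (rate, *ci))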

  11. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    Full Text Available The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that have been evaluated are the following: (1) bare soil, (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.), (3) high protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.), (4) alfalfa (Medicago sativa L.), (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.), (6) urban soil, (7) olive groves (Olea europaea L.) and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result showed that the images of digital airborne sensors hold considerable promise for the future in the field of digital classifications because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce problems encountered using high-resolution images, while achieving reliabilities better than those obtained with traditional methods.

  12. Classification des rongeurs

    OpenAIRE

    Mignon, Jacques; Hardouin, Jacques

    2003-01-01

    Readers of the BEDIM Bulletin sometimes seem to have difficulties with the scientific classification of the animals known as "rodents" in everyday language. Given the disputes that still surround the construction of this classification today, this is hardly surprising. The brief synthesis that follows concerns the animals that are, or could become, part of mini-livestock farming. The note aims at providing the main characteristics of the principal families of rodents relevan...

  13. Acoustic classification of dwellings

    DEFF Research Database (Denmark)

    Berardi, Umberto; Rasmussen, Birgit

    2014-01-01

    insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms...... of descriptors, number of classes, and class intervals occurred between national schemes. However, a proposal “acoustic classification scheme for dwellings” has been developed recently in the European COST Action TU0901 with 32 member countries. This proposal has been accepted as an ISO work item. This paper...

  14. Classification of syringomyelia.

    Science.gov (United States)

    Milhorat, T H

    2000-01-01

    Syringomyelia poses special challenges for the clinician because of its complex symptomatology, uncertain pathogenesis, and multiple options of treatment. The purpose of this study was to classify intramedullary cavities according to their most salient pathological and clinical features. Pathological findings obtained in 175 individuals with tubular cavitations of the spinal cord were correlated with clinical and magnetic resonance (MR) imaging findings in a database of 927 patients. A classification system was developed in which the morbid anatomy, cause, and pathogenesis of these lesions are emphasized. The use of a disease-based classification of syringomyelia facilitates diagnosis and the interpretation of MR imaging findings and provides a guide to treatment. PMID:16676921

  15. Sequence Classification: 890773 [

    Lifescience Database Archive (English)

    Full Text Available oline as sole nitrogen source; deficiency of the human homolog causes HPII, an autosomal recessive inborn error of metabolism; Put2p || http://www.ncbi.nlm.nih.gov/protein/6321826 ...

  16. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Science.gov (United States)

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
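
    A minimal sketch of the random 'receptive field' ELM idea summarized above, assuming flattened 28x28 images: each hidden unit's input weights are nonzero only inside a random image patch, and the output weights are fitted by ridge-regularized least squares. The patch-size range, hidden-layer size, ridge term, and the synthetic demo data are illustrative; the record's further enhancements (distortions, single-batch backpropagation) are not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    def elm_train(X, y, n_hidden=500, img_shape=(28, 28), ridge=1e-2):
        """Single-hidden-layer ELM with sparse random 'receptive field' input
        weights: each hidden unit sees only a random image patch. Output weights
        are fitted by ridge-regularized least squares."""
        n_pixels = img_shape[0] * img_shape[1]
        W = np.zeros((n_pixels, n_hidden))
        for j in range(n_hidden):
            h = rng.integers(6, img_shape[0] + 1)        # random patch height
            w = rng.integers(6, img_shape[1] + 1)        # random patch width
            r0 = rng.integers(0, img_shape[0] - h + 1)   # random patch position
            c0 = rng.integers(0, img_shape[1] - w + 1)
            mask = np.zeros(img_shape)
            mask[r0:r0 + h, c0:c0 + w] = 1.0
            W[:, j] = rng.standard_normal(n_pixels) * mask.ravel()
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                           # random hidden layer
        Y = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
        beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

    # Tiny synthetic demo in place of MNIST (random images and labels).
    X_demo = rng.random((64, 28 * 28))
    y_demo = rng.integers(0, 10, size=64)
    W, b, beta = elm_train(X_demo, y_demo, n_hidden=100)
    print("training accuracy:", (elm_predict(X_demo, W, b, beta) == y_demo).mean())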

  17. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.

  18. Basic Hand Gestures Classification Based on Surface Electromyography.

    Science.gov (United States)

    Palkowski, Aleksander; Redlarski, Grzegorz

    2016-01-01

    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method. PMID:27298630
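
    A small sketch in the spirit of the record above, assuming hand-crafted features have already been extracted from the 2-channel sEMG windows. The Cuckoo Search optimisation used in the paper is replaced here by a plain grid search over the SVM kernel and its parameters, and the feature matrix is a random placeholder.

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder feature matrix (e.g. RMS, waveform length per sEMG channel)
    # and gesture labels; real features would come from the recorded signals.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 8))
    y = rng.integers(0, 4, size=200)

    # Grid search substitutes for the Cuckoo Search tuning used in the paper.
    search = GridSearchCV(
        make_pipeline(StandardScaler(), SVC()),
        param_grid={"svc__kernel": ["rbf", "poly"],
                    "svc__C": [0.1, 1, 10, 100],
                    "svc__gamma": ["scale", 0.01, 0.1]},
        cv=5)
    search.fit(X, y)
    print("best parameters:", search.best_params_)
    print("cross-validated accuracy: %.3f" % search.best_score_)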

  19. Basic Hand Gestures Classification Based on Surface Electromyography

    Directory of Open Access Journals (Sweden)

    Aleksander Palkowski

    2016-01-01

    Full Text Available This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method.

  20. Basic Hand Gestures Classification Based on Surface Electromyography

    Science.gov (United States)

    Palkowski, Aleksander; Redlarski, Grzegorz

    2016-01-01

    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the proposed method. PMID:27298630

  1. Basic Hand Gestures Classification Based on Surface Electromyography

    OpenAIRE

    Aleksander Palkowski; Grzegorz Redlarski

    2016-01-01

    This paper presents an innovative classification system for hand gestures using 2-channel surface electromyography analysis. The system developed uses the Support Vector Machine classifier, for which the kernel function and parameter optimisation are conducted additionally by the Cuckoo Search swarm algorithm. The system developed is compared with standard Support Vector Machine classifiers with various kernel functions. The average classification rate of 98.12% has been achieved for the prop...

  2. Classification Accuracy of Neural Networks with PCA in Emotion Recognition

    OpenAIRE

    Novakovic Jasmina; Minic Milomir; Veljovic Alempije

    2011-01-01

    This paper presents classification accuracy of neural network with principal component analysis (PCA) for feature selections in emotion recognition using facial expressions. Dimensionality reduction of a feature set is a common preprocessing step used for pattern recognition and classification applications. PCA is one of the popular methods used, and can be shown to be optimal using different optimality criteria. Experiment results, in which we achieved a recognition rate of approximately 85%...

  3. Compensatory neurofuzzy model for discrete data classification in biomedical

    Science.gov (United States)

    Ceylan, Rahime

    2015-03-01

    Biomedical data is separated into two main types: signals and discrete data. Accordingly, studies in this area concern either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. Likewise, many models in the literature address the classification of discrete data, such as sample values obtained from blood analysis or biopsy in the medical process. Not every algorithm achieves a high accuracy rate in the classification of both signals and discrete data. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier. In this system, the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset, while a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  4. Words semantic orientation classification based on HowNet

    Institute of Scientific and Technical Information of China (English)

    LI Dun; MA Yong-tao; GUO Jian-li

    2009-01-01

    Based on text orientation classification, a new approach to measuring the semantic orientation of words was proposed. According to the integrated and detailed definitions of words in HowNet, seed sets containing words with strong orientations were built. The orientation similarity between the seed words and a given word was then calculated, using sentiment weight priority, to recognize the semantic orientation of common words. Finally, the words' semantic orientation and the context were combined to recognize the given words' orientation. The experiments show that the proposed approach achieves better results for common words' orientation classification and contributes particularly to the text orientation classification of large granularities.
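
    A toy sketch of the seed-set idea described above: a word's orientation is scored by its average similarity to positive versus negative seed words. The similarity function below is a crude character-overlap placeholder standing in for the HowNet-based measure, and the seed lists are invented for illustration.

    POSITIVE_SEEDS = ["good", "excellent", "happy"]
    NEGATIVE_SEEDS = ["bad", "terrible", "sad"]

    def similarity(w1, w2):
        # Crude character-overlap placeholder for the HowNet-based similarity.
        a, b = set(w1), set(w2)
        return len(a & b) / max(len(a | b), 1)

    def semantic_orientation(word):
        """Positive score suggests positive orientation, negative suggests negative."""
        pos = sum(similarity(word, s) for s in POSITIVE_SEEDS) / len(POSITIVE_SEEDS)
        neg = sum(similarity(word, s) for s in NEGATIVE_SEEDS) / len(NEGATIVE_SEEDS)
        return pos - neg

    print(semantic_orientation("glad"))   # the sign gives the predicted orientation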

  5. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Liguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  6. Semi-automatic classification of glaciovolcanic landforms: An object-based mapping approach based on geomorphometry

    Science.gov (United States)

    Pedersen, G. B. M.

    2016-02-01

    A new object-oriented approach is developed to classify glaciovolcanic landforms (Procedure A) and their landform elements boundaries (Procedure B). It utilizes the principle that glaciovolcanic edifices are geomorphometrically distinct from lava shields and plains (Pedersen and Grosse, 2014), and the approach is tested on data from Reykjanes Peninsula, Iceland. The outlined procedures utilize slope and profile curvature attribute maps (20 m/pixel) and the classified results are evaluated quantitatively through error matrix maps (Procedure A) and visual inspection (Procedure B). In procedure A, the highest obtained accuracy is 94.1%, but even simple mapping procedures provide good results (> 90% accuracy). Successful classification of glaciovolcanic landform element boundaries (Procedure B) is also achieved and this technique has the potential to delineate the transition from intraglacial to subaerial volcanic activity in orthographic view. This object-oriented approach based on geomorphometry overcomes issues with vegetation cover, which has been typically problematic for classification schemes utilizing spectral data. Furthermore, it handles complex edifice outlines well and is easily incorporated into a GIS environment, where results can be edited or fused with other mapping results. The approach outlined here is designed to map glaciovolcanic edifices within the Icelandic neovolcanic zone but may also be applied to similar subaerial or submarine volcanic settings, where steep volcanic edifices are surrounded by flat plains.

  7. Co-occurrence Models in Music Genre Classification

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models.

  8. Sandwich classification theorem

    Directory of Open Access Journals (Sweden)

    Alexey Stepanov

    2015-09-01

    Full Text Available The present note arises from the author's talk at the conference "Ischia Group Theory 2014". For subgroups F ≤ N of a group G, denote by Lat(F, N) the set of all subgroups of N containing F. Let D be a subgroup of G. In this note we study the lattice LL = Lat(D, G) and the lattice LL′ of subgroups of G normalized by D. We say that LL satisfies the sandwich classification theorem if LL splits into a disjoint union of sandwiches Lat(F, N_G(F)) over all subgroups F such that the normal closure of D in F coincides with F. Here N_G(F) denotes the normalizer of F in G. A similar notion of sandwich classification is introduced for the lattice LL′. If D is perfect, i.e. coincides with its commutator subgroup, then it turns out that the sandwich classification theorems for LL and LL′ are equivalent. We also show how to find the basic subgroup F of sandwiches for LL′ and review sandwich classification theorems in algebraic groups over rings.

  9. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre;

    as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...

  10. Classifications in popular music

    NARCIS (Netherlands)

    A. van Venrooij; V. Schmutz

    2015-01-01

    The categorical system of popular music, such as genre categories, is a highly differentiated and dynamic classification system. In this article we present work that studies different aspects of these categorical systems in popular music. Following the work of Paul DiMaggio, we focus on four questio

  11. Classification of waste packages

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, H.P.; Sauer, M.; Rojahn, T. [Versuchsatomkraftwerk GmbH, Kahl am Main (Germany)

    2001-07-01

    A barrel gamma scanning unit has been in use at the VAK for the classification of radioactive waste materials since 1998. The unit provides the facility operator with the data required for classification of waste barrels. Once these data have been entered into the AVK data processing system, the radiological status of raw waste as well as pre-treated and processed waste can be tracked from the point of origin to the point at which the waste is delivered to final storage. Since the barrel gamma scanning unit was commissioned in 1998, approximately 900 barrels have been measured and the relevant data required for classification collected and analyzed. Based on the positive results of experience in the use of the mobile barrel gamma scanning unit, the VAK now offers the classification of barrels as a service to external users. Depending upon waste quantity accumulation, this measurement unit offers facility operators a reliable, time-saving and cost-effective means of identifying and documenting the radioactivity inventory of barrels scheduled for final storage. (orig.)

  12. Improving Student Question Classification

    Science.gov (United States)

    Heiner, Cecily; Zachary, Joseph L.

    2009-01-01

    Students in introductory programming classes often articulate their questions and information needs incompletely. Consequently, the automatic classification of student questions to provide automated tutorial responses is a challenging problem. This paper analyzes 411 questions from an introductory Java programming course by reducing the natural…

  13. Nearest convex hull classification

    NARCIS (Netherlands)

    G.I. Nalbantov (Georgi); P.J.F. Groenen (Patrick); J.C. Bioch (Cor)

    2006-01-01

    Consider the classification task of assigning a test object to one of two or more possible groups, or classes. An intuitive way to proceed is to assign the object to that class, to which the distance is minimal. As a distance measure to a class, we propose here to use the distance to the

  14. Classification system: Netherlands

    NARCIS (Netherlands)

    Hartemink, A.E.

    2006-01-01

    Although people have always classified soils, it is only since the mid 19th century that soil classification emerged as an important topic within soil science. It forced soil scientists to think systematically about soils and its genesis and developed to facilitate communication between soil scienti

  15. Shark Teeth Classification

    Science.gov (United States)

    Brown, Tom; Creel, Sally; Lee, Velda

    2009-01-01

    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  16. The Classification Conundrum.

    Science.gov (United States)

    Granger, Charles R.

    1983-01-01

    Argues against the five-kingdom scheme of classification as using inconsistent criteria, ending up with divisions that are forced, not natural. Advocates an approach using cell type/complexity and modification of the metabolic machinery, recommending the five-kingdom scheme as starting point for class discussion on taxonomy and its conceptual…

  17. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, a clustering algorithm based on the super-pixel density of cluster centers is used for automatic image classification and outlier identification. The pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. A normalized density-and-distance discrimination rule is then designed to achieve automatic classification and cluster center selection, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method does not require human intervention, computes faster than density clustering on raw pixels, and can effectively perform automated image classification and outlier extraction.
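
    A compact sketch of the density-and-distance computation the record describes, assuming each pixel or super-pixel is represented by its location coordinates and gray value: local density counts neighbours within a cutoff, distance is measured to the nearest point of higher density, and their normalized product flags cluster centres (high score) and outliers (low density, large distance). The cutoff and the random demo points are illustrative.

    import numpy as np

    def density_distance(points, dc=0.3):
        """Local density and distance-to-higher-density for each sample; rows of
        `points` are (x, y, gray value) features of pixels or super-pixels."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
        density = (d < dc).sum(axis=1) - 1              # neighbours within cutoff
        delta = np.empty(len(points))
        for i in range(len(points)):
            higher = density > density[i]
            delta[i] = d[i, higher].min() if higher.any() else d[i].max()
        # Normalized score: large density * delta marks a cluster centre,
        # small density with large delta marks an outlier.
        score = (density / max(density.max(), 1)) * (delta / delta.max())
        return density, delta, score

    pts = np.random.default_rng(2).random((100, 3))     # placeholder samples
    rho, delta, score = density_distance(pts)
    print("candidate cluster centres:", np.argsort(score)[-3:])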

  18. Error detection process - Model, design, and its impact on computer performance

    Science.gov (United States)

    Shin, K. G.; Lee, Y.-H.

    1984-01-01

    An analytical model is developed for computer error detection processes and applied to estimate their influence on system performance. Faults in the hardware, not in the design, are assumed to be the potential cause of transition to erroneous states during normal operations. The classification properties and associated recovery methods of error detection are discussed. The probability of obtaining an unreliable result is evaluated, along with the resulting computational loss. Error detection during design is considered and a feasible design space is outlined. Extension of the methods to account for the effects of extant multiple faults is indicated.

  19. Text Classification Retrieval Based on Complex Network and ICA Algorithm

    Directory of Open Access Journals (Sweden)

    Hongxia Li

    2013-08-01

    Full Text Available With the development of computer science and information technology, the library is moving toward digitization and networking. The library digitization process converts books into digital information; high-quality preservation and management are achieved with computer technology together with text classification techniques, realizing knowledge appreciation. This paper introduces complex network theory into the text classification process and puts forward an ICA semantic clustering algorithm, realizing independent component analysis of complex network text classification. Through the ICA clustering algorithm, character-word clustering extraction for text classification is realized and the visualization of text retrieval is improved. Finally, we make a comparative analysis of the collocation algorithm and the ICA clustering algorithm through text classification and keyword search experiments, reporting the clustering degree and accuracy of each algorithm. Through simulation analysis, we find that the ICA clustering algorithm improves the text classification clustering degree by 1.2% and accuracy by up to 11.1%, improving the efficiency and accuracy of text classification retrieval. It also provides a theoretical reference for text retrieval classification of eBooks.

  20. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection. PMID:26353275

  1. A Novel Vehicle Classification Using Embedded Strain Gauge Sensors

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2008-11-01

    Full Text Available This paper presents a new vehicle classification and develops a traffic monitoring detector to provide reliable vehicle classification to aid traffic management systems. The basic principle of this approach is based on measuring the dynamic strain caused by vehicles across pavement to obtain the corresponding vehicle parameters – wheelbase and number of axles – to then accurately classify the vehicle. A system prototype with five embedded strain sensors was developed to validate the accuracy and effectiveness of the classification method. According to the special arrangement of the sensors and the different times a vehicle arrives at the sensors, one can estimate the vehicle's speed accurately, and hence the vehicle wheelbase and number of axles. Because of measurement errors and vehicle characteristics, there is a lot of overlap between vehicle wheelbase patterns. Therefore, directly setting up a fixed threshold for vehicle classification often leads to low-accuracy results. Using machine learning pattern recognition methods to deal with this problem is believed to be one of the most effective tools. In this study, support vector machines (SVMs) were used to integrate the classification features extracted from the strain sensors to automatically classify vehicles into five types, ranging from small vehicles to combination trucks, along the lines of the Federal Highway Administration vehicle classification guide. Test bench and field experiments will be introduced in this paper. Two support vector machine classification algorithms (one-against-all, one-against-one) are used to classify single sensor data and multiple sensor combination data. Comparison of the two classification methods' results shows that the classification accuracy is very similar using single or multiple sensor data. Our results indicate that using multiclass SVM-based fusion of multiple sensor data significantly improves
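
    A brief sketch comparing the two multiclass SVM strategies mentioned above (one-against-all and one-against-one) on placeholder wheelbase and axle features; the feature ranges, class labels, and data are invented for illustration and do not reproduce the strain-sensor measurements.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
    from sklearn.svm import SVC

    # Placeholder features (wheelbase, axle count, mean axle spacing) and five
    # FHWA-style vehicle classes; real values would come from the strain sensors.
    rng = np.random.default_rng(4)
    X = np.column_stack([rng.uniform(2.0, 12.0, 300),    # wheelbase [m]
                         rng.integers(2, 6, 300),        # number of axles
                         rng.uniform(1.0, 3.0, 300)])    # mean axle spacing [m]
    y = rng.integers(0, 5, 300)

    for name, clf in [("one-against-all", OneVsRestClassifier(SVC(kernel="rbf"))),
                      ("one-against-one", OneVsOneClassifier(SVC(kernel="rbf")))]:
        print(name, "mean CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())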

  2. Quadratic Dynamical Decoupling with Non-Uniform Error Suppression

    CERN Document Server

    Quiroz, G

    2011-01-01

    We analyze numerically the performance of the near-optimal quadratic dynamical decoupling (QDD) single-qubit decoherence error suppression method [J. West et al., Phys. Rev. Lett. 104, 130501 (2010)]. The QDD sequence is formed by nesting two optimal Uhrig dynamical decoupling sequences for two orthogonal axes, comprising N1 and N2 pulses, respectively. Varying these numbers, we study the decoherence suppression properties of QDD directly by isolating the errors associated with each system basis operator present in the system-bath interaction Hamiltonian. Each individual error scales with the lowest order of the Dyson series, therefore immediately yielding the order of decoherence suppression. We show that the error suppression properties of QDD are dependent upon the parities of N1 and N2, and near-optimal performance is achieved for general single-qubit interactions when N1=N2.

  3. The nearest neighbor and the bayes error rates.

    Science.gov (United States)

    Loizou, G; Maybank, S J

    1987-02-01

    The (k, l) nearest neighbor method of pattern classification is compared to the Bayes method. If the two acceptance rates are equal then the asymptotic error rates satisfy the inequalities E_{k,l+1} ≤ E*(λ) ≤ E_{k,l} ≤ d·E*(λ), where d is a function of k, l, and the number of pattern classes, and λ is the reject threshold for the Bayes method. An explicit expression for d is given which is optimal in the sense that for some probability distributions E_{k,l} and d·E*(λ) are equal. PMID:21869395

  4. A-posteriori error estimation for second order mechanical systems

    Institute of Scientific and Technical Information of China (English)

    Thomas Ruiner; Jörg Fehr; Bernard Haasdonk; Peter Eberhard

    2012-01-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  5. Error-thresholds for qudit-based topological quantum memories

    Science.gov (United States)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2014-03-01

    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.

  6. A Conceptual Framework to use Remediation of Errors Based on Multiple External Remediation Applied to Learning Objects

    Directory of Open Access Journals (Sweden)

    Maici Duarte Leite

    2014-09-01

    Full Text Available This paper presents the application of some concepts of Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses the remediation of errors with Multiple External Representations (MERs) in Learning Objects (LO). To demonstrate this, an LO for teaching the Pythagorean Theorem was developed through this framework. This study explored the error remediation process through a classification of mathematical errors, providing support for the use of MERs in the remediation of errors. The main objective of the proposed framework is to assist the individual learner in recovering from a mistake made during interaction with the LO, whether through carelessness or lack of knowledge. Initially, we present the compilation of the classification of mathematical errors and their relationship with MERs. Later, the concepts involved in the proposed conceptual framework are presented. Finally, an experiment with an LO developed with an authoring tool called FARMA, using the conceptual framework for teaching the Pythagorean Theorem, is presented.

  7. Energy efficient error-correcting coding for wireless systems

    NARCIS (Netherlands)

    Shao, Xiaoying

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal not only suffers from multi-path fading but also from noise and interference from other users of the wireless channel. That causes unreliable communications. To achieve high-quality communications, error correcting coding is required t

  8. College Achievement and Earnings

    OpenAIRE

    Gemus, Jonathan

    2010-01-01

    I study the size and sources of the monetary return to college achievement as measured by cumulative Grade Point Average (GPA). I first present evidence that the return to achievement is large and statistically significant. I find, however, that this masks variation in the return across different groups of people. In particular, there is no relationship between GPA and earnings for graduate degree holders but a large and positive relationship for people without a graduate degree. To reconcile...

  9. Synthetic aperture interferometry: error analysis

    International Nuclear Information System (INIS)

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  10. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.

  11. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  12. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    Elkin, Peter L.; Beuscart-zephir, Marie-Catherine; Pelayo, Sylvia;

    2013-01-01

    Clinical systems have become standard partners with clinicians in the care of patients. As these systems become integral parts of the clinical workflow, they have the potential to help improve patient outcomes; however, in some cases they have led to adverse events and have resulted... in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT-related Usability Errors in a consistent fashion can improve our... will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  13. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. PMID:27184070

  14. Hospital prescribing errors : epidemiological assessment of predictors

    NARCIS (Netherlands)

    Fijn, R; Van den Bemt, PMLA; Chow, M; De Blaey, CJ; De Jong-Van den Berg, LTW; Brouwers, JRBJ

    2002-01-01

    Aims: To demonstrate an epidemiological method to assess predictors of prescribing errors. Methods: A retrospective case-control study comparing prescriptions with and without errors. Results: Only prescriber and drug characteristics were associated with errors. Prescriber characteristics were medic

  15. Updated Classification System for Proximal Humeral Fractures

    Science.gov (United States)

    Guix, José M. Mora; Pedrós, Juan Sala; Serrano, Alejandro Castaño

    2009-01-01

    Proximal humeral fractures can restrict daily activities and, therefore, deserve efficient diagnoses that minimize complications and sequelae. For good diagnosis and treatment, patient characteristics, variability in the forms of the fractures presented, and the technical difficulties in achieving fair results with surgical treatment should all be taken into account. Current classification systems for these fractures are based on anatomical and pathological principles, and not on systematic image reading. These fractures can appear in many different forms, with many characteristics that must be identified. However, many current classification systems lack good reliability, both inter-observer and intra-observer, for different image types. A new approach to image reading, following a well-designed set and sequence of variables to check, is needed. We previously reported such an image reading system. In the present study, we report a classification system based on this image reading system. Here we define 21 fracture characteristics and apply them along with classical Codman approaches to classify fractures. We base this novel classification system for classifying proximal humeral fractures on a review of scientific literature and improvements to our image reading protocol. Patient status, fracture characteristics and surgeon circumstances have been important issues in developing this system. PMID:19574487

  16. Classification of Sporting Activities Using Smartphone Accelerometers

    Directory of Open Access Journals (Sweden)

    Noel E. O'Connor

    2013-04-01

    Full Text Available In this paper we present a framework that allows for the automatic identification of sporting activities using commonly available smartphones. We extract discriminative informational features from smartphone accelerometers using the Discrete Wavelet Transform (DWT. Despite the poor quality of their accelerometers, smartphones were used as capture devices due to their prevalence in today’s society. Successful classification on this basis potentially makes the technology accessible to both elite and non-elite athletes. Extracted features are used to train different categories of classifiers. No one classifier family has a reportable direct advantage in activity classification problems to date; thus we examine classifiers from each of the most widely used classifier families. We investigate three classification approaches; a commonly used SVM-based approach, an optimized classification model and a fusion of classifiers. We also investigate the effect of changing several of the DWT input parameters, including mother wavelets, window lengths and DWT decomposition levels. During the course of this work we created a challenging sports activity analysis dataset, comprised of soccer and field-hockey activities. The average maximum F-measure accuracy of 87% was achieved using a fusion of classifiers, which was 6% better than a single classifier model and 23% better than a standard SVM approach.
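
    A minimal sketch of the pipeline the record outlines, assuming fixed-length accelerometer windows: DWT sub-band energies serve as features and two classifiers are fused by averaging predicted probabilities. The wavelet, window length, fusion rule, and the synthetic data are illustrative choices, not the paper's exact configuration.

    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    def dwt_features(window, wavelet="db4", level=4):
        """Energy of each DWT sub-band of one accelerometer window, per axis."""
        feats = []
        for axis in range(window.shape[1]):
            coeffs = pywt.wavedec(window[:, axis], wavelet, level=level)
            feats.extend(np.sum(c ** 2) for c in coeffs)
        return np.array(feats)

    # Placeholder data: 100 windows of 128 samples x 3 accelerometer axes.
    rng = np.random.default_rng(3)
    windows = rng.standard_normal((100, 128, 3))
    labels = rng.integers(0, 2, size=100)                # e.g. soccer vs hockey drill

    X = np.array([dwt_features(w) for w in windows])

    # Simple two-classifier fusion by averaging predicted class probabilities.
    svm = SVC(probability=True).fit(X, labels)
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    fused = (svm.predict_proba(X) + forest.predict_proba(X)) / 2
    print("fused training accuracy:", (fused.argmax(axis=1) == labels).mean())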

  17. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  18. SAR images classification method based on Dempster-Shafer theory and kernel estimate

    Institute of Scientific and Technical Information of China (English)

    He Chu; Xia Guisong; Sun Hong

    2007-01-01

    To study the scene classification in the Synthetic Aperture Radar (SAR) image, a novel method based on kernel estimate, with the Markov context and Dempster-Shafer evidence theory is proposed.Initially, a nonparametric Probability Density Function (PDF) estimate method is introduced, to describe the scene of SAR images.And then under the Markov context, both the determinate PDF and the kernel estimate method are adopted respectively, to form a primary classification.Next, the primary classification results are fused using the evidence theory in an unsupervised way to get the scene classification.Finally, a regularization step is used, in which an iterated maximum selecting approach is introduced to control the fragments and modify the errors of the classification.Use of the kernel estimate and evidence theory can describe the complicated scenes with little prior knowledge and eliminate the ambiguities of the primary classification results.Experimental results on real SAR images illustrate a rather impressive performance.

  19. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    Science.gov (United States)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
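
    The following short sketch shows how overall accuracy and per-class error of commission can be read off a classification error matrix, the two evaluation aspects recommended above; the matrix entries are invented counts for illustration.

    import numpy as np

    # Rows: reference (true) classes; columns: classified classes.
    # Entries are illustrative pixel counts for three cover types.
    error_matrix = np.array([[120,  10,   5],
                             [ 15,  90,  10],
                             [  5,   8, 110]])

    overall_accuracy = np.trace(error_matrix) / error_matrix.sum()

    # Error of commission per class: pixels wrongly placed in a class divided by
    # all pixels classified into that class (column total).
    commission_error = 1.0 - np.diag(error_matrix) / error_matrix.sum(axis=0)

    print("overall accuracy = %.1f%%" % (100 * overall_accuracy))
    print("commission error per class:", np.round(commission_error, 3))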

  20. Etiologic Classification in Ischemic Stroke

    OpenAIRE

    Hakan Ay

    2011-01-01

    Ischemic stroke is an etiologically heterogeneous disorder. Classification of ischemic stroke etiology into categories with discrete phenotypic, therapeutic, and prognostic features is indispensable to generate consistent information from stroke research. In addition, a functional classification of stroke etiology is critical to ensure unity among physicians and comparability among studies. There are two major approaches to etiologic classification in stroke. Phenotypic systems define subtypes...

  1. Towards automatic classification of all WISE sources

    Science.gov (United States)

    Kurcz, A.; Bilicki, M.; Solarz, A.; Krupa, M.; Pollo, A.; Małek, K.

    2016-07-01

    Context. The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky. Classifying them reliably is, however, a challenging task owing to degeneracies in WISE multicolour space and low levels of detection in its two longest-wavelength bandpasses. Simple colour cuts are often not sufficient; for satisfactory levels of completeness and purity, more sophisticated classification methods are needed. Aims: Here we aim to obtain comprehensive and reliable star, galaxy, and quasar catalogues based on automatic source classification in full-sky WISE data. This means that the final classification will employ only parameters available from WISE itself, in particular those which are reliably measured for the majority of sources. Methods: For the automatic classification we applied a supervised machine learning algorithm, support vector machines (SVM). It requires a training sample with relevant classes already identified, and we chose to use the SDSS spectroscopic dataset (DR10) for that purpose. We tested the performance of two kernels used by the classifier, and determined the minimum number of sources in the training set required to achieve stable classification, as well as the minimum dimension of the parameter space. We also tested SVM classification accuracy as a function of extinction and apparent magnitude. Thus, the calibrated classifier was finally applied to all-sky WISE data, flux-limited to 16 mag (Vega) in the 3.4 μm channel. Results: By calibrating on the test data drawn from SDSS, we first established that a polynomial kernel is preferred over a radial one for this particular dataset. Next, using three classification parameters (W1 magnitude, W1-W2 colour, and a differential aperture magnitude) we obtained very good classification efficiency in all the tests. At the bright end, the completeness for stars and galaxies reaches ~95%, deteriorating to ~80% at W1 = 16 mag, while for quasars it stays at a level of
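    As a rough illustration of the approach described above, the sketch below trains a polynomial-kernel support vector machine on three photometric-style features standing in for the W1 magnitude, W1-W2 colour and differential aperture magnitude. The synthetic data, class centres and parameter values are illustrative assumptions only, not the SDSS-calibrated setup used by the authors.

```python
# Hedged sketch: polynomial-kernel SVM on three photometric-style features.
# Data are synthetic; in the paper the training labels come from SDSS spectroscopy.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_class(n, mag_centre, colour_centre, aper_centre):
    """Generate toy (magnitude, colour, differential aperture magnitude) triples."""
    return np.column_stack([
        rng.normal(mag_centre, 0.8, n),
        rng.normal(colour_centre, 0.15, n),
        rng.normal(aper_centre, 0.1, n),
    ])

X = np.vstack([
    synthetic_class(500, 14.0, 0.1, 0.3),   # "stars"
    synthetic_class(500, 15.5, 0.4, 0.7),   # "galaxies"
    synthetic_class(500, 15.0, 0.9, 0.4),   # "quasars"
])
y = np.repeat(["star", "galaxy", "quasar"], 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Polynomial kernel, the choice preferred over a radial one in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```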

  2. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in or conditioned by the economy development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of generating a human error and the ways to diminish its effects.

  3. Chronology of prescribing error during the hospital stay and prediction of pharmacist's alerts overriding: a prospective analysis

    Directory of Open Access Journals (Sweden)

    Bruni Vanida

    2010-01-01

    Full Text Available Abstract Background Drug prescribing errors are frequent in the hospital setting and pharmacists play an important role in detection of these errors. The objectives of this study are (1) to describe the drug prescribing error rate during the patient's stay, and (2) to find which characteristics of a prescribing error are the most predictive of its reproduction the next day despite the pharmacist's alert (i.e. overriding of the alert). Methods We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to find which characteristics of alerts were predictive of their overriding (i.e. the prescribing error being repeated). Results 12,533 order lines were reviewed, 117 errors (error rate 0.9%) were observed and 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e. error uncorrected by prescribers on the following day). Drug omissions were the errors most frequently taken into account by prescribers. The classification and regression tree analysis showed that overriding pharmacist's alerts is related first to the ward of the prescriber and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Conclusions Since 51% of prescribing errors occurred on the first day of stay, pharmacists should concentrate their analysis of drug prescriptions on this day. The difference in overriding behaviour between wards and according to drug Anatomical Therapeutic Chemical class or type of error could also guide the validation tasks and programming of electronic alerts.
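    A minimal sketch of the classification-and-regression-tree step mentioned above, using scikit-learn's CART implementation to predict whether an alert will be overridden. The feature names (ward, ATC class, error type) and the toy records are assumptions made for illustration, not the study data.

```python
# Hedged sketch: CART predicting whether a pharmacist's alert will be overridden.
# Feature names and records are illustrative only.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

alerts = pd.DataFrame({
    "ward":       ["cardiology", "oncology", "cardiology", "geriatrics", "oncology", "geriatrics"],
    "atc_class":  ["C09", "L01", "B01", "N05", "L01", "C09"],
    "error_type": ["omission", "dose", "omission", "duplication", "dose", "omission"],
    "overridden": [1, 0, 1, 0, 0, 1],   # 1 = prescribing error repeated despite the alert
})

encode = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["ward", "atc_class", "error_type"]),
)
tree = make_pipeline(encode, DecisionTreeClassifier(max_depth=3, random_state=0))
tree.fit(alerts.drop(columns="overridden"), alerts["overridden"])
print(tree.predict(alerts.drop(columns="overridden")))
```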

  4. On the Smoothed Minimum Error Entropy Criterion

    OpenAIRE

    Badong Chen; Principe, Jose C.

    2012-01-01

    Recent studies suggest that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations. In practice, however, one has to estimate the error entropy from the samples since in general the analytical evaluation of error entropy is not possible. By the Parzen windowing approach, the estimated error entropy converges asymptotically to the entropy of the error plus an indepe...

  5. Tackling uncertainties and errors in the satellite monitoring of forest cover change

    Science.gov (United States)

    Song, Kuan

    This study aims at improving the reliability of automatic forest change detection. Forest change detection is of vital importance for understanding global land cover as well as the carbon cycle. Remote sensing and machine learning have been widely adopted for such studies with increasing degrees of success. However, contemporary global studies still suffer from lower-than-satisfactory accuracies and robustness problems whose causes were largely unknown. Global geographical observations are complex, as a result of the hidden interweaving geographical processes. Is it possible that some geographical complexities were not expected in contemporary machine learning? Could they cause uncertainties and errors when contemporary machine learning theories are applied for remote sensing? This dissertation adopts the philosophy of error elimination. We start by explaining the mathematical origins of possible geographic uncertainties and errors in chapter two. Uncertainties are unavoidable but might be mitigated. Errors are hidden but might be found and corrected. Then in chapter three, experiments are specifically designed to assess whether or not the contemporary machine learning theories can handle these geographic uncertainties and errors. In chapter four, we identify an unreported systemic error source: the proportion distribution of classes in the training set. A subsequent Bayesian Optimal solution is designed to combine Support Vector Machine and Maximum Likelihood. Finally, in chapter five, we demonstrate how this type of error is widespread not just in classification algorithms, but also embedded in the conceptual definition of geographic classes before classification. In chapter six, the sources of errors and uncertainties and their solutions are summarized, with theoretical implications for future studies. The most important finding is, how we design a classification largely pre-determines the "scientific conclusions" we eventually get from the classification of
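    The training-set class-proportion problem identified above can be pictured as a prior-mismatch issue: if the proportions of classes in the training sample differ from those in the scene being classified, the classifier's posterior probabilities can be rescaled by the ratio of the assumed priors. The snippet below is a generic Bayesian prior-correction illustration, not the specific SVM/Maximum Likelihood combination developed in the dissertation.

```python
# Hedged sketch: correcting classifier posteriors for a mismatch between
# training-set class proportions and the (estimated) true scene proportions.
import numpy as np

def reweight_posteriors(posteriors, train_priors, scene_priors):
    """Rescale P(class | x) by scene_prior / train_prior and renormalize rows."""
    posteriors = np.asarray(posteriors, dtype=float)
    adjusted = posteriors * (np.asarray(scene_priors) / np.asarray(train_priors))
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# A pixel scored as 60% forest / 40% non-forest by a classifier trained on a
# balanced sample, in a scene believed to be roughly 80% forest.
p = [[0.6, 0.4]]
print(reweight_posteriors(p, train_priors=[0.5, 0.5], scene_priors=[0.8, 0.2]))
```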

  6. Design and scheduling for periodic concurrent error detection and recovery in processor arrays

    Science.gov (United States)

    Wang, Yi-Min; Chung, Pi-Yu; Fuchs, W. Kent

    1992-01-01

    Periodic application of time-redundant error checking provides the trade-off between error detection latency and performance degradation. The goal is to achieve high error coverage while satisfying performance requirements. We derive the optimal scheduling of checking patterns in order to uniformly distribute the available checking capability and maximize the error coverage. Synchronous buffering designs using data forwarding and dynamic reconfiguration are described. Efficient single-cycle diagnosis is implemented by error pattern analysis and direct-mapped recovery cache. A rollback recovery scheme using start-up control for local recovery is also presented.

  7. A Synthetic Error Analysis of Positioning Equation for Airborne Three-Dimensional Laser Imaging Sensor

    CERN Document Server

    Jiang, Yuesong; Chen, Ruiqiang; Wang, Yanling

    2011-01-01

    This paper presents an exact error analysis of the point positioning equation used for an airborne three-dimensional (3D) laser imaging sensor. Using differential calculus and principles of precision analysis, a mathematical formula relating the point position error to the relevant factors is derived to show how each error source affects both vertical and horizontal coordinates. A comprehensive analysis of the related error sources and their achievable accuracy is provided. Finally, two example figures drawn under the same error sources are shown to compare the position accuracy of the elliptical-trace scan and the line-trace scan, and some corresponding conclusions are given.

  8. Most Used Rock Mass Classifications for Underground Opening

    Directory of Open Access Journals (Sweden)

    Al-Jbori A’ssim

    2010-01-01

    Full Text Available Problem statement: Rock mass characterization is an integral part of rock engineering practice. The empirical design methods based on rock mass classification systems provide quick assessments of the support requirements for underground excavations at any stage of a project, even if the available geotechnical data are limited. The underground excavation industry tends to lean on empirical approaches such as rock mass classification methods, which provide a rapid means of assessing rock mass quality and support requirements. Approach: There are several classification systems used in underground construction design. This study reviewed and summarized the most used classification methods in mining and tunneling. Results: The research collected the underground excavation classification methods together with the parameter calculation procedures for each one, in an attempt to find the simplest, least costly and most efficient method. Conclusion: The study concluded with reference to errors that may arise under particular conditions, and noted that the choice of rock mass classification depends on the sensitivity of the project, its costs and the required efficiency.

  9. Generalization performance of graph-based semisupervised classification

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Semi-supervised learning has been of growing interest over the past few years and many methods have been proposed. Although various algorithms are provided to implement semi-supervised learning, there are still gaps in our understanding of the dependence of generalization error on the numbers of labeled and unlabeled data. In this paper, we consider a graph-based semi-supervised classification algorithm and establish its generalization error bounds. Our results show the close relations between the generalization performance and the structural invariants of the data graph.

  10. Reader error during CT colonography: causes and implications for training

    International Nuclear Information System (INIS)

    This study investigated the variability in baseline computed tomography colonography (CTC) performance using untrained readers by documenting sources of error to guide future training requirements. Twenty CTC endoscopically validated data sets containing 32 polyps were consensus read by three unblinded radiologists experienced in CTC, creating a reference standard. Six readers without prior CTC training [four residents and two board-certified subspecialty gastrointestinal (GI) radiologists] read the 20 cases. Readers drew a region of interest (ROI) around every area they considered a potential colonic lesion, even if subsequently dismissed, before creating a final report. Using this final report, reader ROIs were classified as true positive detections, true negatives correctly dismissed, true detections incorrectly dismissed (i.e., classification error), or perceptual errors. Detection of polyps 1-5 mm, 6-9 mm, and ≥10 mm ranged from 7.1% to 28.6%, 16.7% to 41.7%, and 16.7% to 83.3%, respectively. There was no significant difference in polyp detection or false positives between the GI radiologists and the residents (p=0.67, p=0.4 respectively). Most missed polyps were due to failure of detection rather than characterization (range 82-95%). Untrained reader performance is variable but generally poor. Most missed polyps are due to perceptual error rather than characterization, suggesting basic training should focus heavily on lesion detection. (orig.)

  11. On the Modeling of Error Functions as High Dimensional Landscapes for Weight Initialization in Learning Networks

    CERN Document Server

    Julius,; T., Sumana; Adityakrishna, C S

    2016-01-01

    Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.

  12. QuorUM: An Error Corrector for Illumina Reads.

    Directory of Open Access Journals (Sweden)

    Guillaume Marçais

    Full Text Available Illumina sequencing data can provide high coverage of a genome by relatively short (most often 100 bp to 150 bp) reads at a low cost. Even with a low (advertised 1%) error rate, 100× coverage Illumina data on average has an error in some read at every base in the genome. These errors make handling the data more complicated because they result in a large number of low-count erroneous k-mers in the reads. However, there is enough information in the reads to correct most of the sequencing errors, thus making subsequent use of the data (e.g. for mapping or assembly) easier. Here we use the term "error correction" to denote the reduction in errors due to both changes in individual bases and trimming of unusable sequence. We developed an error correction software package called QuorUM. QuorUM is mainly aimed at error correcting Illumina reads for subsequent assembly. It is designed around the novel idea of minimizing the number of distinct erroneous k-mers in the output reads while preserving the most true k-mers, and we introduce a composite statistic π that measures how successful we are at achieving this dual goal. We evaluate the performance of QuorUM by correcting actual Illumina reads from genomes for which a reference assembly is available. We produce trimmed and error-corrected reads that result in assemblies with longer contigs and fewer errors. We compared QuorUM against several published error correctors and found that it is the best performer in most metrics we use. QuorUM is efficiently implemented, making use of current multi-core computing architectures, and it is suitable for large data sets (1 billion bases checked and corrected per day per core). We also demonstrate that a third-party assembler (SOAPdenovo) benefits significantly from using QuorUM error-corrected reads. QuorUM error-corrected reads result in a factor of 1.1 to 4 improvement in N50 contig size compared to using the original reads with SOAPdenovo for the data sets investigated.
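    The notion of low-count erroneous k-mers can be illustrated with a tiny k-mer counter: in deep-coverage data, k-mers seen only once or twice are far more likely to contain a sequencing error than k-mers seen many times. The snippet below is a toy illustration of that idea, not QuorUM's actual algorithm, and the reads and threshold are made up.

```python
# Hedged sketch: counting k-mers in reads and flagging low-count ones as
# probable error k-mers. A toy illustration, not QuorUM itself.
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = [
    "ACGTACGTAC",
    "ACGTACGTAC",
    "ACGTACCTAC",   # single-base difference, producing rare k-mers
]
counts = kmer_counts(reads, k=5)
suspect = {kmer for kmer, count in counts.items() if count == 1}
print("likely erroneous k-mers:", sorted(suspect))
```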

  13. Soil Classification Using GATree

    CERN Document Server

    Bhargavi, P

    2010-01-01

    This paper details the application of a genetic programming framework for building classification decision trees from soil profile data in order to classify soil texture. The database contains measurements of soil profile data. We have applied GATree for generating the classification decision tree. GATree is a decision tree builder that is based on Genetic Algorithms (GAs). The idea behind it is rather simple but powerful. Instead of using statistical metrics that are biased towards specific trees, we use a more flexible, global metric of tree quality that tries to optimize both accuracy and size. GATree offers some unique features not found in any other tree inducers, while at the same time it can produce better results for many difficult problems. Experimental results are presented which illustrate the performance of generating the best decision tree for classifying soil texture on the soil data set.

  14. What Is a Reading Error?

    Science.gov (United States)

    Labov, William; Baker, Bettina

    2010-01-01

    Early efforts to apply knowledge of dialect differences to reading stressed the importance of the distinction between differences in pronunciation and mistakes in reading. This study develops a method of estimating the probability that a given oral reading that deviates from the text is a true reading error by observing the semantic impact of the…

  15. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which have been considered as typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  16. Quantum Convolutional Error Correction Codes

    OpenAIRE

    Chau, H. F.

    1998-01-01

    I report two general methods to construct quantum convolutional codes for quantum registers with internal $N$ states. Using one of these methods, I construct a quantum convolutional code of rate 1/4 which is able to correct one general quantum error for every eight consecutive quantum registers.

  17. Error processing in Huntington's disease.

    Directory of Open Access Journals (Sweden)

    Christian Beste

    Full Text Available BACKGROUND: Huntington's disease (HD) is a genetic disorder expressed by a degeneration of the basal ganglia, inter alia accompanied by dopaminergic alterations. These dopaminergic alterations are related to genetic factors, i.e., CAG-repeat expansion. The error(-related) negativity (Ne/ERN), a cognitive event-related potential related to performance monitoring, is generated in the anterior cingulate cortex (ACC) and is supposed to depend on the dopaminergic system. The Ne is reduced in Parkinson's disease (PD). Due to a dopaminergic deficit in HD, a reduction of the Ne is also likely. Furthermore, it is assumed that movement dysfunction emerges as a consequence of dysfunctional error-feedback processing. Since dopaminergic alterations are related to the CAG-repeat, a Ne reduction may furthermore also be related to the genetic disease load. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the error negativity (Ne) in a speeded reaction task under consideration of the underlying genetic abnormalities. HD patients showed a specific reduction in the Ne, which suggests impaired error processing in these patients. Furthermore, the Ne was closely related to CAG-repeat expansion. CONCLUSIONS/SIGNIFICANCE: The reduction of the Ne is likely to be an effect of the dopaminergic pathology. The result resembles findings in Parkinson's disease. As such, the Ne might be a measure for the integrity of striatal dopaminergic output function. The relation to the CAG-repeat expansion indicates that the Ne could serve as a gene-associated "cognitive" biomarker in HD.

  18. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  19. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function) or a function may show up in unexpected forms. In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some of the potential of SLA-oriented (non-error-based) tagging will possibly be made clearer.

  20. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  1. Input/output error analyzer

    Science.gov (United States)

    Vaughan, E. T.

    1977-01-01

    Program aids in equipment assessment. Independent assembly-language utility program is designed to operate under level 27 or 31 of EXEC 8 Operating System. It scans user-selected portions of the system log file, whether located on tape or mass storage, and searches for and processes I/O error (type 6) entries.

  2. Errors in airborne flux measurements

    Science.gov (United States)

    Mann, Jakob; Lenschow, Donald H.

    1994-07-01

    We present a general approach for estimating systematic and random errors in eddy correlation fluxes and flux gradients measured by aircraft in the convective boundary layer as a function of the length of the flight leg, or of the cutoff wavelength of a highpass filter. The estimates are obtained from empirical expressions for various length scales in the convective boundary layer and they are experimentally verified using data from the First ISLSCP (International Satellite Land Surface Climatology Experiment) Field Experiment (FIFE), the Air Mass Transformation Experiment (AMTEX), and the Electra Radome Experiment (ELDOME). We show that the systematic flux and flux gradient errors can be important if fluxes are calculated from a set of several short flight legs or if the vertical velocity and scalar time series are high-pass filtered. While the systematic error of the flux is usually negative, that of the flux gradient can change sign. For example, for temperature flux divergence the systematic error changes from negative to positive about a quarter of the way up in the convective boundary layer.

  3. Measurement error in geometric morphometrics.

    Science.gov (United States)

    Fruciano, Carmelo

    2016-06-01

    Geometric morphometrics – a set of methods for the statistical analysis of shape once saluted as a revolutionary advancement in the analysis of morphology – is now mature and routinely used in ecology and evolution. However, a factor often disregarded in empirical studies is the presence and the extent of measurement error. This is potentially a very serious issue because random measurement error can inflate the amount of variance and, since many statistical analyses are based on the amount of "explained" relative to "residual" variance, can result in loss of statistical power. On the other hand, systematic bias can affect statistical analyses by biasing the results (i.e. variation due to bias is incorporated in the analysis and treated as biologically-meaningful variation). Here, I briefly review common sources of error in geometric morphometrics. I then review the most commonly used methods to measure and account for both random and non-random measurement error, providing a worked example using a real dataset. PMID:27038025
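    One of the standard tools alluded to above for quantifying random measurement error is repeatability: each specimen is measured at least twice and the between-specimen variance is compared with the within-specimen (error) variance through a one-way ANOVA. The sketch below computes this for simulated duplicate measurements of a single trait; it is a generic illustration, not the landmark-based worked example from the paper.

```python
# Hedged sketch: repeatability (intraclass correlation) from duplicate measurements.
import numpy as np

rng = np.random.default_rng(1)
n_specimens, n_replicates = 30, 2
true_size = rng.normal(10.0, 1.0, n_specimens)                       # biological variation
noise = rng.normal(0.0, 0.3, (n_specimens, n_replicates))            # random measurement error
measured = true_size[:, None] + noise

# One-way ANOVA variance components, with specimens as groups.
group_means = measured.mean(axis=1)
grand_mean = measured.mean()
ms_among = n_replicates * np.sum((group_means - grand_mean) ** 2) / (n_specimens - 1)
ms_within = np.sum((measured - group_means[:, None]) ** 2) / (n_specimens * (n_replicates - 1))
s2_among = (ms_among - ms_within) / n_replicates
repeatability = s2_among / (s2_among + ms_within)
print(f"repeatability (ICC): {repeatability:.3f}")
```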

  4. Amplify Errors to Minimize Them

    Science.gov (United States)

    Stewart, Maria Shine

    2009-01-01

    In this article, the author offers her experience of modeling mistakes and writing spontaneously in the computer classroom to get students' attention and elicit their editorial response. She describes how she taught her class about major sentence errors--comma splices, run-ons, and fragments--through her Sentence Meditation exercise, a rendition…

  5. Reduced discretization error in HZETRN

    Science.gov (United States)

    Slaba, Tony C.; Blattnig, Steve R.; Tweed, John

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm2 exposed to both solar particle event and galactic cosmic ray environments.

  6. Classification of nanopolymers

    Energy Technology Data Exchange (ETDEWEB)

    Larena, A; Tur, A [Department of Chemical Industrial Engineering and Environment, Universidad Politecnica de Madrid, E.T.S. Ingenieros Industriales, C/ Jose Gutierrez Abascal, Madrid (Spain); Baranauskas, V [Faculdade de Engenharia Eletrica e Computacao, Departamento de Semicondutores, Instrumentos e Fotonica, Universidade Estadual de Campinas, UNICAMP, Av. Albert Einstein N.400, 13 083-852 Campinas SP Brasil (Brazil)], E-mail: alarena@etsii.upm.es

    2008-03-15

    Nanopolymers with different structures, shapes, and functional forms have recently been prepared using several techniques. Nanopolymers are the most promising basic building blocks for mounting complex and simple hierarchical nanosystems. The applications of nanopolymers are extremely broad and polymer-based nanotechnologies are fast emerging. We propose a nanopolymer classification scheme based on self-assembled structures, non self-assembled structures, and on the number of dimensions in the nanometer range (nD)

  7. Qatar content classification

    OpenAIRE

    Handosa, Mohamed

    2014-01-01

    Short title: Qatar content classification. Long title: Develop methods and software for classifying Arabic texts into a taxonomy using machine learning. Contact person and their contact information: Tarek Kanan, . Project description: Starting 4/1/2012, and running through 12/31/2015, is a project to advance digital libraries in the country of Qatar. This is led by VT, but also involves Penn State, Texas A&M, and Qatar University. Tarek is a GRA on this effort. His di...

  8. Evolvement of Classification Society

    Institute of Scientific and Technical Information of China (English)

    Xu Hua

    2011-01-01

    As an independent industry, the classification society perhaps first emerged out of the mutual interests of shipowners, cargo owners and insurers. Today, as an indispensable link of the international maritime industry, the role of class has changed fundamentally. Starting off from the demand of the insurers: seaborne trade, transport and insurance industries began to emerge successively in the 17th century. The massive risk and benefit brought by seaborne transport presented a difficult problem to insurers.

  9. Estuary Classification Revisited

    OpenAIRE

    Guha, Anirban; Lawrence, Gregory A.

    2012-01-01

    This paper presents the governing equations of a tidally-averaged, width-averaged, rectangular estuary in completely nondimensionalized forms. Subsequently, we discover that the dynamics of an estuary is entirely controlled by only two variables: (i) the Estuarine Froude number, and (ii) a nondimensional number related to the Estuarine Aspect ratio and the Tidal Froude number. Motivated by this new observation, the problem of estuary classification is re-investigated. Our analysis shows that ...

  10. Classification of myocardial infarction

    DEFF Research Database (Denmark)

    Saaby, Lotte; Poulsen, Tina Svenstrup; Hosbond, Susanne Elisabeth;

    2013-01-01

    The classification of myocardial infarction into 5 types was introduced in 2007 as an important component of the universal definition. In contrast to the plaque rupture-related type 1 myocardial infarction, type 2 myocardial infarction is considered to be caused by an imbalance between demand and supply of oxygen in the myocardium. However, no specific criteria for type 2 myocardial infarction have been established.

  11. Effects of Classroom Sociometric Status on Achievement Prediction.

    Science.gov (United States)

    Peper, John B.

    The purpose of the study was to determine the relative importance of: (1) generalized ability; (2) prior specific learning; (3) self concept; (4) peer esteem; and (5) teacher esteem for pupils on the prediction of arithmetic achievement. The study included proportional numbers of fifth grade students from four community classification strata…

  12. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes.

  13. Research on new software compensation method of static and quasi-static errors for precision motion controller

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    To reduce the mechanical vibrations induced by the compensation of large errors, a new software compensation method for static and quasi-static errors of machine tools, based on an improved digital differential analyzer (DDA) interpolator, is proposed. Based on the principle of the traditional DDA interpolator, the DDA interpolator is divided into a command generator and a command analyzer. Three types of errors are distinguished, according to the relative positions of compensation points and interpolation segments. Following this classification, errors are distributed evenly in data processing and compensated to the appropriate interpolation segments in machining. On-line implementation results show that the proposed approach greatly improves the positioning accuracy of computer numerical control (CNC) machine tools.

  14. Short Text Classification: A Survey

    Directory of Open Access Journals (Sweden)

    Ge Song

    2014-05-01

    Full Text Available With the recent explosive growth of e-commerce and online communication, a new genre of text, short text, has been extensively applied in many areas. Many research efforts consequently focus on short text mining. It is a challenge to classify short text owing to its natural characteristics, such as sparseness, large scale, immediacy and non-standardization. It is difficult for traditional methods to deal with short text classification mainly because the limited number of words in a short text cannot adequately represent the feature space or the relationship between words and documents. Several studies and reviews on text classification have appeared in recent times. However, only a few of them focus on short text classification. This paper discusses the characteristics of short text and the difficulty of short text classification. Then we introduce the existing popular work on short text classifiers and models, including short text classification using semantic analysis, semi-supervised short text classification, ensemble short text classification, and real-time classification. The evaluation of short text classification is also analyzed in this paper. Finally, we summarize the existing classification technology and discuss prospective development trends for short text classification.

  15. Histologic classification of gliomas.

    Science.gov (United States)

    Perry, Arie; Wesseling, Pieter

    2016-01-01

    Gliomas form a heterogeneous group of tumors of the central nervous system (CNS) and are traditionally classified based on histologic type and malignancy grade. Most gliomas, the diffuse gliomas, show extensive infiltration in the CNS parenchyma. Diffuse gliomas can be further typed as astrocytic, oligodendroglial, or rare mixed oligodendroglial-astrocytic of World Health Organization (WHO) grade II (low grade), III (anaplastic), or IV (glioblastoma). Other gliomas generally have a more circumscribed growth pattern, with pilocytic astrocytomas (WHO grade I) and ependymal tumors (WHO grade I, II, or III) as the most frequent representatives. This chapter provides an overview of the histology of all glial neoplasms listed in the WHO 2016 classification, including the less frequent "nondiffuse" gliomas and mixed neuronal-glial tumors. For multiple decades the histologic diagnosis of these tumors formed a useful basis for assessment of prognosis and therapeutic management. However, it is now fully clear that information on the molecular underpinnings often allows for a more robust classification of (glial) neoplasms. Indeed, in the WHO 2016 classification, histologic and molecular findings are integrated in the definition of several gliomas. As such, this chapter and Chapter 6 are highly interrelated and neither should be considered in isolation. PMID:26948349

  16. Classification of Meteorological Drought

    Institute of Scientific and Technical Information of China (English)

    Zhang Qiang; Zou Xukai; Xiao Fengjin; Lu Houquan; Liu Haibo; Zhu Changhan; An Shunqing

    2011-01-01

    Background The national standard of the Classification of Meteorological Drought (GB/T 20481-2006) was developed by the National Climate Center in cooperation with the Chinese Academy of Meteorological Sciences, the National Meteorological Centre and the Department of Forecasting and Disaster Mitigation under the China Meteorological Administration (CMA), and was formally released and implemented in November 2006. In 2008, this Standard won the second prize of the China Standard Innovation and Contribution Awards issued by SAC. Developed through independent innovation, it is the first national standard published to monitor meteorological drought disaster and the first standard in China and around the world specifying the classification of drought. Since its release in 2006, the national standard of Classification of Meteorological Drought has been used by CMA as the operational index to monitor and assess drought, has gradually been adopted by provincial meteorological bureaus, and has been applied to the drought early warning release standard in the Methods of Release and Propagation of Meteorological Disaster Early Warning Signal.

  17. A constructive error climate as an element of effective learning environments

    Directory of Open Access Journals (Sweden)

    Gabriele Steuer

    2015-06-01

    Full Text Available Although making errors while learning is common, it is also frequently perceived by students as something negative and shameful, and experienced as a potential threat to self-worth. These perceptions often prevent students from regarding errors as learning opportunities. The result is that the potential to learn from them – which is inherent to errors – is not being realized. However, a favorable error climate can support learning from errors and hence foster learning progress. Based on earlier work, our intent was to analyze the factor structure of a classroom's error climate (Steuer, Rosentritt-Brunn, & Dresel, 2013). A second aim was to explore different error climate patterns. Finally, we were interested in the interrelations between error climate and student performance in mathematics. These aspects were investigated in a study with N = 1,525 students from 90 classrooms in German secondary schools in the subject of mathematics. Results were consistent with the presumed factor structure of error climate. Moreover, the results showed a set of three clusters of classrooms with distinct error climates. These clusters additionally support the assumption that differentiating separate error climate subdimensions is important. Furthermore, the analyses revealed interrelations between error climate and achievement in mathematics. Here as well, a set of specific subdimensions seems to be related to learning from errors at school.

  18. ERROR CONVERGENCE ANALYSIS FOR LOCAL HYPERTHERMIA APPLICATIONS

    Directory of Open Access Journals (Sweden)

    NEERU MALHOTRA

    2016-01-01

    Full Text Available The accuracy of a numerical solution for an electromagnetic problem is greatly influenced by the convergence of the solution obtained. In order to quantify the correctness of the numerical solution, the errors produced in solving the partial differential equations are required to be analyzed. Mesh quality is another parameter that affects convergence. The various quality metrics are dependent on the type of solver used for numerical simulation. The paper focuses on comparing the performance of iterative solvers used in COMSOL Multiphysics software. The modeling of a coaxially coupled waveguide applicator operating at 485 MHz has been done for local hyperthermia applications using the adaptive finite element method. The 3D heat distribution within the muscle phantom, depicting a spherical lesion and a localized heating pattern, confirms the proper selection of the solver. The convergence plots are obtained during simulation of the problem using GMRES (generalized minimal residual) and geometric multigrid linear iterative solvers. The best error convergence is achieved by using the nonlinear multigrid solver and further introducing adaptivity in the nonlinear solver.

  19. A Classification-based Review Recommender

    Science.gov (United States)

    O'Mahony, Michael P.; Smyth, Barry

    Many online stores encourage their users to submit product/service reviews in order to guide future purchasing decisions. These reviews are often listed alongside product recommendations but, to date, limited attention has been paid as to how best to present these reviews to the end-user. In this paper, we describe a supervised classification approach that is designed to identify and recommend the most helpful product reviews. Using the TripAdvisor service as a case study, we compare the performance of several classification techniques using a range of features derived from hotel reviews. We then describe how these classifiers can be used as the basis for a practical recommender that automatically suggests the most helpful contrasting reviews to end-users. We present an empirical evaluation which shows that our approach achieves a statistically significant improvement over alternative review ranking schemes.
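    A minimal sketch of a supervised helpful-review classifier in the spirit of the approach above, using bag-of-words features and a linear model to rank candidate reviews by predicted helpfulness. The example reviews, labels and feature choice are assumptions for illustration; the paper derives a richer feature set from TripAdvisor hotel reviews.

```python
# Hedged sketch: ranking reviews by predicted helpfulness with a linear text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great location, detailed description of rooms, breakfast and transport options.",
    "Bad.",
    "Staff were helpful; the review explains pricing, noise levels and check-in clearly.",
    "Would not stay again!!!",
]
helpful = [1, 0, 1, 0]   # illustrative labels, e.g. derived from helpfulness votes

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, helpful)

# Rank unseen reviews by predicted probability of being helpful.
candidates = ["Clean room, quiet street, fair price, friendly reception.", "Meh."]
scores = model.predict_proba(candidates)[:, 1]
print(sorted(zip(scores, candidates), reverse=True))
```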

  20. Musical Instrument Timbres Classification with Spectral Features

    Directory of Open Access Journals (Sweden)

    Agostini Giulio

    2003-01-01

    Full Text Available A set of features is evaluated for recognition of musical instruments out of monophonic musical signals. Aiming to achieve a compact representation, the adopted features regard only spectral characteristics of sound and are limited in number. On top of these descriptors, various classification methods are implemented and tested. Over a dataset of 1007 tones from 27 musical instruments, support vector machines and quadratic discriminant analysis show comparable results with success rates close to 70% of successful classifications. Canonical discriminant analysis never had momentous results, while nearest neighbours performed on average among the employed classifiers. Strings have been the most misclassified instrument family, while very satisfactory results have been obtained with brass and woodwinds. The most relevant features are demonstrated to be the inharmonicity, the spectral centroid, and the energy contained in the first partial.
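    Two of the features named above, the spectral centroid and the energy in the first partial, are straightforward to compute from a magnitude spectrum. The sketch below shows one common way to do so with NumPy on a synthetic tone; it illustrates the feature definitions rather than the paper's exact implementation.

```python
# Hedged sketch: spectral centroid and first-partial energy of a synthetic tone.
import numpy as np

fs = 44100
t = np.arange(0, 0.5, 1.0 / fs)
# Toy "instrument" tone: fundamental at 440 Hz plus two weaker partials.
signal = (1.0 * np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 880 * t)
          + 0.25 * np.sin(2 * np.pi * 1320 * t))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

# Spectral centroid: magnitude-weighted mean frequency.
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)

# Energy contained in the first partial (here: a narrow band around 440 Hz).
band = (freqs > 400) & (freqs < 480)
first_partial_energy = np.sum(spectrum[band] ** 2) / np.sum(spectrum ** 2)

print(f"spectral centroid: {centroid:.1f} Hz")
print(f"relative energy in first partial: {first_partial_energy:.2f}")
```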

  1. Automated spectral classification using template matching

    Institute of Scientific and Technical Information of China (English)

    Fu-Qing Duan; Rong Liu; Ping Guo; Ming-Quan Zhou; Fu-Chao Wu

    2009-01-01

    An automated spectral classification technique for large sky surveys is proposed. We firstly perform spectral line matching to determine redshift candidates for an observed spectrum, and then estimate the spectral class by measuring the similarity between the observed spectrum and the shifted templates for each redshift candidate. As a byproduct of this approach, the spectral redshift can also be obtained with high accuracy. Compared with some approaches based on computerized learning methods in the literature, the proposed approach needs no training, which is time-consuming and sensitive to selection of the training set. Both simulated data and observed spectra are used to test the approach; the results show that the proposed method is efficient, and it can achieve a correct classification rate as high as 92.9%, 97.9% and 98.8% for stars, galaxies and quasars, respectively.
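    The matching step described above can be pictured as shifting rest-frame templates over a grid of candidate redshifts and keeping the template/redshift pair with the best similarity score. The sketch below does this for toy spectra using a simple correlation measure; the templates, redshift grid and score are illustrative assumptions rather than the authors' pipeline.

```python
# Hedged sketch: classify a toy spectrum by correlating it against redshifted templates.
import numpy as np

wavelengths = np.linspace(4000, 9000, 2000)      # observed-frame grid (Angstrom)

def line(wl, centre, depth=0.5, width=15.0):
    """A single Gaussian absorption (depth > 0) or emission (depth < 0) feature."""
    return -depth * np.exp(-0.5 * ((wl - centre) / width) ** 2)

# Toy rest-frame templates: a sloped continuum plus one characteristic feature each.
templates = {
    "star":   lambda wl: 1.0 + 0.3 * wl / 6000.0 + line(wl, 6563),          # H-alpha absorption
    "galaxy": lambda wl: 1.0 - 0.2 * wl / 6000.0 + line(wl, 3933),          # Ca II K absorption
    "quasar": lambda wl: 1.0 + 0.1 * wl / 6000.0 + line(wl, 2798, -0.8),    # Mg II emission
}

def classify(observed, z_grid):
    """Return the (class, redshift) pair whose shifted template correlates best."""
    best = None
    for name, template in templates.items():
        for z in z_grid:
            model = template(wavelengths / (1.0 + z))     # template as seen at redshift z
            score = np.corrcoef(observed, model)[0, 1]
            if best is None or score > best[0]:
                best = (score, name, round(float(z), 2))
    return best[1:]

rng = np.random.default_rng(2)
observed = templates["galaxy"](wavelengths / 1.3) + rng.normal(0, 0.02, wavelengths.size)
print(classify(observed, z_grid=np.arange(0.0, 1.0, 0.05)))   # expect roughly ('galaxy', 0.3)
```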

  2. Urdu Text Classification using Majority Voting

    Directory of Open Access Journals (Sweden)

    Muhammad Usman

    2016-08-01

    Full Text Available Text classification is a tool to assign predefined categories to text documents using supervised machine learning algorithms. It has various practical applications like spam detection, sentiment detection, and detection of a natural language. Based on this idea, we applied five well-known classification techniques to an Urdu language corpus and assigned a class to each document using majority voting. The corpus contains 21,769 news documents of seven categories (Business, Entertainment, Culture, Health, Sports, and Weird). The algorithms were not able to work directly on the data, so we applied preprocessing techniques like tokenization, stop word removal and a rule-based stemmer. After preprocessing, 93,400 features were extracted from the data to apply machine learning algorithms. Furthermore, we achieved up to 94% precision and recall using majority voting.
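    The majority-voting step can be reproduced generically with scikit-learn's VotingClassifier, which assigns each document the class predicted by most of the base classifiers. The tiny English corpus and the choice of three base classifiers below are assumptions for illustration; the study applies five classifiers to a tokenized and stemmed Urdu corpus.

```python
# Hedged sketch: hard majority voting over several text classifiers.
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

docs = [
    "stock markets rallied after the earnings report",
    "the striker scored twice in the final match",
    "central bank raises interest rates again",
    "the team won the championship on penalties",
]
labels = ["business", "sports", "business", "sports"]

voter = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("lr", LogisticRegression(max_iter=1000)),
            ("svm", LinearSVC()),
        ],
        voting="hard",   # each classifier casts one vote; the majority wins
    ),
)
voter.fit(docs, labels)
print(voter.predict(["goalkeeper saves a late penalty", "quarterly profits beat forecasts"]))
```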

  3. Error Locked Encoder and Decoder for Nanomemory Application

    Directory of Open Access Journals (Sweden)

    Y. Sharath

    2014-03-01

    Full Text Available Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10⁻¹⁸ upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10¹¹ bit/cm² with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
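    The fault-secure detection idea above rests on computing a syndrome from a parity-check matrix: a nonzero syndrome flags a corrupted codeword. The sketch below shows the syndrome computation for a small (7,4) Hamming-style toy code, which is only a stand-in for the much larger EG-LDPC codes discussed in the paper.

```python
# Hedged sketch: detecting a corrupted codeword via its parity-check syndrome.
# The (7,4) Hamming-style code here is a toy stand-in for EG-LDPC codes.
import numpy as np

H = np.array([  # parity-check matrix
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def syndrome(word):
    return H @ np.asarray(word) % 2

codeword = np.array([1, 0, 1, 1, 0, 1, 0])
assert not syndrome(codeword).any()         # a valid codeword has zero syndrome

corrupted = codeword.copy()
corrupted[4] ^= 1                           # single bit flip
print("syndrome of corrupted word:", syndrome(corrupted))   # nonzero -> error detected
```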

  4. Medication Distribution in Hospital: Errors Observed X Errors Perceived

    OpenAIRE

    De Silva, G.N.; M. A. R. Rissato; N. S. Romano-Lieber

    2013-01-01

    Abstract: The aim of the present study was to compare errors committed in the distribution of medications at a hospital pharmacy with those perceived by staff members involved in the distribution process. Medications distributed to the medical and surgical wards were analyzed. The drugs were dispensed in individualized doses per patient, separated by administration time in boxes or plastic bags for 24 hours of care and using the carbon copy of the prescription. Nineteen staff members involved in t...

  5. High dimensional multiclass classification with applications to cancer diagnosis

    DEFF Research Database (Denmark)

    Vincent, Martin

    Probabilistic classifiers are introduced and it is shown that the only regular linear probabilistic classifier with convex risk is multinomial regression. Penalized empirical risk minimization is introduced and used to construct supervised learning methods for probabilistic classifiers. A sparse group lasso penalized approach to high dimensional multinomial classification is presented. On different real data examples it is found that this approach clearly outperforms multinomial lasso in terms of error rate and features included in the model. An efficient coordinate descent algorithm...

  6. Optimized features selection for gender classification using optimization algorithms

    OpenAIRE

    KHAN, Sajid Ali; Nazir, Muhammad; RIAZ, Naveed

    2013-01-01

    Optimized feature selection is an important task in gender classification. The optimized features not only reduce the dimensions, but also reduce the error rate. In this paper, we have proposed a technique for the extraction of facial features using both appearance-based and geometric-based feature extraction methods. The extracted features are then optimized using particle swarm optimization (PSO) and the bee algorithm. The geometric-based features are optimized by PSO with ensem...

  7. Methods for data classification

    Science.gov (United States)

    Garrity, George; Lilburn, Timothy G.

    2011-10-11

    The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

  8. Classification system for reporting events involving human malfunctions

    International Nuclear Information System (INIS)

    The report describes a set of categories for reporting industrial incidents and events involving human malfunction. The classification system aims at ensuring information adequate for the improvement of human work situations and man-machine interface systems and for attempts to quantify "human error" rates. The classification system has a multifaceted, non-hierarchical structure and its compatibility with Ispra's ERDS classification is described. The collection of the information in general and for quantification purposes is discussed. 24 categories, 12 of which are human factors oriented, are listed with their respective subcategories, and comments are given. Underlying models of human data processes and their typical malfunctions and of a human decision sequence are described. (author)

  9. Explorations in achievement motivation

    Science.gov (United States)

    Helmreich, Robert L.

    1982-01-01

    Recent research on the nature of achievement motivation is reviewed. A three-factor model of intrinsic motives is presented and related to various criteria of performance, job satisfaction and leisure activities. The relationships between intrinsic and extrinsic motives are discussed. Needed areas for future research are described.

  10. Intelligence and Educational Achievement

    Science.gov (United States)

    Deary, Ian J.; Strand, Steve; Smith, Pauline; Fernandes, Cres

    2007-01-01

    This 5-year prospective longitudinal study of 70,000+ English children examined the association between psychometric intelligence at age 11 years and educational achievement in national examinations in 25 academic subjects at age 16. The correlation between a latent intelligence trait (Spearman's "g" from CAT2E) and a latent trait of educational…

  11. Setting and Achieving Objectives.

    Science.gov (United States)

    Knoop, Robert

    1986-01-01

    Provides basic guidelines which school officials and school boards may find helpful in negotiating, establishing, and managing objectives. Discusses characteristics of good objectives, specific and directional objectives, multiple objectives, participation in setting objectives, feedback on goal process and achievement, and managing a school…

  12. Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

    Science.gov (United States)

    Kulkarni, Nilesh V.; Kaneshige, John; Krishnakumar, Kalmanje; Burken, John

    2008-01-01

    This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work towards getting this performance even in the face of modeling uncertainties/errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to the typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.

  13. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
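    The trade-off described above is easy to reproduce: solve a simple initial value problem with Euler's method at decreasing step sizes and compare against the exact solution. The sketch below does so for y' = y, y(0) = 1 in single precision so that accumulated rounding error becomes visible at modest step counts; the problem and precision are chosen purely for illustration.

```python
# Hedged sketch: discretization vs. rounding error in Euler's method for y' = y, y(0) = 1.
# Single precision (float32) is used so that rounding error becomes visible at
# modest step counts; the exact solution at t = 1 is e.
import numpy as np

def euler_float32(n_steps):
    h = np.float32(1.0) / np.float32(n_steps)
    y = np.float32(1.0)
    for _ in range(n_steps):
        y = y + h * y        # Euler update, accumulated entirely in float32
    return y

exact = np.exp(1.0)
for n in [10, 100, 1000, 10_000, 100_000, 1_000_000]:
    err = abs(float(euler_float32(n)) - exact)
    print(f"steps = {n:>9,d}   |error| = {err:.2e}")
# The error first shrinks (discretization dominates) and then stalls or grows
# once accumulated float32 rounding error takes over.
```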

  14. STRUCTURED BACKWARD ERRORS FOR STRUCTURED KKT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Xin-xiu Li; Xin-guo Liu

    2004-01-01

    In this paper we study structured backward errors for some structured KKT systems. Normwise structured backward errors for structured KKT systems are defined, and computable formulae of the structured backward errors are obtained. Simple numerical examples show that the structured backward errors may be much larger than the unstructured ones in some cases.

  15. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Directory of Open Access Journals (Sweden)

    Zeng Bing

    2006-01-01

    Full Text Available This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  16. Error bound results for convex inequality systems via conjugate duality

    CERN Document Server

    Bot, Radu Ioan

    2010-01-01

    The aim of this paper is to implement some new techniques, based on conjugate duality in convex optimization, for proving the existence of global error bounds for convex inequality systems. We deal first of all with systems described via one convex inequality and extend the achieved results, by making use of a celebrated scalarization function, to convex inequality systems expressed by means of a general vector function. We also propose a second approach for guaranteeing the existence of global error bounds of the latter, which meanwhile sharpens the classical result of Robinson.

  17. An automated, real time classification system for biological and anthropogenic sounds from fixed ocean observatories

    OpenAIRE

    Zaugg, Serge Alain; Schaar, Mike van der; Houegnigan, Ludwig; André, Michel

    2010-01-01

    The automated, real time classification of acoustic events in the marine environment is an important tool to study anthropogenic sound pollution, marine mammals and for mitigating human activities that are potentially harmful. We present a real time classification system targeted at many important groups of acoustic events (clicks, buzzes, calls, whistles from several cetacean species, tonal and impulsive shipping noise and explosions). The achieved classification performance ...

  18. Ensemble polarimetric SAR image classification based on contextual sparse representation

    Science.gov (United States)

    Zhang, Lamei; Wang, Xiao; Zou, Bin; Qiao, Zhijun

    2016-05-01

    Polarimetric SAR image interpretation has become one of the most interesting topics, in which the construction of a reasonable and effective technique for image classification is of key importance. Sparse representation represents the data using the most succinct sparse atoms of an over-complete dictionary, and its advantages have also been confirmed in the field of PolSAR classification. However, like any single classifier, it is not perfect in every respect. Ensemble learning is therefore introduced to address this issue: a number of different learners are trained and their individual outputs are combined to obtain more accurate and robust results. This paper presents a polarimetric SAR image classification method based on ensemble learning with sparse representation to achieve optimal classification.

  19. Assessment of optimized Markov models in protein fold classification.

    Science.gov (United States)

    Lampros, Christos; Simos, Thomas; Exarchos, Themis P; Exarchos, Konstantinos P; Papaloukas, Costas; Fotiadis, Dimitrios I

    2014-08-01

    Protein fold classification is a challenging task strongly associated with the determination of proteins' structure. In this work, we tested an optimization strategy on a Markov chain and a recently introduced Hidden Markov Model (HMM) with reduced state-space topology. The proteins with unknown structure were scored against both these models. Then the derived scores were optimized following a local optimization method. The Protein Data Bank (PDB) and the annotation of the Structural Classification of Proteins (SCOP) database were used for the evaluation of the proposed methodology. The results demonstrated that the fold classification accuracy of the optimized HMM was substantially higher compared to that of the Markov chain or the reduced state-space HMM approaches. The proposed methodology achieved an accuracy of 41.4% on fold classification, while Sequence Alignment and Modeling (SAM), which was used for comparison, reached an accuracy of 38%. PMID:25152041

  20. Manson’s triple error

    Directory of Open Access Journals (Sweden)

    Delaporte F.

    2008-09-01

    Full Text Available The author discusses the significance, implications and limitations of Manson’s work. How did Patrick Manson resolve some of the major problems raised by the filarial worm life cycle? The Amoy physician showed that circulating embryos could only leave the blood via the percutaneous route, thereby requiring a bloodsucking insect. The discovery of a new autonomous, airborne, active host undoubtedly had a considerable impact on the history of parasitology, but the way in which Manson formulated and solved the problem of the transfer of filarial worms from the body of the mosquito to man resulted in failure. This article shows how the epistemological transformation operated by Manson was indissociably related to a series of errors and how a major breakthrough can be the result of a series of false proposals and, consequently, that the history of truth often involves a history of error.

  1. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  2. Achieving excellence on shift through teamwork

    International Nuclear Information System (INIS)

    Anyone familiar with the nuclear industry realizes the importance of operators. Operators can achieve error-free plant operations, i.e., excellence on shift through teamwork. As a shift supervisor (senior reactor operator/shift technical advisor) the author went through the plant's first cycle of operations with no scrams and no equipment damaged by operator error, having since changed roles (and companies) to one of assessing plant operations. This change has provided the opportunity to see objectively the importance of operators working together and of the team building and teamwork that contribute to the shift's success. This paper uses examples to show the effectiveness of working together and outlines steps for building a group of operators into a team

  3. Error Vector Normalized Adaptive Algorithm Applied to Adaptive Noise Canceller and System Identification

    OpenAIRE

    Zayed Ramadan

    2010-01-01

    Problem statement: This study introduced a variable step-size Least Mean-Square (LMS) algorithm in which the step-size is dependent on the Euclidean vector norm of the system output error. The error vector includes the last L values of the error, where L is a parameter to be chosen properly together with other parameters in the proposed algorithm to achieve a trade-off between speed of convergence and misadjustment. Approach: The performance of the algorithm was analyzed, ...
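
    The idea of driving the LMS step size with the Euclidean norm of the last L output errors can be sketched as below. The mapping from error-vector energy to step size (a bounded increasing function between mu_min and mu_max) and all constants are assumptions made for illustration, not the cited algorithm's exact update rule.

    ```python
    # Illustrative variable step-size LMS: a large recent error norm pushes the step
    # size towards mu_max (fast convergence), a small norm towards mu_min (low
    # misadjustment). Constants and the exact mapping are assumptions.
    import numpy as np

    def vss_lms(x, d, num_taps=8, L=16, mu_min=0.005, mu_max=0.05, c=4.0):
        w = np.zeros(num_taps)
        err_hist = np.zeros(L)
        for n in range(num_taps, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]    # most recent samples first
            e = d[n] - w @ u                        # a-priori output error
            err_hist = np.roll(err_hist, 1)
            err_hist[0] = e
            nrm2 = err_hist @ err_hist              # energy of the last L errors
            mu = mu_min + (mu_max - mu_min) * nrm2 / (nrm2 + c)
            w += mu * e * u                         # LMS weight update
        return w

    # System identification toy example: recover a short FIR channel from noisy output.
    rng = np.random.default_rng(0)
    h = np.array([0.7, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
    x = rng.standard_normal(5000)
    d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    print(np.round(vss_lms(x, d), 2))
    ```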

  4. Performance analysis of fuzzy rule based classification system for transient identification in nuclear power plant

    International Nuclear Information System (INIS)

    Highlights: • An interpretable fuzzy system with acceptable accuracy can be used in a nuclear power plant. • The system is worthy of being used as a redundant system for transient identification. • The deaerator level gives a quicker response to the fuzzy system for classifying transients in the steam-water system. • Increasing the number of input variables does not necessarily increase the efficiency of a fuzzy system. • Helps in operator guidance by reducing information overload. - Abstract: Although the fuzzy rule based classification system (FRBCS) has been useful in event identification, it faces a strong conflict between good interpretability and an adequate level of accuracy. For classification in a nuclear power plant (NPP), which delivers data within a cycle time of a few milliseconds, either the accuracy or the interpretability of the FRBCS tends to be compromised. Online identification of any abnormality or transient using a FRBCS becomes critical for a plant with such a short cycle time. In such cases, the output of a FRBCS may not be reliable enough to classify the event in every cycle. It is therefore necessary to monitor the output of the classification system over a number of cycles until it becomes stable, which gives a high level of confidence that the classifier output is accurate. A FRBCS can provide this level of confidence by choosing the best input features with high interpretability and acceptable accuracy. Selecting the best features from a large number of input variables and preparing the rule base are, again, critical and challenging tasks in a FRBCS. There is always a dilemma in judiciously choosing the number of input features to achieve an optimally interpretable and accurate fuzzy system. It is generally advisable to select the smallest number of features that keeps the output error within an acceptable margin. Adding extra features, along with the corresponding rules, certainly increases the accuracy

  5. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    OpenAIRE

    Valverde-Albacete, Francisco J.; Carmen Peláez-Moreno

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing the classification error rate, high-accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2-, 3- and 4-class classifiers is dep...
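
    The paradox is easy to reproduce on a toy contingency matrix: a classifier that always predicts the majority class can match a genuinely informative classifier on accuracy while the mutual information carried by its confusion matrix is zero. The snippet below is a schematic illustration of that point only, not the normalized information transfer factor defined in the paper.

    ```python
    # Toy illustration of the accuracy paradox: same accuracy, very different
    # information transfer (values are made up for the demonstration).
    import numpy as np

    def confusion_stats(cm):
        p = np.asarray(cm, dtype=float)
        p /= p.sum()
        px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            mi = np.nansum(p * np.log2(p / (px * py)))   # mutual information in bits
        return np.trace(p), mi                            # (accuracy, MI)

    # 90 instances of class A, 10 of class B.
    always_A    = [[90, 0], [10, 0]]   # predicts A for everything: 90% accuracy, 0 bits
    informative = [[81, 9], [1, 9]]    # also 90% accuracy, but separates the classes
    for name, cm in [("always-A", always_A), ("informative", informative)]:
        acc, mi = confusion_stats(cm)
        print(f"{name:12s} accuracy={acc:.2f}  mutual information={mi:.3f} bits")
    ```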

  6. Large errors and severe conditions

    International Nuclear Information System (INIS)

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probability distributions. Fourth, there are pragmatic reasons for seeking convenient analytical formulas to approximate the 'true' probability distributions of derived parameters generated by Monte Carlo simulation. This paper discusses each of these issues and illustrates the main concepts with realistic examples involving radioactivity decay and nuclear astrophysics
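
    A minimal sketch of the probabilistic treatment argued for here: represent an inherently positive quantity by a lognormal distribution matched to the stated mean and relative error, propagate it through a nonlinear relation by Monte Carlo sampling, and report moments of the derived distribution. The numbers and the particular nonlinear relation are illustrative assumptions only.

    ```python
    # A naive Gaussian model of a positive quantity with a large relative error yields
    # negative samples; a lognormal model does not, and Monte Carlo propagation through
    # a nonlinear relation shows the error amplification. All numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    mean, rel_err, n = 2.0, 0.6, 200_000          # 60% relative uncertainty

    gauss = rng.normal(mean, rel_err * mean, n)
    print("fraction of negative Gaussian samples:", np.mean(gauss < 0))

    # Lognormal with the same mean and variance as the Gaussian above.
    sigma2 = np.log(1 + rel_err**2)
    mu = np.log(mean) - 0.5 * sigma2
    positive = rng.lognormal(mu, np.sqrt(sigma2), n)

    derived = positive ** -2                       # nonlinear relation amplifies the error
    print("derived mean, std:", derived.mean(), derived.std())
    print("relative error amplified to:", derived.std() / derived.mean())
    ```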

  7. On the Classification of Psychology in General Library Classification Schemes.

    Science.gov (United States)

    Soudek, Miluse

    1980-01-01

    Holds that traditional library classification systems are inadequate to handle psychological literature, and advocates the establishment of new theoretical approaches to bibliographic organization. (FM)

  8. Remote Sensing Classification Uncertainty: Validating Probabilistic Pixel Level Classification

    Science.gov (United States)

    Vrettas, Michail; Cornford, Dan; Bastin, Lucy; Pons, Xavier; Sevillano, Eva; Moré, Gerard; Serra, Pere; Ninyerola, Miquel

    2013-04-01

    There already exists an extensive literature on the classification of remotely sensed imagery, and indeed on classification more widely, that considers a wide range of probabilistic and non-probabilistic classification methodologies. Although many probabilistic classification methodologies produce posterior class probabilities per pixel (observation), these are often not communicated at the pixel level, and typically not validated at the pixel level. Most often the probabilistic classification is converted into a hard classification (of the most probable class) and the accuracy of the resulting classification is reported in terms of a global confusion matrix, or some score derived from it. For applications where classification accuracy is spatially variable and where pixel level estimates of uncertainty can be meaningfully exploited in workflows that propagate uncertainty, validating and communicating the pixel level uncertainty opens opportunities for more refined and accountable modelling. In this work we describe our recent work on applying and validating a range of probabilistic classifiers. Using a multi-temporal Landsat data set of the Ebro Delta in Catalonia, which has been carefully radiometrically and geometrically corrected, we present a range of Bayesian classifiers, from simple Bayesian linear discriminant analysis to a complex variational Gaussian process based classifier. Field-study-derived labelled data, classified into 8 classes which primarily consider land use and the degree of flooding in what is a rice growing region, are used to train the pixel level classifiers. Our focus is not so much on the classification accuracy, but rather on the validation of the probabilistic classification made by all methods. We present a range of validation plots and scores, many of which are used for probabilistic weather forecast verification but are new to remote sensing classification, including of course the standard measures of misclassification, but also
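
    Pixel-level validation of posterior class probabilities, as opposed to scoring only the hardened map, can be illustrated with two standard probabilistic scores: the Brier score and a reliability (calibration) table. The snippet below uses simulated binary labels and predictions, so it shows only the mechanics of the scores, not the classifiers or the Ebro Delta data.

    ```python
    # Pixel-level validation of probabilistic classification: Brier score and a
    # reliability table (binned predicted probability vs. observed frequency).
    import numpy as np

    rng = np.random.default_rng(0)
    p_pred = rng.uniform(0, 1, 10_000)                 # predicted P(class = 1) per pixel
    y_true = rng.binomial(1, np.clip(p_pred + 0.1 * rng.standard_normal(10_000), 0, 1))

    brier = np.mean((p_pred - y_true) ** 2)
    print(f"Brier score: {brier:.3f}  (0 is perfect, 0.25 is uninformative)")

    bins = np.linspace(0, 1, 11)
    idx = np.digitize(p_pred, bins) - 1
    for b in range(10):
        mask = idx == b
        if mask.any():
            print(f"predicted {bins[b]:.1f}-{bins[b + 1]:.1f}: "
                  f"observed frequency {y_true[mask].mean():.2f}  (n={mask.sum()})")
    ```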

  9. Achieving Organizational Excellence Through

    Directory of Open Access Journals (Sweden)

    Mehdi Abzari

    2009-04-01

    Full Text Available Abstract Today, in order to create motivation and desirable behavior in employees, to obtain organizational goals, to increase human resources productivity and finally to achieve organizational excellence, top managers of organizations apply new and effective strategies. One of these strategies to achieve organizational excellence is creating a desirable corporate culture. This research has been conducted to identify the path to reach organizational excellence by creating corporate culture according to the standards and criteria of organizational excellence. The result of the so-called research is this paper, in which researchers found twenty models and components of corporate culture and, based on the industry, organizational goals and the EFQM model, developed a model called "The Eskimo model of Culture-Excellence". The method of the research is survey and field study, and the questionnaires were distributed among 116 managers and employees. To assess the reliability of the questionnaires, Cronbach alpha was measured to be 95% in the ideal situation and 0.97 in the current situation. Systematic sampling was done and in the pre-test stage 45 questionnaires were distributed. A comparison between the current and the ideal corporate culture based on the views of managers and employees was done, and finally it has been concluded that corporate culture is the main factor to facilitate corporate excellence and success in order to achieve organizational effectiveness. The contribution of this paper is that it proposes a localized, applicable model of corporate excellence through reinforcing corporate culture.

  10. URBAN TREE CLASSIFICATION USING FULL-WAVEFORM AIRBORNE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    Zs. Koma

    2016-06-01

    Full Text Available Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which allows to classify single trees into different taxa. Until now, research dealing with tree classification focused on forest environments. This study investigates the object-based classification of urban trees at taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and show its limitations due to anthropogenic influences at the same time.

  11. Urban Tree Classification Using Full-Waveform Airborne Laser Scanning

    Science.gov (United States)

    Koma, Zs.; Koenig, K.; Höfle, B.

    2016-06-01

    Vegetation mapping in urban environments plays an important role in biological research and urban management. Airborne laser scanning provides detailed 3D geodata, which allows to classify single trees into different taxa. Until now, research dealing with tree classification focused on forest environments. This study investigates the object-based classification of urban trees at taxonomic family level, using full-waveform airborne laser scanning data captured in the city centre of Vienna (Austria). The data set is characterised by a variety of taxa, including deciduous trees (beeches, mallows, plane trees and soapberries) and the coniferous pine species. A workflow for tree object classification is presented using geometric and radiometric features. The derived features are related to point density, crown shape and radiometric characteristics. For the derivation of crown features, a prior detection of the crown base is performed. The effects of interfering objects (e.g. fences and cars which are typical in urban areas) on the feature characteristics and the subsequent classification accuracy are investigated. The applicability of the features is evaluated by Random Forest classification and exploratory analysis. The most reliable classification is achieved by using the combination of geometric and radiometric features, resulting in 87.5% overall accuracy. By using radiometric features only, a reliable classification with accuracy of 86.3% can be achieved. The influence of interfering objects on feature characteristics is identified, in particular for the radiometric features. The results indicate the potential of using radiometric features in urban tree classification and show its limitations due to anthropogenic influences at the same time.
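
    The final classification step of this workflow, a Random Forest over per-tree geometric and radiometric features, can be sketched with scikit-learn as below. The feature names and the synthetic data are placeholders rather than the Vienna data set, so the reported score is near chance; the point is the evaluation pattern (cross-validated overall accuracy and feature importances), not the result.

    ```python
    # Sketch of the classification step only: Random Forest on per-tree features.
    # Feature names and data are placeholders, not the full-waveform ALS data set.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n_trees = 300
    # columns: point density, crown height/width ratio, mean echo width, mean reflectance
    X = rng.standard_normal((n_trees, 4))
    y = rng.integers(0, 5, n_trees)             # 5 taxonomic families (placeholder labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated overall accuracy:", cross_val_score(clf, X, y, cv=5).mean())

    clf.fit(X, y)
    print("feature importances:", np.round(clf.feature_importances_, 2))
    ```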

  12. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Full Text Available Abstract Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
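
    A minimal sketch of the "one marker gene, one decision rule" idea, evaluated by leave-one-out cross-validation, is given below. A plain threshold search stands in for the rough-set depended-degree criterion used in the paper, and the expression data are synthetic.

    ```python
    # Single-gene threshold rule with leave-one-out cross-validation (LOOCV).
    # The threshold search replaces the paper's rough-set "depended degree" criterion.
    import numpy as np

    def fit_rule(expr, labels):
        """Pick the threshold and orientation that best separate two classes on one gene."""
        best_acc, best_rule = -1.0, None
        for t in np.unique(expr):
            for high_is_1 in (True, False):
                pred = (expr > t) if high_is_1 else (expr <= t)
                acc = np.mean(pred.astype(int) == labels)
                if acc > best_acc:
                    best_acc, best_rule = acc, (t, high_is_1)
        return best_rule

    def predict(rule, expr):
        t, high_is_1 = rule
        return ((expr > t) if high_is_1 else (expr <= t)).astype(int)

    rng = np.random.default_rng(0)
    labels = np.array([0] * 20 + [1] * 20)
    expr = rng.normal(labels * 2.0, 1.0)        # one informative synthetic "marker gene"

    correct = 0
    for i in range(len(expr)):                  # leave-one-out loop
        mask = np.arange(len(expr)) != i
        rule = fit_rule(expr[mask], labels[mask])
        correct += int(predict(rule, expr[i:i + 1])[0] == labels[i])
    print("LOOCV accuracy:", correct / len(expr))
    ```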

  13. SPORT FOOD ADDITIVE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    I. P. Prokopenko

    2015-01-01

    Full Text Available Correctly organized nutritive and pharmacological support is an important component of an athlete's preparation for competitions, maintenance of optimal shape, fast recovery and rehabilitation after injuries and fatigue. Special products of enhanced biological value (BAS) for athletes' nutrition are used for this purpose. Easily used energy sources, building materials and biologically active substances, which regulate and activate metabolic reactions that proceed with difficulty during certain kinds of physical training, are administered into the athlete's organism. The article presents a classification of sport supplements which can be used before warm-up and training, after training and during breaks in competitions.

  14. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  15. Capacitor Mismatch Error Cancellation Technique for a Successive Approximation A/D Converter

    DEFF Research Database (Denmark)

    Zheng, Zhiliang; Moon, Un-Ku; Steensgaard-Madsen, Jesper;

    1999-01-01

    An error cancellation technique is described for suppressing capacitor mismatch in a successive approximation A/D converter. At the cost of a 50% increase in conversion time, the first-order capacitor mismatch error is cancelled. Methods for achieving top-plate parasitic insensitive operation...

  16. Classification Of Human Rights: Modern Approaches

    Directory of Open Access Journals (Sweden)

    Viktor I. Pishhulin

    2014-09-01

    Full Text Available In the article, existing doctrinal approaches to the classification of the rights and freedoms of the person are reviewed. The author suggests approaching the problem of classifying the rights and freedoms of the person in historical and chronological order and, on this basis, distinguishing three generations of human rights. The article shows the role of human rights in the creation of a democratic constitutional state. It is emphasized that the main goal of any state is the protection of the constitutional rights of the individual and the provision of opportunities for their full practical realization. According to the author, the achievements of modern legal and political science can act as a form of insurance against a false understanding of human rights. The author analyzes the essence and principles of the rights and freedoms of the person; by reflecting on the problems of their classification and protection, scholars create the basis for the development of human rights legislation and for its full realization. With such an understanding of the importance of scientific activity, the state and the scientific community can combine efforts to achieve public consent, build civil society and develop its institutions. The author argues that an important role in this process is also played by legal education, which performs educational functions and promotes the formation of a legal culture in society. In the conclusion, the author explains why the main goal of any state is the protection of the constitutional rights of the individual and the provision of opportunities for their full practical realization. In achieving this goal, the modern state has to treat human rights not as an instrument of political struggle or a token in political games, but as an inherent supreme value.

  17. Adaptive codebook selection schemes for image classification in correlated channels

    Science.gov (United States)

    Hu, Chia Chang; Liu, Xiang Lian; Liu, Kuan-Fu

    2015-09-01

    The multiple-input multiple-output (MIMO) system with the use of transmit and receive antenna arrays achieves diversity and array gains via transmit beamforming. Because full channel state information (CSI) is not available at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent back to the transmitter over a low-rate feedback channel, an approach called limited feedback beamforming. One of the key issues in Vector Quantization (VQ) is how to generate a good codebook such that the distortion between the original image and the reconstructed image is minimized. In this paper, a novel adaptive codebook selection scheme for image classification is proposed, taking into consideration both the spatial and temporal correlation inherent in the channel. The new codebook selection algorithm selects two codebooks from among the discrete Fourier transform (DFT) codebook, the generalized Lloyd algorithm (GLA) codebook and the Grassmannian codebook, to be combined and used as candidates for transmitting the original and reconstructed images. The channel is estimated and divided into four regions based on the spatial and temporal correlation of the channel, and an appropriate codebook is adaptively assigned to each region. The proposed method can efficiently reduce the amount of feedback information required under spatially and temporally correlated channels. Simulation results show that in the case of temporally and spatially correlated channels, the bit-error-rate (BER) performance can be improved substantially by the proposed algorithm compared to the one with only a single codebook.

  18. Completed Local Ternary Pattern for Rotation Invariant Texture Classification

    Directory of Open Access Journals (Sweden)

    Taha H. Rassem

    2014-01-01

    Full Text Available Despite the fact that the two texture descriptors, the completed modeling of the Local Binary Pattern (CLBP) and the Completed Local Binary Count (CLBC), have achieved remarkable accuracy for rotation invariant texture classification, they inherit some drawbacks of the Local Binary Pattern (LBP). The LBP is sensitive to noise, and different patterns of LBP may be classified into the same class, which reduces its discriminating property. Although the Local Ternary Pattern (LTP) was proposed to be more robust to noise than LBP, the same weaknesses may appear with the LTP as well. In this paper, a novel completed modeling of the Local Ternary Pattern (LTP) operator is proposed to overcome both LBP drawbacks, and an associated completed Local Ternary Pattern (CLTP) scheme is developed for rotation invariant texture classification. The experimental results using four different texture databases show that the proposed CLTP achieves an impressive classification accuracy as compared to the CLBP and CLBC descriptors.
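
    The basic Local Ternary Pattern encoding that CLTP builds on is easy to sketch: each neighbour of a pixel is coded -1, 0 or +1 against the centre value with a tolerance t, and the ternary code is split into the usual upper and lower binary codes whose histograms form the descriptor. The completed (sign, magnitude and centre) components added by CLTP are not reproduced in this illustrative snippet.

    ```python
    # Basic LTP on the 3x3 neighbourhood, split into upper/lower binary codes.
    import numpy as np

    def ltp_codes(img, t=5):
        img = img.astype(np.int32)
        c = img[1:-1, 1:-1]                                   # centre pixels
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]          # 8 neighbours, clockwise
        upper = np.zeros_like(c)
        lower = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(offsets):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            upper |= (nb >= c + t).astype(np.int32) << bit    # ternary code +1
            lower |= (nb <= c - t).astype(np.int32) << bit    # ternary code -1
        return upper, lower

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 256, (64, 64))
    up, lo = ltp_codes(patch)
    # Texture descriptor: concatenated histograms of the two 8-bit codes.
    hist = np.concatenate([np.bincount(up.ravel(), minlength=256),
                           np.bincount(lo.ravel(), minlength=256)])
    print(hist.shape)
    ```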

  19. Robust tissue classification for reproducible wound assessment in telemedicine environments

    Science.gov (United States)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps, including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting condition, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping on the medical reference developed from the image labeling by a college of experts.

  20. Medication administration errors for older people in long-term residential care

    Directory of Open Access Journals (Sweden)

    Szczepura Ala

    2011-12-01

    Full Text Available Abstract Background Older people in long-term residential care are at increased risk of medication prescribing and administration errors. The main aim of this study was to measure the incidence of medication administration errors in nursing and residential homes using a barcode medication administration (BCMA) system. Methods A prospective study was conducted in 13 care homes (9 residential and 4 nursing). Data on all medication administrations for a cohort of 345 older residents were recorded in real time using a disguised observation technique. Every attempt by social care and nursing staff to administer medication over a 3-month observation period was analysed using BCMA records to determine the incidence and types of potential medication administration errors (MAEs) and whether errors were averted. Error classifications included attempts to administer medication at the wrong time, to the wrong person or discontinued medication. Further analysis compared data for residential and nursing homes. In addition, staff were surveyed prior to BCMA system implementation to assess their awareness of administration errors. Results A total of 188,249 medication administration attempts were analysed using BCMA data. Typically each resident was receiving nine different drugs and was exposed to 206 medication administration episodes every month. During the observation period, 2,289 potential MAEs were recorded for the 345 residents; 90% of residents were exposed to at least one error. The most common error (n = 1,021, 45% of errors) was attempting to give medication at the wrong time. Over the 3-month observation period, half (52%) of residents were exposed to a serious error such as attempting to give medication to the wrong resident. Error incidence rates were 1.43 times as high (95% CI 1.32-1.56). Conclusions The incidence of medication administration errors is high in long-term residential care. A barcode medication administration system can capture medication

  1. The future of general classification

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2013-01-01

    Discusses problems related to accessing multiple collections using a single retrieval language. Surveys the concepts of interoperability and switching language. Finds that mapping between multiple indexing languages will always be an approximation. Surveys the issues related to general classification...... and contrasts them with special classifications. Argues for the use of general classifications to provide access to collections nationally and internationally. © 2003 by The Haworth Press, Inc. All rights reserved....

  2. Classification and Labelling for Biocides

    OpenAIRE

    Rubbiani, Maristella

    2015-01-01

    CLP and biocides The EU Regulation (EC) No 1272/2008 on Classification, Labelling and Packaging of Substances and Mixtures, the CLP-Regulation, entered into force on 20th January, 2009. Since 1st December, 2010 the classification, labelling and packaging of substances has to comply with this Regulation. For mixtures, the rules of this Regulation are mandatory from 1st June, 2015; this means that until this date classification, labelling and packaging could either be carried out according to D...

  3. DCC Briefing Paper: Genre classification

    OpenAIRE

    Abbott, Daisy; Kim, Yunhyong

    2008-01-01

    Genre classification is the process of grouping objects together based on defined similarities such as subject, format, style, or purpose. Genre classification as a means of managing information is already established in music (e.g. folk, blues, jazz) and text and is used, alongside topic classification, to organise materials in the commercial sector (the children's section of a bookshop) and intellectually (for example, in the Usenet newsgroup directory hierarchy). However, in the case o...

  4. Random Forests for Poverty Classification

    OpenAIRE

    Ruben Thoplan

    2014-01-01

    This paper applies a relatively novel method in data mining to address the issue of poverty classification in Mauritius. The random forests algorithm is applied to the census data with a view to improving classification accuracy for poverty status. The analysis shows that the number of hours worked, age, education and sex are the most important variables in the classification of the poverty status of an individual. In addition, a clear poverty-gender gap is identified as women have higher chance...

  5. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

    We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to the earlier analysis, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  6. Nuclear reactors transients identification and classification system

    International Nuclear Information System (INIS)

    This work describes the study and testing of a system capable of identifying and classifying transients in thermo-hydraulic systems, using a neural network technique of the self-organizing map (SOM) type, with the objective of deploying it in new generations of nuclear reactors. The technique developed in this work consists of using multiple networks to classify and identify the transient states, each network being a specialist in one particular transient of the system; the networks compete with one another through the quantization error, a measure provided by this type of neural network. This technique showed very promising characteristics that allow the development of new functionalities in future projects. One of these characteristics is that each network, besides indicating which transient is in progress, could give additional information about that transient. (author)
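
    The competing-specialists idea can be sketched without a full SOM implementation: train one small codebook per known transient (k-means stands in for the SOM here), then label new data by whichever specialist reports the lowest quantization error. Everything below, including the transient names and the four-dimensional synthetic signals, is an illustrative stand-in rather than the reactor system described.

    ```python
    # One specialist codebook per transient class; classification by minimum
    # quantization error. k-means is used as a simple stand-in for a SOM.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    def make_transient(center, n=200):
        return center + 0.3 * rng.standard_normal((n, 4))

    train = {"loss_of_feedwater": make_transient(np.array([1.0, 0.0, 0.5, 0.0])),
             "steam_line_break":  make_transient(np.array([0.0, 1.0, 0.0, 0.5]))}

    specialists = {name: KMeans(n_clusters=8, n_init=10, random_state=0).fit(data)
                   for name, data in train.items()}

    def quantization_error(model, x):
        """Distance from x to the nearest codebook vector of one specialist."""
        return np.min(np.linalg.norm(model.cluster_centers_ - x, axis=1))

    x_new = make_transient(np.array([1.0, 0.0, 0.5, 0.0]), n=1)[0]
    errors = {name: quantization_error(m, x_new) for name, m in specialists.items()}
    print("identified transient:", min(errors, key=errors.get), errors)
    ```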

  7. Adaptive multiclass classification for brain computer interfaces.

    Science.gov (United States)

    Llera, A; Gómez, V; Kappen, H J

    2014-06-01

    We consider the problem of multiclass adaptive classification for brain-computer interfaces and propose the use of multiclass pooled mean linear discriminant analysis (MPMLDA), a multiclass generalization of the adaptation rule introduced by Vidaurre, Kawanabe, von Bünau, Blankertz, and Müller (2010) for the binary class setting. Using publicly available EEG data sets and tangent space mapping (Barachant, Bonnet, Congedo, & Jutten, 2012) as a feature extractor, we demonstrate that MPMLDA can significantly outperform state-of-the-art multiclass static and adaptive methods. Furthermore, efficient learning rates can be achieved using data from different subjects.

  8. Logistic Regression for Evolving Data Streams Classification

    Institute of Scientific and Technical Information of China (English)

    YIN Zhi-wu; HUANG Shang-teng; XUE Gui-rong

    2007-01-01

    Logistic regression is a fast classifier and can achieve higher accuracy on small training data. Moreover, it can work on both discrete and continuous attributes with nonlinear patterns. Based on these properties of logistic regression, this paper proposed an algorithm, called the evolutionary logistic regression classifier (ELRClass), to solve the classification of evolving data streams. This algorithm applies logistic regression repeatedly to a sliding window of samples in order to update the existing classifier, to keep this classifier if its performance has deteriorated merely because of bursty noise, or to construct a new classifier if a major concept drift is detected. Extensive experimental results demonstrate the effectiveness of this algorithm.
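
    A much-simplified version of that sliding-window loop is sketched below: scikit-learn's logistic regression is refit on the most recent batches, and a crude accuracy check stands in for ELRClass's distinction between bursty noise and genuine concept drift.

    ```python
    # Sliding-window logistic regression for a drifting stream (simplified version
    # of the keep / update / rebuild logic described in the abstract).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def stream(n_batches=20, batch=200):
        """Synthetic stream whose decision boundary flips halfway (a concept drift)."""
        for b in range(n_batches):
            X = rng.standard_normal((batch, 3))
            w = np.array([1.0, -1.0, 0.5]) * (1 if b < n_batches // 2 else -1)
            y = (X @ w + 0.2 * rng.standard_normal(batch) > 0).astype(int)
            yield X, y

    window, clf = [], None
    for X, y in stream():
        if clf is not None and clf.score(X, y) < 0.6:   # crude drift test: rebuild
            window = []
        window = (window + [(X, y)])[-5:]               # keep the most recent batches
        Xw = np.vstack([b[0] for b in window])
        yw = np.concatenate([b[1] for b in window])
        clf = LogisticRegression(max_iter=500).fit(Xw, yw)
    print("final window size (batches):", len(window))
    ```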

  9. PSC: protein surface classification.

    Science.gov (United States)

    Tseng, Yan Yuan; Li, Wen-Hsiung

    2012-07-01

    We recently proposed to classify proteins by their functional surfaces. Using the structural attributes of functional surfaces, we inferred the pairwise relationships of proteins and constructed an expandable database of protein surface classification (PSC). As the functional surface(s) of a protein is the local region where the protein performs its function, our classification may reflect the functional relationships among proteins. Currently, PSC contains a library of 1974 surface types that include 25,857 functional surfaces identified from 24,170 bound structures. The search tool in PSC empowers users to explore related surfaces that share similar local structures and core functions. Each functional surface is characterized by structural attributes, which are geometric, physicochemical or evolutionary features. The attributes have been normalized as descriptors and integrated to produce a profile for each functional surface in PSC. In addition, binding ligands are recorded for comparisons among homologs. PSC allows users to exploit related binding surfaces to reveal the changes in functionally important residues on homologs that have led to functional divergence during evolution. The substitutions at the key residues of a spatial pattern may determine the functional evolution of a protein. In PSC (http://pocket.uchicago.edu/psc/), a pool of changes in residues on similar functional surfaces is provided.

  10. Cost Sensitive Sequential Classification

    CERN Document Server

    Trapeznikov, Kirill; Castanon, David

    2012-01-01

    In many decision systems, sensing modalities have different acquisition costs. It is often unnecessary to use every sensor to classify a majority of examples. We study a multi-stage system in a prediction time cost reduction setting, where all the modalities are available for training, but for a test example, measurements in a new modality can be acquired at each stage for an additional cost. We seek decision rules to reduce the average acquisition cost. We construct an empirical risk minimization problem (ERM) for a multi-stage reject classifier, wherein the stage $k$ classifier either classifies a sample using only the measurements acquired so far or rejects it to the next stage where more attributes can be acquired for a cost. To solve the ERM problem, we factorize the loss function into classification and rejection decisions. We then transform reject decisions into a binary classification problem. We formulate stage-by-stage global surrogate risk and introduce an iterative algorithm in the boosting framew...

  11. Mimicking human texture classification

    Science.gov (United States)

    van Rikxoort, Eva M.; van den Broek, Egon L.; Schouten, Theo E.

    2005-03-01

    In an attempt to mimic human (colorful) texture classification by a clustering algorithm, three lines of research have been pursued, using as a test set 180 texture images (both their color and gray-scale equivalents) drawn from the OuTex and VisTex databases. First, a k-means algorithm was applied with three feature vectors, based on color/gray values, four texture features, and their combination. Second, 18 participants clustered the images using a newly developed card sorting program. The mutual agreement between the participants was 57% and 56%, and between the algorithm and the participants it was 47% and 45%, for color and gray-scale texture images respectively. Third, in a benchmark, 30 participants judged the algorithm's clusters with gray-scale textures as more homogeneous than those with colored textures. However, a high interpersonal variability was present for both the color and the gray-scale clusters. So, despite the promising results, it is questionable whether average human texture classification can be mimicked (if it exists at all).

  12. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face and all expressions can be described using the AU's described by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM) we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.

  13. CLASSIFICATION OF CRIMINAL GROUPS

    Directory of Open Access Journals (Sweden)

    Natalia Romanova

    2013-06-01

    Full Text Available New types of criminal groups are emerging in modern society. These types have their own special criminal subculture. The research objective is to develop new parameters of classification of modern criminal groups, create a new typology of criminal groups and identify some features of their subculture. The research methodology is based on the system approach, which includes the method of analysis of documentary sources (materials of a criminal case), the method of conversations with the members of the criminal group, the method of testing the members of the criminal group and the method of observation. As a result of the conducted research, we have created a new classification of criminal groups. The first type is a group that is lawful in its form and criminal in its content (i.e., its target is criminal enrichment). The second type is a criminal organization which is run by so-called "white-collars" that "remain in the shadow". The third type is traditional criminal groups. The fourth type is the criminal group which openly demonstrates its criminal activity.

  14. Tracking Error Analysis of a Rotation-Elevation Mode Heliostat

    Directory of Open Access Journals (Sweden)

    Omar Aliman

    2007-01-01

    Full Text Available For the past few years, great efforts have been made to improve the tracking accuracy of a newly proposed rotation-elevation tracking mode heliostat. A special simulation program has been developed to systematically analyze the image movement and to find the errors of the parameters. In the simulation program, a ray-tracing method was applied to work out the central point position of the master mirror image on the target plane during primary tracking. In the experiment, a tracking error of less than 5 cm was achieved with the help of the simulation program. We discuss the error analysis of the two prototypes of the so-called Non-Imaging Focusing Heliostat (NIFH) at Universiti Teknologi Malaysia (UTM), which has greatly reduced the optical alignment process, resulting in more precise results.

  15. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the number of extra qubits added to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.

  16. Emotion of Physiological Signals Classification Based on TS Feature Selection

    Institute of Scientific and Technical Information of China (English)

    Wang Yujing; Mo Jianlin

    2015-01-01

    This paper proposes a TS-MLP method for emotion recognition from physiological signals. It recognizes emotion by using Tabu search to select features from the emotional physiological signals and a multilayer perceptron to classify the emotion. Simulations show that the method achieves good emotion classification performance.

  17. Classification of complex polynomial vector fields in one complex variable

    DEFF Research Database (Denmark)

    Branner, Bodil; Dias, Kealey

    2010-01-01

    This paper classifies the global structure of monic and centred one-variable complex polynomial vector fields. The classification is achieved by means of combinatorial and analytic data. More specifically, given a polynomial vector field, we construct a combinatorial invariant, describing...

  18. Ground-Level Classification of a Coral Reef Using a Hyperspectral Camera

    Directory of Open Access Journals (Sweden)

    Tamir Caras

    2015-06-01

    Full Text Available Especially in the remote sensing context, thematic classification is a desired product for coral reef surveys. This study presents a novel statistical image classification approach, namely Partial Least Squares Discriminant Analysis (PLS-DA), capable of producing such a product. Three classification models were built and implemented for the individual images, while the fourth was built from a combination of spectra from all three images together. The classification was optimised by using pre-processing transformations (PPTs) and post-classification low-pass filtering. Despite the fact that the images were acquired under different conditions and quality, the best classification model was achieved by combining spectral training samples from the three images (accuracy 0.63 for all classes). PPTs improved the classification accuracy by 5%–15% and post-classification treatments further increased the final accuracy by 10%–20%. The fourth classification model was the most accurate one, suggesting that combining spectra from different conditions improves thematic classification. Despite some limitations, available aerial sensors already provide an opportunity to implement the described classification and mark the next investigation step. Nonetheless, the findings of this study are relevant both to the field of remote sensing in general and to the niche of coral reef spectroscopy.
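
    PLS-DA itself can be sketched with scikit-learn by regressing a one-hot class indicator matrix on the spectra with PLS and classifying by the arg-max of the predicted indicators. The synthetic 'spectra' below, the number of latent components, and the omission of the paper's pre-processing transformations and post-classification filtering are all simplifying assumptions.

    ```python
    # Minimal PLS-DA: PLS regression onto one-hot indicators, arg-max classification.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_per_class, n_bands, n_classes = 100, 50, 3
    X = np.vstack([rng.normal(c, 1.0, (n_per_class, n_bands)) for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n_per_class)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    Y_tr = np.eye(n_classes)[y_tr]                 # one-hot indicator matrix

    pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
    y_pred = pls.predict(X_te).argmax(axis=1)
    print("overall accuracy:", np.mean(y_pred == y_te))
    ```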

  19. Nominated Texture Based Cervical Cancer Classification

    Directory of Open Access Journals (Sweden)

    Edwin Jayasingh Mariarputham

    2015-01-01

    Full Text Available Accurate classification of Pap smear images is a challenging task in medical image processing. It can be improved in two ways: one is by selecting suitable, well defined specific features and the other is by selecting the best classifier. This paper presents a nominated texture based cervical cancer (NTCC) classification system which classifies Pap smear images into any one of seven classes. This is achieved by extracting well defined texture features and selecting the best classifier. Seven sets of texture features (24 features) are extracted, which include relative size of nucleus and cytoplasm, dynamic range and first four moments of intensities of nucleus and cytoplasm, relative displacement of nucleus within the cytoplasm, gray level co-occurrence matrix, local binary pattern histogram, Tamura features, and edge orientation histogram. A few types of support vector machine (SVM) and neural network (NN) classifiers are used for the classification. The performance of the NTCC algorithm is tested and compared to other algorithms on the public image database of Herlev University Hospital, Denmark, with 917 Pap smear images. The output of the SVM is found to be the best for most of the classes, with better results for the remaining classes.

  20. Classification SAR targets with support vector machine

    Science.gov (United States)

    Cao, Lanying

    2007-02-01

    With the development of Synthetic Aperture Radar (SAR) technology, automatic target recognition (ATR) is becoming increasingly important. In this paper, we propose a 3-class target classification system for SAR images. The system is based on invariant wavelet moments and the support vector machine (SVM) algorithm. It is a two-stage approach. The first stage is to extract and select a small set of wavelet invariant moment features to represent the target images. The wavelet invariant moments combine the wavelet's inherent property of multi-resolution analysis with the moment invariants' invariance to translation, scaling and rotation. The second stage is classification of targets with the SVM algorithm. SVM is based on the principle of structural risk minimization (SRM), which has been shown to be better than the principle of empirical risk minimization (ERM) used by many conventional networks. To test the performance and efficiency of the proposed method, we performed experiments on invariant wavelet moments, different kernel functions, 2-class identification, and 3-class identification. Test results show that the wavelet invariant moments represent the target effectively, the linear kernel function achieves better results than other kernel functions, and the SVM classification approach performs better than the conventional nearest-distance approach.

  1. Sparse extreme learning machine for classification.

    Science.gov (United States)

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which are solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM.

  2. A Kernel Classification Framework for Metric Learning.

    Science.gov (United States)

    Wang, Faqiang; Zuo, Wangmeng; Zhang, Lei; Meng, Deyu; Zhang, David

    2015-09-01

    Learning a distance metric from the given training samples plays a crucial role in many machine learning tasks, and various models and optimization algorithms have been proposed in the past decade. In this paper, we generalize several state-of-the-art metric learning methods, such as large margin nearest neighbor (LMNN) and information theoretic metric learning (ITML), into a kernel classification framework. First, doublets and triplets are constructed from the training samples, and a family of degree-2 polynomial kernel functions is proposed for pairs of doublets or triplets. Then, a kernel classification framework is established to generalize many popular metric learning methods such as LMNN and ITML. The proposed framework can also suggest new metric learning methods, which can be efficiently implemented, interestingly, using the standard support vector machine (SVM) solvers. Two novel metric learning methods, namely, doublet-SVM and triplet-SVM, are then developed under the proposed framework. Experimental results show that doublet-SVM and triplet-SVM achieve competitive classification accuracies with state-of-the-art metric learning methods but with significantly less training time. PMID:25347887

  3. BROAD PHONEME CLASSIFICATION USING SIGNAL BASED FEATURES

    Directory of Open Access Journals (Sweden)

    Deekshitha G

    2014-12-01

    Full Text Available Speech is the most efficient and popular means of human communication. Speech is produced as a sequence of phonemes, and phoneme recognition is the first step performed by an automatic speech recognition system. State-of-the-art recognizers use mel-frequency cepstral coefficient (MFCC) features derived through short time analysis, for which the recognition accuracy is limited. Instead of this, here broad phoneme classification is achieved using features derived directly from the speech at the signal level itself. Broad phoneme classes include vowels, nasals, fricatives, stops, approximants and silence. The features identified as useful for broad phoneme classification are the voiced/unvoiced decision, zero crossing rate (ZCR), short time energy, most dominant frequency, energy in the most dominant frequency, spectral flatness measure and the first three formants. Features derived from short time frames of training speech are used to train a multilayer feedforward neural network based classifier, with the manually marked class label as output, and classification accuracy is then tested. Later this broad phoneme classifier is used for broad syllable structure prediction, which is useful for applications such as automatic speech recognition and automatic language identification.
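
    Two of the signal-level features listed, the zero crossing rate and the short-time energy, can be computed per frame as sketched below. The frame length and hop size are illustrative choices, and the two synthetic signals only mimic the broad contrast between a voiced vowel-like sound and an unvoiced fricative-like one.

    ```python
    # Frame-level zero crossing rate (ZCR) and short-time energy.
    import numpy as np

    def frame_features(signal, frame_len=400, hop=160):     # 25 ms / 10 ms at 16 kHz
        feats = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            energy = np.sum(frame ** 2) / frame_len
            zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
            feats.append((zcr, energy))
        return np.array(feats)

    rng = np.random.default_rng(0)
    t = np.arange(16000) / 16000.0
    vowel_like = np.sin(2 * np.pi * 150 * t)                # low ZCR, high energy
    fricative_like = 0.3 * rng.standard_normal(16000)       # high ZCR, lower energy
    for name, sig in [("vowel-like", vowel_like), ("fricative-like", fricative_like)]:
        zcr, energy = frame_features(sig).mean(axis=0)
        print(f"{name:15s} mean ZCR={zcr:.2f}  mean energy={energy:.3f}")
    ```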

  4. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-11-18

    ...-Doxey data into the cotton futures classification process in March 2012 (77 FR 5379). When verified by a... October 9, 2013 (78 FR 54970). AMS received two comments: one from a national trade organization... Agricultural Marketing Service 7 CFR Part 27 RIN 0581-AD33 Cotton Futures Classification:...

  5. Classifying Emotion in News Sentences: When Machine Classification Meets Human Classification

    Directory of Open Access Journals (Sweden)

    Plaban Kumar Bhowmick

    2010-01-01

    Full Text Available Multiple emotions are often evoked in readers in response to text stimuli such as news articles. In this paper, we present a method for classifying news sentences into multiple emotion categories. The corpus consists of 1000 news sentences and the emotion tags considered were anger, disgust, fear, happiness, sadness and surprise. We performed different experiments to compare the machine classification with human classification of emotion. In both cases, it has been observed that combining the anger and disgust classes results in better classification, and removing surprise, which is a highly ambiguous class in human classification, improves the performance. Words present in the sentences and the polarity of the subject, object and verb were used as features. The classifier performs better with the word and polarity feature combination compared to a feature set consisting only of words. The best performance has been achieved with the corpus where the anger and disgust classes are combined and the surprise class is removed. In this experiment, the average precision was computed to be 79.5% and the average class-wise micro F1 is found to be 59.52%.

  6. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which do not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions...... on the basis of reliability profiles for bridges without human errors is extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated...

  7. The prosody of speech error corrections revisited

    OpenAIRE

    Shattuck-Hufnagel, S.; Cutler, A.

    1999-01-01

    A corpus of digitized speech errors is used to compare the prosody of correction patterns for word-level vs. sound-level errors. Results for both peak F0 and perceived prosodic markedness confirm that speakers are more likely to mark corrections of word-level errors than corrections of sound-level errors, and that errors ambiguous between word-level and sound-level (such as boat for moat) show correction patterns like those for sound-level errors. This finding increases the plausibility of the...

  8. Ensemble of classifiers for confidence-rated classification of NDE signal

    Science.gov (United States)

    Banerjee, Portia; Safdarnejad, Seyed; Udpa, Lalita; Udpa, Satish

    2016-02-01

    An ensemble of classifiers, in general, aims to improve classification accuracy by combining results from multiple weak hypotheses into a single strong classifier through weighted majority voting. Improved versions of ensembles generate self-rated confidence scores which estimate the reliability of each prediction and boost the classifier using these confidence-rated predictions. However, such a confidence metric is based only on the rate of correct classification. Although ensembles of classifiers are widely used in computational intelligence, in existing work the effect of all factors of unreliability on the confidence of classification is largely overlooked. With relevance to NDE, classification results are affected by the inherent ambiguity of classification, non-discriminative features, inadequate training samples and measurement noise. In this paper, we extend existing ensemble classification by maximizing the confidence of every classification decision in addition to minimizing the classification error. Initial results of the approach on data from eddy current inspection show improved classification of defect and non-defect indications.
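
    The following sketch illustrates confidence-weighted majority voting in the spirit of the ensemble described above, using three deliberately weak decision trees. The AdaBoost-style log-odds weighting and the synthetic data are assumptions for illustration; the paper's NDE-specific confidence metric, which also accounts for the other sources of unreliability listed, is not reproduced here.

      # Hedged sketch (Python/scikit-learn): confidence-rated weighted majority voting.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=400, n_features=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      members, weights = [], []
      for depth in (1, 2, 3):                        # three weak hypotheses
          m = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
          acc = m.score(X_tr, y_tr)
          members.append(m)
          weights.append(np.log(acc / (1 - acc + 1e-9)))   # assumed log-odds weight

      votes = np.zeros(len(X_te))
      for m, w in zip(members, weights):
          votes += w * (2 * m.predict(X_te) - 1)     # map {0,1} -> {-1,+1} and weight
      y_hat = (votes > 0).astype(int)
      confidence = np.abs(votes) / np.sum(np.abs(weights))   # normalized voting margin
      print("ensemble accuracy:", np.mean(y_hat == y_te))
      print("mean decision confidence:", round(confidence.mean(), 3))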

  9. Gender classification system in uncontrolled environments

    Science.gov (United States)

    Zeng, Pingping; Zhang, Yu-Jin; Duan, Fei

    2011-01-01

    Most face analysis systems available today operate mainly on restricted image databases in terms of size, age and illumination. In addition, it is frequently assumed that all images are frontal and unconcealed. In practice, in non-guided real-time surveillance, the face pictures taken may often be partially occluded and show varying degrees of head rotation. In this paper, a system intended for real-time surveillance with an un-calibrated camera and non-guided photography is described. It consists of five parts: face detection, non-face filtering, best-angle face selection, texture normalization, and gender classification. Emphasis is placed on the non-face filtering and best-angle face selection parts as well as texture normalization. Best-angle faces are identified by PCA reconstruction, which amounts to an implicit face alignment and yields a large increase in gender classification accuracy. A dynamic skin model and a masked PCA reconstruction algorithm are applied to filter out faces detected in error. In order to include both facial-texture and shape-outline features, a hybrid feature combining Gabor wavelets and PHoG (pyramid histogram of gradients) was proposed to balance inner texture and outer contour. A comparative study of the effects of different non-face filtering and texture masking methods in the context of gender classification by SVM is reported through experiments on a set of UT (a company name) face images, a large number of internet images and the CAS (Chinese Academy of Sciences) face database. Some encouraging results are obtained.
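
    One plausible reading of the masked PCA reconstruction step above is a reconstruction-error test against an eigenface-style subspace, as in the hedged sketch below. The synthetic low-rank "face" model, the number of components and the implied threshold are assumptions for illustration only, not the authors' pipeline.

      # Hedged sketch (Python/scikit-learn): PCA reconstruction error as a face/best-angle score.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      mean_face = rng.normal(size=1024)                 # 32x32 "image" flattened
      basis = rng.normal(size=(10, 1024))               # 10 latent "face" directions
      faces = mean_face + rng.normal(size=(200, 10)) @ basis   # low-rank face-like training patches

      pca = PCA(n_components=20).fit(faces)             # eigenface-style subspace

      def reconstruction_error(patch):
          # Distance from the face subspace; small error -> more face-like / better angle.
          recon = pca.inverse_transform(pca.transform(patch.reshape(1, -1)))
          return float(np.linalg.norm(patch - recon.ravel()))

      face_patch = faces[0]                             # a patch drawn from the face model
      clutter_patch = rng.normal(size=1024)             # an unrelated (non-face) patch
      print("face-like patch error:", round(reconstruction_error(face_patch), 3))
      print("clutter patch error:  ", round(reconstruction_error(clutter_patch), 3))
      # In a pipeline like the one above, candidates whose error exceeds a threshold would be
      # filtered out, and the lowest-error detection would be kept as the best-angle face.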

  10. 15 CFR 2008.9 - Classification guides.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 3 2010-01-01 2010-01-01 false Classification guides. 2008.9 Section... REPRESENTATIVE Derivative Classification § 2008.9 Classification guides. Classification guides shall be issued by... direct derivative classification, shall identify the information to be protected in specific and...

  11. 32 CFR 2400.15 - Classification guides.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classification guides. 2400.15 Section 2400.15... Derivative Classification § 2400.15 Classification guides. (a) OSTP shall issue and maintain classification guides to facilitate the proper and uniform derivative classification of information. These guides...

  12. 14 CFR 1203.412 - Classification guides.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Classification guides. 1203.412 Section... PROGRAM Guides for Original Classification § 1203.412 Classification guides. (a) General. A classification guide, based upon classification determinations made by appropriate program and...

  13. 7 CFR 27.34 - Classification procedure.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification procedure. 27.34 Section 27.34... REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification and Micronaire Determinations § 27.34 Classification procedure. Classification shall proceed as rapidly as possible, but...

  14. 22 CFR 9.6 - Derivative classification.

    Science.gov (United States)

    2010-04-01

    ... CFR 2001.22. (c) Department of State Classification Guide. The Department of State Classification... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Derivative classification. 9.6 Section 9.6... classification. (a) Definition. Derivative classification is the incorporating, paraphrasing, restating...

  15. 22 CFR 9.4 - Original classification.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Original classification. 9.4 Section 9.4... classification. (a) Definition. Original classification is the initial determination that certain information... classification. (b) Classification levels. (1) Top Secret shall be applied to information the...

  16. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
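
    A minimal sketch of the two-SVM progressive scheme described above: a fast linear SVM produces the baseline labels and a confidence index (here the distance to the decision boundary), and a slower, more accurate SVM then reclassifies the least confident items first, up to a resource budget. The kernels, the budget and the synthetic data are assumptions for illustration.

      # Hedged sketch (Python/scikit-learn): progressive classification with two SVMs.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=1500, n_features=20, n_informative=10,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      fast = SVC(kernel="linear").fit(X_tr, y_tr)        # coarse model, few support vectors
      slow = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)     # accurate model, more support vectors

      labels = fast.predict(X_te)                        # baseline approximate classification
      confidence = np.abs(fast.decision_function(X_te))  # confidence index per data point
      order = np.argsort(confidence)                     # least confident first

      budget = len(X_te) // 4                            # refine only what resources allow
      refine = order[:budget]
      labels[refine] = slow.predict(X_te[refine])        # progressive refinement pass

      print("fast-only accuracy:  ", fast.score(X_te, y_te))
      print("progressive accuracy:", np.mean(labels == y_te))
      print("slow-only accuracy:  ", slow.score(X_te, y_te))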

  17. Distance Measurement Error Reduction Analysis for the Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Tariq Jamil SaifullahKhanzada

    2012-10-01

    Full Text Available This paper presents a DME (Distance Measurement Error) estimation analysis for the wireless indoor positioning channel. The channel model for indoor positioning is derived and implemented using an 8-antenna WLAN (Wireless Local Area Network) system compliant with the IEEE 802.11 a/b/g standard. Channel impairments are derived for TDOA (Time Difference of Arrival) range estimation. DME calculation is performed over distinct experiments on the TDOA channel profiles using systems deployed with 1, 2, 4 and 8 antennas, and the DME analysis for the different antennas is presented. The spiral antenna achieves a minimum DME in the range of 1 m. The scattering of the data for the error spread in the TDOA channel profile is analyzed to show the error behavior, and the results show the effect of increasing the number of recordings on the DME. The behavior of the transmitter antennas with respect to DME and their standard deviations is depicted in the results, which reduce the error floor to less than 1 m. To the best of our knowledge, this reduction has not been achieved in the literature.
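
    As a hedged illustration of the DME metric itself, the sketch below simulates TDOA-based range-difference estimates for several antenna counts and reports the mean and spread of the distance measurement error. The room geometry, timing-noise level and antenna counts are assumptions for illustration; the paper's measured WLAN channel profiles are not reproduced.

      # Hedged sketch (Python/NumPy): distance measurement error from noisy TDOA estimates.
      import numpy as np

      C = 3e8                                          # propagation speed, m/s
      rng = np.random.default_rng(0)
      antennas = rng.uniform(0, 20, size=(8, 2))       # assumed 8 antennas in a 20 m x 20 m area
      target = np.array([7.0, 11.0])                   # assumed transmitter position

      true_dist = np.linalg.norm(antennas - target, axis=1)
      true_tdoa = (true_dist - true_dist[0]) / C       # differences w.r.t. reference antenna 0

      for n_ant in (2, 4, 8):
          noisy_tdoa = true_tdoa[:n_ant] + rng.normal(0, 3e-9, n_ant)   # assumed 3 ns timing noise
          est_range_diff = noisy_tdoa * C
          dme = np.abs(est_range_diff - (true_dist[:n_ant] - true_dist[0]))
          print(f"{n_ant} antennas: mean DME = {dme.mean():.2f} m, std = {dme.std():.2f} m")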

  18. Errors associated with outpatient computerized prescribing systems

    Science.gov (United States)

    Rothschild, Jeffrey M; Salzberg, Claudia; Keohane, Carol A; Zigmont, Katherine; Devita, Jim; Gandhi, Tejal K; Dalal, Anuj K; Bates, David W; Poon, Eric G

    2011-01-01

    Objective To report the frequency, types, and causes of errors associated with outpatient computer-generated prescriptions, and to develop a framework to classify these errors to determine which strategies have greatest potential for preventing them. Materials and methods This is a retrospective cohort study of 3850 computer-generated prescriptions received by a commercial outpatient pharmacy chain across three states over 4 weeks in 2008. A clinician panel reviewed the prescriptions using a previously described method to identify and classify medication errors. Primary outcomes were the incidence of medication errors; potential adverse drug events, defined as errors with potential for harm; and rate of prescribing errors by error type and by prescribing system. Results Of 3850 prescriptions, 452 (11.7%) contained 466 total errors, of which 163 (35.0%) were considered potential adverse drug events. Error rates varied by computerized prescribing system, from 5.1% to 37.5%. The most common error was omitted information (60.7% of all errors). Discussion About one in 10 computer-generated prescriptions included at least one error, of which a third had potential for harm. This is consistent with the literature on manual handwritten prescription error rates. The number, type, and severity of errors varied by computerized prescribing system, suggesting that some systems may be better at preventing errors than others. Conclusions Implementing a computerized prescribing system without comprehensive functionality and processes in place to ensure meaningful system use does not decrease medication errors. The authors offer targeted recommendations on improving computerized prescribing systems to prevent errors. PMID:21715428

  19. Antenna motion errors in bistatic SAR imagery

    Science.gov (United States)

    Wang, Ling; Yazıcı, Birsen; Cagri Yanik, H.

    2015-06-01

    Antenna trajectory or motion errors are pervasive in synthetic aperture radar (SAR) imaging. Motion errors typically result in smearing and positioning errors in SAR images. Understanding the relationship between the trajectory errors and position errors in reconstructed images is essential in forming focused SAR images. Existing studies on the effect of antenna motion errors are limited to certain geometries, trajectory error models or monostatic SAR configuration. In this paper, we present an analysis of position errors in bistatic SAR imagery due to antenna motion errors. Bistatic SAR imagery is becoming increasingly important in the context of passive imaging and multi-sensor imaging. Our analysis provides an explicit quantitative relationship between the trajectory errors and the positioning errors in bistatic SAR images. The analysis is applicable to arbitrary trajectory errors and arbitrary imaging geometries including wide apertures and large scenes. We present extensive numerical simulations to validate the analysis and to illustrate the results in commonly used bistatic configurations and certain trajectory error models.

  20. Enhanced Named Entity Extraction via Error-Driven Aggregation

    Energy Technology Data Exchange (ETDEWEB)

    Lemmond, T D; Perry, N C; Guensche, J W; Nitao, J J; Glaser, R E; Kidwell, P; Hanley, W G

    2010-02-22

    Despite recent advances in named entity extraction technologies, state-of-the-art extraction tools achieve insufficient accuracy rates for practical use in many operational settings. However, they are not generally prone to the same types of error, suggesting that substantial improvements may be achieved via appropriate combinations of existing tools, provided their behavior can be accurately characterized and quantified. In this paper, we present an inference methodology for the aggregation of named entity extraction technologies that is founded upon a black-box analysis of their respective error processes. This method has been shown to produce statistically significant improvements in extraction relative to standard performance metrics and to mitigate the weak performance of entity extractors operating under suboptimal conditions. Moreover, this approach provides a framework for quantifying uncertainty and has demonstrated the ability to reconstruct the truth when majority voting fails.

  1. Human decision error (HUMDEE) trees

    Energy Technology Data Exchange (ETDEWEB)

    Ostrom, L.T.

    1993-08-01

    Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences, in the form of logic trees called Human Decision Error Trees, or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. These trees have many uses: they can be used in incident/accident investigations to show what other courses of action were available, and for training operators. The trees also have a consequence component, so that not only the decision but also the consequence of that decision can be explored.

  2. Achieving English Spoken Fluency

    Institute of Scientific and Technical Information of China (English)

    王鲜杰

    2000-01-01

    Language is first and foremost oral, spoken language. Speaking is the most important of the four skills (listening, speaking, reading, writing) and also the most difficult. To have an all-round command of a language one must be able to speak and to understand the spoken language; it is not enough for a language learner to have only good reading and writing skills. As English language teachers, we need to focus on improving learners' English speaking skills to meet the needs of our society and our country, and to provide learners with some useful techniques for achieving spoken English fluency. This paper focuses on how to improve learners' speaking skill.

  3. Achieving diagnosis by consensus

    LENUS (Irish Health Repository)

    Kane, Bridget

    2009-08-01

    This paper provides an analysis of the collaborative work conducted at a multidisciplinary medical team meeting, where a patient’s definitive diagnosis is agreed, by consensus. The features that distinguish this process of diagnostic work by consensus are examined in depth. The current use of technology to support this collaborative activity is described, and experienced deficiencies are identified. Emphasis is placed on the visual and perceptual difficulty for individual specialities in making interpretations, and on how, through collaboration in discussion, definitive diagnosis is actually achieved. The challenge for providing adequate support for the multidisciplinary team at their meeting is outlined, given the multifaceted nature of the setting, i.e. patient management, educational, organizational and social functions, that need to be satisfied.

  4. Application of Numenta® Hierarchical Temporal Memory for land-use classification

    Directory of Open Access Journals (Sweden)

    J.E. Meroño

    2010-01-01

    Full Text Available The aim of this paper is to present the application of memory-prediction theory, implemented in the form of a Hierarchical Temporal Memory (HTM), for land-use classification. Numenta® HTM is a new computing technology that replicates the structure and function of the human neocortex. In this study, a photogram acquired with a Vexcel UltraCamD® photogrammetric sensor and data on 1 513 plots in Manzanilla (Huelva, Spain) were used to validate the classification, achieving an overall classification accuracy of 90.4%. The HTM approach appears to hold promise for land-use classification.

  5. Classification model of arousal and valence mental states by EEG signals analysis and Brodmann correlations

    Directory of Open Access Journals (Sweden)

    Adrian Rodriguez Aguinaga

    2015-06-01

    Full Text Available This paper proposes a methodology for classifying emotional states through the analysis of EEG signals, wavelet decomposition and an electrode discrimination process that associates electrodes of the 10/20 model with Brodmann regions and reduces the computational burden. Classification was performed with a support vector machine, achieving an 81.46 percent classification rate for a multi-class problem; the emotion modeling is based on a space adjusted from the Russell arousal-valence space and the Geneva model.
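
    A rough sketch of the pipeline described above: wavelet sub-band energies are extracted per channel (here with PyWavelets) and fed to a multi-class support vector machine. The synthetic signals, the db4 wavelet, the decomposition level and the three-channel layout are assumptions standing in for the paper's 10/20-to-Brodmann electrode mapping and its adjusted arousal-valence space.

      # Hedged sketch (Python): wavelet band-energy features + multi-class SVM.
      import numpy as np
      import pywt
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      def band_energies(signal, wavelet="db4", level=4):
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
          return np.array([np.sum(c ** 2) for c in coeffs])     # energy per sub-band

      X, y = [], []
      for label in range(4):                       # four toy arousal/valence quadrants
          for _ in range(30):
              trial = [np.sin(2 * np.pi * (4 + 6 * label) * np.linspace(0, 1, 256))
                       + 0.5 * rng.normal(size=256) for _ in range(3)]   # three "electrodes"
              X.append(np.concatenate([band_energies(ch) for ch in trial]))
              y.append(label)
      X, y = np.array(X), np.array(y)

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)).fit(X, y)
      print("training classification rate:", clf.score(X, y))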

  6. Practical and Reliable Error Bars in Quantum Tomography

    Science.gov (United States)

    Faist, Philippe; Renner, Renato

    2016-07-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of the output of the tomography procedure, the quantum error bars. This representation is (i) concise, being given in terms of few parameters, (ii) intuitive, providing a fair idea of the "spread" of the error, and (iii) useful, containing the necessary information for constructing confidence regions. The statements resulting from our method are formulated in terms of a figure of merit, such as the fidelity to a reference state. We present an algorithm for computing this representation and provide ready-to-use software. Our procedure is applied to actual experimental data obtained from two superconducting qubits in an entangled state, demonstrating the applicability of our method.

  9. Nonlinear estimation and classification

    CERN Document Server

    Hansen, Mark; Holmes, Christopher; Mallick, Bani; Yu, Bin

    2003-01-01

    Researchers in many disciplines face the formidable task of analyzing massive amounts of high-dimensional and highly-structured data. This is due in part to recent advances in data collection and computing technologies. As a result, fundamental statistical research is being undertaken in a variety of different fields. Driven by the complexity of these new problems, and fueled by the explosion of available computer power, highly adaptive, non-linear procedures are now essential components of modern "data analysis," a term that we liberally interpret to include speech and pattern recognition, classification, data compression and signal processing. The development of new, flexible methods combines advances from many sources, including approximation theory, numerical analysis, machine learning, signal processing and statistics. The proposed workshop intends to bring together eminent experts from these fields in order to exchange ideas and forge directions for the future.

  10. Estuary Classification Revisited

    CERN Document Server

    Guha, Anirban

    2012-01-01

    The governing equations of a tidally averaged, width-averaged, rectangular estuary have been investigated. It is shown theoretically that the dynamics of an estuary is entirely controlled by three parameters: (i) the estuarine Froude number, (ii) the tidal Froude number and (iii) the estuarine aspect ratio. The momentum, salinity and integral salt balance equations can be completely expressed in terms of these control variables. The estuary classification problem has also been reinvestigated, and it is found that these three control variables can completely specify the estuary type. Comparison with real estuary data shows a very good match. Additionally, we show that the well-accepted leading-order estuarine integral salt balance equation is inconsistent with the leading-order salinity equation in an order-of-magnitude sense.

  11. Classification-based reasoning

    Science.gov (United States)

    Gomez, Fernando; Segami, Carlos

    1991-01-01

    A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.

  12. Seismic texture classification. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Vinther, R.

    1997-12-31

    The seismic texture classification method is a seismic attribute that can both recognize general reflectivity styles and locate variations from them. Seismic texture classification performs a statistical analysis of the seismic section (or volume) aimed at describing the reflectivity. Based on a set of reference reflectivities, the seismic textures are classified. The result is a display of seismic texture categories showing both the styles of reflectivity from the reference set and interpolations and extrapolations from these; the display is interpreted as statistical variations in the seismic data. The method is applied to seismic sections and volumes from the Danish North Sea representing both horizontal stratification and salt diapirs. The attribute succeeded in recognizing both the general structure of successions and variations from it. The seismic texture classification is not only able to display variations in prospective areas (1-7 sec. TWT) but can also be applied to deep seismic sections; it is tested on a deep reflection seismic section (13-18 sec. TWT) from the Baltic Sea, where it succeeded in locating the Moho, which could not be located using conventional interpretation tools. Seismic texture classification is thus a seismic attribute that can display general reflectivity styles and deviations from them and enhance variations not found by conventional interpretation tools. (LN)

  13. A New Classification of Sandstone.

    Science.gov (United States)

    Brewer, Roger Clay; And Others

    1990-01-01

    Introduced is a sandstone classification scheme intended for use with thin-sections and hand specimens. Detailed is a step-by-step classification scheme. A graphic presentation of the scheme is presented. This method is compared with other existing schemes. (CW)

  14. A Path Select Algorithm with Error Control Schemes and Energy Efficient Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sandeep Dahiya

    2012-04-01

    Full Text Available A wireless sensor network consists of a large number of sensor nodes spread densely to observe the phenomenon of interest. The lifetime of the whole network relies on the lifetime of each sensor node; if one node dies, it could lead to a partition of the sensor network. Also, the multi-hop structure and broadcast channel of wireless sensor networks necessitate error control schemes to achieve reliable data transmission. Automatic repeat request (ARQ) and forward error correction (FEC) are the key error control strategies in wireless sensor networks. In this paper we propose a path selection algorithm with error control schemes based on an energy-efficiency analysis.
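
    The trade-off between the two error control strategies named above can be illustrated with a first-order cost comparison: stop-and-wait ARQ retransmits until success, while FEC pays a fixed redundancy once. The i.i.d. packet-loss model and the 50% FEC overhead are assumptions for illustration, not the paper's energy model, and a real comparison would also account for whether the FEC code actually corrects the channel errors.

      # Hedged sketch (Python): expected per-packet transmission cost, ARQ vs. FEC.
      def arq_expected_tx(p_loss):
          # Expected transmissions for stop-and-wait ARQ with i.i.d. packet loss.
          return 1.0 / (1.0 - p_loss)

      FEC_OVERHEAD = 1.5        # assumed: payload plus 50% parity, sent once

      for p_loss in (0.05, 0.2, 0.4, 0.6):
          arq = arq_expected_tx(p_loss)
          better = "ARQ" if arq < FEC_OVERHEAD else "FEC"
          print(f"loss={p_loss:.2f}: ARQ ~{arq:.2f} tx, FEC ~{FEC_OVERHEAD:.2f} tx-equivalents -> {better}")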

  15. Panel positioning error and support mechanism for a 30-m THz radio telescope

    Institute of Scientific and Technical Information of China (English)

    De-Hua Yang; Daniel Okoh; Guo-Hua Zhou; Ai-Hua Li; Guo-Ping Li; Jing-Quan Cheng

    2011-01-01

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors on optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way.
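
    For orientation, the sketch below evaluates the standard Ruze relation between an RMS reflector surface error and the Strehl ratio, the figure of merit used in the sensitivity analysis above. The 200 μm wavelength comes from the abstract; the sample RMS error values are assumptions for illustration, not results from the paper.

      # Hedged sketch (Python): Ruze relation between RMS surface error and Strehl ratio.
      import numpy as np

      def strehl_ratio(rms_surface_error_m, wavelength_m):
          # Surface error enters the reflected wavefront twice, hence the factor 4*pi.
          return float(np.exp(-(4 * np.pi * rms_surface_error_m / wavelength_m) ** 2))

      wavelength = 200e-6                      # 200 um operating wavelength (from the abstract)
      for rms_um in (2, 5, 10, 15):            # assumed sample RMS surface errors
          s = strehl_ratio(rms_um * 1e-6, wavelength)
          print(f"RMS surface error {rms_um:2d} um -> Strehl ratio {s:.3f}")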

  16. Panel positioning error and support mechanism for a 30-m THz radio telescope

    International Nuclear Information System (INIS)

    A 30-m TeraHertz (THz) radio telescope is proposed to operate at 200 μm with an active primary surface. This paper presents a sensitivity analysis of active surface panel positioning errors on optical performance in terms of the Strehl ratio. Based on Ruze's surface error theory and using a Monte Carlo simulation, the effects of six rigid panel positioning errors, namely piston, tip, tilt, radial, azimuthal and twist displacements, were directly derived. The optical performance of the telescope was then evaluated using the standard Strehl ratio. We graphically illustrated the various panel error effects by presenting simulations of complete ensembles of full reflector surface errors for the six different rigid panel positioning errors. The panel error sensitivity analysis revealed that the piston and tilt/tip errors are dominant while the other rigid errors are much less important. Furthermore, as indicated by the results, we conceived of an alternative Master-Slave Concept-based (MSC-based) active surface by implementing a special Series-Parallel Concept-based (SPC-based) hexapod as the active panel support mechanism. A new 30-m active reflector based on the two concepts was demonstrated to achieve correction for all six rigid panel positioning errors in an economically feasible way. (research papers)

  17. Classification of Rainbows

    Science.gov (United States)

    Ricard, J. L.; Peter, A. L.; Barckicke, J.

    2015-12-01

    CLASSIFICATION OF RAINBOWS. Jean Louis Ricard,1,2,* Peter Adams,2 and Jean Barckicke.2,3 Affiliations: 1CNRM, Météo-France, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France; 2CEPAL, 148 Himley Road, Dudley, West Midlands DY1 2QH, United Kingdom; 3DP/Compas, Météo-France, 42 Avenue Gaspard Coriolis, 31057 Toulouse, France. *Corresponding author: Dr_Jean_Ricard@yahoo.co.uk. Rainbows are the most beautiful and most spectacular optical atmospheric phenomenon. Humphreys (1964) pointedly noted that "the 'explanations' generally given of the rainbow [in textbooks] may well be said to explain beautifully that which does not occur, and to leave unexplained that which does" ... "The records of close observations of rainbows soon show that not even the colors are always the same". Textbooks stress that the main factor affecting the aspect of the rainbow is the radius of the water droplets. In his well-known textbook "The Nature of Light & Colour in the Open Air", Minnaert (1954) gives the chief features of the rainbow depending on the diameter of the drops producing it. For this study, we have gathered hundreds of pictures of primary bows and sorted them into classes. The classes are defined in such a way that rainbows belonging to the same class look similar. Our results are surprising and do not confirm Minnaert's classification. In practice, the size of the water droplets is only a minor factor controlling the overall aspect of the rainbow. The main factor appears to be the height of the sun above the horizon. At sunset, the width of the red band increases, while the width of the other bands of colours decreases. The orange, the violet, the blue and the green bands disappear completely, in this order. At the end, the primary bow is mainly red and slightly yellow. Picture: contrast-enhanced photograph of a primary bow (prepared by Andrew Dunn).

  18. Medical errors in hospitalized pediatric trauma patients with chronic health conditions

    Directory of Open Access Journals (Sweden)

    Xiaotong Liu

    2014-01-01

    Full Text Available Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions. Methods: The 2009 Kids’ Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions. Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate compared with patients without chronic conditions: 4.04 (95% confidence interval: 3.75–4.33) versus 1.07 (95% confidence interval: 0.98–1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21–1.5) and by 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48–1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for the length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups. Conclusion: Chronic conditions are associated with a significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.

  19. Classification data mining method based on dynamic RBF neural networks

    Science.gov (United States)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to use information technology to generate and collect data has improved greatly, and mining useful information or knowledge from large databases or data warehouses has become an urgent problem. Data mining (DM) technology has therefore developed rapidly to meet this need. However, DM often faces data that are noisy, disordered and nonlinear. Fortunately, artificial neural networks (ANNs) are well suited to these problems because of their robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper discusses the application of ANN methods in DM based on an analysis of various data mining technologies, and focuses in particular on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g. OLS), which generate much unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of an RBF network by minimizing the error cost, without any redundant retraining. Using the method proposed in this paper, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
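
    The contrast drawn above between batch OLS training and on-line learning can be illustrated with a per-sample gradient update of an RBF network's output weights, as in the hedged sketch below. The k-means centers, the common width, the learning rate and the Iris setup are assumptions for illustration, not the paper's ILA.

      # Hedged sketch (Python/scikit-learn): incremental training of RBF output weights.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import load_iris

      X, y = load_iris(return_X_y=True)
      Y = np.eye(3)[y]                                   # one-hot targets

      centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
      SIGMA = 1.0                                        # assumed common RBF width

      def hidden(x):
          d2 = np.sum((centers - x) ** 2, axis=1)
          return np.exp(-d2 / (2 * SIGMA ** 2))          # Gaussian basis activations

      W = np.zeros((10, 3))                              # output weights, trained on-line
      lr = 0.05
      rng = np.random.default_rng(0)
      for epoch in range(50):
          for i in rng.permutation(len(X)):              # stream samples one at a time
              h = hidden(X[i])
              err = Y[i] - h @ W                         # per-sample output error
              W += lr * np.outer(h, err)                 # incremental gradient step

      pred = np.argmax(np.array([hidden(x) for x in X]) @ W, axis=1)
      print("on-line RBF training accuracy on Iris:", np.mean(pred == y))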

  20. Integration of multi-array sensors and support vector machines for the detection and classification of organophosphate nerve agents

    Science.gov (United States)

    Land, Walker H., Jr.; Sadik, Omowunmi A.; Embrechts, Mark J.; Leibensperger, Dale; Wong, Lut; Wanekaya, Adam; Uematsu, Michiko

    2003-08-01

    Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. Furthermore, recent events have highlighted awareness that chemical and biological agents (CBAs) may become the preferred, cheap alternative WMD, because these agents can effectively attack large populations while leaving infrastructures intact. Despite the availability of numerous sensing devices, intelligent hybrid sensors that can detect and degrade CBAs are virtually nonexistent. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using parathion and dichlorvos as model simulant compounds. SVMs were used for the design and evaluation of new and more accurate data extraction, preprocessing and classification. Experimental results for the paradigms developed using Structural Risk Minimization show a significant increase in classification accuracy when compared to the existing AromaScan baseline system. Specifically, the results of this research have demonstrated that, for the Parathion versus Dichlorvos pair, when compared to the AromaScan baseline system: (1) a 23% improvement in the overall ROC Az index using the S2000 kernel, with similar improvements with the Gaussian and polynomial (of degree 2) kernels; (2) a significant 173% improvement in specificity with the S2000 kernel, meaning that the number of false negative errors was reduced by 173% while making no false positive errors, when compared to the AromaScan baseline performance; (3) the Gaussian and polynomial kernels demonstrated similar specificity at 100% sensitivity. All SVM classifiers provided essentially perfect classification performance for the Dichlorvos versus Trichlorfon pair. For the most difficult classification task, the Parathion versus
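
    As a hedged illustration of scoring SVM kernels with the ROC Az index mentioned above, the sketch below compares a degree-2 polynomial kernel and a Gaussian (RBF) kernel on synthetic two-class data standing in for the sensor-array responses; the data, kernel parameters and train/test split are assumptions for illustration only.

      # Hedged sketch (Python/scikit-learn): ROC Az (area under the ROC curve) for two SVM kernels.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=16, n_informative=8,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      for kern, params in [("poly", {"degree": 2}), ("rbf", {})]:
          clf = SVC(kernel=kern, **params).fit(X_tr, y_tr)
          az = roc_auc_score(y_te, clf.decision_function(X_te))   # ROC Az index
          print(f"{kern:4s} kernel: ROC Az = {az:.3f}")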