Sample records for relevant classification codes

  1. Product Work Classification and Coding


    detail is much more useful in planning steel welding processes. In this regard remember that mild steel, HSLA steel, and high-yield steel (e.g. HY80 ...manufacturing facility. In Figure 2.3-2, a classification and coding system for steel parts is shown. This classification and coding system sorts steel parts...system would provide a shop which produced steel parts with a means of organizing parts. Rather than attempting to manage all of its parts as a single

  2. Pairwise Document Classification for Relevance Feedback


    Pairwise Document Classification for Relevance Feedback Jonathan L. Elsas, Pinar Donmez, Jamie Callan, Jaime G. Carbonell Language Technologies...Collins-Thompson and J. Callan. Query expansion using random walk models. In CIKM ’05, page 711. ACM, 2005. [5] P. Donmez and J. Carbonell. Paired

  3. EAI-oriented information classification code system in manufacturing enterprises

    Junbiao WANG; Hu DENG; Jianjun JIANG; Binghong YANG; Bailing WANG


    Although the traditional information classification coding system in manufacturing enterprises (MEs) emphasizes the construction of code standards, it lacks management of code creation, code data transmission and so on. According to the demands of enterprise application integration (EAI) in manufacturing enterprises, an EAI-oriented information classification code system (EAIO-ICCS) is proposed. EAIO-ICCS expands the connotation of the information classification code system and assures the identity of codes in manufacturing enterprises through unified management of the codes from the viewpoint of their lifecycle.

  4. Quantitative information measurement and application for machine component classification codes

    LI Ling-Feng; TAN Jian-rong; LIU Bo


    The information embodied in machine component classification codes is internally related to the probability distribution of the code symbols. This paper presents a model that treats codes as an information source, based on Shannon's information theory. Using information entropy, it preserves the mathematical form and quantitatively measures the information amount of a symbol and of a bit in the machine component classification coding system. It also derives the maximum information amount and the corresponding coding scheme when the number of symbol categories is fixed. Examples are given to show how to evaluate the information amount of component codes and how to optimize a coding system.
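
    The entropy measure in this abstract is standard Shannon theory, so a small worked example is possible; the sample codes below are hypothetical, and the per-position estimator is a minimal sketch rather than the authors' full model.

```python
import math
from collections import Counter

def symbol_entropy(symbols):
    """Shannon entropy (bits) of one code position, estimated from
    the empirical distribution of the symbols observed there."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum(n / total * math.log2(n / total) for n in counts.values())

# Hypothetical first digits of ten component classification codes.
first_digits = list("1123122413")
h = symbol_entropy(first_digits)
print(f"H = {h:.3f} bits; maximum for 4 symbol categories = {math.log2(4):.3f}")
```

    For a fixed number of symbol categories the entropy is maximised by the uniform distribution, which is the sense in which a coding scheme carries the maximum information amount.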


    A. V. Oberemko


    This review presents a generalized definition of vesicles as bilayer extracellular organelles of all cellular forms of life: not only eukaryotic, but also prokaryotic. The structure and composition of extracellular vesicles, the history of their research, nomenclature, and their impact on life processes in health and disease are discussed. Moreover, vesicles may be useful as clinical instruments for biomarkers, and they are promising as biotechnological drugs. However, many questions in this area are still unresolved and need to be addressed in the future. From the point of view of practical health care, the most interesting direction is the study of the effect of exosomes and microvesicles on the development and progression of particular diseases, and the possibility of adjusting the pathological process by means of extracellular vesicles of a particular type acting as an active ingredient. Also relevant is the further elucidation of the role and importance of exosomes to the surrounding cells, tissues and organs at the molecular level, and the prospects for the use of extracellular vesicles as biomarkers of disease.

  6. Relevance theory: the Cognitive Pragmatic Foundation of Code-switching



    The paper will discuss the process of code-switching and its cognitive pragmatic motivation from the point of relevance. Code-switching is also regarded as a kind of communicative strategy. The process of the production of code-switching is also the cooperation and mutual constraint of the communicator's cognitive environment and ability. Cognitive effect can be obtained through the communicator's processing of the cognitive environment with their cognitive ability. In this process, the cooperation of cognitive ability and cognitive environment guarantees successful communication with code-switching.

  7. Complexity reduced coding of binary pattern units in image classification

    Kurmyshev, E. V.; Guillen-Bonilla, J. T.


    The ability to simulate and control complex physical situations in real time is an important element of many engineering and robotics applications, including pattern recognition and image classification. One way to meet the specific requirements of a process is to reduce the computational complexity of its algorithms. In this work we propose a new coding of binary pattern units (BPUs) that significantly reduces the time and space complexity of image classification algorithms. We apply this coding to a particular but important case: the coordinated clusters representation (CCR) of images. The coding exponentially reduces the dimension of the CCR feature space and, as a consequence, the time and space complexity of CCR-based image classification methods. In addition, the new coding preserves all the fundamental properties of the CCR that are successfully used in the recognition, classification and segmentation of texture images. The same approach to the coding of BPUs can be used in the Local Binary Pattern (LBP) method. In order to evaluate the reduction of time and space complexity, we ran an experiment on multiclass classification of images using the "traditional" and the new coding of the CCR. This test showed a very effective reduction of computing time and required computer memory with the new coding of the BPUs of the CCR, while retaining 100%, or very nearly 100%, of the classification efficiency.
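
    The record does not reproduce the proposed reduced coding, but the object being compressed is easy to show: below is a minimal sketch of the conventional 8-bit LBP-style coding of binary pattern units, the 256-code baseline whose feature-space dimension such methods set out to shrink (the image is random stand-in data).

```python
import numpy as np

def bpu_codes(img):
    """Conventional 8-neighbour binary pattern units: each 3x3 window
    is coded as an 8-bit number, giving a 256-code feature space."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # the 8 neighbours in a fixed clockwise order, one bit each
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    return codes

img = np.random.randint(0, 256, (64, 64))                   # stand-in texture
hist = np.bincount(bpu_codes(img).ravel(), minlength=256)   # 256-bin descriptor
```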

  8. Low Rank Sparse Coding for Image Classification


    Singapore 4 Institute of Automation, Chinese Academy of Sciences, P. R. China 5 University of Illinois at Urbana-Champaign, Urbana, IL USA Abstract In this...coding [36]. 1. Introduction The bag-of-words (BoW) model is one of the most popular models for feature design. It has been successfully applied to...defense of soft-assignment coding. In ICCV, 2011. [26] S. Liu, J. Feng, Z. Song, T. Zhang, H. Lu, C. Xu, and S. Yan. Hi, magic closet, tell me what to

  9. On the relevance of spectral features for instrument classification

    Nielsen, Andreas Brinch; Sigurdsson, Sigurdur; Hansen, Lars Kai


    Automatic knowledge extraction from music signals is a key component for most music organization and music information retrieval systems. In this paper, we consider the problem of instrument modelling and instrument classification from the rough audio data. Existing systems for automatic instrument classification normally operate on a relatively large number of features, among which those related to the spectrum of the audio signal are particularly relevant. In this paper, we confront two different models of the spectral characterization of musical instruments. The first assumes a constant envelope...

  10. Human Behavior Classification Using Multi-Class Relevance Vector Machine

    Yogameena, B.


    Problem statement: In computer vision and robotics, one of the typical tasks is to identify specific objects in an image and to determine each object's position and orientation relative to a coordinate system. This study presents a Multi-class Relevance Vector Machine (RVM) classification algorithm which classifies different human poses from a single stationary camera for video surveillance applications. Approach: First, the foreground blobs and their edges are obtained. Then the relevance vector machine classification scheme classifies normal and abnormal behavior. Results: The performance of the proposed method was compared with the Support Vector Machine (SVM) and the multi-class support vector machine. Experimental results showed the effectiveness of the method. Conclusion: It is evident that RVM has good accuracy and a lower computational cost than SVM.

  11. Automatic counterfeit protection system code classification

    Van Beusekom, Joost; Schreyer, Marco; Breuel, Thomas M.


    The wide availability of cheap, high-quality printing techniques makes document forgery a task that can easily be carried out by most people using standard computer and printing hardware. To prevent the use of color laser printers or color copiers for counterfeiting, e.g., of money or other valuable documents, many of these machines print Counterfeit Protection System (CPS) codes on the page. These small yellow dots encode information about the specific printer and allow the questioned document examiner, in cooperation with the manufacturers, to track down the printer that was used to generate the document. However, access to the methods to decode the tracking dot pattern is restricted. The exact decoding of a tracking pattern is often not necessary, as tracking the pattern down to the printer class may be enough. In this paper we present a method that detects which CPS pattern class was used in a given document. This can be used to identify the printer class that the document was printed on. Evaluation showed an accuracy of up to 91%.

  12. Convergent Validity of O*NET Holland Code Classifications

    Eggerth, Donald E.; Bowles, Shannon M.; Tunick, Roy H.; Andrew, Michael E.


    The interpretive ease and intuitive appeal of the Holland RIASEC typology have made it nearly ubiquitous in vocational guidance settings. Its incorporation into the Occupational Information Network (O*NET) has moved it another step closer to reification. This research investigated the rates of agreement between Holland code classifications from…

  13. Low-Rank Sparse Coding for Image Classification

    Zhang, Tianzhu


    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  14. Improving the coding and classification of ambulance data through the application of International Classification of Disease 10th revision.

    Cantwell, Kate; Morgans, Amee; Smith, Karen; Livingston, Michael; Dietze, Paul


    This paper aims to examine whether an adaptation of the International Classification of Disease (ICD) coding system can be applied retrospectively to final paramedic assessment data in an ambulance dataset, with a view to developing more fine-grained, clinically relevant case definitions than are available through point-of-call data. Over 1.2 million case records were extracted from the Ambulance Victoria data warehouse. Data fields included dispatch code, cause (CN) and final primary assessment (FPA). Each FPA was converted to an ICD-10-AM code using word matching or best fit. ICD-10-AM codes were then converted into Major Diagnostic Categories (MDC). CN was aligned with the ICD-10-AM codes for external cause of morbidity and mortality. The most accurate results were obtained when ICD-10-AM codes were assigned using information from both FPA and CN. Comparison of cases coded as unconscious at point-of-call with the associated paramedic assessment highlighted the extra clinical detail obtained when paramedic assessment data are used. Ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Coding of ambulance data using ICD-10-AM allows comparison not only among ambulance service users but also with other population groups. WHAT IS KNOWN ABOUT THE TOPIC? There is no reliable and standard coding and categorising system for paramedic assessment data contained in ambulance service databases. WHAT DOES THIS PAPER ADD? This study demonstrates that ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Representation of ambulance case types using ICD-10-AM-coded information obtained after paramedic assessment is more fine-grained and clinically relevant than point-of-call data, which use caller information gathered before ambulance attendance. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? This paper describes
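
    The word-matching-with-best-fit conversion described above can be illustrated with a toy sketch; the FPA strings and the mapping table are hypothetical stand-ins (the ICD-10 codes shown are real but chosen for illustration), not the study's actual table.

```python
import re

# Hypothetical mapping from free-text final primary assessments (FPA)
# to ICD-10-style codes; the real study used a much larger table.
FPA_TO_ICD = {
    "chest pain": "R07.4",
    "unconscious": "R40.2",
    "abdominal pain": "R10.4",
}

def code_fpa(fpa: str) -> str:
    words = set(re.findall(r"[a-z]+", fpa.lower()))
    for phrase, code in FPA_TO_ICD.items():        # exact word matching
        if set(phrase.split()) == words:
            return code
    # "best fit": fall back to the phrase sharing the most words
    best = max(FPA_TO_ICD, key=lambda p: len(words & set(p.split())))
    return FPA_TO_ICD[best] if words & set(best.split()) else "R69"

print(code_fpa("Pain, chest"))   # -> R07.4 (word order and punctuation ignored)
```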

  15. 78 FR 21612 - Medical Device Classification Product Codes; Guidance for Industry and Food and Drug...


    ... HUMAN SERVICES Food and Drug Administration Medical Device Classification Product Codes; Guidance for Industry and Food and Drug Administration Staff; Availability AGENCY: Food and Drug Administration, HHS... guidance entitled ``Medical Device Classification Product Codes.'' This document describes how device...

  16. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Mustafa Basthikodi


    Performance growth of single-core processors came to a halt in the past decade, and further gains have been re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with graphical processing units, have broadly empowered parallelism. Several compilers have been updated to cope with the emerging challenges of synchronization and threading. Appropriate program and algorithm classification will be of great advantage to software engineers in identifying opportunities for effective parallelization. In the present work we investigated current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structure matches different issues and performs the given tasks. We tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  17. Model classification rate control algorithm for video coding


    A model-classification rate control method for video coding is proposed. The macroblocks are classified according to their prediction errors, and different parameters are used in the rate-quantization and distortion-quantization models. The model parameters are calculated from the previous frame of the same type during coding. These models are used to estimate the relations among rate, distortion and quantization for the current frame. Further steps, such as R-D-optimization-based quantization adjustment and smoothing of the quantization of adjacent macroblocks, are used to improve quality. The experimental results show that the technique is effective and easy to implement. The method presented in this paper can serve well for MPEG and H.264 rate control.
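
    The record does not give the exact model forms, but rate-control schemes of this family commonly use the quadratic rate-quantization model; the sketch below solves that model for the quantizer of a macroblock, with per-class parameters that are purely hypothetical.

```python
import math

def quantiser_for_budget(mad, a, b, r_target):
    """Solve the classic quadratic R-Q model  R = a*MAD/Q + b*MAD/Q^2
    for the quantiser Q that meets a macroblock's target bit budget."""
    # quadratic in x = 1/Q:  b*MAD*x^2 + a*MAD*x - R = 0
    disc = (a * mad) ** 2 + 4.0 * b * mad * r_target
    x = (-a * mad + math.sqrt(disc)) / (2.0 * b * mad)
    return 1.0 / x

# Hypothetical per-class parameters fitted from the previous frame:
print(quantiser_for_budget(mad=8.0,  a=1.2, b=4.0, r_target=96))   # low-error class
print(quantiser_for_budget(mad=20.0, a=0.9, b=6.5, r_target=96))   # high-error class
```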

  18. On the classification of long non-coding RNAs

    Ma, Lina


    Long non-coding RNAs (lncRNAs) have been found to perform various functions in a wide variety of important biological processes. To make interpretation of lncRNA functionality easier and to enable deep mining of these transcribed sequences, it is convenient to classify lncRNAs into different groups. Here, we summarize classification methods for lncRNAs according to their four major features, namely: genomic location and context, effect exerted on DNA sequences, mechanism of functioning, and targeting mechanism. In combination with the presently available function annotations, we explore potential relationships between the different classification categories, and generalize and compare the biological features of the lncRNAs within each category. Finally, we present our view on potential further studies. We believe that the classifications of lncRNAs as indicated above are of fundamental importance for lncRNA studies, helpful for further investigation of specific lncRNAs, for the formulation of new hypotheses based on the different features of lncRNAs, and for exploration of the underlying lncRNA functional mechanisms. © 2013 Landes Bioscience.

  19. Classification of Histology Sections via Multispectral Convolutional Sparse Coding.

    Zhou, Yin; Chang, Hang; Barner, Kenneth; Spellman, Paul; Parvin, Bahram


    Image-based classification of histology sections plays an important role in predicting clinical outcomes. However, this task is very challenging due to the presence of large technical variations (e.g., fixation, staining) and biological heterogeneities (e.g., cell type, cell state). In the field of biomedical imaging, for the purposes of visualization and/or quantification, different stains are typically used for different targets of interest (e.g., cellular/subcellular events), which generates multi-spectrum data (images) through various types of microscopes and, as a result, provides the possibility of learning biological-component-specific features by exploiting multispectral information. We propose a multispectral feature learning model that automatically learns a set of convolution filter banks from separate spectra to efficiently discover the intrinsic tissue morphometric signatures, based on convolutional sparse coding (CSC). The learned feature representations are then aggregated through the spatial pyramid matching framework (SPM) and finally classified using a linear SVM. The proposed system has been evaluated using two large-scale tumor cohorts, collected from The Cancer Genome Atlas (TCGA). Experimental results show that the proposed model 1) outperforms systems utilizing sparse coding for unsupervised feature learning (e.g., PSD-SPM [5]); 2) is competitive with systems built upon features with biological prior knowledge (e.g., SMLSPM [4]).

  20. 48 CFR 19.303 - Determining North American Industry Classification System (NAICS) codes and size standards.


    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Determining North American Industry Classification System (NAICS) codes and size standards. 19.303 Section 19.303 Federal Acquisition... Classification System (NAICS) codes and size standards. (a) The contracting officer shall determine...

  1. Recent changes in Criminal Procedure Code and Indian Penal Code relevant to medical profession.

    Agarwal, Swapnil S; Kumar, Lavlesh; Mestri, S C


    Some sections of the Criminal Procedure Code and the Indian Penal Code have a direct bearing on the medical practitioner. With changing times, a few of them have been revised, and these changes are presented in this article.

  2. Behaviorally relevant burst coding in primary sensory neurons.

    Sabourin, Patrick; Pollack, Gerald S


    Bursts of action potentials in sensory interneurons are believed to signal the occurrence of particularly salient stimulus features. Previous work showed that bursts in an identified, ultrasound-tuned interneuron (AN2) of the cricket Teleogryllus oceanicus code for conspicuous increases in amplitude of an ultrasound stimulus, resulting in behavioral responses that are interpreted as avoidance of echolocating bats. We show that the primary sensory neurons that inform AN2 about high-frequency acoustic stimuli also produce bursts. As is the case for AN2, bursts in sensory neurons perform better as feature detectors than isolated, nonburst, spikes. Bursting is temporally correlated between sensory neurons, suggesting that on occurrence of a salient stimulus feature, AN2 will receive strong synaptic input in the form of coincident bursts, from several sensory neurons, and that this might result in bursting in AN2. Our results show that an important feature of the temporal structure of interneuron spike trains can be established at the earliest possible level of sensory processing, i.e., that of the primary sensory neuron.

  3. Relevance theory:the Cognitive Pragmatic Foundation of Code-switching



    The paper will discuss the process of code-switching and its cognitive pragmatic motivation from the point of relevance. Code-switching is also regarded as a kind of communicative strategy. The process of the production of code-switching is also the cooperation and mutual constraint of the communicator's cognitive environment and ability. Cognitive effect can be obtained through the communicator's processing of the cognitive environment with their cognitive ability. In this process, the cooperation of cognitive ability and cognitive environment guarantees successful communication with code-switching.

  4. Fisher's Discriminant and Relevant Component Analysis for static facial expression classification

    Sorci, Matteo; Antonini, Gianluca; Thiran, Jean-Philippe


    This paper addresses the issue of automatic classification of the six universal emotional categories (joy, surprise, fear, anger, disgust, sadness) in the case of static images. Appearance parameters are extracted by an active appearance model (AAM), representing the input for the classification step. We show how Relevant Component Analysis (RCA) in combination with Fisher's Linear Discriminant (FLD) provides a good "plug-&-play" classifier in the context of facial expression recognitio...

  5. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai


    Error Correcting Output Codes (ECOC) have emerged as one of the promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Despite this, finding a suitable multi-class decomposition scheme is still ongoing research in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, the method decomposes the classes into subsets by embedding a structure of binary trees. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We ran experiments on many multi-class UCI datasets. The experimental results show that our proposed method achieves better classification performance than the common ECOC design methods.
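
    The decoding half of any ECOC scheme, including tree-derived codes such as the one proposed here, is nearest-codeword assignment in Hamming distance. A minimal sketch with a hypothetical 4-class code matrix (the paper's tree-based construction is not reproduced):

```python
import numpy as np

def ecoc_decode(bit_predictions, code_matrix):
    """Assign each sample to the class whose codeword is nearest in
    Hamming distance to the concatenated binary predictions."""
    d = (bit_predictions[:, None, :] != code_matrix[None, :, :]).sum(-1)
    return d.argmin(axis=1)

# Hypothetical 4-class code matrix (rows: classes, columns: binary tasks).
M = np.array([[0, 0, 1, 1, 0],
              [0, 1, 0, 1, 1],
              [1, 0, 0, 0, 1],
              [1, 1, 1, 0, 0]])
preds = np.array([[0, 1, 0, 1, 0],    # one bit flipped from class 1's codeword
                  [1, 1, 1, 0, 0]])   # exact codeword of class 3
print(ecoc_decode(preds, M))          # -> [1 3]
```

    The farther apart the rows of the code matrix are in Hamming distance, the more binary-classifier errors the decoding step can absorb.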

  6. Course and Research Analysis Using a Coded Classification System.

    Lochstet, Gwenn S.


    A system of course analysis was developed and used to code and compare faculty research, courses, and library materials in the Mathematics, Physics, and Statistics departments of the University of South Carolina. The purpose is to provide a guide in determining how well the library's collection supports the academic needs of these departments. (10…

  7. Classification of working processes to facilitate occupational hazard coding on industrial trawlers

    Jensen, Olaf C; Stage, Søren; Noer, Preben


    BACKGROUND: Commercial fishing is an extremely dangerous economic activity. In order to describe the risks involved more accurately, a specific injury coding based on the working process was developed. METHOD: Observation on six different types of vessels was conducted and allowed a description and a classification of the principal working processes on all kinds of vessels, and a detailed classification for industrial trawlers. In industrial trawling, fish are landed for processing purposes, for example, for the production of fish oil and fish meal. The classification was subsequently used to code the injuries reported to the Danish Maritime Authority over a 5-year period. RESULTS: On industrial trawlers, 374 of 394 (95%) injuries were captured by the classification. Setting out and hauling in the gear and nets were the processes with the most injuries and accounted for 58.9% of all injuries...

  8. 32 CFR 1636.8 - Considerations relevant to granting or denying a claim for classification as a conscientious...


    ... claim for classification as a conscientious objector. 1636.8 Section 1636.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM CLASSIFICATION OF CONSCIENTIOUS OBJECTORS § 1636.8 Considerations relevant to granting or denying a claim for classification as a...

  9. A new classification code is available in the Danish health-care classification system for patients with symptoms related to chemicals and scents

    Elberling, Jesper; Bonde, Jens Peter Ellekilde; Vesterhauge, Søren;


    From July 2012, a classification code for multiple chemical sensitivity has been available in the Danish healthcare classification system. The overall purpose is to register hospital contacts in Denmark. The diagnostic code is labelled "Symptoms related to chemicals and scents", DR688A1, and classified as a subcategory to "Medically unexplained symptoms", DR688A, which is a specialization of the ICD-10 code "R68.8 Other specified general symptoms and signs". The classification was decided with reference to the present lack of scientific understanding.

  10. Coding of hyperspectral imagery using adaptive classification and trellis-coded quantization

    Abousleman, Glen P.


    A system is presented for compression of hyperspectral imagery. Specifically, DPCM is used for spectral decorrelation, while an adaptive 2D discrete cosine transform coding scheme is used for spatial decorrelation. Trellis coded quantization is used to encode the transform coefficients. Side information and rate allocation strategies are discussed. Entropy-constrained codebooks are designed using a modified version of the generalized Lloyd algorithm. This entropy constrained system achieves a compression ratio of greater than 70:1 with an average PSNR of the coded hyperspectral sequence approaching 41 dB.
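
    The spectral-decorrelation stage is the simplest piece to illustrate. The sketch below applies DPCM along the band axis of a synthetic cube; the actual system quantises these residuals with trellis-coded quantization, whereas this version is lossless for clarity.

```python
import numpy as np

def dpcm_spectral(cube):
    """DPCM along the spectral axis: keep band 0 as-is, then store each
    band as its difference from the previous band."""
    resid = cube.astype(np.int32).copy()
    resid[1:] -= cube[:-1].astype(np.int32)
    return resid

def dpcm_inverse(resid):
    return np.cumsum(resid, axis=0)   # exact reconstruction in this lossless sketch

cube = np.random.randint(0, 4096, (32, 16, 16))   # (bands, rows, cols), synthetic
assert np.array_equal(dpcm_inverse(dpcm_spectral(cube)), cube)
```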

  11. Local coding based matching kernel method for image classification.

    Yan Song

    This paper focuses on how to effectively and efficiently measure visual similarity for local-feature-based representations. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel-based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel-based metrics, in which a local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel-based metrics, and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  12. The Fisher Kernel Coding Framework for High Spatial Resolution Scene Classification

    Bei Zhao


    High spatial resolution (HSR) image scene classification is aimed at bridging the semantic gap between low-level features and high-level semantic concepts, which is a challenging task due to the complex distribution of ground objects in HSR images. Scene classification based on the bag-of-visual-words (BOVW) model is one of the most successful ways to acquire high-level semantic concepts. However, the BOVW model assigns local low-level features to their closest visual words in the "visual vocabulary" (the codebook, obtained by k-means clustering), which discards too many useful details of the low-level features in HSR images. In this paper, a feature coding method under the Fisher kernel (FK) coding framework is introduced to extend the BOVW model by characterizing the low-level features with a gradient vector instead of the count statistics of the BOVW model, which results in a significant decrease in the codebook size and an acceleration of the codebook learning process. By considering the differences in the distributions of ground objects in different regions of the images, local FK (LFK) is proposed for HSR image scene classification. The experimental results show that the proposed scene classification methods under the FK coding framework can greatly reduce the computational cost, and can obtain better scene classification accuracy than methods based on the traditional BOVW model.

  13. Classifying Obstructive and Nonobstructive Code Clones of Type I Using Simplified Classification Scheme: A Case Study

    Miroslaw Staron


    Code cloning is a part of many commercial and open source development products. Multiple methods for detecting code clones have been developed, and finding clones is often part of modern quality assurance tools in industry. There is no consensus on whether the detected clones are negative for the product, and therefore the detected clones are often left unmanaged in the product code base. In this paper we investigate how obstructive code clones of Type I (duplicated exact code fragments) are in large software systems, from the perspective of the quality of the product after release. We conduct a case study at Ericsson on three of its large products, which handle mobile data traffic. We show how to use automated analogy-based classification to decrease the classification effort required to determine whether a clone pair should be refactored or remain untouched. The automated method allows classifying 96% of Type I clones (both algorithms and data declarations), leaving the remaining 4% for manual classification. The results show that cloning is common in the studied commercial software, but that only 1% of these clones are potentially obstructive and can jeopardize the quality of the product if left unmanaged.

  14. Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data (SCED). NFES 2011-801

    National Forum on Education Statistics, 2011


    In this handbook, "Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data" (SCED), the National Center for Education Statistics (NCES) and the National Forum on Education Statistics have extended the existing secondary course classification system with codes and descriptions for courses offered at…

  15. Relevant Feature Integration and Extraction for Single-Trial Motor Imagery Classification

    Lili Li


    Brain computer interfaces provide a novel channel for communication between the brain and output devices. The effectiveness of a brain computer interface rests on the classification accuracy of single-trial brain signals. The common spatial pattern (CSP) algorithm is believed to be an effective algorithm for the classification of single-trial brain signals. However, the amplitude feature for spatial projection applied by this algorithm is based on a broad band-pass filter (mainly 5–30 Hz), in which the frequency band is often selected by experience, so the CSP is sensitive to noise and to the influence of other irrelevant information in the selected broad frequency band. In this paper, to improve the CSP, a novel relevant feature integration and extraction algorithm is proposed. Before projecting, we integrate the motor-relevant information to suppress the interference of noise and irrelevant information, as well as to improve the spatial difference for projection. The algorithm was evaluated on public datasets. It showed significantly better classification performance with single-trial electroencephalography (EEG) data, with accuracy increasing by 6.8% compared with the CSP.
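
    For reference, the CSP baseline that the proposed relevant-feature-integration step improves on can be written in a few lines; a minimal sketch follows, with random arrays standing in for band-passed EEG trials.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns via the generalised eigenproblem
    Ca w = l (Ca + Cb) w; returns the 2*n_pairs most discriminative filters."""
    def mean_cov(trials):  # trials: (n_trials, n_channels, n_samples)
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(ca, ca + cb)                  # eigenvalues ascending
    idx = np.hstack([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T                           # (2*n_pairs, n_channels)

# Stand-in data: 20 trials per class, 22 channels, 500 samples.
a = np.random.randn(20, 22, 500)
b = np.random.randn(20, 22, 500)
W = csp_filters(a, b)               # project a trial as W @ trial before features
```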

  16. A 10-digit geo-coding system for classification of geomorphosites in India

    Kale, Vishwas


    India is a country with rich geo-wealth and geoheritage. There are numerous fascinating and exquisite landforms and landscapes in the Indian subcontinent that have immense cultural, socio-economic and scientific value and are significant from the point of view of geotourism and geoeducation. Presently, India has 32 World Heritage Properties, including seven natural properties. The Geological Survey of India (GSI) has declared 26 geosites as National Geological Monuments. Although a few attempts have been made in the last ten years to identify and catalog noteworthy geomorphosites in India, to date no attempt has been made to undertake multi-criteria or multi-attribute assessment and classification of the potential geomorphosites. In view of the limitations and difficulties of the ranking and/or scoring systems adopted in many earlier studies on geoheritage sites, a simple ten-digit geo-coding system for some potential geomorphosites in India is suggested. The 10-digit coding system is a numerical scheme for the arrangement of geomorphosites on the basis of some key scientific-value criteria, additional-value criteria and management criteria, as well as the IUCN geo-theme codes and the code numbers assigned to major geomorphic provinces in a region or country. This coding system could be used to establish a classification and priority of geomorphosites and could be applied to any area or region in the world. The user-friendly geo-coding system has the potential to classify and sort geomorphosites of different character, origin and value.
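
    How the ten digits might be assembled is illustrated below; the field names and digit assignments are hypothetical, since the record does not spell out the exact digit semantics.

```python
# Illustrative only: assumed ten single-digit fields loosely following the
# criteria named in the abstract (geomorphic province, IUCN geo-theme,
# scientific value, additional value, management).
def geo_code(province, theme, scientific, additional, management):
    digits = [province, theme, *scientific, *additional, *management]
    assert len(digits) == 10 and all(0 <= d <= 9 for d in digits)
    return "".join(str(d) for d in digits)

print(geo_code(3, 7, [9, 8, 7], [6, 5], [4, 3, 2]))  # -> "3798765432"
```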

  17. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images

    Henry Joutsijoki


    The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good), which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variant classifiers. We use Scale-Invariant Feature Transform (SIFT) based features in classification. The best accuracy (62.4%) is obtained with the ternary complete ECOC coding design and the k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. The ECOC methods examined are promising techniques for solving this challenging problem.

  18. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images.

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti


    The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variant classifiers. We use Scale-Invariant Feature Transform (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. ECOC methods examined are promising techniques for solving this challenging problem.

  19. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Yan, Xiaozhen; Xie, Wu; Xu, Zhen


    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forest (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvements in band selection and classification accuracy.
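
    A minimal sketch of the greedy MRMR-difference search follows. Note two substitutions relative to the paper: scikit-learn's mutual information estimate replaces neighborhood mutual information for relevance, and absolute Pearson correlation approximates redundancy.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mrmr_difference(X, y, k):
    """Greedy forward selection maximising relevance(band; labels) minus
    the mean redundancy with already-selected bands (MRMR difference)."""
    relevance = mutual_info_classif(X, y)
    corr = np.abs(np.corrcoef(X, rowvar=False))     # redundancy proxy
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        rest = [i for i in range(X.shape[1]) if i not in selected]
        score = [relevance[i] - corr[i, selected].mean() for i in rest]
        selected.append(rest[int(np.argmax(score))])
    return selected

X = np.random.rand(200, 50)                 # toy "spectra": 200 samples, 50 bands
y = np.random.randint(0, 3, 200)            # toy class labels
print(mrmr_difference(X, y, k=5))
```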

  20. Multipath sparse coding for scene classification in very high resolution satellite imagery

    Fan, Jiayuan; Tan, Hui Li; Lu, Shijian


    With the rapid development of various satellite sensors, automatic and advanced scene classification techniques are urgently needed to process the huge amount of satellite image data. Recently, a few research works have started to implant sparse coding for feature learning in aerial scene classification. However, these previous works use single-layer sparse coding in their systems, and their performance is highly dependent on multiple low-level features, such as scale-invariant feature transform (SIFT) and saliency. Motivated by the importance of feature learning through multiple layers, we propose a new unsupervised feature learning approach for scene classification on very high resolution satellite imagery. The proposed unsupervised feature learning utilizes a multipath sparse coding architecture in order to capture multiple aspects of discriminative structures within complex satellite scene images. In addition, dense low-level features are extracted from the raw satellite data by using image patches of varying size at different layers, so the approach is not limited to particularly designed feature descriptors, in contrast with other related works. The proposed technique has been evaluated on two challenging high-resolution datasets, including the UC Merced dataset containing 21 different aerial scene categories with a 1 foot resolution and the Singapore dataset containing 5 land-use categories with a 0.5 m spatial resolution. Experimental results show that it outperforms the state-of-the-art that uses single-layer sparse coding. The major contributions of this proposed technique include (1) a new unsupervised feature learning approach to generate feature representations for very high-resolution satellite imagery, (2) the first multipath sparse coding that is used for scene classification in very high-resolution satellite imagery, (3) a simple low-level feature descriptor instead of many particularly designed low-level descriptor

  1. A Statistical Method without Training Step for the Classification of Coding Frame in Transcriptome Sequences.

    Carels, Nicolas; Frías, Diego


    In this study, we investigated the modalities of coding open reading frame (cORF) classification of expressed sequence tags (EST) by using the universal feature method (UFM). The UFM algorithm is based on the scoring of purine bias (Rrr) and stop codon frequencies. UFM classifies ORFs as coding or non-coding through a score based on 5 factors: (i) stop codon frequency; (ii) the product of the probabilities of purines occurring in the three positions of nucleotide triplets; (iii) the product of the probabilities of Cytosine (C), Guanine (G), and Adenine (A) occurring in the 1st, 2nd, and 3rd positions of triplets, respectively; (iv) the probabilities of a G occurring in the 1st and 2nd positions of triplets; and (v) the probabilities of a T occurring in the 1st and an A in the 2nd position of triplets. Because UFM is based on primary determinants of coding sequences that are conserved throughout the biosphere, it is suitable for cORF classification of any sequence in eukaryote transcriptomes without prior knowledge. Considering the protein sequences of the Protein Data Bank (RCSB PDB or more simply PDB) as a reference, we found that UFM classifies cORFs of ≥200 bp (if the coding strand is known) and cORFs of ≥300 bp (if the coding strand is unknown), and releases them in their coding strand and coding frame, which allows their automatic translation into protein sequences with a success rate equal to or higher than 95%. We first established the statistical parameters of UFM using ESTs from Plasmodium falciparum, Arabidopsis thaliana, Oryza sativa, Zea mays, Drosophila melanogaster, Homo sapiens and Chlamydomonas reinhardtii in reference to the protein sequences of PDB. Second, we showed that the success rate of cORF classification using UFM is expected to apply to approximately 95% of higher eukaryote genes that encode for proteins. Third, we used UFM in combination with CAP3 to assemble large EST samples into cORFs that we used to analyze transcriptome
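
    Two of the five UFM factors, stop-codon frequency and positional purine composition, are easy to illustrate; the sketch below computes these raw ingredients for each reading frame of a toy sequence (the full UFM score and its thresholds are not reproduced).

```python
STOPS = {"TAA", "TAG", "TGA"}

def frame_stats(seq, frame):
    """Stop-codon frequency and per-position purine (A/G) fractions for
    one reading frame: two raw ingredients of a UFM-style score."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    stop_freq = sum(c in STOPS for c in codons) / len(codons)
    purine = [sum(c[p] in "AG" for c in codons) / len(codons) for p in range(3)]
    return stop_freq, purine

seq = "ATGGCTGGAAGAGGCAGGTTCGGATAGGCA"   # toy sequence, not a real gene
for f in range(3):
    print(f, *frame_stats(seq, f))
```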

  2. Stochastic relevance analysis of epileptic EEG signals for channel selection and classification.

    Duque-Muñoz, L; Guerrero-Mosquera, C; Castellanos-Dominguez, G


    Time-frequency decompositions (TFDs) are well-known techniques that permit the extraction of useful information or features from EEG signals; it is then necessary to distinguish between irrelevant information and the features that effectively represent the underlying physiological phenomena, according to some evaluation measure. This work introduces a new method to obtain relevant features extracted from the time-frequency plane of epileptic EEG signals. In particular, EEG features are extracted by common spectral methods such as the short-time Fourier transform (STFT), wavelet transform and Empirical Mode Decomposition (EMD). Each method is then evaluated by Stochastic Relevance Analysis (SRA), which is further used for EEG classification and channel selection. The classification measures are carried out based on the performance of the k-NN classifier, while the channels selected are validated by visual inspection and topographic scalp maps. The study uses real multi-channel EEG data, and all the experiments have been supervised by an expert neurologist. The results obtained in this paper show that SRA is a good alternative for automatic seizure detection and also opens the possibility of formulating new criteria to select, classify or analyze abnormal EEG channels.

  3. Prognostic relevance of morphological classification models for myelodysplastic syndromes in an era of the revised International Prognostic Scoring System.

    van Spronsen, Margot F; Ossenkoppele, Gert J; Westers, Theresia M; van de Loosdrecht, Arjan A


    Numerous morphological classification models have been developed to organise the heterogeneous spectrum of myelodysplastic syndromes (MDS). While the 2008 update of the World Health Organisation (WHO) classification is the current standard, the publication of the revised International Prognostic Scoring System (IPSS-R) has illustrated the need for supplemental prognostic information. The aim of this study was to investigate whether morphological classification models for MDS - of both the French-American-British (FAB) group and the WHO - provide reliable criteria for their classification into homogeneous and clinically relevant categories with prognostic relevance beyond the IPSS-R. We reclassified 238 MDS patients using each of the FAB, WHO 2001 and WHO 2008 criteria and studied the classification categories in terms of clinical, haematological and cytogenetic features. Subsequently, we calculated prognostic scores using the IPSS-R and investigated whether the morphological classification models had significant prognostic value in patients stratified by the IPSS-R, and vice versa. By adopting the FAB, WHO 2001 and WHO 2008 classifications, MDS patients were organised into homogeneous categories with intrinsic prognostic information. However, whereas the morphological classification models showed no prognostic value beyond the IPSS-R, the IPSS-R had significant prognostic value beyond the FAB, WHO 2001 and WHO 2008 classifications. Even though morphological classification models for MDS might be clinically relevant from a prognostic point of view, their relevance in terms of risk stratification is evidently limited in light of the IPSS-R. Therefore, we suggest stopping the use of morphological classification models for MDS for risk stratification in routine clinical practice.

  4. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Fan Hu


    Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches rely strongly on the extraction of local features and a complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, equipped with an unsupervised feature learning technique, we first learn a set of optimal "filters" from large quantities of randomly sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map into an integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. Analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
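
    The convolve-binarise-hash-histogram pipeline described above can be sketched compactly; random filters stand in for the ones FBC learns from image patches, and the filter count, filter size and hashing details are assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def fbc_histogram(img, filters):
    """FBC-style descriptor: convolve with learned filters, binarise each
    response map at zero, pack the bits into an integer code per pixel,
    then histogram the codes over the whole scene."""
    code = np.zeros(img.shape, dtype=np.int64)
    for b, f in enumerate(filters):
        resp = convolve2d(img, f, mode="same")
        code |= (resp > 0).astype(np.int64) << b      # hashing step: bit-pack
    return np.bincount(code.ravel(), minlength=1 << len(filters))

img = np.random.randn(64, 64)                          # stand-in scene
filters = [np.random.randn(5, 5) for _ in range(8)]    # stand-ins for learned filters
hist = fbc_histogram(img, filters)                     # 256-bin scene descriptor
```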

  5. Accurate HEp-2 cell classification based on sparse bag of words coding.

    Ensafi, Shahab; Lu, Shijian; Kassim, Ashraf A; Tan, Chew Lim


    Autoimmune diseases (AD) are the abnormal response of the body's immune system to healthy tissues. ADs have generally been on the increase. Efficient computer-aided diagnosis of ADs through classification of human epithelial type 2 (HEp-2) cells has therefore become beneficial: such methods lower diagnosis costs and offer faster response and better diagnosis repeatability. In this paper, we present an automated HEp-2 cell image classification technique that exploits sparse coding of visual features together with the Bag of Words model (SBoW). In particular, SURF (Speeded Up Robust Features) and SIFT (Scale-Invariant Feature Transform) features are integrated to work in a complementary fashion, which greatly improves cell classification accuracy. Additionally, a hierarchical max-pooling method is proposed to aggregate the local sparse codes in different layers into the final feature vector. Furthermore, various parameters of the dictionary learning, including the dictionary size, the number of learning iterations, and the pooling strategy, are also investigated. Experiments conducted on publicly available datasets show that the proposed technique clearly outperforms state-of-the-art techniques at both the cell and specimen levels. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Accelerating Relevance-Vector-Machine-Based Classification of Hyperspectral Image with Parallel Computing

    Chao Dong


    Benefiting from the kernel trick and the sparsity property, the relevance vector machine (RVM) can acquire a sparse solution with a generalization ability equivalent to that of the support vector machine. The sparse solution requires much less time in prediction, making RVM a good candidate for classifying large-scale hyperspectral images. However, RVM is not widespread, owing to its slow training procedure. To address this problem, the classification of hyperspectral images using RVM is accelerated by parallel computing techniques in this paper. The parallelization is approached from the aspects of the multiclass strategy, the ensemble of multiple weak classifiers, and the matrix operations. The parallel RVMs are implemented using the C language plus the parallel functions of the linear algebra packages and the message passing interface library. The proposed methods are evaluated on the AVIRIS Indian Pines data set on a Beowulf cluster and on multicore platforms. The results show that the parallel RVMs noticeably accelerate the training procedure.
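
    Of the three parallelisation axes named above, the multiclass (one-vs-rest) strategy is the easiest to sketch. The paper's implementation is in C with MPI; below is a Python stand-in using joblib, and since scikit-learn ships no RVM, an SVC stands in for each binary learner.

```python
import numpy as np
from joblib import Parallel, delayed
from sklearn.svm import SVC

def fit_binary(X, y, cls):
    """Train one one-vs-rest binary classifier; an RVM would slot in here."""
    clf = SVC(probability=True).fit(X, (y == cls).astype(int))
    return cls, clf

X = np.random.rand(300, 20)            # toy "pixels": 300 samples, 20 bands
y = np.random.randint(0, 5, 300)       # toy class labels
# The binary subproblems are independent, so they train in parallel.
models = dict(Parallel(n_jobs=-1)(
    delayed(fit_binary)(X, y, c) for c in np.unique(y)))
```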

  7. Codon sextets with leading role of serine create "ideal" symmetry classification scheme of the genetic code.

    Rosandić, Marija; Paar, Vladimir


    The standard classification scheme of the genetic code is organized for alphabetic ordering of nucleotides. Here we introduce the new, "ideal" classification scheme in compact form, for the first time generated by codon sextets encoding Ser, Arg and Leu amino acids. The new scheme creates the known purine/pyrimidine, codon-anticodon, and amino/keto type symmetries and a novel A+U rich/C+G rich symmetry. This scheme is built from "leading" and "nonleading" groups of 32 codons each. In the ensuing 4 × 16 scheme, based on trinucleotide quadruplets, Ser has a central role as initial generator. Six codons encoding Ser and six encoding Arg extend continuously along a linear array in the "leading" group, and together with four of six Leu codons uniquely define construction of the "leading" group. The remaining two Leu codons enable construction of the "nonleading" group. The "ideal" genetic code suggests the evolution of genetic code with serine as an initiator. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Brain tumor classification and segmentation using sparse coding and dictionary learning.

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo


    This paper presents a novel fully automatic framework for multi-class brain tumor classification and segmentation using a sparse coding and dictionary learning method. The proposed framework consists of two steps: classification and segmentation. The classification of the brain tumors is based on brain topology and texture. The segmentation is based on voxel values of the image data. Using K-SVD, two types of dictionaries are learned from the training data and their associated ground truth segmentation: feature dictionary and voxel-wise coupled dictionaries. The feature dictionary consists of global image features (topological and texture features). The coupled dictionaries consist of coupled information: gray scale voxel values of the training image data and their associated label voxel values of the ground truth segmentation of the training data. For quantitative evaluation, the proposed framework is evaluated using different metrics. The segmentation results of the brain tumor segmentation (MICCAI-BraTS-2013) database are evaluated using five different metric scores, which are computed using the online evaluation tool provided by the BraTS-2013 challenge organizers. Experimental results demonstrate that the proposed approach achieves an accurate brain tumor classification and segmentation and outperforms the state-of-the-art methods.


    Gray, R. O. [Department of Physics and Astronomy, Appalachian State University, Boone, NC 28608 (United States); Corbally, C. J. [Vatican Observatory Research Group, Steward Observatory, Tucson, AZ 85721-0065 (United States); Cat, P. De [Royal Observatory of Belgium, Ringlaan 3, B-1180 Brussel (Belgium); Fu, J. N.; Ren, A. B. [Department of Astronomy, Beijing Normal University, 19 Avenue Xinjiekouwai, Beijing 100875 (China); Shi, J. R.; Luo, A. L.; Zhang, H. T.; Wu, Y.; Cao, Z.; Li, G. [Key Laboratory for Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Zhang, Y.; Hou, Y.; Wang, Y. [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China)


    The LAMOST-Kepler project was designed to obtain high-quality, low-resolution spectra of many of the stars in the Kepler field with the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST) spectroscopic telescope. To date 101,086 spectra of 80,447 objects over the entire Kepler field have been acquired. Physical parameters, radial velocities, and rotational velocities of these stars will be reported in other papers. In this paper we present MK spectral classifications for these spectra determined with the automatic classification code MKCLASS. We discuss the quality and reliability of the spectral types and present histograms showing the frequency of the spectral types in the main table organized according to luminosity class. Finally, as examples of the use of this spectral database, we compute the proportion of A-type stars that are Am stars, and identify 32 new barium dwarf candidates.

  10. The relevance of the International Classification of Functioning, Disability and Health (ICF) in monitoring and evaluating Community-based Rehabilitation (CBR).

    Madden, Rosamond H; Dune, Tinashe; Lukersmith, Sue; Hartley, Sally; Kuipers, Pim; Gargett, Alexandra; Llewellyn, Gwynnyth


    To examine the relevance of the International Classification of Functioning, Disability and Health (ICF) to CBR monitoring and evaluation by investigating the relationship between the ICF and information in published CBR monitoring and evaluation reports. A three-stage literature search and analysis method was employed. Studies were identified via online database searches for peer-reviewed journal articles, and hand-searching of CBR network resources, NGO websites and specific journals. From each study "information items" were extracted; extraction consistency among authors was established. Finally, the resulting information items were coded to ICF domains and categories, with consensus on coding being achieved. Thirty-six articles relating to monitoring and evaluating CBR were selected for analysis. Approximately one third of the 2495 information items identified in these articles (788 or 32%) related to concepts of functioning, disability and environment, and could be coded to the ICF. These information items were spread across the entire ICF classification with a concentration on Activities and Participation (49% of the 788 information items) and Environmental Factors (42%). The ICF is a relevant and potentially useful framework and classification, providing building blocks for the systematic recording of information pertaining to functioning and disability, for CBR monitoring and evaluation. Implications for Rehabilitation The application of the ICF, as one of the building blocks for CBR monitoring and evaluation, is a constructive step towards an evidence-base on the efficacy and outcomes of CBR programs. The ICF can be used to provide the infrastructure for functioning and disability information to inform service practitioners and enable national and international comparisons.

  11. Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification

    Baojiang Cui


    Code comparison technology plays an important role in the fields of software security protection and plagiarism detection. There are currently five main approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based and semantic-based. The first three approaches have their own limitations, while the syntax-based technique suffers from limited detection ability and low efficiency, so none of these approaches meets the requirements of large-scale software plagiarism detection. Based on our prior research, we propose a type-redefinition plagiarism detection algorithm, which can detect simple type redefinition, repeating-pattern redefinition, and redefinition of types with pointers. This paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
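
    As a rough illustration of syntax-tree comparison via hashing (a minimal sketch under assumptions of our own, not the authors' enhanced node-storage structure), subtrees of a parse tree can be hashed by structure alone and two fragments compared by their shared subtree hashes:

```python
# A minimal sketch (not the authors' implementation) of syntax-tree
# comparison via subtree hashing, using Python's ast module.
import ast

def subtree_hashes(source: str) -> set:
    """Parse source code and collect structural hashes of all subtrees.

    Node types (not identifier names) are hashed, which makes the
    comparison robust to simple renaming, analogous to normalising
    type redefinitions before comparison.
    """
    tree = ast.parse(source)
    hashes = set()

    def walk(node):
        # Represent a subtree by its node type plus child representations.
        parts = [type(node).__name__]
        for child in ast.iter_child_nodes(node):
            parts.append(walk(child))
        rep = "(" + ",".join(parts) + ")"
        hashes.add(hash(rep))
        return rep

    walk(tree)
    return hashes

def similarity(src_a: str, src_b: str) -> float:
    """Jaccard similarity of subtree hash sets: a crude plagiarism score."""
    a, b = subtree_hashes(src_a), subtree_hashes(src_b)
    return len(a & b) / len(a | b)

print(similarity("x = a + b", "y = c + d"))  # 1.0: identical structure
print(similarity("x = a + b", "x = a * b"))  # < 1.0: operator differs
```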

  12. A physiologically-inspired model of numerical classification based on graded stimulus coding

    John Pearson


    In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber's Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP) of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP) signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber's Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
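
    A minimal sketch of such a two-unit graded classifier (the response functions, noise level, bisection anchors 2 and 8, and learning rate below are assumptions for illustration, not the published model):

```python
# A minimal sketch (assumptions, not the published model): two units with
# monotonic graded responses to numerosity, a threshold classifier, and a
# reward-driven threshold update for a bisection task with anchors 2 and 8.
import numpy as np

rng = np.random.default_rng(0)

def unit_response(n, gain, noise=0.1):
    # Graded (monotonic) firing rate in log-numerosity, plus noise.
    return gain * np.log(n) + noise * rng.standard_normal()

def classify(n, theta):
    # One increasing and one decreasing unit, as reported for LIP populations.
    r_plus = unit_response(n, gain=+1.0)
    r_minus = unit_response(n, gain=-1.0)
    return "large" if (r_plus - r_minus) > theta else "small"

# Reward-maximising threshold learning: nudge theta after each trial.
theta, lr = 0.0, 0.05
for _ in range(2000):
    n = rng.choice([2, 3, 4, 6, 8])
    choice = classify(n, theta)
    correct = (choice == "large") == (n > 4)  # geometric midpoint of 2 and 8
    if not correct:
        theta += lr if choice == "large" else -lr

print("learned threshold:", round(theta, 3))
print({n: classify(n, theta) for n in [2, 3, 4, 6, 8]})
```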

  13. Classification of Electrocardiogram Signals With Extreme Learning Machine and Relevance Vector Machine

    S. Karpagachelvi


    The ECG is one of the most effective diagnostic tools for detecting cardiac diseases. It is a method to measure and record different electrical potentials of the heart. The electrical potential generated by electrical activity in cardiac tissue is measured on the surface of the human body. Current flow, in the form of ions, signals contraction of cardiac muscle fibers, leading to the heart's pumping action. ECG signals can be classified as normal or abnormal. In this paper, a thorough experimental study was conducted to show the superiority of the generalization capability of the Relevance Vector Machine (RVM) compared with the Extreme Learning Machine (ELM) approach in the automatic classification of ECG beats. The generalization performance of the ELM classifier has not achieved maximum accuracy in ECG signal classification. To achieve maximum accuracy, the RVM classifier is designed by searching for the best values of the parameters that tune its discriminant function, and upstream of that by searching for the best subset of features to feed the classifier. The experiments were conducted on ECG data from the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database to classify five kinds of abnormal waveforms and normal beats. In particular, the sensitivity of the RVM classifier is tested and compared with that of the ELM. Both approaches are compared on raw input data and on preprocessed data. The obtained results clearly confirm the superiority of the RVM approach when compared to traditional classifiers.
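
    Neither ELM nor RVM ships with scikit-learn; as a minimal sketch of the ELM side only (synthetic two-class features stand in for MIT-BIH beats; all parameters are illustrative assumptions), the classifier is just a fixed random hidden layer with a closed-form ridge readout:

```python
# A minimal Extreme Learning Machine sketch (an illustration, not the
# paper's setup): random hidden layer + ridge-regression readout.
# Synthetic two-class "beat" features stand in for MIT-BIH recordings.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: 200 feature vectors, 2 classes.
X = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(1.5, 1, (100, 10))])
y = np.array([0] * 100 + [1] * 100)

def elm_train(X, y, n_hidden=50, ridge=1e-2):
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden activations
    T = np.eye(2)[y]                                  # one-hot targets
    # Closed-form ridge solution for the output weights.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

W, b, beta = elm_train(X, y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```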

  14. Block truncation coding with color clumps: A novel feature extraction technique for content-based image classification



    The paper explores the principle of block truncation coding (BTC) as a means of feature extraction for content-based image classification. A variation of block truncation coding, named BTC with color clumps, has been implemented in this work to generate feature vectors. Classification performance with the proposed feature extraction technique has been compared to existing techniques. Two widely used public datasets, the Wang dataset and the Caltech dataset, have been used for analyses and comparisons of classification performance based on four different metrics. The study establishes BTC with color clumps as an effective alternative for feature extraction compared to existing methods. The experiments were carried out in RGB colorspace. Two different categories of classifiers, the K Nearest Neighbor (KNN) classifier and the RIDOR classifier, were used to measure classification performance. A paired t-test was conducted to establish the statistical significance of the findings. Evaluation of the classifier algorithms was done in receiver operating characteristic (ROC) space.
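
    For orientation, a sketch of the standard BTC step the method builds on (per-channel block means and two reconstruction levels; the "color clumps" variant itself is not reproduced here, and the image is a random stand-in):

```python
# A sketch of standard block truncation coding per channel (the baseline
# the paper builds on; the "color clumps" variant itself is not shown):
# each block is summarised by two reconstruction levels and a bitmap.
import numpy as np

def btc_block_features(channel: np.ndarray, block: int = 4):
    """Return per-block (low, high) reconstruction levels as a feature vector."""
    h, w = channel.shape
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = channel[i:i + block, j:j + block].astype(float)
            mean = blk.mean()
            bitmap = blk >= mean
            hi = blk[bitmap].mean() if bitmap.any() else mean
            lo = blk[~bitmap].mean() if (~bitmap).any() else mean
            feats.extend([lo, hi])
    return np.array(feats)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (16, 16, 3))           # stand-in RGB image
feature_vector = np.concatenate(
    [btc_block_features(rgb[:, :, c]) for c in range(3)]
)
print(feature_vector.shape)   # concatenated RGB feature vector for a classifier
```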

  15. Call for consistent coding in diabetes mellitus using the Royal College of General Practitioners and NHS pragmatic classification of diabetes

    Simon de Lusignan


    Background: The prevalence of diabetes is increasing with growing levels of obesity and an aging population. New practical guidelines for diabetes provide an applicable classification. Inconsistent coding of diabetes hampers the use of computerised disease registers for quality improvement, and limits the monitoring of disease trends. Objective: To develop a consensus set of codes that should be used when recording diabetes diagnostic data. Methods: The consensus approach was hierarchical, with a preference for diagnostic/disorder codes, to define each type of diabetes and non-diabetic hyperglycaemia, which were listed as being completely, partially or not readily mapped to available codes. The practical classification divides diabetes into type 1 (T1DM), type 2 (T2DM), genetic, other, unclassified and non-diabetic fasting hyperglycaemia. We mapped the classification to Read version 2, Clinical Terms version 3 and SNOMED CT. Results: T1DM and T2DM were completely mapped to appropriate codes. However, in other areas only partial mapping is possible. Genetics is a fast-moving field and there were considerable gaps in the available labels for genetic conditions; what the classification calls ‘other’ the coding system labels ‘secondary’ diabetes. The biggest gap was the lack of a code for diabetes where the type of diabetes was uncertain. Notwithstanding these limitations we were able to develop a consensus list. Conclusions: It is a challenge to develop codes that readily map to contemporary clinical concepts. However, clinicians should adopt the standard recommended codes and audit the quality of their existing records.

  16. A Crosswalk of Mineral Commodity End Uses and North American Industry Classification System (NAICS) codes

    Barry, James J.; Matos, Grecia R.; Menzie, W. David


    This crosswalk is based on the premise that there is a connection between the way mineral commodities are used and how this use is reflected in the economy. Raw mineral commodities are the basic materials from which goods, finished products, or intermediate materials are manufactured. Mineral commodities are vital to the development of the U.S. economy and they impact nearly every industrial segment of the economy, representing 12.2 percent of the U.S. gross domestic product (GDP) in 2010 (U.S. Bureau of Economic Analysis, 2014). In an effort to better understand the distribution of mineral commodities in the economy, the U.S. Geological Survey (USGS) attempts to link the end uses of mineral commodities to the corresponding North American Industry Classification System (NAICS) codes.

  17. Classification of melanoma lesions using sparse coded features and random forests

    Rastgoo, Mojdeh; Lemaître, Guillaume; Morel, Olivier; Massich, Joan; Garcia, Rafael; Meriaudeau, Fabrice; Marzani, Franck; Sidibé, Désiré


    Malignant melanoma is the most dangerous type of skin cancer, yet it is also the most treatable, provided it is diagnosed early, which remains a challenging task for clinicians and dermatologists. In this regard, CAD systems based on machine learning and image processing techniques have been developed to differentiate melanoma lesions from benign and dysplastic nevi using dermoscopic images. Generally, these frameworks are composed of sequential processes: pre-processing, segmentation, and classification. This architecture faces two main challenges: (i) each process is complex, requires a set of parameters to be tuned, and is specific to a given dataset; (ii) the performance of each process depends on the previous one, and errors accumulate throughout the framework. In this paper, we propose a framework for melanoma classification based on sparse coding which does not rely on any pre-processing or lesion segmentation. Our framework uses a Random Forests classifier and sparse representations of three features: SIFT, Hue and Opponent angle histograms, and RGB intensities. The experiments are carried out on the public PH2 dataset using 10-fold cross-validation. The results show that the sparse-coded SIFT feature achieves the highest performance, with sensitivity and specificity of 100% and 90.3% respectively, with a dictionary size of 800 atoms and a sparsity level of 2. Furthermore, the descriptor based on RGB intensities achieves similar results, with sensitivity and specificity of 100% and 71.3%, respectively, for a smaller dictionary size of 100 atoms. In conclusion, dictionary learning techniques encode strong structures of dermoscopic images and provide discriminant descriptors.
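
    A hedged sketch of the pipeline's general idea using scikit-learn (dictionary learning with OMP sparse coding at sparsity level 2, followed by a Random Forest; the data, dimensions, and remaining parameters below are placeholders, not the paper's):

```python
# A hedged sketch of the pipeline's idea using scikit-learn: learn a small
# dictionary, sparse-code descriptors (sparsity level 2), then train a
# Random Forest. Synthetic vectors stand in for SIFT/colour descriptors.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (80, 32)), rng.normal(1, 1, (80, 32))])
y = np.array([0] * 80 + [1] * 80)     # benign vs melanoma (stand-in labels)

# Dictionary of 100 atoms; codes constrained to 2 non-zero coefficients,
# mirroring the paper's small sparsity level.
dico = MiniBatchDictionaryLearning(
    n_components=100, transform_algorithm="omp",
    transform_n_nonzero_coefs=2, random_state=0,
)
codes = dico.fit(X).transform(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes, y)
print("training accuracy:", clf.score(codes, y))
```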

  18. Graphical Table of Contents for Library Collections: The Application of Universal Decimal Classification Codes to Subject Maps

    Victor Herrero-Solano


    The representation of information content by graphical maps is an extended ongoing research topic. The objective of this article is to verify whether it is possible to create map displays using Universal Decimal Classification (UDC) codes (using co-classification analysis) for the purpose of creating a graphical table of contents for a library collection. The application of UDC codes to subject map development was explored using the following graphic representation methods: (1) multidimensional scaling; (2) cluster analysis; and (3) neural networks (self-organizing maps). Finally, the authors conclude that the different kinds of maps have slightly different degrees of viability and types of application.
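
    A minimal sketch of the co-classification idea (an assumed workflow, not the authors' exact method: count co-assignments of UDC codes on the same records, convert co-occurrence to distances, and project with multidimensional scaling; the records and codes below are illustrative):

```python
# A minimal sketch of co-classification mapping: count how often UDC codes
# co-occur on the same record, turn co-occurrence into distances, and
# project the codes onto a 2D subject map with MDS.
import numpy as np
from sklearn.manifold import MDS

# Each record lists the UDC codes assigned to it (codes are illustrative).
records = [["004", "51"], ["004", "62"], ["51", "62"], ["004", "51"], ["7", "9"]]
codes = sorted({c for rec in records for c in rec})
idx = {c: i for i, c in enumerate(codes)}

cooc = np.zeros((len(codes), len(codes)))
for rec in records:
    for a in rec:
        for b in rec:
            if a != b:
                cooc[idx[a], idx[b]] += 1

# High co-occurrence -> small distance on the subject map.
dist = 1.0 / (1.0 + cooc)
np.fill_diagonal(dist, 0.0)

xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)
for code, (x, y) in zip(codes, xy):
    print(code, round(x, 2), round(y, 2))
```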

  19. Evaluation of potential emission spectra for the reliable classification of fluorescently coded materials

    Brunner, Siegfried; Kargel, Christian


    The conservation and efficient use of natural and especially strategic resources like oil and water have become global issues, which increasingly initiate environmental and political activities for comprehensive recycling programs. To effectively reutilize oil-based materials necessary in many industrial fields (e.g. chemical and pharmaceutical industry, automotive, packaging), appropriate methods for a fast and highly reliable automated material identification are required. One non-contacting, color- and shape-independent new technique that eliminates the shortcomings of existing methods is to label materials like plastics with certain combinations of fluorescent markers ("optical codes", "optical fingerprints") incorporated during manufacture. Since time-resolved measurements are complex (and expensive), fluorescent markers must be designed that possess unique spectral signatures. The number of identifiable materials increases with the number of fluorescent markers that can be reliably distinguished within the limited wavelength band available. In this article we shall investigate the reliable detection and classification of fluorescent markers with specific fluorescence emission spectra. These simulated spectra are modeled based on realistic fluorescence spectra acquired from material samples using a modern VNIR spectral imaging system. In order to maximize the number of materials that can be reliably identified, we evaluate the performance of 8 classification algorithms based on different spectral similarity measures. The results help guide the design of appropriate fluorescent markers, optical sensors and the overall measurement system.
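
    As a sketch of one spectral similarity measure of the kind evaluated here (the article compares eight such measures; the spectral angle below is a common choice, and the Gaussian reference spectra are stand-ins, not measured marker spectra):

```python
# A sketch of one spectral-similarity classifier: each emission spectrum is
# assigned to the reference marker with the smallest spectral angle.
import numpy as np

def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
    """Angle between two spectra; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(spectrum, references):
    angles = {name: spectral_angle(spectrum, ref) for name, ref in references.items()}
    return min(angles, key=angles.get)

wl = np.linspace(400, 800, 200)                      # wavelength grid in nm
gauss = lambda mu, s: np.exp(-((wl - mu) ** 2) / (2 * s ** 2))
references = {"marker_A": gauss(500, 20), "marker_B": gauss(600, 25)}

measured = gauss(505, 20) + 0.05 * np.random.default_rng(0).standard_normal(wl.size)
print(classify(measured, references))                # -> marker_A
```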

  20. A new coordination pattern classification to assess gait kinematics when utilising a modified vector coding technique.

    Needham, Robert A; Naemi, Roozbeh; Chockalingam, Nachiappan


    A modified vector coding (VC) technique was used to quantify lumbar-pelvic coordination during gait. The outcome measure from the modified VC technique is known as the coupling angle (CA), which can be classified into one of four coordination patterns. This study introduces a new classification for this coordination pattern that expands on a current data analysis technique by introducing the terms in-phase with proximal dominancy, in-phase with distal dominancy, anti-phase with proximal dominancy and anti-phase with distal dominancy. This proposed coordination pattern classification can offer an interpretation of the CA that provides either in-phase or anti-phase coordination information, along with an understanding of the direction of segmental rotations and of which segment is the dominant mover at each point in time. Classifying the CA against the newly defined coordination patterns and presenting this information in a traditional time-series format in this study has offered an insight into segmental range of motion. A new illustration is also presented which details the distribution of the CA within each of the coordination patterns and allows for the quantification of segmental dominancy. The proposed illustration technique can have important implications in presenting gait coordination data in an easily comprehensible fashion for clinicians and scientists alike.
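
    A sketch of the underlying computation (the coupling angle from frame-to-frame segment angle changes; the 45-degree bin boundaries below follow a common vector coding convention and are an assumption, not necessarily the authors' exact scheme):

```python
# A sketch of the modified vector coding computation: coupling angle from
# consecutive segment-angle changes, binned into coordination patterns.
import numpy as np

def coupling_angle(proximal: np.ndarray, distal: np.ndarray) -> np.ndarray:
    """Coupling angle (degrees, 0-360) between consecutive time points."""
    dp, dd = np.diff(proximal), np.diff(distal)
    return np.degrees(np.arctan2(dd, dp)) % 360.0

def pattern(ca: float) -> str:
    # 45-degree bins around the diagonals: in-phase near 45/225 degrees,
    # anti-phase near 135/315; dominancy bins lie on the axes.
    if 22.5 <= ca < 67.5 or 202.5 <= ca < 247.5:
        return "in-phase"
    if 112.5 <= ca < 157.5 or 292.5 <= ca < 337.5:
        return "anti-phase"
    if ca < 22.5 or ca >= 337.5 or 157.5 <= ca < 202.5:
        return "proximal dominancy"
    return "distal dominancy"

t = np.linspace(0, 1, 101)
pelvis = 10 * np.sin(2 * np.pi * t)          # proximal segment angle (deg)
lumbar = 8 * np.sin(2 * np.pi * t + 0.3)     # distal segment angle (deg)
ca = coupling_angle(pelvis, lumbar)
print([pattern(v) for v in ca[:5]])
```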

  1. Multiclass relevance vector machine classification to explore annual and seasonal dynamics of an invasive reed

    Zaman, B.; Torres, A.; McKee, M.


    Phragmites australis forms dense stands which shade native vegetation and alter the ecosystem. Information on the annual and seasonal dynamics of this plant contributes to the decision support system of wetland management. The study area is the Bear River Migratory Bird Refuge (BRMBR), which encompasses the Bear River and its delta where it flows into the northern part of the Great Salt Lake, Utah. Seasonal change detection was carried out between the months of June 2010 and September 2010. The imagery from June 2010 and July 2011 was used for annual change detection. The remote sensing data were acquired by AggieAir, an unmanned aerial vehicle (UAV) platform, flown autonomously via pre-programmed flight plans at low altitudes to limit atmospheric effects. This UAV acquires high-resolution multispectral images in the visible, near-infrared and thermal bands and has a flight interval of about 30 minutes. The reflectance values of the classes in wavebands 550, 650 and 850 nm were used to train the multiclass relevance vector machine (MCRVM) model developed to classify the imagery of the study area. There were a total of five classes (water, Phragmites australis, marshy land, mixed vegetation, and salt flats) and three attributes. The multiclass classification accuracies achieved for June 2010, September 2010 and July 2011 were 95.2%, 95% and 98.7%, respectively. The seasonal change detection indicated an average increase of 17% in the area of Phragmites, and the annual change detection results indicated an average increase of 110% from June 2010 to July 2011. Its rate of increase in distribution and abundance is alarming.

  2. RNA-CODE: a noncoding RNA classification tool for short reads in NGS data lacking reference genomes.

    Cheng Yuan

    The number of transcriptomic sequencing projects of various non-model organisms is still accumulating rapidly. As non-coding RNAs (ncRNAs) are highly abundant in living organisms and play important roles in many biological processes, identifying fragmentary members of ncRNAs in small RNA-seq data is an important step in post-NGS analysis. However, the state-of-the-art ncRNA search tools are not optimized for next-generation sequencing (NGS) data, especially for very short reads. In this work, we propose and implement a comprehensive ncRNA classification tool (RNA-CODE) for very short reads. RNA-CODE is specifically designed for ncRNA identification in NGS data that lack quality reference genomes. Given a set of short reads, our tool classifies the reads into different types of ncRNA families. The classification results can be used to quantify the expression levels of different types of ncRNAs in RNA-seq data and ncRNA composition profiles in metagenomic data, respectively. The experimental results of applying RNA-CODE to RNA-seq of Arabidopsis and a metagenomic data set sampled from human guts demonstrate that RNA-CODE competes favorably in both sensitivity and specificity with other tools. The source codes of RNA-CODE can be downloaded at

  3. [Classification and staging systems for hilar cholangiocarcinoma (Klatskin tumors): clinical application and practical relevance].

    Gavrilovici, V; Grecu, F; Seripcariu, V; Dragomir, Cr


    Hilar cholangiocarcinomas, or Klatskin tumors, were classified in 1975 by Henri Bismuth and Marvin B. Corlette, and this classification remains widely used in clinical practice. The authors present the TNM classification and the changes introduced by the sixth and seventh editions of the Union for International Cancer Control staging system regarding tumors of the proximal bile duct, and describe the Blumgart classification for tumors of this site. The usefulness of these systems is assessed in the light of the last six years' experience of the service.

  4. A full computation-relevant topological dynamics classification of elementary cellular automata

    Schüle, M.; Stoop, R.


    Cellular automata are both computational and dynamical systems. We give a complete classification of the dynamic behaviour of elementary cellular automata (ECA) in terms of fundamental dynamical system notions such as sensitivity and chaoticity. The "complex" ECA turn out to be sensitive, but not chaotic and not eventually weakly periodic. Based on this classification, we conjecture that elementary cellular automata capable of carrying out complex computations, such as needed for Turing-universal...
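
    For background, a minimal simulator of an elementary cellular automaton (an illustration only, not the paper's classification machinery), here run with rule 110, one of the "complex" ECA discussed:

```python
# A minimal elementary cellular automaton simulator, shown for rule 110.
import numpy as np

def eca_step(state: np.ndarray, rule: int) -> np.ndarray:
    """One synchronous update with periodic boundary conditions."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    neighbourhood = 4 * left + 2 * state + right        # values 0..7
    table = (rule >> np.arange(8)) & 1                  # rule's truth table
    return table[neighbourhood]

state = np.zeros(64, dtype=int)
state[32] = 1                                           # single seed cell
for _ in range(10):
    print("".join(".#"[v] for v in state))
    state = eca_step(state, rule=110)
```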

  5. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.


    Classification of large data sets into their respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, optimization tools can then be applied to those data. Classification and optimization have been used by researchers throughout their work, and the outcomes have been very encouraging. Here, the authors share their experience in three different areas of applied research.

  6. [Changes in the pleura of subjects occupationally-exposed to asbestos: radiological study technique, spectrum, etiological classification and coding according to the ILO classification].

    Wiebe, V; Müller, K M; Reichel, G


    Pleural abnormalities in 119 occupationally asbestos-exposed subjects with a prominent internal stripe of the lateral thoracic wall were analysed radiodiagnostically by plain films of the thorax in four views and by computed tomography in the course of medical experts' certification. Abnormalities were coded according to the 1980 ILO international classification of pneumoconioses. Just under half of the patients had pleural abnormalities caused by asbestos exposure: pleural plaques, "diffuse" pleural fibrosis, pleural effusions, organized pleural effusions and pleural tumors. The other half of the patients had pleural involvement from pulmonary and chest wall abnormalities, or variations of the lateral thoracic wall not related to asbestos exposure. The 1980 ILO classification of pneumoconioses proved to be inadequate for complete coding of the abnormalities, since only the postero-anterior plain film of the thorax may be used, since the normal appearance of the pleura is insufficiently defined, and since the entity of organized pleural effusion is lacking.

  7. Classification and coding of commercial fishing injuries by work processes: an experience in the Danish fresh market fishing industry

    Jensen, Olaf Chresten; Stage, Søren; Noer, Preben


    BACKGROUND: Work-related injuries in commercial fishing are of concern internationally. To better identify the causes of injury, this study coded occupational injuries by working processes in commercial fishing for fresh market fish. METHODS: A classification system of the work processes was developed, with up to 13 sub-categories of the work processes for each of the five different types of fishing. A total of 620 injury reports were reviewed and coded. Five percent (n = 33) of these were fatal injuries. The working processes were identified and coded according to the developed classification system. RESULTS: Injuries related to working with the gear and nets vary greatly between the different fishing methods. CONCLUSIONS: Coding of the injuries to the specific working processes allows for targeted prevention efforts.

  8. Classification of non-coding RNA using graph representations of secondary structure

    Karklin, Yan; Meraz, Richard F.; Holbrook, Stephen R.


    Some genes produce transcripts that function directly in regulatory, catalytic, or structural roles in the cell. These non-coding RNAs are prevalent in all living organisms, and methods that aid the understanding of their functional roles are essential. RNA secondary structure, the pattern of base-pairing, contains the critical information for determining the three-dimensional structure and function of the molecule. In this work we examine whether the basic geometric and topological properties of secondary structure are sufficient to distinguish between RNA families in a learning framework. First, we develop a labeled dual graph representation of RNA secondary structure by adding biologically meaningful labels to the dual graphs proposed by Gan et al. [1]. Next, we define a similarity measure directly on the labeled dual graphs using the recently developed marginalized kernels [2]. Using this similarity measure, we were able to train Support Vector Machine classifiers to distinguish RNAs of known families from random RNAs with similar statistics. For 22 of the 25 families tested, the classifier achieved better than 70% accuracy, with much higher accuracy rates for some families. Training a set of classifiers to automatically assign family labels to RNAs using a one-vs-all multi-class scheme also yielded encouraging results. From these initial learning experiments, we suggest that the labeled dual graph representation, together with kernel machine methods, has potential for use in automated analysis and classification of uncharacterized RNA molecules or efficient genome-wide screens for RNA molecules from existing families.

  9. An analysis of feature relevance in the classification of astronomical transients with machine learning methods

    D'Isanto, Antonio; Brescia, Massimo; Donalek, Ciro; Longo, Giuseppe; Riccio, Giuseppe; Djorgovski, Stanislav G


    The exploitation of present and future synoptic (multi-band and multi-epoch) surveys requires an extensive use of automatic methods for data processing and data interpretation. In this work, using data extracted from the Catalina Real Time Transient Survey (CRTS), we investigate the classification performance of some well-tested methods: Random Forest, MLPQNA (Multi Layer Perceptron with Quasi Newton Algorithm) and K-Nearest Neighbors, paying special attention to the feature selection phase. In order to do so, several classification experiments were performed, namely the identification of cataclysmic variables, the separation between galactic and extra-galactic objects, and the identification of supernovae.
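
    As a hedged sketch of the kind of feature-selection step described (a generic Random Forest importance ranking on synthetic stand-in features; not the authors' exact procedure or data):

```python
# Rank stand-in "light-curve features" by Random Forest importance and
# keep the strongest ones (generic recipe, not the paper's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins: two informative feature columns and three noise columns.
X = rng.standard_normal((n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # e.g. CV vs non-CV labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]
print("features ranked by importance:", ranked)
print("importances:", np.round(rf.feature_importances_[ranked], 3))
```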

  10. Comparisons of hadrontherapy-relevant data to nuclear interaction codes in the Geant4 toolkit

    Braunn, B.; Boudard, A.; Colin, J.; Cugnon, J.; Cussol, D.; David, J. C.; Kaitaniemi, P.; Labalme, M.; Leray, S.; Mancusi, D.


    Comparisons between experimental data, INCL and other nuclear models available in the Geant4 toolkit are presented. The data used for the comparisons come from a fragmentation experiment performed at the GANIL facility. The main purpose of this experiment was to measure production rates and angular distributions of particles emitted from the collision of a 95 AMeV 12C beam with thick PMMA (plastic) targets. The latest version of the Intranuclear Cascade of Liège (INCL) code, extended to nucleus-nucleus collisions for ion beam therapy applications, is described. This code, as well as JQMD and the Geant4 binary cascade, has been compared with these hadrontherapy-oriented experimental data. The results exhibit an overall qualitative agreement between the models and the experimental data. However, at a quantitative level, none of these three models manages to reproduce all the data precisely. The nucleus-nucleus extension of INCL, which is not yet predictive enough for ion beam therapy applications, has nevertheless proven to be competitive with other nuclear collision codes.

  11. Simulations of 3D LPIs relevant to IFE using the PIC code OSIRIS

    Tsung, F. S.; Mori, W. B.; Winjum, B. J.


    We will study three-dimensional effects of laser plasma instabilities, including backward Raman scattering, the high-frequency hybrid instability, and the two-plasmon instability, using OSIRIS in 3D Cartesian geometry and cylindrical 2D OSIRIS with azimuthal mode decompositions. With our new capabilities we hope to demonstrate that we are capable of studying single-speckle physics relevant to IFE in an efficient manner.

  12. A comparative study of surface EMG classification by fuzzy relevance vector machine and fuzzy support vector machine.

    Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei


    We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance in classifying multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method which avoids some limitations of the support vector machine (SVM). However, RVM still suffers from the difficulty of possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems, based on experiments conducted on seven healthy subjects and two amputees performing six hand motions. Two feature sets, namely AR model coefficients with root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for wide comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample sizes. FRVM yielded comparable classification accuracy with dramatically fewer support vectors in comparison with FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst the training of FSVM was much faster than that of FRVM. The results indicate that an FRVM classifier trained using sufficient samples can achieve generalization capability comparable to FSVM with significant sparsity in multi-channel sEMG classification, which is more suitable for sEMG-based real-time control applications.

  13. ncRNA-class Web Tool: Non-coding RNA feature extraction and pre-miRNA classification web tool

    Kleftogiannis, Dimitrios A.


    Until recently, it was commonly accepted that most genetic information is transacted by proteins. Recent evidence suggests that the majority of the genomes of mammals and other complex organisms are in fact transcribed into non-coding RNAs (ncRNAs), many of which are alternatively spliced and/or processed into smaller products. Non-coding RNA gene analysis requires the calculation of several sequential, thermodynamical and structural features. Many independent tools have already been developed for the efficient calculation of such features, but to the best of our knowledge there does not exist any integrative approach for this task. The most significant amount of existing work is related to the miRNA class of non-coding RNAs. MicroRNAs (miRNAs) are small non-coding RNAs that play a significant role in gene regulation, and their prediction is a challenging bioinformatics problem. The Non-coding RNA feature extraction and pre-miRNA classification Web Tool (ncRNA-class Web Tool) is a publicly available web tool ( ) which provides a user-friendly and efficient environment for the effective calculation of a set of 58 sequential, thermodynamical and structural features of non-coding RNAs, plus a tool for the accurate prediction of miRNAs. © 2012 IFIP International Federation for Information Processing.

  14. The National Ecosystem Services Classification System: A Framework for Identifying and Reducing Relevant Uncertainties

    Rhodes, C. R.; Sinha, P.; Amanda, N.


    In recent years the gap between what scientists know and what policymakers should appreciate in environmental decision making has received more attention, as the costs of the disconnect have become more apparent to both groups. Particularly for water-related policies, the EPA's Office of Water has struggled with benefit estimates held low by the inability to quantify ecological and economic effects that theory, modeling, and anecdotal or isolated case evidence suggest may prove to be larger. Better coordination with ecologists and hydrologists is being explored as a solution. The ecosystem services (ES) concept, now nearly two decades old, links ecosystem functions and processes to the human value system. But there remains no clear mapping of which ecosystem goods and services affect which individual or economic values. The National Ecosystem Services Classification System (NESCS, 'nexus') project brings together ecologists, hydrologists, and social scientists to do this mapping for aquatic and other ecosystem service-generating systems. The objective is to greatly reduce the uncertainty in water-related policy making by mapping, and ultimately quantifying, the various functions and products of aquatic systems, as well as how changes to aquatic systems impact the human economy and individual levels of non-monetary appreciation for those functions and products. Primary challenges to fostering interaction between scientists, social scientists, and policymakers are the lack of a common vocabulary and the need for a cohesive, comprehensive framework that organizes concepts across disciplines and accommodates scientific data from a range of sources. NESCS builds the vocabulary and the framework so both may inform a scalable transdisciplinary policy-making application. This talk presents for discussion the process and progress in developing both this vocabulary and a classifying framework capable of bridging the gap between a newer but existing ecosystem services classification

  15. Pelvic Arterial Anatomy Relevant to Prostatic Artery Embolisation and Proposal for Angiographic Classification

    Assis, André Moreira de; Moreira, Airton Mota; Paula Rodrigues, Vanessa Cristina de [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil)]; Harward, Sardis Honoria [The Dartmouth Center for Health Care Delivery Science (United States)]; Antunes, Alberto Azoubel; Srougi, Miguel [University of Sao Paulo Medical School, Urology Department (Brazil)]; Carnevale, Francisco Cesar [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil)]


    Purpose: To describe and categorize the angiographic findings regarding prostatic vascularization, propose an anatomic classification, and discuss its implications for the PAE procedure. Methods: Angiographic findings from 143 PAE procedures were reviewed retrospectively, and the origin of the inferior vesical artery (IVA) was classified into five subtypes as follows: type I: IVA originating from the anterior division of the internal iliac artery (IIA), from a common trunk with the superior vesical artery (SVA); type II: IVA originating from the anterior division of the IIA, inferior to the SVA origin; type III: IVA originating from the obturator artery; type IV: IVA originating from the internal pudendal artery; and type V: less common origins of the IVA. Incidences were calculated by percentage. Results: Two hundred eighty-six pelvic sides (n = 286) were analyzed, and 267 (93.3%) were classified into types I–IV. Among them, the most common origin was type IV (n = 89, 31.1%), followed by type I (n = 82, 28.7%), type III (n = 54, 18.9%), and type II (n = 42, 14.7%). Type V anatomy was seen in 16 cases (5.6%). Double vascularization, defined as two independent prostatic branches on one pelvic side, was seen in 23 cases (8.0%). Conclusions: Despite the large number of possible anatomical variations of the male pelvis, four main patterns corresponded to almost 95% of the cases. Evaluating the anatomy in a systematic fashion, following a standard classification, will make PAE a faster, safer, and more effective procedure.

  16. Translocation Properties of Primitive Molecular Machines and Their Relevance to the Structure of the Genetic Code

    Aldana, Maximino; Cocho, Germinal; Larralde, Hernán; Martínez-Mekler, Gustavo


    We address the question, related to the origin of the genetic code, of why there are three bases per codon in the translation-to-protein process. As a follow-up to our previous work, we approach this problem by considering the translocation properties of primitive molecular machines, which capture basic features of ribosomal/messenger RNA interactions while operating under prebiotic conditions. Our model consists of a short one-dimensional chain of charged particles (rRNA antecedent) interacting with a polymer (mRNA antecedent) via electrostatic forces. The chain is subject to external forcing that causes it to move along the polymer, which is fixed in a quasi-one-dimensional geometry. Our numerical and analytic studies of statistical properties of random chain/polymer potentials suggest that, under very general conditions, a dynamics is attained in which the chain moves along the polymer in steps of three monomers. By adjusting the model in order to consider present-day genetic sequences, we show that the ab...

  17. Classification of fruits based on anthocyanin types and relevance to their health effects.

    Fang, Jim


    Anthocyanins are a group of water-soluble pigments that confer the blue, purple, and red color to many fruits. Anthocyanin-rich fruits can be divided into three groups based on the types of aglycones of their anthocyanins: the pelargonidin group, the cyanidin/peonidin group, and the multiple-anthocyanidins group. Some fruits contain a major anthocyanin type and can serve as useful research tools. Cyanidin glycosides and peonidin glycosides can be metabolically converted to each other by methylation and demethylation. Both cyanidin and peonidin glycosides can be metabolized to protocatechuic acid and vanillic acid. Pelargonidin-3-glucoside is metabolized to 4-hydroxybenzoic acid. On the other hand, phenolic acid metabolites of delphinidin, malvidin, and petunidin glycosides are unstable and can be further fragmented into smaller molecules. A literature review indicates that berries with higher cyanidin content, such as black raspberries, chokeberries, and bilberries, are more likely to produce an anti-inflammatory effect. This observation seems to be consistent with the hypothesis that one or more stable phenolic acid metabolites contribute to the anti-inflammatory effects of anthocyanin-rich fruits. More studies are needed before we can conclude that fruits rich in cyanidin, peonidin, or pelargonidin glycosides have better anti-inflammatory effects. Additionally, fruit polyphenols other than anthocyanins could contribute to their anti-inflammatory effects. Furthermore, blueberries could exert their health effects through other mechanisms, such as improving intestinal microbiota composition. In summary, this classification system can facilitate our understanding of the absorption and metabolic processes of anthocyanins and the health effects of different fruits.

  18. Classification

    Clary, Renee; Wandersee, James


    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  19. Image segmentation and classification of white blood cells with the extreme learning machine and the fast relevance vector machine.

    Ravikumar, S


    White blood cells (WBCs) or leukocytes are an important part of the body's defense against infectious organisms and foreign substances. WBC segmentation is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. Standard ELM classification techniques have been used for WBC segmentation, but the generalization performance of the ELM classifier has not achieved the maximum attainable accuracy of image segmentation. This paper gives a novel technique for WBC detection based on the fast relevance vector machine (Fast-RVM). First, remarkably sparse relevance vectors (RVs) are obtained by fitting the histogram with the RVM. Next, the required threshold value is selected directly from these few RVs. Finally, the entire connective WBC regions are segmented from the original image. The proposed method works successfully for WBC detection, and effectively reduces the effects brought about by illumination and staining. To achieve the maximum accuracy of the RVM classifier, we search for the best values of the parameters that tune its discriminant function, and upstream of that for the best subset of features to feed the classifier. The proposed RVM method therefore works effectively for WBC detection, reducing computational time while preserving the images.

  20. Use of the Coding Causes of Death in HIV in the classification of deaths in Northeastern Brazil.

    Alves, Diana Neves; Bresani-Salvi, Cristiane Campello; Batista, Joanna d'Arc Lyra; Ximenes, Ricardo Arraes de Alencar; Miranda-Filho, Demócrito de Barros; Melo, Heloísa Ramos Lacerda de; Albuquerque, Maria de Fátima Pessoa Militão de


    To describe the coding process of causes of death for people living with HIV/AIDS, and to classify deaths as related or unrelated to immunodeficiency by applying the Coding Causes of Death in HIV (CoDe) system. A cross-sectional study that codifies and classifies the causes of deaths occurring in a cohort of 2,372 people living with HIV/AIDS, monitored between 2007 and 2012 in two specialized HIV care services in Pernambuco. The causes of death already codified according to the International Classification of Diseases were recoded and classified as deaths related or unrelated to immunodeficiency by the CoDe system. We calculated the frequencies of the CoDe codes for the causes of death in each classification category. There were 315 (13%) deaths during the study period; 93 (30%) were caused by an AIDS-defining illness on the Centers for Disease Control and Prevention list. A total of 232 deaths (74%) were related to immunodeficiency after application of the CoDe. Infections were the most common cause, both related (76%) and unrelated (47%) to immunodeficiency, followed by malignancies (5%) in the first group, and external causes (16%), malignancies (12%) and cardiovascular diseases (11%) in the second group. Tuberculosis comprised 70% of the immunodeficiency-defining infections. Opportunistic infections and diseases of aging were the most frequent causes of death, adding multiple disease burdens on health services. The CoDe system increases the probability of classifying deaths more accurately in people living with HIV/AIDS.

  1. Proteomic signatures reveal a dualistic and clinically relevant classification of anal canal carcinoma.

    Herfs, Michael; Longuespée, Rémi; Quick, Charles M; Roncarati, Patrick; Suarez-Carmona, Meggy; Hubert, Pascale; Lebeau, Alizée; Bruyere, Diane; Mazzucchelli, Gabriel; Smargiasso, Nicolas; Baiwir, Dominique; Lai, Keith; Dunn, Andrew; Obregon, Fabiola; Yang, Eric J; Pauw, Edwin De; Crum, Christopher P; Delvenne, Philippe


    Aetiologically linked to HPV infection, malignancies of the anal canal have substantially increased in incidence over the last 20 years. Although most anal squamous cell carcinomas (SCCs) respond well to chemoradiotherapy, about 30% of patients experience a poor outcome, for undetermined reasons. Despite cumulative efforts to discover independent predictors of overall survival, both nodal status and tumour size are still the only reliable factors predicting patient outcome. Recent efforts have revealed that the biology of HPV-related lesions in the cervix is strongly linked to the originally infected cell population. To address the hypothesis that topography also influences both the gene expression profile and behaviour of anal (pre)neoplastic lesions, we correlated both proteomic signatures and clinicopathological features of tumours arising from two distinct portions of the anal canal: the lower part (squamous zone) and the more proximal anal transitional zone. Although microdissected cancer cells appeared indistinguishable by morphology (squamous phenotype), unsupervised clustering analysis of the whole proteome significantly highlighted the heterogeneity that exists within anal canal tumours. More importantly, two region-specific subtypes of SCC were revealed. The expression profile (sensitivity/specificity) of several selected biomarkers (keratin filaments) further confirmed the subclassification of anal (pre)cancers based on their cellular origin. Less commonly detected than their counterparts located in the squamous mucosa, SCCs originating in the transitional zone more frequently displayed poor or basaloid differentiation, and were significantly correlated with reduced disease-free and overall survival. Taken together, we present direct evidence that anal canal SCC comprises two distinct entities with different cells of origin, proteomic signatures, and survival rates. This study forms the basis for a dualistic classification of anal carcinoma.

  2. Detection of Inpatient Health Care Associated Injuries: Comparing Two ICD-9-CM Code Classifications


    example, currently the UTIDs database has nine fields for ICD-9-CM diagnosis codes plus a single additional field for an E-code. Some patients... information for physicians on sub-acute thromboses (SAT) and hypersensitivity reactions with use of the Cordis CYPHER™ sirolimus-eluting coronary stent

  3. What Is the International Classification of Functioning, Disability and Health and Why Is It Relevant to Audiology?

    Meyer, Carly; Grenness, Caitlin; Scarinci, Nerina; Hickson, Louise


    The World Health Organization's International Classification of Functioning, Disability and Health (ICF) is widely used in disability and health sectors as a framework to describe the far-reaching effects of a range of health conditions on individuals. This biopsychosocial framework can be used to describe the experience of an individual in the components of body functions, body structures, and activities and participation, and it considers the influence of contextual factors (environmental and personal) on these components. Application of the ICF in audiology allows the use of a common language between health care professionals in both clinical and research settings. Furthermore, the ICF is promoted as a means of facilitating patient-centered care. In this article, the relevance and application of the ICF to audiology is described, along with clinical examples of its application in the assessment and management of children and adults with hearing loss. Importantly, the skills necessary for clinicians to apply the ICF effectively are discussed.

  4. Classification

    Hjørland, Birger


    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification, including the influences of Aristotle and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, and hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly discussed.

  5. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)


    the proposition of a weight for averaging CDMA codes. This weighting function is referred to in this discussion as the probability of the code matrix... Given a likelihood function of a multivariate Gaussian stochastic process (12), one can assume the values L and U and try to estimate the parameters... such forms as the average of the exponential functions were formulated. Averaging over a weight that depends on the TSC behaves as a filtering process where

  6. Validity of the International Classification of Diseases 10th revision code for hospitalisation with hyponatraemia in elderly patients

    Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X


    Objective: To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design: Population-based retrospective validation study. Setting: Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants: Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499). Main outcome measures: Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard); median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results: The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions: The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. However, these codes underestimate the true incidence of hyponatraemia.
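
    For reference, the reported validity measures can all be reproduced from a 2x2 confusion table; the helper below uses illustrative counts, not the study's data:

```python
# Compute the four validity measures from a 2x2 confusion table
# (tp/fp/fn/tn counts below are illustrative, not the study's data).
def validity(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # code positive among true hyponatraemia
        "specificity": tn / (tn + fp),   # code negative among true normals
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

print(validity(tp=75, fp=3, fn=925, tn=9000))
```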

  7. Classification of \\textit{sum-networks} based on network coding capacity

    Rai, Brijesh Kumar


    We consider a wireline directed acyclic network having $m$ sources and $n$ terminals where every terminal wants to recover the sum of symbols generated at all the sources. We call such a network a sum-network. The symbols are generated from a finite field. We show that the network coding capacity for a sum-network is upper bounded by the minimum of the min-cut capacities of all source-terminal pairs. We call this upper bound the min-cut bound. We show that the min-cut bound is always achievable when $n=1$. Moreover, scalar linear network coding is sufficient to achieve the min-cut bound. For the case $m = 2, n \geq 2$ or $m \geq 2, n = 2$, the network coding capacity is known to be equal to the min-cut bound when the min-cut bound is 1. For a min-cut bound greater than 1, we give a lower bound on the network coding capacity. For the case $m \geq 3, n \geq 3$, we show that there exist sum-networks where the min-cut bound on the network coding capacity is not achievable. For this class, whe...
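
    A small illustration of the min-cut bound (a toy directed acyclic graph with illustrative capacities, computed with networkx; not an example from the paper):

```python
# The network coding capacity of a sum-network is upper bounded by the
# smallest min-cut over all source-terminal pairs (toy graph below).
import networkx as nx

G = nx.DiGraph()
edges = [("s1", "a", 1), ("s2", "a", 1), ("a", "t1", 2),
         ("s1", "t2", 1), ("a", "t2", 1)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

sources, terminals = ["s1", "s2"], ["t1", "t2"]
min_cut_bound = min(
    nx.minimum_cut(G, s, t)[0] for s in sources for t in terminals
)
print("min-cut bound on coding capacity:", min_cut_bound)
```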

  8. Preliminary Classification of Army and Navy Entry-Level Occupations by the Holland Coding System.


    [Table residue from the report: tallies of Holland codes for entry-level occupations (Realistic ≈55% for one service, ≈56% for the other; code counts such as RIS 3, RIE 22/21, RSE 5, REI 15, RES 10/15) and the opening of a list of Holland-coded Army entry-level occupations, including Still Photographer (RSE), Motion Picture Photographer, Practical Nurse (SAI), Medical Lab Technician (ISA), Orthotist (RSE), Electrocardiograph Technician (RCI), and Optometric Assistant (SCI).]

  9. Finite Projective Geometries and Classification of the Weight Hierarchies of Codes (I)

    Wen De CHEN; Torleiv KLØVE


    The weight hierarchy of a binary linear [n, k] code C is the sequence (d1, d2, …, dk), where dr is the smallest support of an r-dimensional subcode of C. The codes of dimension 4 are collected in classes, and the possible weight hierarchies in each class are determined by finite projective geometries. The possible weight hierarchies in classes A, B, C, D are determined in Part (I). The possible weight hierarchies in classes E, F, G, H, I are determined in Part (II).
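
    For reference, the standard definition of the weight hierarchy (the generalized Hamming weights) that the abstract relies on, written out in LaTeX:

```latex
% Weight hierarchy (generalized Hamming weights) of a binary linear [n,k]
% code C: d_r is the smallest support size among all r-dimensional
% subcodes D of C, and d_1 is the usual minimum distance.
d_r(C) = \min\{\, |\mathrm{supp}(D)| \;:\; D \subseteq C,\ \dim D = r \,\},
\qquad 1 \le r \le k,
% giving the sequence (d_1(C), d_2(C), \ldots, d_k(C)).
```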

  10. Anatomic variations of the pancreatic duct and their relevance with the Cambridge classification system: MRCP findings of 1158 consecutive patients.

    Adibelli, Zehra Hilal; Adatepe, Mustafa; Imamoglu, Cetin; Esen, Ozgur Sipahi; Erkan, Nazif; Yildirim, Mehmet


    The study was conducted to evaluate the frequencies of the anatomic variations of the pancreatic duct, the gender distributions of these variations, and their relevance to the Cambridge classification system as a morphological sign of chronic pancreatitis, using magnetic resonance cholangiopancreatography (MRCP). We retrospectively reviewed 1312 consecutive patients referred to our department for MRCP between January 2013 and August 2015. We excluded 154 patients from the study because of less-than-optimal results due to imaging limitations or a history of surgery on the pancreas. Finally, a total of 1158 patients were included in the study. Among the 1158 patients included, 54 (4.6%) showed pancreas divisum and 13 (1.2%) were defined as ansa pancreatica. When we evaluated the course of the pancreatic duct, we found prevalences of 62.5% for descending, 30% for sigmoid, 5.5% for vertical and 2% for loop courses. The most commonly observed pancreatic duct configuration was Type 3, in 528 patients (45.6%), while 521 patients (45%) had the Type 1 configuration. A vertical course (p = 0.004) and the Type 2 configuration (p = 0.03) of the pancreatic duct were more frequent in females than males. There were no statistically significant differences between the genders for the other pancreatic duct variations, such as pancreas divisum, ansa pancreatica and course types other than vertical (p > 0.05 for all). Variants of pancreas divisum and normal pancreatic duct variants were not associated with morphologic findings of chronic pancreatitis according to the Cambridge classification system. The ansa pancreatica is a rare type of anatomical variation of the pancreatic duct, which might be considered a predisposing factor for the onset of idiopathic pancreatitis.

  11. A learning-based similarity fusion and filtering approach for biomedical image retrieval using SVM classification and relevance feedback.

    Rahman, Md Mahmudur; Antani, Sameer K; Thoma, George R


    This paper presents a classification-driven biomedical image retrieval framework based on image filtering and similarity fusion employing supervised learning techniques. In this framework, the probabilistic outputs of a multiclass support vector machine (SVM) classifier, as category predictions for query and database images, are first exploited to filter out irrelevant images, thereby reducing the search space for similarity matching. Images are classified at a global level according to their modalities based on different low-level, concept, and keypoint-based features. It is difficult to find a unique feature to compare images effectively for all types of queries. Hence, a query-specific adaptive linear combination of similarity matching approach is proposed, relying on the image classification and feedback information from users. Based on the predicted category of a query image, individual precomputed weights of different features are adjusted online. The prediction of the classifier may be inaccurate in some cases, and a user might have a different semantic interpretation of the retrieved images. Hence, the weights are finally determined by considering both precision and rank order information of each individual feature representation, taking the top retrieved relevant images as judged by the users. As a result, the system can adapt itself to individual searches to produce query-specific results. Experiments were performed on a diverse collection of 5,000 biomedical images of different modalities, body parts, and orientations. They demonstrate the efficiency (about half the computation time compared to searching the entire collection) and effectiveness (about 10%-15% improvement in precision at each recall level) of the retrieval approach.

  12. A classification scheme of Amino Acids in the Genetic Code by Group Theory

    Sachse, Sebastian


    We derive the amino acid assignment to one codon representation (the typical 64-dimensional irreducible representation) of the basic classical Lie superalgebra osp(5|2) from biochemical arguments. We motivate the approach of applying mathematical symmetries to the classification of the building constituents of the biosphere by analogy with its success in particle physics and chemistry. The model enables the polarity and molecular volume of amino acids to be calculated to a good approximation.

  13. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    Jason L. Wright; Milos Manic


    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
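
    As a rough illustration of this kind of analysis, the sketch below sweeps the number of retained PCA dimensions and reports cross-validated accuracy of a simple classifier on synthetic data; it is not the paper's classifier or dataset, and the other two reduction techniques would be swept the same way.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      # synthetic stand-in for feature vectors extracted from object code
      X, y = make_classification(n_samples=400, n_features=64, n_informative=10,
                                 random_state=0)
      for k in (2, 4, 8, 16, 32):
          pipe = make_pipeline(PCA(n_components=k), KNeighborsClassifier())
          acc = cross_val_score(pipe, X, y, cv=5).mean()   # accuracy vs. dimensionality
          print(f"{k:2d} dimensions: accuracy {acc:.3f}")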

  14. Sensitivity of International Classification of Diseases codes for hyponatremia among commercially insured outpatients in the United States

    Curtis Lesley H


    Background Administrative claims are a rich source of information for epidemiological and health services research; however, the ability to accurately capture specific diseases or complications using claims data has been debated. In this study, the authors examined the validity of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes for the identification of hyponatremia in an outpatient managed care population. Methods We analyzed outpatient laboratory and professional claims for patients aged 18 years and older in the National Managed Care Benchmark Database from Integrated Healthcare Information Services. We obtained all claims for outpatient serum sodium laboratory tests performed in 2004 and 2005, and all outpatient professional claims with a primary or secondary ICD-9-CM diagnosis code of hyponatremia (276.1). Results A total of 40,668 outpatient serum sodium laboratory results were identified as hyponatremic; the specificity of the diagnosis code exceeded 99% for all cutoff points. Conclusion ICD-9-CM codes in administrative data are insufficient to identify hyponatremia in an outpatient population.
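
    For reference, the validity measures used in studies of this kind can be computed from a 2x2 table of code status against the laboratory gold standard. The sketch below uses made-up counts, not the study's data.

      def validity_metrics(tp, fp, fn, tn):
          """Standard validation measures of a diagnosis code against a gold standard."""
          return {
              "sensitivity": tp / (tp + fn),   # coded among the truly hyponatremic
              "specificity": tn / (tn + fp),   # uncoded among the normonatremic
              "ppv": tp / (tp + fp),           # truly hyponatremic among the coded
              "npv": tn / (tn + fn),
          }

      # illustrative counts only
      print(validity_metrics(tp=30, fp=5, fn=970, tn=99000))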

  15. Relevance of the International Classification of Functioning, Health and Disability: Children & Youth Version in Early Hearing Detection and Intervention Programs.

    Bagatto, Marlene P; Moodie, Sheila T


    Early hearing detection and intervention (EHDI) programs have been guided by principles from the Joint Committee on Infant Hearing and an international consensus of best practice principles for family-centered early intervention. Both resources provide a solid foundation from which to design, implement, and sustain a high-quality, family-centered EHDI program. As a result, infants born with permanent hearing loss and their families will have the support they need to develop communication skills. These families also will benefit from programs that align with the framework offered by the World Health Organization's International Classification of Functioning, Disability and Health: Children & Youth Version (ICF-CY). Within this framework, health and functioning are defined and measured by describing the consequences of the health condition (i.e., hearing loss) in terms of body functions, structures, activity, and participation, as well as social aspects of the child. This article describes the relevance of the ICF-CY for EHDI programs and offers a modified approach by including aspects of quality of life and human development across time.

  16. A classification of diabetic foot infections using ICD-9-CM codes: application to a large computerized medical database

    Miller Donald R


    Background Diabetic foot infections are common, serious, and varied. Diagnostic and treatment strategies are correspondingly diverse. It is unclear how patients are managed in actual practice and how outcomes might be improved. Clarification will require study of large numbers of patients, such as are available in medical databases. We have developed and evaluated a system for identifying and classifying diabetic foot infections that can be used for this purpose. Methods We used the VA Diabetes Epidemiology Cohorts (DEpiC) database to conduct a retrospective observational study of patients with diabetic foot infections. DEpiC contains computerized VA and Medicare patient-level data for patients with diabetes since 1998. We determined which ICD-9-CM codes served to identify patients with different types of diabetic foot infections and ranked them in declining order of severity: Gangrene, Osteomyelitis, Ulcer, Foot cellulitis/abscess, Toe cellulitis/abscess, Paronychia. We evaluated our classification by examining its relationship to patient characteristics, diagnostic procedures, treatments given, and medical outcomes. Results There were 61,007 patients with foot infections, of which 42,063 were classifiable into one of our predefined groups. The different types of infection were related to expected patient characteristics, diagnostic procedures, treatments, and outcomes. Our severity ranking showed a monotonic relationship to hospital length of stay, amputation rate, transition to long-term care, and mortality. Conclusions We have developed a classification system for patients with diabetic foot infections that is expressly designed for use with large, computerized, ICD-9-CM coded administrative medical databases. It provides a framework that can be used to conduct observational studies of large numbers of patients in order to examine treatment variation and patient outcomes, including the effect of new management strategies.
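
    A minimal sketch of the classification step follows: each patient is assigned the single most severe category whose code list matches any of their claims. The ICD-9-CM codes below are placeholders for illustration only; the study defines its own code lists.

      # severity ranking, most to least severe, as in the paper's ordering
      SEVERITY = [
          ("gangrene",        {"785.4"}),             # placeholder code lists
          ("osteomyelitis",   {"730.07", "730.17"}),
          ("ulcer",           {"707.14", "707.15"}),
          ("foot_cellulitis", {"682.7"}),
          ("toe_cellulitis",  {"681.10"}),
          ("paronychia",      {"681.11"}),
      ]

      def classify_patient(claim_codes):
          """Return the most severe matching category, or None if unclassifiable."""
          codes = set(claim_codes)
          for label, code_set in SEVERITY:
              if codes & code_set:
                  return label
          return None

      print(classify_patient({"681.10", "707.14"}))   # -> 'ulcer' (more severe wins)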

  17. Population-based evaluation of a suggested anatomic and clinical classification of congenital heart defects based on the International Paediatric and Congenital Cardiac Code

    Goffinet François


    Background Classification of the overall spectrum of congenital heart defects (CHD) has always been challenging, in part because of the diversity of the cardiac phenotypes, but also because of the oft-complex associations. The purpose of our study was to establish a comprehensive and easy-to-use classification of CHD for clinical and epidemiological studies based on the long list of the International Paediatric and Congenital Cardiac Code (IPCCC). Methods We coded each individual malformation using six-digit codes from the long list of IPCCC. We then regrouped all lesions into 10 categories and 23 subcategories according to a multi-dimensional approach encompassing anatomic, diagnostic and therapeutic criteria. This anatomic and clinical classification of congenital heart disease (ACC-CHD) was then applied to data acquired from a population-based cohort of patients with CHD in France, made up of 2867 cases (82% live births, 1.8% stillbirths and 16.2% pregnancy terminations). Results The majority of cases (79.5%) could be identified with a single IPCCC code. The category "Heterotaxy, including isomerism and mirror-imagery" was the only one that typically required more than one code for identification of cases. The two largest categories were "ventricular septal defects" (52% of cases) and "anomalies of the outflow tracts and arterial valves" (20% of cases). Conclusion Our proposed classification is not new, but rather a regrouping of the known spectrum of CHD into a manageable number of categories based on anatomic and clinical criteria. The classification is designed to use the code numbers of the long list of IPCCC but can accommodate ICD-10 codes. Its exhaustiveness, simplicity, and anatomic basis make it useful for clinical and epidemiologic studies, including those aimed at assessment of risk factors and outcomes.

  18. Coding and classification in drug statistics – From national to global application

    Marit Rønning


    SUMMARY The Anatomical Therapeutic Chemical (ATC) classification system and the defined daily dose (DDD) were developed in Norway in the early seventies. The creation of the ATC/DDD methodology was an important basis for presenting drug utilisation statistics in a sensible way. In 1977, Norway was also the first country to publish national drug utilisation statistics from wholesalers on an annual basis. The combination of these activities in the seventies made Norway a pioneer country in the area of drug utilisation research. Over the years, the use of the ATC/DDD methodology has gradually increased in countries outside Norway. Since 1996, the methodology has been recommended by WHO for use in international drug utilisation studies. The WHO Collaborating Centre for Drug Statistics Methodology in Oslo handles the maintenance and development of the ATC/DDD system and is now responsible for its global co-ordination. After nearly 30 years of experience with ATC/DDD, the methodology has demonstrated its suitability in drug use research. The main challenge in the coming years is to educate users worldwide in how to use the methodology properly.

  19. Fast approximations to structured sparse coding and applications to object classification

    Szlam, Arthur; LeCun, Yann


    We describe a method for fast approximation of sparse coding. The input space is subdivided by a binary decision tree, and we simultaneously learn a dictionary and an assignment of allowed dictionary elements for each leaf of the tree. We store a lookup table with the assignments and the pseudoinverses for each node, allowing for very fast inference. We give an algorithm for learning the tree, the dictionary and the dictionary element assignment. In the process of describing this algorithm, we discuss the more general problem of learning the groups in group-structured sparse modelling. We show that our method creates good sparse representations by using it in the object recognition framework of \cite{lazebnik06,yang-cvpr-09}. Implementing our own fast version of the SIFT descriptor, the whole system runs at 20 frames per second on $321 \times 481$ sized images on a laptop with a quad-core cpu, while sacrificing very little accuracy on the Caltech 101 and 15 scenes benchmarks.
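
    The fast-inference idea can be sketched as follows, with a toy depth-1 "tree" (a single hyperplane split) for brevity: route the input to a leaf, then apply that leaf's precomputed pseudoinverse to obtain the coefficients of its allowed dictionary atoms. The dictionary, split, and group assignments below are random placeholders rather than learned, as they would be in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      D = rng.normal(size=(32, 128))                  # dictionary with 128 atoms
      w, b = rng.normal(size=32), 0.0                 # the tree's single split

      groups = {0: np.arange(0, 64), 1: np.arange(64, 128)}   # atoms allowed per leaf
      pinvs = {leaf: np.linalg.pinv(D[:, idx]) for leaf, idx in groups.items()}

      def encode(x):
          leaf = int(w @ x + b > 0)                   # tree routing: O(depth)
          z = np.zeros(D.shape[1])
          z[groups[leaf]] = pinvs[leaf] @ x           # one matrix multiply: fast
          return z

      x = rng.normal(size=32)
      z = encode(x)
      print(np.count_nonzero(z))                      # at most 64 active atoms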

  20. An imprinted non-coding genomic cluster at 14q32 defines clinically relevant molecular subtypes in osteosarcoma across multiple independent datasets.

    Hill, Katherine E; Kelly, Andrew D; Kuijjer, Marieke L; Barry, William; Rattani, Ahmed; Garbutt, Cassandra C; Kissick, Haydn; Janeway, Katherine; Perez-Atayde, Antonio; Goldsmith, Jeffrey; Gebhardt, Mark C; Arredouani, Mohamed S; Cote, Greg; Hornicek, Francis; Choy, Edwin; Duan, Zhenfeng; Quackenbush, John; Haibe-Kains, Benjamin; Spentzos, Dimitrios


    A microRNA (miRNA) collection on the imprinted 14q32 MEG3 region has been associated with outcome in osteosarcoma. We assessed the clinical utility of this miRNA set and their association with methylation status. We integrated coding and non-coding RNA data from three independent annotated clinical osteosarcoma cohorts (n = 65, n = 27, and n = 25) and miRNA and methylation data from one in vitro (19 cell lines) and one clinical (NCI Therapeutically Applicable Research to Generate Effective Treatments (TARGET) osteosarcoma dataset, n = 80) dataset. We used time-dependent receiver operating characteristic (tdROC) analysis to evaluate the clinical value of candidate miRNA profiles and machine learning approaches to compare the coding and non-coding transcriptional programs of high- and low-risk osteosarcoma tumors and high- versus low-aggressiveness cell lines. In the cell line and TARGET datasets, we also studied the methylation patterns of the MEG3 imprinting control region on 14q32 and their association with miRNA expression and tumor aggressiveness. In the tdROC analysis, miRNA sets on 14q32 showed strong discriminatory power for recurrence and survival in the three clinical datasets. High- or low-risk tumor classification was robust to using different microRNA sets or classification methods. Machine learning approaches showed that genome-wide miRNA profiles and miRNA regulatory networks were quite different between the two outcome groups and mRNA profiles categorized the samples in a manner concordant with the miRNAs, suggesting potential molecular subtypes. Further, miRNA expression patterns were reproducible in comparing high-aggressiveness versus low-aggressiveness cell lines. Methylation patterns in the MEG3 differentially methylated region (DMR) also distinguished high-aggressiveness from low-aggressiveness cell lines and were associated with expression of several 14q32 miRNAs in both the cell lines and the large TARGET clinical dataset.

  1. Identification of relevant ICF (International Classification of Functioning, Disability and Health) categories in lymphedema patients: A cross-sectional study

    Viehoff, P.B.; Potijk, F.; Damstra, R.J.; Heerkens, Y.F.; Ravensberg, C.D. van; Berkel, D.M. van; Neumann, H.A.


    BACKGROUND: To describe functioning and health of lymphedema patients and to identify their most common problems using the International Classification of Functioning, Disability and Health (ICF) as part of the preparatory studies for the development of ICF Core Sets for lymphedema. METHODS: Cross-s

  2. Identification of relevant ICF (International Classification of Functioning, Disability and Health) categories in lymphedema patients: A cross-sectional study

    P.B. Viehoff (Peter); F. Potijk; R.J. Damstra (Robert); Y.F. Heerkens (Yvonne); C.D. van Ravensberg (Dorine); D.M. Van Berkel; H.A.M. Neumann (Martino)


    Background: To describe functioning and health of lymphedema patients and to identify their most common problems using the International Classification of Functioning, Disability and Health (ICF) as part of the preparatory studies for the development of ICF Core Sets for lymphedema. Metho

  3. Classification of forensically-relevant larvae according to instar in a closely related species of carrion beetles (Coleoptera: Silphidae: Silphinae).

    Frątczak, Katarzyna; Matuszewski, Szymon


    Carrion beetle larvae of Necrodes littoralis (Linnaeus, 1758), Oiceoptoma thoracicum (Linnaeus, 1758), Thanatophilus sinuatus (Fabricius, 1775), and Thanatophilus rugosus (Linnaeus, 1758) (Silphidae: Silphinae) were studied to test the concept that a classifier of the subfamily level may be successfully used to classify larvae according to instar. Classifiers were created and validated using a linear discriminant analysis (LDA). LDA generates classification functions which are used to calculate classification values for tested specimens. The largest value indicates the larval instar to which the specimen should be assigned. Distance between dorsal stemmata and width of the pronotum were used as classification features. The classifier correctly classified larvae of N. littoralis and O. thoracicum, whereas in the case of T. sinuatus and T. rugosus a few misclassifications were recorded. For this reason, a separate genus level classifier was created for larvae of Thanatophilus. We conclude that larval instar classifiers of the subfamily or genus level have very high classification accuracy and therefore they may be safely used to classify carrion beetle larvae according to instar in forensic practice.
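
    A minimal sketch of such a classifier, assuming the two morphometric features named in the abstract (distance between dorsal stemmata and width of the pronotum) and synthetic measurements in place of real larvae:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(2)
      # synthetic larvae: three instars with increasing feature means (values in mm)
      X = np.vstack([rng.normal(loc=m, scale=0.05, size=(30, 2))
                     for m in ((0.4, 0.9), (0.6, 1.4), (0.9, 2.1))])
      y = np.repeat([1, 2, 3], 30)                    # instar labels

      lda = LinearDiscriminantAnalysis().fit(X, y)
      specimen = [[0.62, 1.38]]                       # stemmata distance, pronotum width
      print(lda.predict(specimen))                    # instar with the largest classification value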

  4. Identification of relevant ICF (International Classification of Functioning, Disability and Health) categories in lymphedema patients: A cross-sectional study

    Viehoff, P.B.; Potijk, F.; Damstra, R.J.; Heerkens, Y.F.; Ravensberg, C.D. van; Berkel, D.M. van; Neumann, H.A.


    BACKGROUND: To describe functioning and health of lymphedema patients and to identify their most common problems using the International Classification of Functioning, Disability and Health (ICF) as part of the preparatory studies for the development of ICF Core Sets for lymphedema. METHODS:

  5. Identification of relevant ICF (International Classification of Functioning, Disability and Health) categories in lymphedema patients: A cross-sectional study

    P.B. Viehoff (Peter); F. Potijk; R.J. Damstra (Robert); Y.F. Heerkens (Yvonne); C.D. van Ravensberg (Dorine); D.M. Van Berkel; H.A.M. Neumann (Martino)


    Background: To describe functioning and health of lymphedema patients and to identify their most common problems using the International Classification of Functioning, Disability and Health (ICF) as part of the preparatory studies for the development of ICF Core Sets for

  6. Revision, uptake and coding issues related to the open access Orchard Sports Injury Classification System (OSICS) versions 8, 9 and 10.1

    John Orchard


    John Orchard1, Katherine Rae1, John Brooks2, Martin Hägglund3, Lluis Til4, David Wales5, Tim Wood6; 1Sports Medicine at Sydney University, Sydney, NSW, Australia; 2Rugby Football Union, Twickenham, England, UK; 3Department of Medical and Health Sciences, Linköping University, Linköping, Sweden; 4FC Barcelona, Barcelona, Catalonia, Spain; 5Arsenal FC, Highbury, England, UK; 6Tennis Australia, Melbourne, Vic, Australia. Abstract: The Orchard Sports Injury Classification System (OSICS) is one of the world's most commonly used systems for coding injury diagnoses in sports injury surveillance systems. Its major strengths are that it has wide usage, has codes specific to sports medicine and that it is free to use. Literature searches and stakeholder consultations were made to assess the uptake of OSICS and to develop new versions. OSICS was commonly used in the sports of football (soccer), Australian football, rugby union, cricket and tennis. It is referenced in international papers in three sports and used in four commercially available computerised injury management systems. Suggested injury categories for the major sports are presented. New versions OSICS 9 (three digit codes) and OSICS 10.1 (four digit codes) are presented. OSICS is a potentially helpful component of a comprehensive sports injury surveillance system, but many other components are required. Choices made in developing these components should ideally be agreed upon by groups of researchers in consensus statements. Keywords: sports injury classification, epidemiology, surveillance, coding

  7. Possible Relevance of Receptor-Receptor Interactions between Viral- and Host-Coded Receptors for Viral-Induced Disease

    Luigi F. Agnati


    It has been demonstrated that some viruses, such as the cytomegalovirus, code for G-protein coupled receptors not only to elude the immune system, but also to redirect cellular signaling in the receptor networks of the host cells. In view of the existence of receptor-receptor interactions, the hypothesis is introduced that these viral-coded receptors not only operate as constitutively active monomers, but can also affect other receptor functions by interacting with receptors of the host cell. Furthermore, it is suggested that viruses could insert not only single receptors (monomers), but also clusters of receptors (receptor mosaics), altering the cell metabolism in a profound way. The prevention of viral receptor-induced changes in host receptor networks may give rise to novel antiviral drugs that counteract viral-induced disease.

  8. The Use of System Codes in Scaling Studies: Relevant Techniques for Qualifying NPP Nodalizations for Particular Scenarios

    V. Martinez-Quiroga


    System codes along with necessary nodalizations are valuable tools for thermal-hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed on the small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of nuclear power plant nodalizations by means of scale disquisitions. The techniques that are presented include the so-called Kv-scaled calculation approach as well as the use of "hybrid nodalizations" and "scaled-up nodalizations." These methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology and show the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed in the PKL-III and ROSA-2 OECD/NEA projects. Both articles together give a complete description of the methodology, which has been developed in the framework of the use of NPP nodalizations in support of plant operation and control.

  9. Administrative simplification: change to the compliance date for the International Classification of Diseases, 10th Revision (ICD-10-CM and ICD-10-PCS) medical data code sets. Final rule.


    This final rule implements section 212 of the Protecting Access to Medicare Act of 2014 by changing the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2014 to October 1, 2015. It also requires the continued use of the International Classification of Diseases, 9th Revision, Clinical Modification, Volumes 1 and 2 (diagnoses), and 3 (procedures) (ICD-9-CM), including the Official ICD-9-CM Guidelines for Coding and Reporting, through September 30, 2015.

  10. Android event code automatic generation method based on object relevance

    李杨; 胡文


    To solve the problem of automatic generation of Android event code, this paper draws on object-relevance theory to analyze the association relationships among control objects. It defines these control-object association relationships, implements their construction process, and ultimately builds a control object association relationship tree (COARTree), which is applied to the Android event code generation process. This solves the problem of automatically generating Android event code and has achieved good application value. A simple phone book application is used as an example to verify the method.

  11. The Application of Z Coding in Malignant Tumor Disease Classification



    Objective To investigate the application of Z codes in malignant tumor disease classification. Methods Follow-up care for patients with malignant tumors was classified using the corresponding Z-code categories, and a further supplement to the sixth digit of the current disease coding dictionary is suggested. Results Z codes establish classification rules for the different circumstances of repeated hospitalization of patients with malignant tumors. Conclusion The corresponding Z-code categories effectively link first-diagnosis and death information for malignant tumors with subsequent treatment information, enabling comprehensive retrieval and use of malignant tumor information, better serving clinical practice and research, and providing reference material for hospital management.

  12. Exploring the landscape of pathogenic genetic variation in the ExAC population database: insights of relevance to variant classification.

    Song, Wei; Gardner, Sabrina A; Hovhannisyan, Hayk; Natalizio, Amanda; Weymouth, Katelyn S; Chen, Wenjie; Thibodeau, Ildiko; Bogdanova, Ekaterina; Letovsky, Stanley; Willis, Alecia; Nagan, Narasimhan


    We evaluated the Exome Aggregation Consortium (ExAC) database as a control cohort to classify variants across a diverse set of genes spanning dominant and recessively inherited disorders. The frequency of pathogenic variants in ExAC was compared with the estimated maximal pathogenic allele frequency (MPAF), based on the disease prevalence, penetrance, inheritance, allelic and locus heterogeneity of each gene. Additionally, the observed carrier frequency and the ethnicity-specific variant distribution were compared between ExAC and the published literature. The carrier frequency and ethnic distribution of pathogenic variants in ExAC were concordant with reported estimates. Of 871 pathogenic/likely pathogenic variants across 19 genes, only 3 exceeded the estimated MPAF. Eighty-four percent of variants with ExAC frequencies above the estimated MPAF were classified as "benign." Additionally, 20% of the cardiac and 19% of the Lynch syndrome gene variants originally classified as "VUS" occurred with ExAC frequencies above the estimated MPAF, making these suitable for reassessment. The ExAC database is a useful source for variant classification and is not overrepresented for pathogenic variants in the genes evaluated. However, the mutational spectrum, pseudogenes, genetic heterogeneity, and paucity of literature should be considered in deriving meaningful classifications using ExAC.Genet Med 18 8, 850-854.
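
    A back-of-the-envelope sketch of how a maximal pathogenic allele frequency can be derived from disease prevalence, penetrance and allelic heterogeneity under a Hardy-Weinberg single-locus model is shown below. This is one commonly used rough form, stated here as an assumption; the paper's exact model may differ.

      import math

      def mpaf_dominant(prevalence, penetrance, max_allelic_contribution):
          # heterozygotes carry one of two alleles -> factor 1/2
          return prevalence * max_allelic_contribution / (2 * penetrance)

      def mpaf_recessive(prevalence, penetrance, max_allelic_contribution):
          # prevalence ~ penetrance * q**2; one variant is a fraction of q
          q = math.sqrt(prevalence / penetrance)
          return q * max_allelic_contribution

      # e.g., a dominant disorder affecting 1 in 500 people, 50% penetrance, where
      # one variant accounts for at most 10% of pathogenic alleles:
      print(f"{mpaf_dominant(1/500, 0.5, 0.1):.2e}")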

  13. Identifying relevant areas of functioning in children and youth with Cerebral Palsy using the ICF-CY coding system: from whose perspective?

    Schiariti, Veronica; Mâsse, Louise C


    A standardized methodology endorsed by the World Health Organization was used to select the most relevant International Classification of Functioning, Disability and Health for children and youth (ICF-CY) categories to inform the development of the ICF Core Sets for CY with Cerebral Palsy (CP). The aim of this study was to appraise comparatively the results of the four studies included in the preparatory phase of the project exploring relevant areas of functioning in CY with CP. ICF-CY categories identified in the preparatory studies - systematic review, global expert survey, qualitative study, and clinical study - were ranked. We compared the ranking percentile scores of the categories across studies. Each study emphasized different ICF-CY components and provided unique categories. Professionals from the health, education and social sectors described areas of functioning that were well distributed across the ICF-CY components (global expert survey), CY with CP and caregivers highlighted areas within the components activity and participation (a & p) and environmental factors (qualitative study), while the research community and clinical encounters mainly focused on body functions and a & p (systematic review and clinical study). This study highlights the need to consider all relevant perspectives when describing the functional profile of CY with CP. Copyright © 2014 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.

  14. A new design criterion and construction method for space-time trellis codes based on classification of error events


    The known design criteria for Space-Time Trellis Codes (STTC) on slow Rayleigh fading channels are the rank, determinant and trace criteria. These criteria are unsatisfactory both in computation and in performance. By classifying the error events of STTC, a new criterion is presented for slow Rayleigh fading channels. Based on this criterion, an effective and straightforward multi-step method is proposed to construct codes with better performance; this method reduces the search computation to a small enough amount. Simulation results show that the codes found by computer search have the same or even better performance than previously reported codes.

  15. Relevance and Effectiveness of the WHO Global Code Practice on the International Recruitment of Health Personnel – Ethical and Systems Perspectives

    Ruairi Brugha


    The relevance and effectiveness of the World Health Organization's (WHO's) Global Code of Practice on the International Recruitment of Health Personnel is being reviewed in 2015. The Code, which is a set of ethical norms and principles adopted by the World Health Assembly (WHA) in 2010, urges member states to train and retain the health personnel they need, thereby limiting demand for international migration, especially from the under-staffed health systems in low- and middle-income countries. Most countries failed to submit a first report in 2012 on implementation of the Code, including those source countries whose health systems are most under threat from the recruitment of their doctors and nurses, often to work in 4 major destination countries: the United States, United Kingdom, Canada and Australia. Political commitment by source country Ministers of Health needed to have been achieved at the May 2015 WHA to ensure better reporting by these countries on Code implementation for it to be effective. This paper uses ethics and health systems perspectives to analyse some of the drivers of international recruitment. The balance of competing ethics principles, which are contained in the Code's articles, reflects a tension that was evident during the drafting of the Code between 2007 and 2010. In 2007-2008, the right of health personnel to migrate was seen as a preeminent principle by US representatives on the Global Council which co-drafted the Code. Consensus on how to balance the competing ethical principles, giving due recognition on the one hand to the obligations of health workers to the countries that trained them and the need for distributive justice given the global inequities of health workforce distribution in relation to need, and the right to migrate on the other hand, was only possible after President Obama took office in January 2009. It is in the interests of all countries to implement the Global Code and not just those that

  16. Pan-Cancer Analyses Reveal Long Intergenic Non-Coding RNAs Relevant to Tumor Diagnosis, Subtyping and Prognosis.

    Ching, Travers; Peplowska, Karolina; Huang, Sijia; Zhu, Xun; Shen, Yi; Molnar, Janos; Yu, Herbert; Tiirikainen, Maarit; Fogelgren, Ben; Fan, Rong; Garmire, Lana X


    Long intergenic noncoding RNAs (lincRNAs) are a relatively new class of non-coding RNAs that have potential as cancer biomarkers. To seek a panel of lincRNAs as pan-cancer biomarkers, we have analyzed transcriptomes from over 3300 cancer samples with clinical information. Compared to mRNA, lincRNAs exhibit significantly higher tissue specificities that are then diminished in cancer tissues. Moreover, lincRNA clustering results accurately classify tumor subtypes. Using RNA-Seq data from thousands of paired tumor and adjacent normal samples in The Cancer Genome Atlas (TCGA), we identify six lincRNAs as potential pan-cancer diagnostic biomarkers (PCAN-1 to PCAN-6). These lincRNAs are robustly validated using cancer samples from four independent RNA-Seq data sets, and are verified by qPCR in both primary breast cancers and the MCF-7 cell line. Interestingly, the expression levels of these six lincRNAs are also associated with prognosis in various cancers. We further experimentally explored the growth and migration dependence of breast and colon cancer cell lines on two of the identified lncRNAs. In summary, our study highlights the emerging role of lincRNAs as potentially powerful and biologically functional pan-cancer biomarkers and represents a significant leap forward in understanding the biological and clinical functions of lincRNAs in cancers.

  17. Positive predictive values of the International Classification of Disease, 10th edition diagnoses codes for diverticular disease in the Danish National Registry of Patients

    Rune Erichsen


    Rune Erichsen1, Lisa Strate2, Henrik Toft Sørensen1, John A Baron3; 1Department of Clinical Epidemiology, Aarhus University Hospital, Denmark; 2Division of Gastroenterology, University of Washington, Seattle, WA, USA; 3Departments of Medicine and of Community and Family Medicine, Dartmouth Medical School, NH, USA. Objective: To investigate the accuracy of diagnostic coding for diverticular disease in the Danish National Registry of Patients (NRP). Study design and setting: At Aalborg Hospital, Denmark, with a catchment area of 640,000 inhabitants, we identified 100 patients recorded in the NRP with a diagnosis of diverticular disease (International Classification of Disease codes, 10th revision [ICD-10] K572–K579) during the 1999–2008 period. We assessed the positive predictive value (PPV) as a measure of the accuracy of discharge codes for diverticular disease using information from discharge abstracts and outpatient notes as the reference standard. Results: Of the 100 patients coded with diverticular disease, 49 had complicated diverticular disease, whereas 51 had uncomplicated diverticulosis. For the overall diagnosis of diverticular disease (K57), the PPV was 0.98 (95% confidence interval [CI]: 0.93, 0.99). For the more detailed subgroups of diagnosis indicating the presence or absence of complications (K573–K579), the PPVs ranged from 0.67 (95% CI: 0.09, 0.99) to 0.92 (95% CI: 0.52, 1.00). The diagnosis codes did not allow accurate identification of uncomplicated disease or any specific complication. However, the combined ICD-10 codes K572, K574, and K578 had a PPV of 0.91 (95% CI: 0.71, 0.99) for any complication. Conclusion: The diagnosis codes in the NRP can be used to identify patients with diverticular disease in general; however, they do not accurately discern patients with uncomplicated diverticulosis or with specific diverticular complications. Keywords: diverticulum, colon, diverticulitis, validation studies
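
    Intervals like these are typically exact (Clopper-Pearson) binomial confidence intervals around the observed proportion. A small sketch, with 45 of 49 confirmed as an illustrative count rather than a figure from the study:

      from scipy.stats import beta

      def clopper_pearson(k, n, alpha=0.05):
          """Exact two-sided binomial CI for k successes out of n."""
          lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
          hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
          return lo, hi

      k, n = 45, 49                       # illustrative: confirmed / coded
      lo, hi = clopper_pearson(k, n)
      print(f"PPV {k/n:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")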

  18. Relevance of the mouse skin initiation-promotion model for the classification of carcinogenic substances encountered at the workplace.

    Schwarz, Michael; Thielmann, Heinz W; Meischner, Veronika; Fartasch, Manigé


    The Permanent Senate Commission for the Investigation of Health Hazards of Chemical Compounds in the Work Area (MAK Commission of the Deutsche Forschungsgemeinschaft) evaluates chemical substances using scientific criteria to prevent adverse effects on health at the workplace. As part of this task there is a need to evaluate the tumor-promoting activity of chemicals (enhancement of the formation of squamous cell carcinomas via premalignant papillomas) observed in two-stage initiation/promotion experiments using the mouse skin model. In the present communication we address this issue by comparing responses seen in mouse skin with those in humans. We conclude that tumor-promotional effects seen in such animal models should be carefully analyzed on a case-by-case basis. Substances that elicit a rather non-specific effect that is restricted to the high dose range are considered to be irrelevant to humans and thus do not require classification as carcinogens. In contrast, substances that might have both a mode of action and a potency similar to the specific effects seen with TPA (12-O-tetradecanoylphorbol-13-acetate), the prototype tumor promoter in mouse skin, which triggers receptor-mediated signal cascades in the very low dose range, have to be classified in a category for carcinogens.

  19. Identification of aspects of functioning, disability and health relevant to patients experiencing vertigo: a qualitative study using the international classification of functioning, disability and health


    Purpose The aims of this study were to identify aspects of functioning and health relevant to patients with vertigo expressed by ICF categories and to explore the potential of the ICF to describe the patient perspective in vertigo. Methods We conducted a series of qualitative, semi-structured, face-to-face interviews using a descriptive approach. Data were analyzed using the meaning condensation procedure and then linked to categories of the International Classification of Functioning, Disability and Health (ICF). Results From May to July 2010, 12 interviews were carried out until saturation was reached. Four hundred and seventy-one single concepts were extracted, which were linked to 142 different ICF categories. Forty of those belonged to the component body functions, 62 to the component activity and participation, and 40 to the component environmental factors. Besides the most prominent aspect, "dizziness", most participants reported problems within "Emotional functions" (b152) and problems related to mobility and carrying out the daily routine. Almost all participants reported "Immediate family" (e310) as a relevant modifying environmental factor. Conclusions From the patients' perspective, vertigo has an impact on multifaceted aspects of functioning and disability, mainly body functions and activities and participation. Modifying contextual factors have to be taken into account to cover the complex interaction between the health condition of vertigo and the individual's daily life. The results of this study will contribute to developing standards for the measurement of functioning, disability and health relevant to patients suffering from vertigo. PMID:22738067

  20. New hierarchical classification of food items for the assessment of exposure to packaging migrants: use of hub codes for different food groups.

    Northing, P; Oldring, P K T; Castle, L; Mason, P A S S


    This paper describes development work undertaken to expand the capabilities of an existing two-dimensional probabilistic modelling approach for assessing dietary exposure to chemicals migrating out of food contact materials. A new three-level hub-coding system has been devised for coding different food groups with regards to their consumption by individuals. The hub codes can be used at three different levels representing a high, medium and low level of aggregation of individual food items. The hub codes were developed because they have a greater relevance to packaging migration than coding used (largely and historically) for nutritional purposes. Also, the hub codes will assist pan-europeanization of the exposure model in the future, when up to 27 or more different food coding systems from 27 European Union Member States will have to be assimilated into the modelling approach. The applicability of the model with the new coding system has been tested by incorporating newly released 2001 UK consumption data. The example used was exposure to a hypothetical migrant from coated metal packaging for foodstuffs. When working at the three hierarchical levels, it was found that the tiered approach gave conservative estimates at the cruder level of refinement and a more realistic assessment was obtained as the refinement progressed. The work overall revealed that changes in eating habits over time had a relatively small impact on estimates of exposure. More important impacts are changes over time in packaging usage, packaging composition and migration levels. For countries like the UK, which has sophisticated food consumption data, it is uncertainties in these other areas that need to be addressed by new data collection.
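
    A minimal sketch of the three-level idea, assuming dotted code strings whose prefixes give the higher aggregation levels; all codes, names and amounts below are invented for illustration:

      # three aggregation levels encoded as code prefixes (invented examples)
      HUB = {
          "01":      "beverages",              # high level
          "01.02":   "fruit juices",           # medium level
          "01.02.3": "orange juice, chilled",  # low level
      }

      def roll_up(code, level):
          """level 1..3 -> high/medium/low aggregation of a low-level code."""
          return ".".join(code.split(".")[:level])

      consumption = {"01.02.3": 250.0, "01.02.7": 120.0}   # grams consumed per item
      medium_total = {}
      for code, grams in consumption.items():
          key = roll_up(code, 2)
          medium_total[key] = medium_total.get(key, 0.0) + grams
      print(medium_total)   # exposure modelling can work at any of the three levels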

  1. Revision, uptake and coding issues related to the open access Orchard Sports Injury Classification System (OSICS) versions 8, 9 and 10.1.

    Orchard, John; Rae, Katherine; Brooks, John; Hägglund, Martin; Til, Lluis; Wales, David; Wood, Tim


    The Orchard Sports Injury Classification System (OSICS) is one of the world's most commonly used systems for coding injury diagnoses in sports injury surveillance systems. Its major strengths are that it has wide usage, has codes specific to sports medicine and that it is free to use. Literature searches and stakeholder consultations were made to assess the uptake of OSICS and to develop new versions. OSICS was commonly used in the sports of football (soccer), Australian football, rugby union, cricket and tennis. It is referenced in international papers in three sports and used in four commercially available computerised injury management systems. Suggested injury categories for the major sports are presented. New versions OSICS 9 (three digit codes) and OSICS 10.1 (four digit codes) are presented. OSICS is a potentially helpful component of a comprehensive sports injury surveillance system, but many other components are required. Choices made in developing these components should ideally be agreed upon by groups of researchers in consensus statements.

  2. Classification of genus Pseudomonas by MALDI-TOF MS based on ribosomal protein coding in S10-spc-alpha operon at strain level.

    Hotta, Yudai; Teramoto, Kanae; Sato, Hiroaki; Yoshikawa, Hiromichi; Hosoda, Akifumi; Tamura, Hiroto


    We have proposed a rapid phylogenetic classification at the strain level by MALDI-TOF MS using ribosomal protein matching profiling. In this study, the S10-spc-alpha operon, which encodes half of the ribosomal subunit proteins and is highly conserved in eubacterial genomes, was selected for the construction of a ribosomal protein database of biomarkers for bacterial identification by MALDI-TOF MS analysis, in order to establish a more reliable phylogenetic classification. Our method revealed that the 14 reliable and reproducible ribosomal subunit proteins below m/z 15,000 coded in the S10-spc-alpha operon, except for L14, were significantly useful biomarkers for the classification of genus Pseudomonas strains at the species and strain levels by MALDI-TOF MS analysis. The obtained phylogenetic tree was consistent with that based on the gyrB gene sequence. Since the S10-spc-alpha operons of genus Pseudomonas strains were sequenced using specific primers designed from the nucleotide sequences of genome-sequenced strains, the ribosomal subunit proteins encoded in the S10-spc-alpha operon were suitable biomarkers for construction and correction of the database. MALDI-TOF MS analysis using these 14 selected ribosomal proteins is a rapid, efficient, and versatile bacterial identification method with a validation procedure for the obtained results.
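
    The matching-profiling step can be sketched as scoring each database strain by how many of its theoretical biomarker masses are matched by an observed peak within a mass tolerance. The masses and the 500 ppm tolerance below are placeholders, not values from the paper.

      # strain -> theoretical biomarker masses (m/z); placeholder values
      DB = {
          "P. putida A":      [4365.2, 5281.0, 6255.4, 7274.1],
          "P. fluorescens B": [4379.3, 5295.1, 6241.3, 7302.6],
      }

      def score(observed, theoretical, tol=500e-6):
          """Count theoretical masses matched by an observed peak within a relative tolerance."""
          return sum(any(abs(o - t) <= t * tol for o in observed) for t in theoretical)

      peaks = [4365.4, 5280.8, 6255.1, 7273.8]       # observed spectrum (toy)
      best = max(DB, key=lambda strain: score(peaks, DB[strain]))
      print(best)                                     # -> 'P. putida A'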

  3. Validity of the International Classification of Diseases 10th revision code for hyperkalaemia in elderly patients at presentation to an emergency department and at hospital admission

    Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X


    Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674

  4. Ontology-supported processing of clinical text using medical knowledge integration for multi-label classification of diagnosis coding

    Waraporn, Phanu; Clayton, Gareth


    This paper discusses the knowledge integration of clinical information extracted from distributed medical ontologies in order to improve a machine learning-based multi-label coding assignment system. The proposed approach is implemented using a decision-tree-based cascade hierarchical technique on university hospital data for patients with Coronary Heart Disease (CHD). The preliminary results obtained are satisfactory.
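
    A minimal sketch of a two-stage cascade of decision trees follows, assuming a first stage that predicts a chapter-level label and chapter-specific second stages that predict the final code; the data, labels, and single-label simplification are illustrative only.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(3)
      X = rng.normal(size=(300, 10))                       # stand-in for text features
      chapter = rng.integers(0, 3, size=300)               # coarse diagnosis chapter
      code = chapter * 10 + rng.integers(0, 4, size=300)   # fine codes nested in chapters

      stage1 = DecisionTreeClassifier(random_state=0).fit(X, chapter)
      stage2 = {c: DecisionTreeClassifier(random_state=0)
                     .fit(X[chapter == c], code[chapter == c])
                for c in np.unique(chapter)}

      def predict_code(x):
          c = stage1.predict(x[None])[0]                   # first predict the chapter
          return stage2[c].predict(x[None])[0]             # then the code within it

      print(predict_code(X[0]), code[0])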

  5. Research on CIMS Information Classification and Coding Based on OO

    皮德常; 张凤林; 丁宗红; 王宁生


    A classification and coding system, FCC, oriented to the links of CIMS is introduced. The method is based on an analysis of many national and international classification and coding systems, the requirements of information integration in CIMS, and the object-oriented (OO) method. First, we analyze the necessity of adopting the OO method to construct this model and propose a scheme composed of a complex object O and operations M applied to O, where O is a 3-tuple describing the object identifier, object features and graphical features. The object identifier is itself a 2-tuple; the graphical feature item is a 4-tuple describing the graph's identifier, symbol, feature data and files. Second, we propose the FCC structure description of classification and coding based on OO. In this structure, we use the links of field domains and superclass/subclass relations, then flatten the complex object model into a relation; the flattened relational structure can be described by object identifier, registration and signature descriptions. Third, the bottom layer of the system uses relations to implement the OO structure, which makes it compatible with other coding systems. The above method provides a guarantee for data exchange and sharing in CIMS. Finally, we believe that its flexible, open-ended hierarchy oriented to object signatures effectively solves the problem of classifying and coding information in CIMS.

  6. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E


    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting within-scan brain networks for the different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed four variations of ICA and the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior performance of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture the underlying source processes better than those which allow inexhaustible local processes, such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
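
    The three factorization constraints can be compared on toy data with off-the-shelf implementations, as sketched below; the fMRI-specific preprocessing, the K-SVD variant, and the downstream decoding step are omitted.

      import numpy as np
      from sklearn.decomposition import FastICA, NMF, DictionaryLearning

      rng = np.random.default_rng(4)
      X = np.abs(rng.normal(size=(60, 200)))   # time points x voxels (non-negative for NMF)

      ica = FastICA(n_components=5, random_state=0).fit_transform(X)        # independence
      nmf = NMF(n_components=5, init="nndsvda", random_state=0,
                max_iter=500).fit_transform(X)                              # positivity
      dl = DictionaryLearning(n_components=5, transform_algorithm="lasso_lars",
                              random_state=0).fit(X)                        # sparsity
      sparse_codes = dl.transform(X)

      # the time-series weights (rows) can then feed a classifier of task state
      print(ica.shape, nmf.shape, sparse_codes.shape)   # (60, 5) each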

  7. Classification of the genus Bacillus based on MALDI-TOF MS analysis of ribosomal proteins coded in S10 and spc operons.

    Hotta, Yudai; Sato, Jun; Sato, Hiroaki; Hosoda, Akifumi; Tamura, Hiroto


    A rapid bacterial identification method by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS) using ribosomal proteins coded in the S10 and spc operons as biomarkers, named the S10-GERMS (S10-spc-alpha operon gene-encoded ribosomal protein mass spectrum) method, was applied to the genus Bacillus, a Gram-positive bacterium. The S10-GERMS method could successfully distinguish B. subtilis subsp. subtilis NBRC 13719(T) from B. subtilis subsp. spizizenii NBRC 101239(T) because of the mass difference of 2 ribosomal subunit proteins, despite a difference of only 2 bases in the 16S rRNA gene between them. The 8 selected reliable and reproducible ribosomal subunit proteins without disturbance at the S/N level in MALDI-TOF MS analysis, S10, S14, S19, L18, L22, L24, L29, and L30, coded in the S10 and spc operons, were significantly useful biomarkers for rapid bacterial classification of genus Bacillus strains at the species and strain levels by the S10-GERMS method without purification of ribosomal proteins.

  8. A Study on Ways of Lossless Image Compression and Coding and Relevant Comparisons



    This essay studies the principles of three lossless image compression methods, run-length coding, LZW coding and Huffman coding, and analyzes them comparatively, which helps in selecting a suitable compression coding method for different types of images.
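
    As a concrete example of the simplest of the three methods, a minimal run-length coder stores each run of identical pixel values as a (value, count) pair:

      def rle_encode(data):
          """Run-length encode a sequence into [value, count] pairs."""
          out = []
          for v in data:
              if out and out[-1][0] == v:
                  out[-1][1] += 1          # extend the current run
              else:
                  out.append([v, 1])       # start a new run
          return out

      def rle_decode(pairs):
          return [v for v, n in pairs for _ in range(n)]

      row = [255, 255, 255, 0, 0, 255]     # one image row
      packed = rle_encode(row)
      assert rle_decode(packed) == row     # lossless: decoding restores the input
      print(packed)                        # [[255, 3], [0, 2], [255, 1]]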

  9. Staging of mobility, transfer and walking functions of elderly persons based on the codes of the International Classification of Functioning, Disability and Health.

    Okochi, Jiro; Takahashi, Tai; Takamuku, Kiyoshi; Escorpizo, Reuben


    The International Classification of Functioning, Disability and Health (ICF) was introduced by the World Health Organization as a common taxonomy to describe the burden of health conditions. This study focuses on the development of a scale for staging basic mobility and walking functions based on the ICF. Thirty-three ICF codes were selected to test their fit to the Rasch model and their location. Of these ICF items, four were used to develop a Guttman-type scale of "basic mobility" and another four to develop a "walking" scale to stage functional performance in the elderly. The content validity and differential item functioning of the scales were assessed. The participants, chosen at random, were Japanese persons over 65 years old using the services of public long-term care insurance, whose functional assessments were used for scale development and scale validation. There were 1164 elderly persons eligible for scale development. To stage the functional performance of elderly persons, two Guttman-type scales of "basic mobility" and "walking" were constructed. The order of item difficulty was validated using 3260 elderly persons. There was no differential item functioning with respect to study location, sex or age group in the newly developed scales. These results suggest that the newly developed scales have content validity. The scales divide functional performance into five stages according to four ICF codes, making the measurements simple and less time-consuming, and enable clear descriptions of elderly functioning levels. This was achieved by hierarchically rearranging the ICF items and constructing Guttman-type scales according to item difficulty using the Rasch model. In addition, each functional level might require similar resources and therefore enable standardization of care and rehabilitation. Illustrations facilitate the sharing of patient images among health care providers. By using the ICF as a common taxonomy, these scales could be used internationally as

  10. Developing A Specific Criteria For Categorization Of Radioactive Waste Classification System For Uganda Using The Radar's Computer Code

    Byamukama, Abdul [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)]; Jung, Haiyong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)]


    Radioactive materials are utilized in industries, agriculture and research, medical facilities and academic institutions for numerous purposes that are useful in the daily life of mankind. To effectively manage radioactive waste and select appropriate disposal schemes, it is imperative to have specific criteria for allocating radioactive waste to a particular waste class. Uganda has a radioactive waste classification scheme based on activity concentration and half-life, albeit in qualitative terms, as documented in the Uganda Atomic Energy Regulations 2012. There is no clear boundary between the different waste classes, making it difficult to suggest disposal options, make decisions, enforce compliance, and communicate effectively with stakeholders, among other things. To overcome these challenges, the RESRAD computer code was used to derive specific criteria for classifying the different waste categories for Uganda based on the activity concentrations of radionuclides. The results were compared with those of Australia and were found to correlate, given the differences in site parameters and consumption habits of the residents in the two countries.

  11. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life

    Illum, Niels Ove; Gradel, Kim Oren


    AIM: To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring, and to assess the validity and reliability of the data sets obtained. METHOD: Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers ... of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after...

  12. A new coding system for metabolic disorders demonstrates gaps in the international disease classifications ICD-10 and SNOMED-CT, which can be barriers to genotype-phenotype data sharing.

    Sollie, Annet; Sijmons, Rolf H; Lindhout, Dick; van der Ploeg, Ans T; Rubio Gozalbo, M Estela; Smit, G Peter A; Verheijen, Frans; Waterham, Hans R; van Weely, Sonja; Wijburg, Frits A; Wijburg, Rudolph; Visser, Gepke


    Data sharing is essential for a better understanding of genetic disorders. Good phenotype coding plays a key role in this process. Unfortunately, the two most widely used coding systems in medicine, ICD-10 and SNOMED-CT, lack information necessary for the detailed classification and annotation of rare and genetic disorders. This prevents the optimal registration of such patients in databases and thus hampers data-sharing efforts. To improve care and to facilitate research for patients with metabolic disorders, we developed a new coding system for metabolic diseases with a dedicated group of clinical specialists. Next, we compared the resulting codes with those in ICD and SNOMED-CT. No matches were found in 76% of cases in ICD-10 and in 54% in SNOMED-CT. We conclude that there are sizable gaps in the SNOMED-CT and ICD coding systems for metabolic disorders. There may be similar gaps for other classes of rare and genetic disorders. We have demonstrated that expert groups can help in addressing such coding issues. Our coding system has been made available to the ICD and SNOMED-CT organizations as well as to the Orphanet and HPO organizations for further public application, and updates will be published online.

  13. Translation of the Department of Defense Disease and Injury Codes to the Eighth Revision International Classification of Diseases for use by the Military Services.


    Example (excerpt; columns are the DDDIC code number, the ICDA-8 code number, the NHRC code number, and the disease entity designator; cells missing from the extracted text are marked "…"):

      DDDIC   ICDA-8   NHRC         Disease entity designator
      5000    …        08-467-01    Hypertrophy of tonsils and adenoids / Unspecified with regard to surgical treatment
      5100    …        08-467-02    Hypertrophy of tonsils and adenoids / Without mention of tonsillectomy or adenoidectomy
      5101    …        08-467-03    Hypertrophy of tonsils and adenoids / With tonsillectomy or adenoidectomy
      5110    5010     08-468-01    Peritonsillar abscess / All types
      5120    5020     08-469-01    …

  14. Automatic Classification of Epileptic EEG Signals Based on AR Model and Relevance Vector Machine

    韩敏; 孙磊磊; 洪晓军; 韩杰


    Automatic classification of epileptic EEG signals is an important research problem. This paper implements such classification on the basis of an autoregressive (AR) model and the relevance vector machine (RVM). An AR model is used to extract features from the EEG signal; principal component analysis and linear discriminant analysis are then introduced to reduce the dimensionality of the feature space; and the RVM is adopted as the classifier, which improves model sparsity and yields probabilistic outputs. On EEG signals from the epilepsy research centre of the University of Bonn, the proposed method reaches a highest accuracy of 99.875%, and even when the feature space is reduced to 1/15 of its original dimensionality, the classification accuracy still reaches 99.500%. With the RVM as classifier, model sparsity improves substantially: under the same conditions, the number of relevance vectors is only a small fraction (a few percent) of the number of support vectors of an SVM. The proposed method is thus well suited to the automatic classification of epileptic EEG signals.
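
    The pipeline summarized above (AR coefficients as features, PCA and LDA for dimensionality reduction, a sparse probabilistic classifier) can be sketched as follows. scikit-learn ships no relevance vector machine, so logistic regression stands in for the RVM here, and the signals and labels are synthetic placeholders rather than the Bonn EEG data.

      # Sketch of the pipeline: AR-model features -> PCA -> LDA -> classifier.
      # LogisticRegression is a stand-in for the paper's RVM; the "signals"
      # below are synthetic placeholders, not real EEG.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      def ar_coefficients(signal, order=8):
          """Least-squares fit of an AR(order) model; coefficients are features."""
          X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
          y = signal[order:]
          coef, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coef

      rng = np.random.default_rng(0)
      signals = rng.standard_normal((200, 512))   # 200 fake 512-sample epochs
      labels = rng.integers(0, 2, size=200)       # fake seizure / non-seizure labels
      features = np.array([ar_coefficients(s) for s in signals])

      clf = make_pipeline(PCA(n_components=6),
                          LinearDiscriminantAnalysis(n_components=1),
                          LogisticRegression())
      clf.fit(features, labels)
      print(clf.score(features, labels))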

  15. Open Source Fundamental Industry Classification

    Kakushadze, Zura; Yu, Willie


    We provide complete source code for building a fundamental industry classification based on publicly available and freely downloadable data. We compare various fundamental industry classifications by running a horserace of short-horizon trading signals (alphas) utilizing open source heterotic risk models built using such industry classifications. Our source code includes various stand-alone and portable modules, e.g., for downloading/parsing web data, etc.

  16. Discussion of Disease Classification and Coding of Urticarial Vasculitis



    Objective: To determine the correct ICD-10 code for urticarial vasculitis. Methods: Urticarial vasculitis was coded according to international disease classification principles after study of the relevant literature on allergic vasculitis. Results: Allergic vasculitis takes different codes depending on the result of the anti-glomerular basement membrane antibody test recorded in the medical record: if the antibody is positive, the code is M31.0; if it is negative, the code is D69.0. Conclusion: The international classification of diseases is highly technical and demands strong professional skills. Difficult codes are often encountered in practice; coders need to read the medical record carefully, analyze the case, and find the accurate code in order to improve coding accuracy.
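
    The decision rule reported above is simple enough to state as code; a minimal sketch (the function name and input convention are illustrative):

      # Sketch of the coding rule from the abstract: the ICD-10 code for
      # allergic/urticarial vasculitis depends on the anti-glomerular
      # basement membrane (anti-GBM) antibody test result.
      def icd10_for_allergic_vasculitis(anti_gbm_positive: bool) -> str:
          return "M31.0" if anti_gbm_positive else "D69.0"

      assert icd10_for_allergic_vasculitis(True) == "M31.0"
      assert icd10_for_allergic_vasculitis(False) == "D69.0"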

  17. Classification, disease, and diagnosis.

    Jutel, Annemarie


    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  18. Accelerate Implementation of the WHO Global Code of Practice on International Recruitment of Health Personnel: Experiences From the South East Asia Region; Comment on “Relevance and Effectiveness of the WHO Global Code Practice on the International Recruitment of Health Personnel – Ethical and Systems Perspectives”

    Viroj Tangcharoensathien


    Strengthening the health workforce and universal health coverage (UHC) are among the key targets in the health-related Sustainable Development Goals (SDGs) committed to by the United Nations (UN) Member States in September 2015. The health workforce, the backbone of health systems, contributes to functioning delivery systems. Equitable distribution of functioning services is indispensable to achieving one of the UHC goals, equitable access. This commentary argues that the World Health Organization (WHO) Global Code of Practice on International Recruitment of Health Personnel is relevant to the countries in the South East Asia Region (SEAR), as there is a significant outflow of health workers from several countries and a significant inflow in a few, increased demand for the health workforce in high- and middle-income countries, and slow progress in addressing the "push factors." Awareness and implementation of the Code were low in the first report in 2012 but significantly improved in the second report in 2015. An inter-country workshop convened in 2015 by WHO SEAR to review progress in implementation of the Code was an opportunity for countries to share lessons on policy implementation, retention of health workers, scaling up health professional education, and managing in- and out-migration. The meeting noted that capturing out-migration of health personnel, which is notoriously difficult for source countries, is possible where recruitment is actively managed through government-to-government (G to G) contracts or through licensing of recruiters with mandatory reporting requirements. According to the 2015 second report on the Code, the size and profile of outflowing health workers from SEAR source countries is being captured and now also increasingly shared by destination-country professional councils. This is critical information to foster policy action and implementation of the Code in the Region.

  19. A Tale of Two Disability Coding Systems: The Veterans Administration Schedule for Rating Disabilities (VASRD) vs. Diagnostic Coding Using the International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM)


    2, 34). It seems likely that these costs will only increase once Soldiers with disabilities related to Operation Iraqi Freedom are processed and … demyelinating disorders, residuals of cerebrovascular accidents and traumatic brain injury, seizures, and peripheral … with a neurological disability. Epilepsy (ICD-9-CM code 345) was the most common diagnosis, comprising at least 7.6% of the total neurological …

  20. Research and Application of a Multi-classification Algorithm Based on Error-Correcting Codes and Support Vector Machines

    祖文超; 苑津莎; 王峰; 刘磊


    To enhance the accuracy of transformer fault diagnosis, a multi-class classification algorithm combining error-correcting output codes with support vector machines (SVMs) is proposed. A mathematical model of transformer fault diagnosis is set up according to SVM theory. First, the error-correcting code matrix is used to construct several mutually independent binary SVMs, which raises the accuracy of the classification model. Finally, dissolved gas analysis (DGA) data from transformer oil are used as the training and testing samples of the error-correcting-code SVM to realize transformer fault diagnosis, and the algorithm is additionally checked against UCI data. The multi-class classification algorithm was verified with VS2008 combined with Libsvm, and the results show that the method has high classification accuracy.
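
    Error-correcting output codes reduce a multi-class problem to a set of binary classifiers, one per column of the code matrix, and scikit-learn exposes this construction directly. A sketch on a toy dataset standing in for the paper's dissolved-gas (DGA) samples:

      # Error-correcting output codes over binary SVMs, as in the abstract's
      # setup; the iris dataset is a stand-in for the paper's DGA samples.
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OutputCodeClassifier
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # code_size sets the code-matrix width per class; each column trains one
      # binary SVM, and prediction picks the class with the nearest codeword.
      ecoc = OutputCodeClassifier(estimator=SVC(), code_size=2.0, random_state=0)
      ecoc.fit(X_train, y_train)
      print(ecoc.score(X_test, y_test))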

  1. Establishment of a database for food classification and coding in Chinese dietary exposure assessment

    岳立文; 韩晓梅; 孙金芳; Hong Chen; 王灿楠; 吴永宁; 刘沛; 闵捷


    Objective: To establish the basis for a Chinese dietary exposure assessment database by classifying and coding data from the national dietary survey and pollutant surveillance. Methods: A method combining the CODEX food classification and coding of the Codex Alimentarius Commission (CAC) with the Chinese food classification of the food composition table was applied to classify and code 1,810,703 Chinese dietary consumption records and 487,819 pollutant surveillance records. In the coding system, the first two letters of the respective food group represent the type or source of the food, and the last four digits represent the serial number of the food in the CAC food classification. If a food exists in the CAC food code system, its original food code is used; new codes for foods absent from the CAC system are added according to CAC coding methods. Results: Dietary consumption data were divided into 6 major categories, 19 types and 75 groups, with the agricultural products under pollutant surveillance corresponding to 499 codes. Compared with the CAC food coding system, the Chinese dietary consumption data added two major categories, F (candy and snacks) and G (beverages), plus 4 types, 33 groups and 302 new codes. Most of the additional groups were processed-food groups with Chinese characteristics, such as canned foods, beverages, candy and meat products. Conclusion: The foundation for data exchange in dietary exposure assessment has been established, and the connection of Chinese food classification and coding with CAC data has been achieved.
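
    The code layout described above (a two-letter group prefix for the type or source of the food, plus a four-digit CAC serial number) is easy to make concrete. A small sketch follows; the concrete codes in the example are invented, not entries from the actual CAC or Chinese tables:

      # Sketch of the two-letter + four-digit food code layout described in
      # the abstract. The example codes ("GC 0654") are invented placeholders.
      import re

      CODE_PATTERN = re.compile(r"^([A-Z]{2})\s?(\d{4})$")

      def make_food_code(group: str, serial: int) -> str:
          """Build a code from a two-letter group prefix and a 4-digit serial."""
          if not re.fullmatch(r"[A-Z]{2}", group):
              raise ValueError("group must be two uppercase letters")
          return f"{group} {serial:04d}"

      def parse_food_code(code: str):
          match = CODE_PATTERN.match(code)
          if match is None:
              raise ValueError(f"not a valid food code: {code!r}")
          return match.group(1), int(match.group(2))

      print(make_food_code("GC", 654))    # -> 'GC 0654'
      print(parse_food_code("GC 0654"))   # -> ('GC', 654)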

  2. Classification of EEG sleep stages based on a Bayesian relevance vector machine

    沈跃; 刘慧; 谢洪波; 和卫星


    To overcome the disadvantages of the support vector machine (SVM), namely its computational complexity and difficult parameter selection, a new algorithm based on the sparse Bayesian relevance vector machine (RVM) is proposed to classify electroencephalography (EEG) sleep stages. Inference and optimization of the parameters of the binary-classification RVM are given, and a binary-tree multi-class RVM model is established. Using the existing expert sleep stage annotations, sample entropy (SampEn) features of the waking period and each sleep stage were extracted from the EEG signals of eight healthy, unmedicated adults in the MIT/BIH database. The sleep stages were then identified by the multi-layer RVM pattern classifier built on binary-tree categorization, trained and tested on samples from the waking and sleep periods. The results show that under two radial basis kernel functions the maximal identification rate of the RVM reaches 89.00%, better than that of the SVM (87.67%); the RVM also needs fewer support vectors and a shorter test time than the SVM. The RVM therefore has better classification ability and higher computational efficiency than the traditional SVM and is an effective method for sleep stage identification.
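
    The feature this entry relies on, sample entropy (SampEn), is well defined independently of the data: it is the negative log of the conditional probability that two sequences similar for m points remain similar at m+1 points. A direct O(N²) sketch, with the common illustrative defaults m = 2 and r = 0.2·std:

      # Sketch of the sample entropy (SampEn) feature used in the abstract;
      # a direct O(N^2) implementation for short epochs.
      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          n = len(x)
          if r is None:
              r = 0.2 * x.std()

          def count_matches(length):
              # Same number of templates (n - m) for both lengths, as in the
              # standard SampEn definition; self-matches are excluded.
              templates = np.array([x[i:i + length] for i in range(n - m)])
              count = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)
                  count += np.sum(dist <= r) - 1
              return count

          b = count_matches(m)
          a = count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      rng = np.random.default_rng(1)
      print(sample_entropy(rng.standard_normal(300)))  # white noise: high SampEn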

  3. Remote-Handled Transuranic Content Codes

    Washington TRU Solutions


    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document represents the development of a uniform content code system for RH-TRU waste to be transported in the 72-B cask. It will be used to convert existing waste form numbers, content codes, and site-specific identification codes into a system that is uniform across the U.S. Department of Energy (DOE) sites. The existing waste codes at the sites can be grouped under uniform content codes without any loss of waste characterization information. The RH-TRUCON document provides an all-encompassing description for each content code and compiles this information for all DOE sites. Compliance with waste generation, processing, and certification procedures at the sites (outlined in this document for each content code) ensures that prohibited waste forms are not present in the waste. The content code gives an overall description of the RH-TRU waste material in terms of processes and packaging, as well as the generation location. This helps to provide cradle-to-grave traceability of the waste material so that the various actions required to assess its qualification as payload for the 72-B cask can be performed. The content codes also impose restrictions and requirements on the manner in which a payload can be assembled. The RH-TRU Waste Authorized Methods for Payload Control (RH-TRAMPAC), Appendix 1.3.7 of the 72-B Cask Safety Analysis Report (SAR), describes the current governing procedures applicable for the qualification of waste as payload for the 72-B cask. The logic for this classification is presented in the 72-B Cask SAR. Together, these documents (RH-TRUCON, RH-TRAMPAC, and relevant sections of the 72-B Cask SAR) present the foundation and justification for classifying RH-TRU waste into content codes. Only content codes described in this document can be considered for transport in the 72-B cask. Revisions to this document will be made as additional waste qualifies for transport. Each content code uniquely …

  4. Information gathering for CLP classification.

    Marcello, Ida; Giordano, Felice; Costamagna, Francesca Marina


    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP provides that harmonised classification is performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances), for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedures for gathering information and obtaining data. Data quality is also discussed.

  5. Information gathering for CLP classification

    Ida Marcello


    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP provides that harmonised classification is performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances), for respiratory sensitisers of category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedures for gathering information and obtaining data. Data quality is also discussed.

  6. nRC: non-coding RNA Classifier based on structural features.

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso


    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms of regulative processes, together with the development of high-throughput technologies, has made bioinformatics tools necessary to provide biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure, together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and obtained classification scores using the most common statistical measures; in particular, we reach an accuracy and sensitivity of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image, and its source code is also available online.
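
    The abstract pairs features extracted from secondary structure with a convolutional network. The specific nRC architecture is not given in this record, so the sketch below only shows the general shape of a small 1-D CNN classifier in PyTorch; the one-hot encoding, layer sizes, and fixed input length are assumptions.

      # Illustrative 1-D CNN for ncRNA class prediction, loosely shaped like
      # the approach the abstract describes. Input encoding, layer sizes, and
      # the fixed sequence length are assumptions; only the 13 classes come
      # from the abstract.
      import torch
      import torch.nn as nn

      NUM_CLASSES = 13      # the abstract's 13 ncRNA classes
      ALPHABET = 4          # e.g., one-hot A/C/G/U channels (assumed encoding)
      SEQ_LEN = 200         # fixed-length padded input (assumed)

      model = nn.Sequential(
          nn.Conv1d(ALPHABET, 32, kernel_size=7, padding=3),
          nn.ReLU(),
          nn.MaxPool1d(2),
          nn.Conv1d(32, 64, kernel_size=5, padding=2),
          nn.ReLU(),
          nn.AdaptiveMaxPool1d(1),   # global max pool over positions
          nn.Flatten(),
          nn.Linear(64, NUM_CLASSES),
      )

      x = torch.zeros(8, ALPHABET, SEQ_LEN)   # batch of 8 dummy encoded inputs
      print(model(x).shape)                   # -> torch.Size([8, 13])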

  7. Validity of the recorded International Classification of Diseases, 10th edition diagnoses codes of bone metastases and skeletal-related events in breast and prostate cancer patients in the Danish National Registry of Patients

    Annette Østergaard Jensen


    Objective: The clinical history of bone metastases and skeletal-related events (SREs) secondary to cancers is not well understood. In support of studies of the natural history of bone metastases and SREs in Danish prostate and breast cancer patients, we estimated the sensitivity and specificity of hospital diagnoses for bone metastases and SREs (i.e., radiation therapy to the bone, pathological or osteoporotic fractures, spinal cord compression, and surgery to the bone) in a nationwide medical registry in Denmark. Study design and setting: In North Jutland County, Denmark, we randomly sampled 100 patients with primary prostate cancer and 100 patients with primary breast cancer diagnoses from the National Registry of Patients (NRP) during the period January 1st, 2000 to December 31st, 2000, and followed them for up to five years after their cancer diagnosis. We used information from medical chart reviews as the reference for estimating the sensitivity and specificity of the NRP International Classification of Diseases, 10th edition (ICD-10) coding for bone metastases and SRE diagnoses. Results: For prostate cancer, the overall sensitivity of bone metastases or SRE coding in the NRP was 0.54 (95% confidence interval [CI]: 0.39–0.69), and the specificity was 0.96 (95% CI: 0.87–1.00). For breast cancer, the overall sensitivity was 0.58 (95% CI: 0.34–0.80), and the specificity was 0.95 (95% CI: 0.88–0.99). Conclusion: We measured the validity of ICD-10 coding in the Danish NRP for bone metastases and SREs in prostate and breast cancer patients and found adequate sensitivity and high specificity. The NRP remains a valuable tool for clinical epidemiological studies of bone …
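
    Validation studies of this kind reduce to a 2×2 table against the chart-review gold standard: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), and (for the later records in this list) PPV = TP/(TP+FP). A small sketch with invented counts and Wilson 95% confidence intervals:

      # Sensitivity, specificity, and PPV from a 2x2 validation table, with
      # Wilson 95% confidence intervals. The counts are invented for
      # illustration, not the study's data.
      from math import sqrt

      def wilson_ci(successes, n, z=1.96):
          """Wilson score interval for a binomial proportion."""
          p = successes / n
          denom = 1 + z * z / n
          center = (p + z * z / (2 * n)) / denom
          half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return center - half, center + half

      tp, fp, fn, tn = 27, 1, 23, 24   # illustrative counts only

      for name, num, den in [("sensitivity", tp, tp + fn),
                             ("specificity", tn, tn + fp),
                             ("PPV", tp, tp + fp)]:
          lo, hi = wilson_ci(num, den)
          print(f"{name}: {num / den:.2f} (95% CI {lo:.2f}-{hi:.2f})")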

  8. Simulation and Implementation of an Automated Warehousing System Based on Classification and Coding

    谭涛; 刘沛


    Aiming at problems in current enterprise warehouse management, such as the lack of reasonable classification principles and chaotic data management, and adhering to the principle of cost saving, this paper proposes a design scheme for an automated warehousing system that uses classification and coding technology in place of traditional sensor network technology, and completes a simulated implementation based on a GE RX3i series PLC and a touch-screen system. Through online debugging in the Proficy Machine Edition development environment, the system not only realizes the basic outbound and inbound functions of an automated warehousing system, but also achieves standardized management and visualized operation, providing a useful reference and theoretical basis for industrial application.

  9. The IASLC Lung Cancer Staging Project: Proposals for Coding T Categories for Subsolid Nodules and Assessment of Tumor Size in Part-Solid Tumors in the Forthcoming Eighth Edition of the TNM Classification of Lung Cancer.

    Travis, William D; Asamura, Hisao; Bankier, Alexander A; Beasley, Mary Beth; Detterbeck, Frank; Flieder, Douglas B; Goo, Jin Mo; MacMahon, Heber; Naidich, David; Nicholson, Andrew G; Powell, Charles A; Prokop, Mathias; Rami-Porta, Ramón; Rusch, Valerie; van Schil, Paul; Yatabe, Yasushi


    This article proposes codes for the primary tumor categories of adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA) and a uniform way to measure tumor size in part-solid tumors for the eighth edition of the tumor, node, and metastasis classification of lung cancer. In 2011, new entities of AIS, MIA, and lepidic predominant adenocarcinoma were defined, and they were later incorporated into the 2015 World Health Organization classification of lung cancer. To fit these entities into the T component of the staging system, the Tis category is proposed for AIS, with Tis (AIS) specified if it is to be distinguished from squamous cell carcinoma in situ (SCIS), which is to be designated Tis (SCIS). We also propose that MIA be classified as T1mi. Furthermore, the use of the invasive size for T descriptor size follows a recommendation made in three editions of the Union for International Cancer Control tumor, node, and metastasis supplement since 2003. For tumor size, the greatest dimension should be reported both clinically and pathologically. In nonmucinous lung adenocarcinomas, the computed tomography (CT) findings of ground glass versus solid opacities tend to correspond respectively to lepidic versus invasive patterns seen pathologically. However, this correlation is not absolute; so when CT features suggest nonmucinous AIS, MIA, and lepidic predominant adenocarcinoma, the suspected diagnosis and clinical staging should be regarded as a preliminary assessment that is subject to revision after pathologic evaluation of resected specimens. The ability to predict invasive versus noninvasive size on the basis of solid versus ground glass components is not applicable to mucinous AIS, MIA, or invasive mucinous adenocarcinomas because they generally show solid nodules or consolidation on CT.

  10. Facilitating Internet-Scale Code Retrieval

    Bajracharya, Sushil Krishna


    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  11. Positive predictive values of International Classification of Diseases, 10th revision codes for dermatologic events and hypersensitivity leading to hospitalization or emergency room visit among women with postmenopausal osteoporosis in the Danish and Swedish national patient registries.

    Adelborg, Kasper; Christensen, Lotte Brix; Munch, Troels; Kahlert, Johnny; Trolle Lagerros, Ylva; Tell, Grethe S; Apalset, Ellen M; Xue, Fei; Ehrenstein, Vera


    Clinical epidemiology research studies, including pharmacoepidemiology and pharmacovigilance studies, use routinely collected health data, such as diagnoses recorded in national health and administrative registries, to assess clinical effectiveness and safety of treatments. We estimated positive predictive values (PPVs) of International Classification of Diseases, 10th revision (ICD-10) codes for primary diagnoses of dermatologic events and hypersensitivity recorded at hospitalization or emergency room visit in the national patient registries of Denmark and Sweden among women with postmenopausal osteoporosis (PMO). This validation study included women with PMO identified from the Danish and Swedish national patient registries (2005-2014). Medical charts of the potential cases served as the gold standard for the diagnosis confirmation and were reviewed and adjudicated by physicians. We obtained and reviewed 189 of 221 sampled medical records (86%). The overall PPV was 92.4% (95% confidence interval [CI], 85.1%-96.3%) for dermatologic events, while the PPVs for bullous events and erythematous dermatologic events were 52.5% (95% CI, 37.5%-67.1%) and 12.5% (95% CI, 2.2%-47.1%), respectively. The PPV was 59.0% (95% CI, 48.3%-69.0%) for hypersensitivity; however, the PPV of hypersensitivity increased to 100.0% (95% CI, 67.6%-100.0%) when restricting to diagnostic codes for anaphylaxis. The overall results did not vary by country. Among women with PMO, the PPV for any dermatologic event recorded as the primary diagnosis at hospitalization or at an emergency room visit was high and acceptable for epidemiologic research in the Danish and Swedish national patient registries. The PPV was substantially lower for hypersensitivity leading to hospitalization or emergency room visit.

  12. Positive predictive values of International Classification of Diseases, 10th revision codes for dermatologic events and hypersensitivity leading to hospitalization or emergency room visit among women with postmenopausal osteoporosis in the Danish and Swedish national patient registries

    Adelborg, Kasper; Christensen, Lotte Brix; Munch, Troels; Kahlert, Johnny; Trolle Lagerros, Ylva; Tell, Grethe S; Apalset, Ellen M; Xue, Fei; Ehrenstein, Vera


    Background Clinical epidemiology research studies, including pharmacoepidemiology and pharmacovigilance studies, use routinely collected health data, such as diagnoses recorded in national health and administrative registries, to assess clinical effectiveness and safety of treatments. We estimated positive predictive values (PPVs) of International Classification of Diseases, 10th revision (ICD-10) codes for primary diagnoses of dermatologic events and hypersensitivity recorded at hospitalization or emergency room visit in the national patient registries of Denmark and Sweden among women with postmenopausal osteoporosis (PMO). Methods This validation study included women with PMO identified from the Danish and Swedish national patient registries (2005–2014). Medical charts of the potential cases served as the gold standard for the diagnosis confirmation and were reviewed and adjudicated by physicians. Results We obtained and reviewed 189 of 221 sampled medical records (86%). The overall PPV was 92.4% (95% confidence interval [CI], 85.1%–96.3%) for dermatologic events, while the PPVs for bullous events and erythematous dermatologic events were 52.5% (95% CI, 37.5%–67.1%) and 12.5% (95% CI, 2.2%–47.1%), respectively. The PPV was 59.0% (95% CI, 48.3%–69.0%) for hypersensitivity; however, the PPV of hypersensitivity increased to 100.0% (95% CI, 67.6%–100.0%) when restricting to diagnostic codes for anaphylaxis. The overall results did not vary by country. Conclusion Among women with PMO, the PPV for any dermatologic event recorded as the primary diagnosis at hospitalization or at an emergency room visit was high and acceptable for epidemiologic research in the Danish and Swedish national patient registries. The PPV was substantially lower for hypersensitivity leading to hospitalization or emergency room visit.

  13. An Analysis of the Relationship between IFAC Code of Ethics and CPI

    Ayşe İrem Keskin


    Codes of ethics have become a significant concept in the business world, which is why professional organizations have developed their own codes of ethics over time. In this study, a compatibility classification of the accounting code of ethics of IFAC (the International Federation of Accountants) is carried out on the basis of the action plans assessing the levels of usage by the 175 IFAC national accounting organizations. The classification shows that 60.6% of the member organizations apply the IFAC code in general, while the remaining 39.4% do not apply the code at all. With this classification, the hypothesis that "the national accounting organizations in highly corrupt countries would be less likely to adopt the IFAC ethics code than those in very clean countries" is tested using the Corruption Perception Index (CPI) data. The findings support this hypothesis.

  14. Patent classifications as indicators of intellectual organization

    L. Leydesdorff


    Using the 138,751 patents filed in 2006 under the Patent Cooperation Treaty, co-classification analysis is pursued on the basis of three- and four-digit codes in the International Patent Classification (IPC, 8th ed.). The co-classifications among the patents enable us to analyze and visualize the re

  15. Administrative simplification: adoption of a standard for a unique health plan identifier; addition to the National Provider Identifier requirements; and a change to the compliance date for the International Classification of Diseases, 10th Edition (ICD-10-CM and ICD-10-PCS) medical data code sets. Final rule.


    This final rule adopts the standard for a national unique health plan identifier (HPID) and establishes requirements for the implementation of the HPID. In addition, it adopts a data element that will serve as an other entity identifier (OEID), or an identifier for entities that are not health plans, health care providers, or individuals, but that need to be identified in standard transactions. This final rule also specifies the circumstances under which an organization covered health care provider must require certain noncovered individual health care providers who are prescribers to obtain and disclose a National Provider Identifier (NPI). Lastly, this final rule changes the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2013 to October 1, 2014.

  16. Positive predictive values of International Classification of Diseases, 10th revision codes for dermatologic events and hypersensitivity leading to hospitalization or emergency room visit among women with postmenopausal osteoporosis in the Danish and Swedish national patient registries

    Adelborg K


    Background: Clinical epidemiology research studies, including pharmacoepidemiology and pharmacovigilance studies, use routinely collected health data, such as diagnoses recorded in national health and administrative registries, to assess clinical effectiveness and safety of treatments. We estimated positive predictive values (PPVs) of International Classification of Diseases, 10th revision (ICD-10) codes for primary diagnoses of dermatologic events and hypersensitivity recorded at hospitalization or emergency room visit in the national patient registries of Denmark and Sweden among women with postmenopausal osteoporosis (PMO). Methods: This validation study included women with PMO identified from the Danish and Swedish national patient registries (2005–2014). Medical charts of the potential cases served as the gold standard for the diagnosis confirmation and were reviewed and adjudicated by physicians. Results: We obtained and reviewed 189 of 221 sampled medical records (86%). The overall PPV was 92.4% (95% confidence interval [CI], 85.1%–96.3%) for dermatologic events, while the PPVs for bullous events and erythematous dermatologic events were 52.5% (95% CI, 37.5%–67.1%) and 12.5% (95% CI, 2.2%–47.1%), respectively. The PPV was 59.0% (95% CI, 48.3%–69.0%) for hypersensitivity; however …

  17. Coding Partitions

    Fabio Burderi


    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
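
    Unique decipherability itself, the baseline property this entry generalizes, is decidable with the classical Sardinas–Patterson procedure. A compact Python sketch, using the standard criterion that a code fails to be UD exactly when some dangling suffix is itself a codeword:

      # Sardinas-Patterson test for unique decipherability (UD), the property
      # that coding partitions generalize.
      def residuals(xs, ys):
          """Non-empty suffixes w such that x + w = y for some x in xs, y in ys."""
          return {y[len(x):] for x in xs for y in ys
                  if y.startswith(x) and len(y) > len(x)}

      def is_uniquely_decodable(code):
          seen = set()
          s = residuals(code, code)   # S_1: dangling suffixes of codeword pairs
          while s and frozenset(s) not in seen:
              if s & code:            # a dangling suffix is a codeword -> not UD
                  return False
              seen.add(frozenset(s))
              s = residuals(code, s) | residuals(s, code)
          return True

      print(is_uniquely_decodable({"0", "01", "11"}))   # True
      print(is_uniquely_decodable({"0", "01", "10"}))   # False ("010" is ambiguous)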

  18. Classification system to describe workpieces definitions

    Macconnell, W R


    A Classification System to Describe Workpieces provides information pertinent to the fundamental aspects and principles of coding. This book discusses the various applications of the classification system of coding.Organized into three chapters, this book begins with an overview of the requirements of a system of classification pertaining adequately and equally to design, production, and work planning. This text then examines the purpose of the classification system in production to determine the most suitable means of machining a component. Other chapters consider the optimal utilization of m

  19. Holographic codes

    Latorre, Jose I


    There exists a remarkable four-qutrit state that carries absolute maximal entanglement in all its partitions. Employing this state, we construct a tensor network that delivers a holographic many body state, the H-code, where the physical properties of the boundary determine those of the bulk. This H-code is made of an even superposition of states whose relative Hamming distances are exponentially large with the size of the boundary. This property makes H-codes natural states for a quantum memory. H-codes exist on tori of definite sizes and get classified in three different sectors characterized by the sum of their qutrits on cycles wrapped through the boundaries of the system. We construct a parent Hamiltonian for the H-code which is highly non local and finally we compute the topological entanglement entropy of the H-code.

  20. Sharing code

    Kubilius, Jonas


    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  1. Supervised Transfer Sparse Coding

    Al-Shedivat, Maruan


    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that, in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real-world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.
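
    The sparse-coding building block the abstract relies on can be sketched with scikit-learn: learn a dictionary, encode samples as sparse coefficients, then train a classifier on the codes. This is only the unsupervised ingredient on synthetic placeholder data; the paper's joint optimization of representation, domain transfer and classification (STSC) is not reproduced here.

      # Sparse coding as a feature transform, followed by a classifier.
      # Synthetic placeholder data; not the paper's STSC method itself.
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.standard_normal((300, 64))    # placeholder feature vectors
      y = rng.integers(0, 2, size=300)      # placeholder labels

      dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                         random_state=0)
      dico.fit(X)

      coder = SparseCoder(dictionary=dico.components_,
                          transform_algorithm="lasso_lars", transform_alpha=1.0)
      codes = coder.transform(X)            # sparse representation of each sample

      clf = LogisticRegression(max_iter=1000).fit(codes, y)
      print(clf.score(codes, y))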

  2. Speaking Code

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  3. Polar Codes


    [Front-matter fragments of a technical report on forward error correction (FEC); the recoverable pieces are figure titles ("Capacity of BSC", "Capacity of AWGN channel", "QPSK Gaussian channels"), a section heading "Introduction to polar codes" noting that polar codes were introduced by E. Arikan in [1], and an executive summary describing the results of the project "More reliable wireless …" (ISR Division, under authority of C. A. Wilgenbusch).]

  4. Land Cover - Minnesota Land Cover Classification System

    Minnesota Department of Natural Resources — Land cover data set based on the Minnesota Land Cover Classification System (MLCCS) coding scheme. This data was produced using a combination of aerial photograph...

  5. Linking of the American Academy of Orthopaedic Surgeons Distal Radius Fracture Clinical Practice Guidelines to the International Classification of Functioning, Disability, and Health; International Classification of Diseases; and ICF Core Sets for Hand Conditions.

    Esakki, Saravanan; MacDermid, Joy; Vajravelu, Saipriya


    Background: American Academy of Orthopaedic Surgeons (AAOS) distal radius fracture (DRF) clinical practice guidelines (CPG) are readily available to clinicians, patients, and policymakers. International Classification of Functioning, Disability, and Health (ICF) provides a framework for describing the impact of health conditions. The International Classification of Diseases-10th Revision (ICD-10) is a classification system to classify health conditions as specific disease or disorders. The aim of this study is to analyze and describe the scope and focus of the AAOS DRF CPG using the ICF and ICD-10 as a basis for content analysis, and to compare the content of the CPG with the ICF hand core sets as the reference standard. Methods: Established linking rules were used by 2 independent raters to analyze the 29 recommendations of the AAOS DRF CPG. ICD-10 codes were assigned in the same process. Summary linkage statistics were used to describe the results for ICF and the hand core sets. Results: Among the 29 recommendations of the AAOS DRF CPG, 5 meaningful concepts were linked to the ICF codes. Of these, 5 codes appeared on the comprehensive ICF core set and only 3 codes appeared in the brief ICF core set, and 7 conditions were covered in ICD-10 codes. Conclusions: The AAOS DRF CPG focuses on surgical interventions and has minimal linkage to the constructs of the ICD-10 and ICF. It does not address activity or participation (disability), and is not well linked to key concepts relevant to hand conditions.

  6. The structure of dual Grassmann codes

    Beelen, Peter; Pinero, Fernando


    In this article we study the duals of Grassmann codes, certain codes coming from the Grassmannian variety. Exploiting their structure, we are able to count and classify all their minimum weight codewords. In this classification, the lines lying on the Grassmannian variety play a central role. Related codes, namely the affine Grassmann codes, were introduced more recently in Beelen et al. (IEEE Trans Inf Theory 56(7):3166–3176, 2010), while their duals were introduced and studied in Beelen et al. (IEEE Trans Inf Theory 58(6):3843–3855, 2010). In this paper we also classify and count the minimum weight codewords of the dual affine Grassmann codes. Combining the above classification results, we are able to show that the dual of a Grassmann code is generated by its minimum weight codewords. We use these properties to establish that the increase of value of successive generalized Hamming weights …


    陈志华; 陈惟昌; 邱红霞; 王自强


    According to the degree of degeneracy of the genetic code, the 64 codons can be subdivided into two groups: a high-degeneracy group (3-, 4- and 6-fold degeneracy) and a low-degeneracy group (1- and 2-fold degeneracy). Nine amino acids belong to the high-degeneracy group (G, A, S, P, V, T, L, I, R) and eleven to the low-degeneracy group (C, N, D, Q, K, E, M, H, F, Y, W). Amino acids of the high-degeneracy group have relatively simple molecular structures, rather small molecular weights and a comparatively concentrated distribution of isoelectric points, while in the low-degeneracy group the molecular structures are more complex, the molecular weights relatively large and the distribution of isoelectric points more dispersed. Based on the two-dimensional distribution of the molecular weights (M) and isoelectric points (P) of the amino acids, a classification graph (Venn diagram) of amino acids can be obtained. The MP classification graph can demonstrate many chemical properties of amino acids, such as the size of the molecular weight, degree of degeneracy, polar or non-polar, charged or non-charged, hydrophobic or hydrophilic, and the functional groups of the residues. It is suggested that the amino acids of the high-degeneracy group, being mostly small and simple, constitute the transmembrane structures or structural domains of protein molecules and so might have appeared in the early stage of evolution, whereas the amino acids of the low-degeneracy group, being rather large and complex and ultimately correlated with the functional domains of protein molecules, might have appeared later during evolution.
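
    The grouping described above follows directly from the degeneracy of the standard genetic code; a short sketch that reproduces the 9/11 split:

      # Partition of the 20 standard amino acids by codon degeneracy, following
      # the abstract's grouping: high degeneracy (3-, 4-, 6-fold) vs low
      # (1-, 2-fold). Degeneracy counts are from the standard genetic code.
      CODON_DEGENERACY = {
          "G": 4, "A": 4, "S": 6, "P": 4, "V": 4, "T": 4, "L": 6, "I": 3,
          "R": 6, "C": 2, "N": 2, "D": 2, "Q": 2, "K": 2, "E": 2, "M": 1,
          "H": 2, "F": 2, "Y": 2, "W": 1,
      }

      high = sorted(aa for aa, d in CODON_DEGENERACY.items() if d >= 3)
      low = sorted(aa for aa, d in CODON_DEGENERACY.items() if d <= 2)

      print(len(high), high)   # 9 amino acids: A G I L P R S T V
      print(len(low), low)     # 11 amino acids: C D E F H K M N Q W Y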

  8. Combined genetic and splicing analysis of BRCA1 c.[594-2A>C; 641A>G] highlights the relevance of naturally occurring in-frame transcripts for developing disease gene variant classification algorithms

    de la Hoya, Miguel; Soukarieh, Omar; López-perolio, Irene


    A recent analysis using family history weighting and co-observation classification modeling indicated that BRCA1 c.594-2A>C (IVS9-2A>C), previously described to cause exon 10 skipping (a truncating alteration), displays characteristics inconsistent with those of a high-risk pathogenic BRCA1 variant. … transcripts predicted to encode a BRCA1 protein with tumor suppression function. We confirm that BRCA1 c.[594-2A>C;641A>G] should not be considered a high-risk pathogenic variant. Importantly, results from our detailed mRNA analysis suggest that BRCA-associated cancer risk is likely not markedly increased for individuals who carry a truncating variant in BRCA1 exons 9 or 10, or any other BRCA1 allele that permits 20–30% of tumor suppressor function. More generally, our findings highlight the importance of assessing naturally occurring alternative splicing for clinical evaluation of variants in disease …

  9. The importance of wound documentation and classification.

    Russell, L

    Good wound documentation has become increasingly important over the last 10 years. Wound assessment provides a baseline against which a patient's plan of care can be evaluated. A number of documents have been implemented, including the 'Code of Professional Conduct for Nurses, Midwives and Health Visitors' (UKCC, 1992), the 'Post-registration Education Project' (UKCC, 1997), 'Standards of Records and Record Keeping' (UKCC, 1998), and 'Keeping the Record Straight' (NHS Executive (NHS E), 1993). These documents require nurses to maintain their professional knowledge and competence and to recognize any deficiency in their knowledge; having recognized a deficiency, they should read the relevant literature and/or attend a study day on wound care. Nursing records are the first source of evidence investigated when a complaint is made. Wound assessment is very complex, and a standardized approach to evaluation needs to be adopted. Such evaluation should encompass colour classification, wound measurement, and classification of the tissue type present in the wound. There are numerous methods of measuring wounds; these range from the simple, such as manual estimation by means of a ruler or wound tracing, to the more technical, such as computerized image analysis and colour imaging using hue, saturation and intensity. Photography, in conjunction with nursing notes, provides a very good form of wound documentation and can provide clear evidence if required for legal cases.

  10. Creating a classification of image types in the medical literature for visual categorization

    Müller, Henning; Kalpathy-Cramer, Jayashree; Demner-Fushman, Dina; Antani, Sameer


    Content-based image retrieval (CBIR) from specialized collections has often been proposed for use in such areas as diagnostic aid, clinical decision support, and teaching. The visual retrieval from broad image collections such as teaching files, the medical literature or web images, by contrast, has not yet reached a high maturity level compared to textual information retrieval. Visual image classification into a relatively small number of classes (20-100) on the other hand, has shown to deliver good results in several benchmarks. It is, however, currently underused as a basic technology for retrieval tasks, for example, to limit the search space. Most classification schemes for medical images are focused on specific areas and consider mainly the medical image types (modalities), imaged anatomy, and view, and merge them into a single descriptor or classification hierarchy. Furthermore, they often ignore other important image types such as biological images, statistical figures, flowcharts, and diagrams that frequently occur in the biomedical literature. Most of the current classifications have also been created for radiology images, which are not the only types to be taken into account. With Open Access becoming increasingly widespread particularly in medicine, images from the biomedical literature are more easily available for use. Visual information from these images and knowledge that an image is of a specific type or medical modality could enrich retrieval. This enrichment is hampered by the lack of a commonly agreed image classification scheme. This paper presents a hierarchy for classification of biomedical illustrations with the goal of using it for visual classification and thus as a basis for retrieval. The proposed hierarchy is based on relevant parts of existing terminologies, such as the IRMA-code (Image Retrieval in Medical Applications), ad hoc classifications and hierarchies used in imageCLEF (Image retrieval task at the Cross-Language Evaluation

  11. Speech coding

    Ravishankar, C., Hughes Network Systems, Germantown, MD


    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision; the end-to-end performance of a digital link thus becomes essentially independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech also became extremely important from a service provision point of view. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the …

  12. Speaking Code

    Cox, Geoff

    … alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free … development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech … expression in the public realm. The book’s line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing.

  13. Classifications for cesarean section: a systematic review.

    Maria Regina Torloni

    BACKGROUND: Rising cesarean section (CS) rates are a major public health concern and cause worldwide debate. Proposing and implementing effective measures to reduce or increase CS rates where necessary requires an appropriate classification. Despite several existing CS classifications, there has not yet been a systematic review of these. This study aimed to (1) identify the main CS classifications used worldwide and (2) analyze the advantages and deficiencies of each system. METHODS AND FINDINGS: Three electronic databases were searched for classifications published 1968–2008. Two reviewers independently assessed classifications using a form created based on items rated as important by international experts. Seven domains (ease, clarity, mutually exclusive categories, totally inclusive classification, prospective identification of categories, reproducibility, implementability) were assessed and graded. Classifications were tested in 12 hypothetical clinical case scenarios. From a total of 2948 citations, 60 were selected for full-text evaluation and 27 classifications identified. Indications classifications present important limitations, and their overall scores ranged from 2 to 9 (maximum grade = 14). Degree-of-urgency classifications also had several drawbacks (overall scores 6–9). Woman-based classifications performed best (scores 5–14). Other types of classifications require data not routinely collected and may not be relevant in all settings (scores 3–8). CONCLUSIONS: This review and critical appraisal of CS classifications is a methodologically sound contribution to establishing the basis for the appropriate monitoring and rational use of CS. Results suggest that woman-based classifications in general, and Robson's classification in particular, would be in the best position to fulfill current international and local needs, and that efforts to develop an internationally applicable CS classification would be most appropriately placed in building upon this …

  14. An automated cirrus classification

    Gryspeerdt, Edward; Quaas, Johannes; Sourdeval, Odran; Goren, Tom


    Cirrus clouds play an important role in determining the radiation budget of the earth, but our understanding of the lifecycle of and controls on cirrus clouds remains incomplete. Cirrus clouds can have very different properties and development depending on their environment, particularly during their formation. However, the relevant factors often cannot be distinguished using commonly retrieved satellite data products (such as cloud optical depth). In particular, the initial cloud phase has been identified as an important factor in cloud development, but although back-trajectory-based methods can provide information on the initial cloud phase, they are computationally expensive and depend on the cloud parametrisations used in re-analysis products. In this work, a classification system (Identification and Classification of Cirrus, IC-CIR) is introduced. Using re-analysis and satellite data, cirrus clouds are separated into four main types: frontal, convective, orographic and in-situ. The properties of these classes show that this classification is able to provide useful information on the properties and initial phase of cirrus clouds, information that could not be provided by instantaneous satellite-retrieved cloud properties alone. This classification is designed to be easily implemented in global climate models, helping to improve future comparisons between observations and models and reducing the uncertainty in cirrus cloud properties, leading to improved cloud parametrisations.

  15. Classification of waste packages

    Mueller, H.P.; Sauer, M.; Rojahn, T. [Versuchsatomkraftwerk GmbH, Kahl am Main (Germany)


    A barrel gamma scanning unit has been in use at the VAK for the classification of radioactive waste materials since 1998. The unit provides the facility operator with the data required for the classification of waste barrels. Once these data have been entered into the AVK data processing system, the radiological status of raw waste, as well as pre-treated and processed waste, can be tracked from the point of origin to the point at which the waste is delivered to final storage. Since the barrel gamma scanning unit was commissioned in 1998, approximately 900 barrels have been measured and the relevant data required for classification collected and analyzed. Based on the positive experience with the mobile barrel gamma scanning unit, the VAK now offers the classification of barrels as a service to external users. Depending upon waste quantity accumulation, this measurement unit offers facility operators a reliable, time-saving and cost-effective means of identifying and documenting the radioactivity inventory of barrels scheduled for final storage. (orig.)

  16. The SIC Are Dying: New Federal Industry Code on the Way.

    Quint, Barbara


    Standard Industrial Classification (SIC) codes have structured most federal and many private collections of industry statistics. This article introduces the new industry classification hierarchy called the North American Industry Classification System (NAICS) and its effects on searchers. A sidebar includes sources of information on the new code.…

  17. The Aster code; Code Aster

    Delbecq, J.M


    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricité de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  18. Tissue Classification

    Van Leemput, Koen; Puonti, Oula


    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification…

  19. Product Classification

    U.S. Department of Health & Human Services — This database contains medical device names and associated information developed by the Center. It includes a three letter device product code and a Device Class...

  20. Optimal codes as Tanner codes with cyclic component codes

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng


    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe the codes succinctly using Gröbner bases.

  1. 78 FR 58153 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...


    ... RIN 3206-AM78 Prevailing Rate Systems; North American Industry Classification System Based Federal... Industry Classification System (NAICS) codes currently used in Federal Wage System wage survey industry... 2007 North American Industry Classification System (NAICS) codes used in Federal Wage System (FWS)...

  2. Minimally-sized balanced decomposition schemes for multi-class classification

    Smirnov, E.N.; Moed, M.; Nalbantov, G.I.; Sprinkhuizen-Kuyper, I.G.


    Error-Correcting Output Coding (ECOC) is a well-known class of decomposition schemes for multi-class classification. It allows representing any multi-class classification problem as a set of binary classification problems. Due to code redundancy, ECOC schemes can significantly improve generalization performance.
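
    For readers unfamiliar with ECOC, the sketch below illustrates the general idea (not the paper's minimally-sized balanced schemes) using scikit-learn's stock implementation: each class is assigned a binary code word, one binary classifier is trained per code bit, and a test sample is assigned to the class whose code word is closest to the predicted bits.

```python
# Minimal ECOC illustration with scikit-learn; the dataset, base learner,
# and code_size are arbitrary choices, not the paper's balanced schemes.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# code_size=2.0 -> code words of length 2 * n_classes (a redundant code)
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("test accuracy: %.3f" % ecoc.score(X_te, y_te))
```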

  3. Transporter Classification Database (TCDB)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  4. One dimensional Convolutional Goppa Codes over the projective line

    Pérez, J A Domínguez; Sotelo, G Serrano


    We give a general method to construct MDS one-dimensional convolutional codes. Our method generalizes previous constructions of H. Gluesing-Luerssen and B. Langfeld. Moreover we give a classification of one-dimensional Convolutional Goppa Codes and propose a characterization of MDS codes of this type.

  5. 7 CFR 28.525 - Symbols and code numbers.


    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Symbols and code numbers. 28.525 Section 28.525... REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Standards Symbols and Code Numbers Used in Recording Cotton Classification § 28.525 Symbols and code numbers. For administrative convenience, the symbols...

  6. Xenolog classification.

    Darby, Charlotte A; Stolzer, Maureen; Ropp, Patrick J; Barker, Daniel; Durand, Dannie


    Orthology analysis is a fundamental tool in comparative genomics. Sophisticated methods have been developed to distinguish between orthologs and paralogs and to classify paralogs into subtypes depending on the duplication mechanism and timing, relative to speciation. However, no comparable framework exists for xenologs: gene pairs whose history, since their divergence, includes a horizontal transfer. Further, the diversity of gene pairs that meet this broad definition calls for classification of xenologs with similar properties into subtypes. We present a xenolog classification that uses phylogenetic reconciliation to assign each pair of genes to a class based on the event responsible for their divergence and the historical association between genes and species. Our classes distinguish between genes related through transfer alone and genes related through duplication and transfer. Further, they separate closely-related genes in distantly-related species from distantly-related genes in closely-related species. We present formal rules that assign gene pairs to specific xenolog classes, given a reconciled gene tree with an arbitrary number of duplications and transfers. These xenology classification rules have been implemented in software and tested on a collection of ∼13 000 prokaryotic gene families. In addition, we present a case study demonstrating the connection between xenolog classification and gene function prediction. The xenolog classification rules have been implemented in N otung 2.9, a freely available phylogenetic reconciliation software package. . Gene trees are available at . Supplementary data are available at Bioinformatics online.

  7. Optimizing Classification in Intelligence Processing


    Acronyms from the report: ACC (Classification Accuracy), AUC (Area Under the ROC Curve), CI (Competitive Intelligence), COMINT (Communications Intelligence), DoD (Department of…) …an indispensable tool to support a national leader's decision-making process, competitive intelligence (CI) has emerged in recent decades as an environment meant… effectiveness for the intelligence product in a competitive intelligence environment: accuracy, objectivity, usability, relevance, readiness, and timeliness

  8. State building energy codes status



    This document contains the State Building Energy Codes Status prepared by Pacific Northwest National Laboratory for the U.S. Department of Energy under Contract DE-AC06-76RL01830 and dated September 1996. The U.S. Department of Energy's Office of Codes and Standards has developed this document to provide an information resource for individuals interested in energy efficiency of buildings and the relevant building energy codes in each state and U.S. territory. This is considered to be an evolving document and will be updated twice a year. In addition, special state updates will be issued as warranted.

  9. Combined genetic and splicing analysis of BRCA1 c.[594-2A>C; 641A>G] highlights the relevance of naturally occurring in-frame transcripts for developing disease gene variant classification algorithms.

    de la Hoya, Miguel; Soukarieh, Omar; López-Perolio, Irene; Vega, Ana; Walker, Logan C; van Ierland, Yvette; Baralle, Diana; Santamariña, Marta; Lattimore, Vanessa; Wijnen, Juul; Whiley, Philip; Blanco, Ana; Raponi, Michela; Hauke, Jan; Wappenschmidt, Barbara; Becker, Alexandra; Hansen, Thomas V O; Behar, Raquel; Investigators, KConFaB; Niederacher, Diether; Arnold, Norbert; Dworniczak, Bernd; Steinemann, Doris; Faust, Ulrike; Rubinstein, Wendy; Hulick, Peter J; Houdayer, Claude; Caputo, Sandrine M; Castera, Laurent; Pesaran, Tina; Chao, Elizabeth; Brewer, Carole; Southey, Melissa C; van Asperen, Christi J; Singer, Christian F; Sullivan, Jan; Poplawski, Nicola; Mai, Phuong; Peto, Julian; Johnson, Nichola; Burwinkel, Barbara; Surowy, Harald; Bojesen, Stig E; Flyger, Henrik; Lindblom, Annika; Margolin, Sara; Chang-Claude, Jenny; Rudolph, Anja; Radice, Paolo; Galastri, Laura; Olson, Janet E; Hallberg, Emily; Giles, Graham G; Milne, Roger L; Andrulis, Irene L; Glendon, Gord; Hall, Per; Czene, Kamila; Blows, Fiona; Shah, Mitul; Wang, Qin; Dennis, Joe; Michailidou, Kyriaki; McGuffog, Lesley; Bolla, Manjeet K; Antoniou, Antonis C; Easton, Douglas F; Couch, Fergus J; Tavtigian, Sean; Vreeswijk, Maaike P; Parsons, Michael; Meeks, Huong D; Martins, Alexandra; Goldgar, David E; Spurdle, Amanda B


    A recent analysis using family history weighting and co-observation classification modeling indicated that BRCA1 c.594-2A > C (IVS9-2A > C), previously described to cause exon 10 skipping (a truncating alteration), displays characteristics inconsistent with those of a high-risk pathogenic BRCA1 variant. We used large-scale genetic and clinical resources from the ENIGMA, CIMBA and BCAC consortia to assess the pathogenicity of c.594-2A > C. The combined odds for causality considering case-control, segregation and breast tumor pathology information was 3.23 × 10^(-8). Our data indicate that c.594-2A > C is always in cis with c.641A > G. The spliceogenic effect of c.[594-2A > C;641A > G] was characterized using RNA analysis of human samples and splicing minigenes. As expected, c.[594-2A > C;641A > G] caused exon 10 skipping, albeit not due to c.594-2A > C impairing the acceptor site but rather by c.641A > G modifying exon 10 splicing regulatory element(s). Multiple blood-based RNA assays indicated that the variant allele did not produce detectable levels of full-length transcripts, with a per-allele BRCA1 expression profile composed of ≈70-80% truncating transcripts and ≈20-30% of in-frame Δ9,10 transcripts predicted to encode a BRCA1 protein with tumor suppression function. We confirm that BRCA1 c.[594-2A > C;641A > G] should not be considered a high-risk pathogenic variant. Importantly, results from our detailed mRNA analysis suggest that BRCA-associated cancer risk is likely not markedly increased for individuals who carry a truncating variant in BRCA1 exons 9 or 10, or any other BRCA1 allele that permits 20-30% of tumor suppressor function. More generally, our findings highlight the importance of assessing naturally occurring alternative splicing for clinical evaluation of variants in disease-causing genes. © The Author 2016. Published by Oxford University Press. All rights reserved.

  10. Opportunities and challenges for quality and safety applications in ICD-11: an international survey of users of coded health data.

    Southern, Danielle A; Hall, Marc; White, Deborah E; Romano, Patrick S; Sundararajan, Vijaya; Droesler, Saskia E; Pincus, Harold A; Ghali, William A


    In 2018, the World Health Organization (WHO) plans to release the 11th revision of the International Classification of Diseases (ICD). The overall goal of the WHO is to produce a new disease classification that has an enhanced ability to capture health concepts in a manner that is compatible with contemporary information systems. Accordingly, our objective was to identify opportunities and challenges in improving the utility of ICD-11 for quality and safety applications. Design: a survey study of international stakeholders with expertise in either the production or use of coded health data. Participants: international producers or users of ICD-coded health care data. We used a snowball sampling approach to identify individuals with relevant expertise in 12 countries, mostly from North America, Europe, and Australasia. An 8-item online survey included questions on demographic characteristics, familiarity with ICD, experience using ICD-coded data on healthcare quality and safety, opinions regarding the use of ICD classification systems for quality and safety measurement, and current limitations and potential future improvements that would permit better coding of quality and safety concepts in ICD-11. Two-hundred fifty-eight unique individuals accessed the online survey; 246 provided complete responses. The respondents identified specific desires for the ICD revision: more code content for adverse events/complications; a desire for code clustering mechanisms; the need for diagnosis timing information; and the addition of better code definitions to reference materials. These findings reinforce the vision and existing work plan of the WHO's ICD revision process, because each of these desires is being addressed. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.


  11. Sidelobe suppression codes

    Wang Feixue; Ou Gang; Zhuang Zhaowen


    A kind of novel binary phase code named the sidelobe suppression code is proposed in this paper. It is defined as the code whose corresponding optimal sidelobe suppression filter outputs the minimum sidelobes. It is shown that there do exist sidelobe suppression codes better than the conventional optimal codes, the Barker codes. For example, the sidelobe suppression code of length 11 with a filter of length 39 has a sidelobe level up to 17 dB better than that of the Barker code with the same code length and filter length.
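
    The baseline being improved on here is the matched-filter response of the Barker code. A quick numerical check of the length-11 Barker code's aperiodic autocorrelation (standard, easily verified numbers) shows the familiar peak sidelobe of 1 against a mainlobe of 11, about -20.8 dB.

```python
import numpy as np

# Length-11 Barker code; its aperiodic autocorrelation has peak sidelobe
# magnitude 1 against a mainlobe of 11 (~ -20.8 dB) -- the matched-filter
# baseline that a mismatched sidelobe suppression filter tries to beat.
barker11 = np.array([1, 1, 1, -1, -1, -1, 1, -1, -1, 1, -1])

acf = np.correlate(barker11, barker11, mode="full")
mainlobe = acf[len(barker11) - 1]             # central lag
sidelobes = np.delete(acf, len(barker11) - 1)
psl = np.max(np.abs(sidelobes))

print("mainlobe:", mainlobe)                              # 11
print("peak sidelobe:", psl)                              # 1
print("PSL: %.1f dB" % (20 * np.log10(psl / mainlobe)))   # about -20.8 dB
```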

  12. Fractal methods in image analysis and coding

    Neary, David


    In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a…

  13. Why relevance theory is relevant for lexicography

    Bothma, Theo; Tarp, Sven


    …socio-cognitive and affective relevance. It then shows, by means of examples, why relevance is important from a user perspective in the extra-lexicographical pre- and post-consultation phases and in the intra-lexicographical consultation phase. It defines an additional type of subjective relevance that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users' information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new…

  14. From concatenated codes to graph codes

    Justesen, Jørn; Høholdt, Tom


    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product-type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing…

  15. Discriminative sparse coding on multi-manifolds

    Wang, J.J.-Y.


    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding) learn codebooks and codes in an unsupervised manner and neglect the class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutation identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. © 2013 The Authors. All rights reserved.
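
    The multi-manifold margin formulation itself is not reproduced here, but the underlying idea of class-conditioned codebooks can be sketched with off-the-shelf tools: learn one dictionary per class and assign a test sample to the class with the smallest sparse reconstruction residual. The dataset and parameters below are illustrative.

```python
# Simplified sketch of class-conditioned sparse coding (not the paper's
# multi-manifold margin formulation): one codebook per class, classification
# by smallest reconstruction residual.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

dicts = {}
for c in np.unique(y_tr):
    dl = MiniBatchDictionaryLearning(n_components=30, alpha=1.0, random_state=0)
    dl.fit(X_tr[y_tr == c])
    dicts[c] = dl.components_            # class-conditioned codebook

def predict(x):
    residuals = {}
    for c, D in dicts.items():
        code = sparse_encode(x[None, :], D, algorithm="omp",
                             n_nonzero_coefs=5)
        residuals[c] = np.linalg.norm(x - code @ D)
    return min(residuals, key=residuals.get)

preds = np.array([predict(x) for x in X_te[:100]])
print("accuracy on 100 test digits:", np.mean(preds == y_te[:100]))
```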

  16. Good Codes From Generalised Algebraic Geometry Codes

    Jibril, Mubarak; Ahmed, Mohammed Zaki; Tjhai, Cen


    Algebraic geometry codes or Goppa codes are defined with places of degree one. In constructing generalised algebraic geometry codes, places of higher degree are used. In this paper we present 41 new codes over GF(16) which improve on the best known codes of the same length and rate. The construction method uses places of small degree with a technique originally published over 10 years ago for the construction of generalised algebraic geometry codes.

  17. Small-scale classification schemes

    Hertzum, Morten


    …While coordination mechanisms focus on how classification schemes enable cooperation among people pursuing a common goal, boundary objects embrace the implicit consequences of classification schemes in situations involving conflicting goals. Moreover, the requirements specification focused on functional requirements and provided little information about why these requirements were considered relevant. This stands in contrast to the discussions at the project meetings, where the software engineers made frequent use of both abstract goal descriptions and concrete examples to make sense of the requirements. This difference between the written requirements specification and the oral discussions at the meetings may help explain software engineers' general preference for people, rather than documents, as their information sources.

  18. Reactive transport codes for subsurface environmental simulation

    Steefel, C.I.; Appelo, C.A.J.; Arora, B.; Kalbacher, D.; Kolditz, O.; Lagneau, V.; Lichtner, P.C.; Mayer, K.U.; Meeussen, J.C.L.; Molins, S.; Moulton, D.; Shao, D.; Simunek, J.; Spycher, N.; Yabusaki, S.B.; Yeh, G.T.


    A general description of the mathematical and numerical formulations used in modern numerical reactive transport codes relevant for subsurface environmental simulations is presented. The formulations are followed by short descriptions of commonly used and available subsurface simulators that consider these formulations.

  19. A Better Handoff for Code Officials

    Conover, David R.; Yerkes, Sara


    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn't leave much time to search the range of helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach's game plan, the Resource Guide is an "energy playbook" for code officials.

  20. Material coding rule and inventory classification planning based on UFIDA software



    This paper introduces the coding rules under which ERP software operates, as well as their application in enterprise operations. Proper coding can improve operating efficiency, enable quick response to orders, and strengthen the enterprise's competitiveness in the market.

  1. My View on Code-Switching



    Code-switching is a linguistic phenomenon that has been studied by linguists from different aspects. It is widely used in people's daily communication, especially by people who have developed some knowledge and ability in a second language and thus become bilingual. In this article, the author presents her understanding of the definition of code-switching and its classification, with the help of some specific examples.

  2. Wavelet features in motion data classification

    Szczesna, Agnieszka; Świtoński, Adam; Słupik, Janusz; Josiński, Henryk; Wojciechowski, Konrad


    The paper deals with the problem of motion data classification based on the results of multiresolution analysis implemented in the form of a quaternion lifting scheme. The scheme operates directly on time series of rotations coded as unit quaternion signals. In this work, new features derived from wavelet energy and entropy are proposed. To validate the approach, a gait database containing data from 30 different humans is used. The obtained results are satisfactory: the classification has over 91% accuracy.
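
    The quaternion lifting scheme itself is more involved, but wavelet energy and entropy features can be illustrated on a plain scalar signal. The sketch below assumes the PyWavelets package; the wavelet, decomposition level, and test signal are arbitrary choices.

```python
# Generic sketch of wavelet energy/entropy features (a plain scalar wavelet
# decomposition stands in for the paper's quaternion lifting scheme).
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db2", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rel = energies / energies.sum()                # relative energy per sub-band
    entropy = -np.sum(rel * np.log(rel + 1e-12))   # wavelet (Shannon) entropy
    return np.concatenate([energies, [entropy]])   # feature vector

t = np.linspace(0, 1, 256)
gait_like = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 17 * t)
print(wavelet_energy_entropy(gait_like))
```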

  3. Conditional entropy coding of DCT coefficients for video compression

    Sipitca, Mihai; Gillman, David W.


    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit-rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increase in running time and some increased memory use.
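
    The conditioning idea can be sketched as follows: one Huffman table is built per block class, so run-length events that are frequent in that class receive short codewords. The event statistics and class labels below are made up for illustration.

```python
# Sketch of conditional Huffman tables; symbol statistics and block classes
# are hypothetical, not taken from the paper.
import heapq
from itertools import count

def huffman_code(freqs):
    """Map symbols to bit strings from a {symbol: frequency} table."""
    tie = count()  # tie-breaker so heapq never compares the symbol dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Hypothetical run-length event statistics, conditioned on block energy class.
stats = {
    "low_energy":  {(0, 1): 60, (1, 1): 25, (0, 2): 10, "EOB": 5},
    "high_energy": {(0, 1): 20, (1, 1): 20, (0, 2): 30, "EOB": 30},
}
tables = {cls: huffman_code(f) for cls, f in stats.items()}
# The same event gets different codeword lengths under different classes:
print(tables["low_energy"][(0, 1)], tables["high_energy"][(0, 1)])
```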

  4. Discriminative Structured Dictionary Learning for Image Classification

    王萍; 兰俊花; 臧玉卫; 宋占杰


    In this paper, a discriminative structured dictionary learning algorithm is presented. To enhance the dictionary's discriminative power, the reconstruction error, classification error and inhomogeneous representation error are integrated into the objective function. The proposed approach learns a single structured dictionary and a linear classifier jointly. The learned dictionary encourages samples from the same class to have similar sparse codes, and samples from different classes to have dissimilar sparse codes. The solution to the objective function is obtained by employing a feature-sign search algorithm and the Lagrange dual method. Experimental results on three public databases demonstrate that the proposed approach outperforms several recently proposed dictionary learning techniques for classification.

  5. Cracking the code of oscillatory activity.

    Philippe G Schyns


    Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times more) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response--that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.

  6. Schrödinger's code-script: not a genetic cipher but a code of development.

    Walsby, A E; Hodge, M J S


    In his book What is Life? Erwin Schrödinger coined the term 'code-script', thought by some to be the first published suggestion of a hereditary code and perhaps a forerunner of the genetic code. The etymology of 'code' suggests three meanings relevant to 'code-script', which we distinguish as 'cipher-code', 'word-code' and 'rule-code'. Cipher-codes and word-codes entail translation of one set of characters into another. The genetic code comprises not one but two cipher-codes: the first is the DNA 'base-pairing cipher'; the second is the 'nucleotide-amino-acid cipher', which involves the translation of DNA base sequences into amino-acid sequences. We suggest that Schrödinger's code-script is a form of 'rule-code', a set of rules that, like the 'highway code' or 'penal code', requires no translation of a message. Schrödinger first relates his code-script to chromosomal genes made of protein. Ignorant of its properties, however, he later abandons 'protein' and adopts in its place a hypothetical, isomeric 'aperiodic solid' whose atoms he imagines rearranged in countless different conformations, which together are responsible for the patterns of ontogenetic development. In an attempt to explain the large number of combinations required, Schrödinger referred to the Morse code (a cipher) but in doing so unwittingly misled readers into believing that he intended a cipher-code resembling the genetic code. We argue that the modern equivalent of Schrödinger's code-script is a rule-code of organismal development based largely on the synthesis, folding, properties and interactions of numerous proteins, each performing a specific task. Copyright © 2016. Published by Elsevier Ltd.

  7. Holistic facial expression classification

    Ghent, John; McDonald, J.


    This paper details a procedure for classifying facial expressions, a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
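
    A schematic stand-in for this pipeline, with random data in place of real landmark coordinates, shows the PCA-plus-SVM stage; the landmark count matches the paper's 122 points, everything else is illustrative.

```python
# Schematic stand-in for the FESM pipeline: PCA reduces landmark
# dimensionality, then an SVM separates expressions. Data are random;
# a real system would use 122 facial landmark coordinates per image.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_landmarks = 200, 122
X = rng.normal(size=(n_samples, 2 * n_landmarks))  # (x, y) per landmark
y = rng.integers(0, 2, size=n_samples)             # e.g. AU present / absent
X[y == 1, :10] += 1.0                              # inject a separable cue

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```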

  8. Space Time Codes from Permutation Codes

    Henkel, Oliver


    A new class of space time codes with high performance is presented. The code design utilizes tailor-made permutation codes, which are known to have large minimal distances as spherical codes. A geometric connection between spherical and space time codes has been used to translate them into the final space time codes. Simulations demonstrate that the performance increases with the block lengths, a result that has been conjectured already in previous work. Further, the connection to permutation codes allows for moderately complex encoding/decoding algorithms.

  9. Classification in context

    Mai, Jens Erik


    This paper surveys the classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focus on contextual information as the guide for the design and construction of classification schemes.

  10. Classification in Australia.

    McKinlay, John

    Despite some inroads by the Library of Congress Classification and short-lived experimentation with Universal Decimal Classification and Bliss Classification, Dewey Decimal Classification, with its ability in recent editions to be hospitable to local needs, remains the most widely used classification system in Australia. Although supplemented at…

  11. The Planning Countermeasures of Mixed-use Land after the Implementation of the New Edition of the Code for Classification of Urban Land Use

    任利剑; 运迎霞


    The Code for Classification of Urban Land Use and Planning Standards of Development Land (GB50137-2011) came into force in January 2012. The new code embodies the idea of mixed land use by defining principles for characterizing mixed-use land, expanding the compatibility of some land-use types, and enhancing the openness of the classification system. However, it still falls short in guiding the preparation and management of plans for mixed-use land. Drawing on relevant mature experience, and considering the revised classification standard and the characteristics of China's planning system, this paper offers a preliminary discussion of planning responses to mixed-use land at three levels: the master plan, the regulatory plan, and the site plan.

  12. Fundamentals of convolutional coding

    Johannesson, Rolf


    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. It includes: two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  13. Tools for Rapid Understanding of Malware Code


    …semantics-based techniques for identifying and removing obfuscation code, and the synthesis of simplification techniques to transform the resulting… software obfuscation. Specific accomplishments include: (a) development of improved techniques for information flow analysis in software [8]; (b) generic techniques for…

  14. Transcriptome classification reveals molecular subtypes in psoriasis

    Ainali Chrysanthi


    Background: Psoriasis is an immune-mediated disease characterised by chronically elevated pro-inflammatory cytokine levels, leading to aberrant keratinocyte proliferation and differentiation. Although certain clinical phenotypes, such as plaque psoriasis, are well defined, it is currently unclear whether there are molecular subtypes that might impact on prognosis or treatment outcomes. Results: We present a pipeline for patient stratification through a comprehensive analysis of gene expression in paired lesional and non-lesional psoriatic tissue samples, compared with controls, to establish differences in RNA expression patterns across all tissue types. Ensembles of decision tree predictors were employed to cluster psoriatic samples on the basis of gene expression patterns and reveal gene expression signatures that best discriminate molecular disease subtypes. This multi-stage procedure was applied to several published psoriasis studies and a comparison of gene expression patterns across datasets was performed. Conclusion: Overall, classification of psoriasis gene expression patterns revealed distinct molecular sub-groups within the clinical phenotype of plaque psoriasis. Enrichment for TGFb and ErbB signaling pathways, noted in one of the two psoriasis subgroups, suggested that this group may be more amenable to therapies targeting these pathways. Our study highlights the potential biological relevance of using ensemble decision tree predictors to determine molecular disease subtypes, in what may initially appear to be a homogenous clinical group. The R code used in this paper is available upon request.

  15. Hydrography - MO 2009 WQS Stream Classifications and Use (SHP)

    NSGIC GIS Inventory (aka Ramona) — This data set contains Missouri Water Quality Standards (WQS) stream classifications and use designations described in the Missouri Code of State Regulations (CSR),...

  16. Collaborative Representation based Classification for Face Recognition

    Zhang, Lei; Feng, Xiangchu; Ma, Yi; Zhang, David


    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) leads to interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is rather ignored. In this paper we discuss how SRC works, and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and coding coefficients. More specifically, the l1 or l2 norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1 or l2 norm c…
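
    A minimal sketch of the l2-regularized instantiation of CRC (in the spirit of CRC-RLS; the data and regularization weight are illustrative): the query is coded over all training samples via ridge regression, then assigned to the class whose samples' coefficients give the smallest reconstruction residual.

```python
# CRC sketch with l2 regularization; synthetic data stands in for face images.
import numpy as np

def crc_predict(X_train, y_train, x, lam=0.01):
    # X_train: (n_features, n_samples), columns are training samples
    A = X_train.T @ X_train + lam * np.eye(X_train.shape[1])
    alpha = np.linalg.solve(A, X_train.T @ x)   # collaborative coefficients
    best_cls, best_res = None, np.inf
    for c in np.unique(y_train):
        mask = (y_train == c)
        # residual using only this class's portion of the representation
        residual = np.linalg.norm(x - X_train[:, mask] @ alpha[mask])
        if residual < best_res:
            best_cls, best_res = c, residual
    return best_cls

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 40))                   # 40 samples, 50 features
y = np.repeat(np.arange(4), 10)                 # 4 classes
X[:, y == 2] += 0.5                             # give class 2 a bias
query = X[:, 25] + 0.05 * rng.normal(size=50)   # noisy copy of a class-2 sample
print(crc_predict(X, y, query))                 # expected: 2
```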

  17. Strong Trinucleotide Circular Codes

    Christian J. Michel


    Recently, we identified a hierarchy relation between trinucleotide comma-free codes and trinucleotide circular codes (see our previous works). Here, we extend our hierarchy with two new classes of codes, called DLD and LDL codes, which are stronger than the comma-free codes. We also prove that no circular code with 20 trinucleotides is a DLD code and that a circular code with 20 trinucleotides is comma-free if and only if it is an LDL code. Finally, we point out the possible role of the symmetric group Σ4 in the mathematical study of trinucleotide circular codes.
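
    Whether a given trinucleotide set is a circular code can be tested mechanically. The sketch below uses the graph-theoretic criterion of Fimmel, Michel and Strüngmann, a later result from the same literature rather than a construction from this particular paper: a code is circular if and only if the graph built from its prefix/suffix splits is acyclic.

```python
# Circularity test via the graph criterion: for each trinucleotide, add edges
# N1 -> N2N3 and N1N2 -> N3; the code is circular iff this graph is acyclic.
def is_circular(code):
    edges = {}
    for t in code:
        edges.setdefault(t[:1], set()).add(t[1:])   # N1 -> N2N3
        edges.setdefault(t[:2], set()).add(t[2:])   # N1N2 -> N3
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def has_cycle(v):                # depth-first search for a back edge
        color[v] = GRAY
        for w in edges.get(v, ()):
            if color.get(w, WHITE) == GRAY:
                return True
            if color.get(w, WHITE) == WHITE and has_cycle(w):
                return True
        color[v] = BLACK
        return False
    return not any(color.get(v, WHITE) == WHITE and has_cycle(v)
                   for v in list(edges))

print(is_circular({"AAC", "ATC", "GTC"}))   # True: its graph has no cycle
print(is_circular({"ACG", "CGA", "GAC"}))   # False: all circular shifts of
                                            # one word can never be circular
```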

  18. Classifying Coding DNA with Nucleotide Statistics

    Nicolas Carels


    In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we called the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets, and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
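
    The exact UFM scoring formula is not given in the abstract, so the sketch below only computes two of the listed ingredients for a candidate ORF: per-position purine frequencies and in-frame stop codon frequency.

```python
# Two of the statistics UFM combines, computed on a toy in-frame sequence;
# how the full score weights and thresholds them is not specified here.
PURINES = {"A", "G"}
STOPS = {"TAA", "TAG", "TGA"}

def orf_features(seq):
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    n = len(codons)
    purine_freq = [sum(c[pos] in PURINES for c in codons) / n
                   for pos in range(3)]            # codon positions 1, 2, 3
    stop_freq = sum(c in STOPS for c in codons) / n
    return purine_freq, stop_freq

# Toy sequence: a short in-frame stretch ending in a stop codon.
purines, stops = orf_features("ATGGCTGGAAAGGAGTGA")
print("purine frequency by codon position:", purines)
print("in-frame stop codon frequency:", stops)
```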

  19. Transformer fault diagnosis based on chemical reaction optimization algorithm and relevance vector machine

    Luo Wei


    Power transformers are among the most important equipment in a power system. In order to predict potential faults of a power transformer and identify the fault types correctly, we propose a transformer fault intelligent diagnosis model based on the chemical reaction optimization (CRO) algorithm and the relevance vector machine (RVM). The RVM is a powerful machine learning method, which can solve nonlinear, high-dimensional classification problems with a limited number of samples. The CRO algorithm offers good global optimization with simple calculation, so it is suitable for parameter optimization problems. In this paper, firstly, a multi-layer RVM classification model was built using a binary tree recognition strategy. Secondly, the CRO algorithm was adopted to optimize the kernel function parameters, which enhances the performance of the RVM classifiers. Compared with the IEC three-ratio method and a plain RVM model, the CRO-RVM model not only overcomes the coding defect problem of the IEC three-ratio method, but also has higher classification accuracy than the RVM model. Finally, the new method was applied to analyze a transformer fault case; its predicted result accords well with the real situation. The research provides a practical method for transformer fault intelligent diagnosis and prediction.

  20. Classification Using Markov Blanket for Feature Selection

    Zeng, Yifeng; Luo, Jian


    Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method of Markov blanket induction for filtering features and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.

  1. Joint source channel coding using arithmetic codes

    Bi, Dongsheng


    Based on the encoding process, arithmetic codes can be viewed as tree codes, and current proposals for decoding arithmetic codes with forbidden symbols belong to sequential decoding algorithms and their variants. In this monograph, we propose a new way of looking at arithmetic codes with forbidden symbols. If a limit is imposed on the maximum value of a key parameter in the encoder, this modified arithmetic encoder can also be modeled as a finite state machine and the code generated can be treated as a variable-length trellis code. The number of states used can be reduced, and techniques used for decoding trellis codes can then be applied.

  2. Deep learning relevance

    Lioma, Christina; Larsen, Birger; Petersen, Casper


    What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.

  3. Chilean Pitavia more closely related to Oceania and Old World Rutaceae than to Neotropical groups: evidence from two cpDNA non-coding regions, with a new subfamilial classification of the family.

    Groppo, Milton; Kallunki, Jacquelyn A; Pirani, José Rubens; Antonelli, Alexandre


    The position of the plant genus Pitavia within an infrafamilial phylogeny of Rutaceae (rue, or orange family) was investigated with the use of two non-coding regions from cpDNA, the trnL-trnF region and the rps16 intron. The only species of the genus, Pitavia punctata Molina, is restricted to the temperate forests of the Coastal Cordillera of Central-Southern Chile and threatened by loss of habitat. The genus traditionally has been treated as part of tribe Zanthoxyleae (subfamily Rutoideae) where it constitutes the monogeneric tribe Pitaviinae. This tribe and genus are characterized by fruits of 1 to 4 fleshy drupelets, unlike the dehiscent fruits typical of the subfamily. Fifty-five taxa of Rutaceae, representing 53 genera (nearly one-third of those in the family) and all subfamilies, tribes, and almost all subtribes of the family were included. Parsimony and Bayesian inference were used to infer the phylogeny; six taxa of Meliaceae, Sapindaceae, and Simaroubaceae, all members of Sapindales, were also used as out-groups. Results from both analyses were congruent and showed Pitavia as sister to Flindersia and Lunasia, both genera with species scattered through Australia, Philippines, Moluccas, New Guinea and the Malayan region, and phylogenetically far from other Neotropical Rutaceae, such as the Galipeinae (Galipeeae, Rutoideae) and Pteleinae (Toddalieae, former Toddalioideae). Additionally, a new circumscription of the subfamilies of Rutaceae is presented and discussed. Only two subfamilies (both monophyletic) are recognized: Cneoroideae (including Dictyolomatoideae, Spathelioideae, Cneoraceae, and Ptaeroxylaceae) and Rutoideae (including not only traditional Rutoideae but also Aurantioideae, Flindersioideae, and Toddalioideae). As a consequence, Aurantioideae (Citrus and allies) is reduced to tribal rank as Aurantieae.

  5. Beyond the High Point Code in Testing Holland's Theory

    Andrews, Hans A.


    This study was designed to test and expand Holland's vocational development theory by utilizing more than a single high point code in classification of personality patterns of jobs. A more "refined" and/or "subtle" difference was shown in the personality-job relationships when two high point codes were used. (Author)

  6. 14 CFR Sec. 1-4 - System of accounts coding.


    ... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is... digit code assigned to each profit and loss account denote a detailed area of financial activity or... sequentially within blocks, designating more general classifications of financial activity and...

  7. Identification of ICD Codes Suggestive of Child Maltreatment

    Schnitzer, Patricia G.; Slusher, Paula L.; Kruse, Robin L.; Tarleton, Molly M.


    Objective: In order to be reimbursed for the care they provide, hospitals in the United States are required to use a standard system to code all discharge diagnoses: the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9). Although ICD-9 codes specific for child maltreatment exist, they do not identify all…

  8. The ICNP's relevance in the US.

    Henry, S B; Elfrink, V; McNeil, B; Warren, J


    Efforts to develop an International Classification for Nursing Practice (ICNP) were initiated nearly a decade ago. To update nurses on progress, below is a critical review of the ICNP using the Computer-based Patient Record Institute (CPRI) Features Framework and a discussion of its relevance to current US efforts: 1) the activities of the American Nurses' Association (ANA) Steering Committee on Databases To Support Clinical Nursing Practice; 2) the implementation of formal approaches for representing nursing concepts; and 3) Health Level 7 standards.

  9. Hyperspectral image classification using functional data analysis.

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing


    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
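
    On densely and regularly sampled spectra, functional PCA essentially reduces to PCA of the discretized curves (a real FDA pipeline would first smooth each curve with a basis expansion). The sketch below, on synthetic spectra, extracts functional principal component scores and classifies in the score space.

```python
# Rough sketch of the FDA idea on synthetic spectra: treat each pixel's
# spectrum as a function, extract functional principal components, and
# classify in the reduced score space.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
bands = np.linspace(0.4, 2.5, 200)                 # wavelengths in microns

def spectrum(cls, n):
    base = np.sin(bands * (3 + cls)) + 0.1 * cls   # class-dependent curve shape
    return base + 0.2 * rng.normal(size=(n, bands.size))

X = np.vstack([spectrum(c, 100) for c in range(3)])
y = np.repeat(np.arange(3), 100)

# FPCA on densely sampled curves: project centered spectra onto the
# top eigenfunctions obtained from an SVD of the discretized functions.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T                            # first 10 FPC scores

clf = LinearDiscriminantAnalysis().fit(scores[::2], y[::2])
print("accuracy on held-out pixels:", clf.score(scores[1::2], y[1::2]))
```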

  10. Model Children's Code.

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  11. Unification as a Measure of Natural Classification

    Victor Gijsbers


    Recent interest in the idea that there can be scientific understanding without explanation lends new relevance to Duhem's notion of natural classification. According to Duhem, a classification that is natural teaches us something about nature without being explanatory. However, Duhem's conception of naturalness leaves much to be desired. In this paper, I argue that we can measure the naturalness of a classification by using an amended version of the notion of unification as defined by Schurz and Lambert. If this thesis is correct, it both leads to a better conceptual understanding of scientific understanding and gives the nascent literature on this topic some much-needed precision.

  12. Classification differences and maternal mortality

    Salanave, B; Bouvier-Colle, M H; Varnoux, N


    OBJECTIVES: To compare the ways maternal deaths are classified in national statistical offices in Europe and to evaluate the ways classification affects published rates. METHODS: Data on pregnancy-associated deaths were collected in 13 European countries. Cases were classified by a European panel… This change was substantial in three countries (P…) …deaths to obstetric causes. In the other countries, no differences were detected. According to official published data, the aggregated maternal mortality rate for participating countries was 7.7 per 100,000 live births, but it increased to 8.7 after classification by the European panel (P…). …deaths differs between European countries. These differences in coding contribute to variations in the reported numbers of maternal deaths…

  13. Fuzziness and Relevance Theory

    Grace Qiao Zhang


    This paper investigates how the phenomenon of fuzzy language, such as 'many' in 'Mary has many friends', can be explained by Relevance Theory. It is concluded that fuzzy language use conforms with optimal relevance in that it can achieve the greatest positive effect with the least processing effort. It is the communicators themselves who decide whether or not optimal relevance is achieved, rather than the language form (fuzzy or non-fuzzy) used. People can skillfully adjust the deployment of different language forms or choose appropriate interpretations to suit different situations and communication needs. However, there are two challenges to RT: a. to extend its theory from individual relevance to group relevance; b. to embrace cultural considerations (because when relevance principles and cultural protocols are in conflict, the latter tends to prevail).

  14. Perceptions of document relevance

    Peter Bruza


    This article presents a study of how humans perceive the relevance of documents. Humans are adept at making reasonably robust and quick decisions about what information is relevant to them, despite the ever-increasing complexity and volume of their surrounding information environment. The literature on document relevance has identified various dimensions of relevance (e.g., topicality, novelty, etc.); however, little is understood about how these dimensions may interact. We performed a crowdsourced study of how human subjects judge two relevance dimensions in relation to document snippets retrieved from an internet search engine. The order of the judgement was controlled. For those judgements exhibiting an order effect, a q-test was performed to determine whether the order effects can be explained by a quantum decision model based on incompatible decision perspectives. Some evidence of incompatibility was found, which suggests incompatible decision perspectives are appropriate for explaining interacting dimensions of relevance.

  15. [Non elective cesarean section: use of a color code to optimize management of obstetric emergencies].

    Rudigoz, René-Charles; Huissoud, Cyril; Delecour, Lisa; Thevenet, Simone; Dupont, Corinne


    The medical team of the Croix Rousse teaching hospital maternity unit has developed, over the last ten years, a set of procedures designed to respond to various emergency situations necessitating Caesarean section. Using the Lucas classification, we have defined as precisely as possible the degree of urgency of Caesarean sections. We have established specific protocols for the implementation of urgent and very urgent Caesarean sections and have chosen a simple means to convey the degree of urgency to all team members, namely a color code system (red, orange and green). We have set time goals from decision to delivery: 15 minutes for the red code and 30 minutes for the orange code. The results seem very positive: the frequency of urgent and very urgent Caesareans has fallen over time, from 6.1% to 1.6% in 2013. The average time from decision to delivery is 11 minutes for code red Caesareans and 21 minutes for code orange Caesareans. These time goals are now achieved in 95% of cases. Organizational and anesthetic difficulties are the main causes of delays. The indications for red and orange code Caesareans are appropriate more than two times out of three. Perinatal outcomes are generally favorable, code red Caesareans being life-saving in 15% of cases. No increase in maternal complications has been observed. In sum, each obstetric department should have its own protocols for handling urgent and very urgent Caesarean sections. Continuous monitoring of their implementation, relevance and results should be conducted. Management of extreme urgency must be integrated into the management of patients with identified risks (scarred uterus and twin pregnancies, for example), and also in structures without medical facilities (birthing centers). Obstetric teams must keep in mind that implementation of these protocols in no way dispenses with close monitoring of labour.

  16. Classification of the web

    Mai, Jens Erik


    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...

  17. Relevance Theory in Translation

    Shao Jun; Jiang Min


    From the perspective of relevance theory, translation is regarded as communication. According to relevance theory, communication not only requires encoding, transfer and decoding processes, but also involves inference. As communication, translation decision-making is also based on human beings' inferential mental faculty. Concentrating on relevance theory, this paper tries to analyze and explain some translation phenomena in two English versions of Cai Gen Tan - My Crude Philosophy of Life.

  18. 77 FR 125 - Draft Guidance for Industry and Food and Drug Administration Staff; Medical Device Classification...


    ... educate regulated industry and FDA Staff on how, when, and why to use classification product codes for... HUMAN SERVICES Food and Drug Administration Draft Guidance for Industry and Food and Drug Administration Staff; Medical Device Classification Product Codes; Availability AGENCY: Food and Drug Administration...

  19. Rateless feedback codes

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip


    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  20. Coding for dummies

    Abraham, Nikhil


    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code; this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill...

  1. Advanced video coding systems

    Gao, Wen


    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2...

  2. Automated Classification of Power Signals


    the classification code of the nth event. Boolean EVC[n]: the 'Event file created?' Boolean is set to 1 if the event has had an event file created ... indicate the type of event. int EVC[MAX_EVENTS]; // Boolean to indicate whether an event has had an .evt file created ... int local_det=0 ... i; // CLEAN THE EVENT TEXT DATA. for (i=0; i<MAX_EVENTS; i++) { Class[i]="Empty"; Class_ID[i]=0; EVC[i]=FALSE; event_class_status[i

  3. Making Science Relevant

    Eick, Charles; Deutsch, Bill; Fuller, Jennifer; Scott, Fletcher


    Science teachers are always looking for ways to demonstrate the relevance of science to students. By connecting science learning to important societal issues, teachers can motivate students to both enjoy and engage in relevant science (Bennet, Lubben, and Hogarth 2007). To develop that connection, teachers can help students take an active role in…

  4. 78 FR 18252 - Prevailing Rate Systems; North American Industry Classification System Based Federal Wage System...


    ... Industry Classification System Based Federal Wage System Wage Surveys AGENCY: U. S. Office of Personnel... is issuing a proposed rule that would update the 2007 North American Industry Classification System... North American Industry Classification System (NAICS) codes used in Federal Wage System (FWS)...

  5. Classification issues related to neuropathic trigeminal pain.

    Zakrzewska, Joanna M


    The goal of a classification system of medical conditions is to facilitate accurate communication, to ensure that each condition is described uniformly and universally and that all data banks for the storage and retrieval of research and clinical data related to the conditions are consistent. Classification entails deciding which kinds of diagnostic entities should be recognized and how to order them in a meaningful way. Currently there are 3 major pain classification systems of relevance to orofacial pain: The International Association for the Study of Pain classification system, the International Headache Society classification system, and the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD). All use different methodologies, and only the RDC/TMD take into account social and psychologic factors in the classification of conditions. Classification systems need to be reliable, valid, comprehensive, generalizable, and flexible, and they need to be tested using consensus views of experts as well as the available literature. There is an urgent need for a robust classification system for neuropathic trigeminal pain.

  6. Recommendations for the establishment of the seismic code of Haiti

    Pierristal, G.; Benito, B.; Cervera, J.; Belizaire, D.


    propose the most suitable classification for Haiti. Finally, we have proposed a methodology for estimating the seismic forces, providing the values of the relevant coefficients. References: EN 1998-1:2004 (E): Eurocode 8, Design of structures for earthquake resistance, Part 1 (General rules, seismic actions and rules for buildings), 2004. -MTPTC (2011): Règles de calcul intérimaires pour les bâtiments en Haïti, Ministère des Travaux Publics, Transports et Communications, February 2011, Haiti. -NBCC 2005: National Building Code of Canada, vol. 1, National Research Council of Canada, 2005. -NCSE-02: Norma de construcción sismorresistente de España, BOE num. 244, 11 October 2002. -NEHRP (2009): Recommended Provisions for Seismic Regulations for New Buildings and Other Structures, FEMA P-750, February, Part 1 (Provisions) and Part 2 (Commentary). -R-001 (2011): Reglamento para el análisis y diseño sísmico de estructuras de República Dominicana, Decreto No. 201-11, Ministerio de Obras Públicas y Comunicaciones.

  7. Coding Theory and Applications : 4th International Castle Meeting

    Malonek, Paula; Vettori, Paolo


    The topics covered in this book, written by researchers at the forefront of their field, represent some of the most relevant research areas in modern coding theory: codes and combinatorial structures, algebraic geometric codes, group codes, quantum codes, convolutional codes, network coding and cryptography. The book includes a survey paper on the interconnections of coding theory with constrained systems, written by an invited speaker, as well as 37 cutting-edge research communications presented at the 4th International Castle Meeting on Coding Theory and Applications (4ICMCTA), held at the Castle of Palmela in September 2014. The event’s scientific program consisted of four invited talks and 39 regular talks by authors from 24 different countries. This conference provided an ideal opportunity for communicating new results, exchanging ideas, strengthening international cooperation, and introducing young researchers into the coding theory community.

  8. Lossless Compression of Classification-Map Data

    Hua, Xie; Klimesh, Matthew


    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
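
    The sketch below illustrates the underlying intuition (it is not the authors' algorithm): because classification maps contain large homogeneous regions, a simple left-neighbour prediction turns most pixels into a single "prediction correct" symbol, which an entropy coder can then compress far below the entropy of the raw class indices. The map data here are synthetic stand-ins.

        # Predict-then-entropy-code intuition for classification-map data.
        import numpy as np

        def ideal_code_length_bits(symbols):
            """Empirical entropy of a symbol stream in bits (ideal coder)."""
            _, counts = np.unique(symbols, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(counts * np.log2(p))

        # Hypothetical 4-class map with large homogeneous regions.
        rng = np.random.default_rng(0)
        cmap = np.repeat(rng.integers(0, 4, size=(16, 4)), 8, axis=1)  # 16x32

        # Residual stream: 0 where the left-neighbour prediction holds,
        # otherwise the actual class index + 1 (a simple escape convention).
        # The first column would be transmitted raw.
        pred_ok = cmap[:, 1:] == cmap[:, :-1]
        residual = np.where(pred_ok, 0, cmap[:, 1:] + 1).ravel()

        print(f"raw entropy:      {ideal_code_length_bits(cmap.ravel()):.0f} bits")
        print(f"residual entropy: {ideal_code_length_bits(residual):.0f} bits")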

  9. Locally Orderless Registration Code


    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  10. Locally orderless registration code


    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  11. Causes of death and associated conditions (Codac) – a utilitarian approach to the classification of perinatal deaths

    Harrison Catherine


    Full Text Available A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvement in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction of conditions relevant only to the neonatal period, and the remaining conditions arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death as well as two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions so that scenarios of combined conditions and events are captured. The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions...

  12. Cluster Based Text Classification Model


    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the ... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset.
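
    A minimal sketch of the general idea (clustering the training set and classifying against cluster summaries) is given below; it follows the abstract's outline rather than the authors' exact model, and the tiny corpus is hypothetical.

        # Cluster-based text classification: cluster training documents,
        # label each cluster by majority vote, assign tests to the nearest
        # cluster.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        train_docs = ["win a free prize now", "free cash prize claim now",
                      "meeting agenda for monday", "project status meeting notes"]
        train_labels = np.array([1, 1, 0, 0])  # 1 = suspicious, 0 = normal

        vec = TfidfVectorizer()
        X = vec.fit_transform(train_docs)

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
        cluster_label = {c: np.bincount(train_labels[km.labels_ == c]).argmax()
                         for c in range(km.n_clusters)}  # majority label

        test = vec.transform(["claim your free prize"])
        print(cluster_label[km.predict(test)[0]])  # -> 1 (suspicious)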

  13. QR Codes 101

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark


    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  14. Constructing quantum codes


    Quantum error correcting codes are indispensable for quantum information processing and quantum computation. In 1995 and 1996, Shor and Steane gave the first several examples of quantum codes constructed from classical error correcting codes. The construction of efficient quantum codes is now an active multi-discipline research field. In this paper we review several known constructions of quantum codes and present some examples.
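
    As a small concrete taste of the Shor/Steane-style (CSS) construction mentioned above, the sketch below checks the classical ingredient of the Steane code: the [7,4,3] Hamming code contains its dual, which is precisely the property the construction requires.

        # Dual-containing check for the [7,4] Hamming code over GF(2).
        import numpy as np

        # Parity-check matrix: columns are the numbers 1..7 in binary.
        H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T

        # CSS needs every pair of rows of H orthogonal over GF(2),
        # i.e. H @ H.T == 0 (mod 2); then the dual code lies inside the code.
        print((H @ H.T) % 2)  # all-zero 3x3 matrix -> dual-containing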

  15. The Relevance of Hyperuricaemia

    Jan T. Kielstein


    Full Text Available The aim of the present review is to summarise the results from recent clinical studies on the basis of the newly proposed temporal classification of hyperuricaemia and gout, introducing the now evident condition of hyperuricaemia with monosodium urate deposits. Furthermore, it provides an overview of evidence concerning the link between hyperuricaemia and specific pathological conditions, including cardiovascular disease, renal disease, and hypertension.

  16. A clinical classification system for rheumatoid forefoot deformity

    Doorn, P.F.; Keijsers, N.L.; Limbeek, J. van; Anderson, P.G.; Laan, R.F.J.M.; Bosch, P.V.; Malefijt, M.C.; Louwerens, J.W.


    BACKGROUND AND PURPOSE: In the present study a classification system for the rheumatoid forefoot is reported with its intra- and interobserver reliability and clinical relevance. The classification is based on the sequence of anatomical changes resulting from the loss of integrity of the MTP joints,

  17. Criticisms of Relevance Theory

    尚静; 孟晔; 焦丽芳


    This paper first briefly introduces the notion of Sperber and Wilson's Relevance Theory. The motivation behind S & W's putting forward of RT is also mentioned. Secondly, the paper gives some details about the methodology of RT, in which ostensive-inferential communication, context and optimal relevance are highlighted. Thirdly, the paper focuses on the criticisms of RT from different areas of research on human language and communication. Finally, the paper draws a conclusion on the great importance of RT in pragmatics.

  18. Biogeographic classification of the Caspian Sea

    Fendereski, F.; Vogt, M.; Payne, Mark


    using the Hierarchical Agglomerative Clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total ... confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
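
    The following sketch shows the kind of HAC pipeline the abstract describes, on hypothetical stand-in data (the six environmental variables and all values are invented for illustration).

        # Hierarchical agglomerative clustering of standardized variables.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.stats import zscore

        rng = np.random.default_rng(1)
        # Rows: grid cells; columns: SST, bathymetry, sea ice, DSSS, etc.
        X = zscore(rng.normal(size=(50, 6)), axis=0)

        Z = linkage(X, method="ward")  # Ward-linkage HAC
        ecoregions = fcluster(Z, t=4, criterion="maxclust")  # 4 clusters
        print(np.bincount(ecoregions)[1:])  # cluster sizes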

  19. Classification of cultivated plants.

    Brandenburg, W.A.


    Agricultural practice demands principles for classification, starting from the basal entity in cultivated plants: the cultivar. In establishing biosystematic relationships between wild, weedy and cultivated plants, the species concept needs re-examination. Combining of botanic classification, based

  20. Turbo Codes Extended with Outer BCH Code

    Andersen, Jakob Dahl


    The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...

  1. Towards automatic classification of all WISE sources

    Kurcz, Agnieszka; Solarz, Aleksandra; Krupa, Magdalena; Pollo, Agnieszka; Małek, Katarzyna


    The WISE satellite has detected hundreds of millions of sources over the entire sky. Classifying them reliably is, however, a challenging task due to degeneracies in WISE multicolour space and low levels of detection in its two longest-wavelength bandpasses. Here we aim at obtaining comprehensive and reliable star, galaxy and quasar catalogues based on automatic source classification in full-sky WISE data. This means that the final classification will employ only parameters available from WISE itself, in particular those reliably measured for a majority of sources. For the automatic classification we applied the support vector machines (SVM) algorithm, which requires a training sample with relevant classes already identified, and we chose to use the SDSS spectroscopic dataset for that purpose. By calibrating the classifier on test data drawn from SDSS, we first established that a polynomial kernel is preferred over a radial one for this particular dataset. Next, using three classification parameters (W1 magnit...
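
    The sketch below shows an SVM classifier with the polynomial kernel the abstract prefers over a radial one; the three features and labels are random stand-ins (so accuracy is near chance here), whereas the real pipeline would use WISE magnitudes and colours with SDSS spectroscopic classes for training.

        # SVM with a polynomial kernel for 3-class source classification.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 3))        # 3 classification parameters
        y = rng.integers(0, 3, size=300)     # star / galaxy / quasar labels

        clf = SVC(kernel="poly", degree=3, C=1.0)  # poly preferred to "rbf"
        print(cross_val_score(clf, X, y, cv=5).mean())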

  2. Hybrid Noncoherent Network Coding

    Skachek, Vitaly; Nedic, Angelia


    We describe a novel extension of subspace codes for noncoherent networks, suitable for use when the network is viewed as a communication system that introduces both dimension and symbol errors. We show that when symbol erasures occur in a significantly large number of different basis vectors transmitted through the network, and when the min-cut of the network is much smaller than the length of the transmitted codewords, the new family of codes outperforms their subspace code counterparts. For the proposed coding scheme, termed hybrid network coding, we derive two upper bounds on the size of the codes. These bounds represent a variation of the Singleton and of the sphere-packing bound. We show that a simple concatenated scheme that represents a combination of subspace codes and Reed-Solomon codes is asymptotically optimal with respect to the Singleton bound. Finally, we describe two efficient decoding algorithms for concatenated subspace codes that in certain cases have smaller complexity than subspace decoder...


  3. [General classification of semantic transfers]

    Moskvin, V.P.


    Full Text Available There is represented the general classification of semantic transfers. As the research has shown, transfers can be systematized based on four parameters: (1) the type of associations lying at their basis: similarity, contiguity and contrast, the associations by similarity and contrast being regarded as the basis for taxonomic transfers (from genus to species, from species to genus, from species to species, etc.); (2) the functional parameter: functionally relevant and irrelevant; (3) the sphere of action: transfer applies both to lexical and grammatical semantics; (4) the degree of expressiveness: thus, the metonymic associations are more predictable than the metaphoric ones.

  4. Classifications of patterned hair loss: a review

    Mrinal Gupta


    Full Text Available Patterned hair loss is the most common cause of hair loss seen in both the sexes after puberty. Numerous classification systems have been proposed by various researchers for grading purposes. These systems vary from the simpler systems based on recession of the hairline to the more advanced multifactorial systems based on the morphological and dynamic parameters that affect the scalp and the hair itself. Most of these preexisting systems have certain limitations. Currently, the Hamilton-Norwood classification system for males and the Ludwig system for females are most commonly used to describe patterns of hair loss. In this article, we review the various classification systems for patterned hair loss in both the sexes. Relevant articles were identified through searches of MEDLINE and EMBASE. Search terms included but were not limited to androgenic alopecia classification, patterned hair loss classification, male pattern baldness classification, and female pattern hair loss classification. Further publications were identified from the reference lists of the reviewed articles.

  5. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan


    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
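
    The core selection step can be illustrated with a few lines (hypothetical scores, not the paper's results): among candidate networks evaluated on the two objectives, only the non-dominated (Pareto-optimal) ones are kept.

        # Pareto filter over (MRE, MCE) pairs; lower is better for both.
        import numpy as np

        scores = np.array([[0.10, 0.08], [0.07, 0.12], [0.12, 0.05],
                           [0.11, 0.09], [0.06, 0.15]])

        def pareto_front(points):
            keep = []
            for i, p in enumerate(points):
                dominated = any((q <= p).all() and (q < p).any()
                                for j, q in enumerate(points) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        print(pareto_front(scores))  # -> [0, 1, 2, 4]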

  6. XSTAR Code and Database Status

    Kallman, Timothy R.


    The XSTAR code is a simulation tool for calculating spectra associated with plasmas which are in a time-steady balance among the microphysical processes. It allows for treatment of plasmas which are exposed to illumination by energetic photons, but also treats processes relevant to collision-dominated plasmas. Processes are treated in a full collisional-radiative formalism which includes convergence to local thermodynamic equilibrium under suitable conditions. It features an interface to the most widely used software for fitting to astrophysical spectra, and has also been compared with laboratory plasma experiments. This poster will describe the recent updates to XSTAR, including atomic data, new features, and some recent applications of the code.

  7. Classification of refrigerants



    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification makes it possible to organize clearly, at an international level, all the refrigerants used in the world, thanks to a codification of the refrigerants in correspondence with their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)
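
    For the methane and ethane series, the numbering rule itself is simple enough to state in code. The sketch below (an illustration of the ANSI/ASHRAE 34 convention as commonly summarized, ignoring isomer suffixes such as the "a" in R-134a) decodes an R-number into a chemical composition: for R-xyz, x = carbons - 1, y = hydrogens + 1, z = fluorines, with any remaining bonds filled by chlorine.

        # Decode a methane/ethane-series refrigerant designation.
        def decode_r_number(designation: str):
            digits = "".join(ch for ch in designation if ch.isdigit()).zfill(3)
            x, y, z = (int(d) for d in digits)
            carbon, hydrogen, fluorine = x + 1, y - 1, z
            chlorine = (2 * carbon + 2) - hydrogen - fluorine  # saturated
            return dict(C=carbon, H=hydrogen, F=fluorine, Cl=chlorine)

        print(decode_r_number("R-134a"))  # C2H2F4: 1,1,1,2-tetrafluoroethane
        print(decode_r_number("R-22"))    # CHClF2: chlorodifluoromethane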

  8. Network coding for computing: Linear codes

    Appuswamy, Rathinakumar; Karamchandani, Nikhil; Zeger, Kenneth


    In network coding it is known that linear codes are sufficient to achieve the coding capacity in multicast networks and that they are not sufficient in general to achieve the coding capacity in non-multicast networks. In network computing, Rai, Dey, and Shenvi have recently shown that linear codes are not sufficient in general for solvability of multi-receiver networks with scalar linear target functions. We study single receiver networks where the receiver node demands a target function of the source messages. We show that linear codes may provide a computing capacity advantage over routing only when the receiver demands a 'linearly-reducible' target function. Many known target functions, including the arithmetic sum, minimum, and maximum, are not linearly-reducible. Thus, the use of non-linear codes is essential in order to obtain a computing capacity advantage over routing if the receiver demands a target function that is not linearly-reducible. We also show that if a target function is linearly-reducible,...

  9. The Limits to Relevance

    Averill, M.; Briggle, A.


    Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.

  10. Civil classification of the acquirer and operator of a photovoltaic power plant. Consumer or entrepreneur?

    Schneidewindt, Holger [Verbraucherzentrale Nordrhein-Westfalen e.V., Duesseldorf (Germany)


    With the prospect of feed-in revenue and cost savings from self-consumption, private 'small investors' are won over for the energy policy turnaround. Their protection under civil law when acquiring and operating a photovoltaic power plant largely depends on their classification under Sections 13 and 14 of the German Civil Code (BGB). Sections 305 ff. BGB are fully applicable only to consumers, and consumer organizations can act only where consumers are involved. Recent judgments show that registering the relevant aspects and assessing them properly in legal terms are a major challenge. Against this background, choosing the feed-in tariff as a demarcation criterion was a wrong decision.

  11. Civil And Arbitration Proceedings Unification In Russia: Relevance, Problems, Prospects

    Ksenia M. Belikova


    Full Text Available In the present article the authors, by analyzing provisions of the applicable civil and arbitration procedure codes of Russia (hereinafter the Code of Civil Procedure and the Code of Arbitration Procedure of the Russian Federation), justify the relevance of the procedural reform and indicate its future prospects. Considerable attention is paid to the recently adopted Concept of a unified Code of Civil Procedure of the Russian Federation (hereinafter the Concept). In the authors' view, the creation and adoption of a unified Code of Civil Procedure is now not only a relevant but a necessary development. In this connection, the analysis addresses the Concept's improved procedure for handling applications for the disqualification of a judge, highlighting its advantages in comparison with the similar provisions of the existing Code of Civil Procedure and Code of Arbitration Procedure. In addition, the authors focus on other problems of legal regulation in the current Code of Civil Procedure and Code of Arbitration Procedure, including those associated with absentee proceedings in civil cases and with the sole consideration of appeals against decisions of the trial court (provided for in part by the Code of Civil Procedure and, in certain cases, by the Code of Arbitration Procedure). The article also takes issue with the Concept's position on the recovery of legal costs for representatives' services, which requires proof in full, and offers the authors' own position on this issue. Beyond analyzing these problems, the authors also propose some solutions. At the end of the article, conclusions are drawn regarding the relevance of the Concept, its strengths and weaknesses in regulating the issues studied, and the prospects for the unification of civil and arbitration proceedings in Russia.

  12. Practices in Code Discoverability

    Teuben, Peter; Nemiroff, Robert J; Shamir, Lior


    Much of scientific progress now hinges on the reliability, falsifiability and reproducibility of computer source codes. Astrophysics in particular is a discipline that today leads other sciences in making useful scientific components freely available online, including data, abstracts, preprints, and fully published papers, yet even today many astrophysics source codes remain hidden from public view. We review the importance and history of source codes in astrophysics and previous efforts to develop ways in which information about astrophysics codes can be shared. We also discuss why some scientist coders resist sharing or publishing their codes, the reasons for and importance of overcoming this resistance, and alert the community to a reworking of one of the first attempts for sharing codes, the Astrophysics Source Code Library (ASCL). We discuss the implementation of the ASCL in an accompanying poster paper. We suggest that code could be given a similar level of referencing as data gets in repositories such ...

  13. Coding for optical channels

    Djordjevic, Ivan; Vasic, Bane


    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  14. Enhancing QR Code Security

    Zhang, Linfan; Zheng, Shuang


    Quick Response code opens the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of the background and stating two problems regarding QR code security, followed by comprehensive research on both QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  15. The Relevance of Literature.

    Dunham, L. L.


    The "legacy" of the humanities is discussed in terms of relevance, involvement, and other philosophical considerations. Reasons for studying foreign literature in language classes are developed in the article. Comment is also made on attitudes and ideas culled from the writings of Clifton Fadiman, Jean Paul Sartre, and James Baldwin. (RL)

  16. Relevant Subspace Clustering

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan


    ... We prove that computation of this model is NP-hard. For RESCU, we propose an approximative solution that shows high accuracy with respect to our relevance model. Thorough experiments on synthetic and real world data show that RESCU successfully reduces the result to manageable sizes. It reliably achieves top clustering quality while competing approaches show greatly varying performance.

  17. Is Information Still Relevant?

    Ma, Lia


    Introduction: The term "information" in information science does not share the characteristics of a nomenclature: it does not bear a generally accepted definition, and it does not serve as the basis and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…

  18. Relevant Subspace Clustering

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan;


    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace c...

  19. Refactoring test code

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok


    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  20. Informal Control code logic

    Bergstra, Jan A


    General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical embedded control code.

  1. Gauge color codes

    Bombin Palomo, Hector


    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  2. Refactoring test code

    Deursen, A. van; Moonen, L.M.F.; Bergh, A. van den; Kok, G.


    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from refactoring product

  3. ARC Code TI: CODE Software Framework

    National Aeronautics and Space Administration — CODE is a software framework for control and observation in distributed environments. The basic functionality of the framework allows a user to observe a distributed...

  4. ARC Code TI: ROC Curve Code Augmentation

    National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of ROC...

  5. Fountain Codes: LT And Raptor Codes Implementation

    Ali Bazzi, Hiba Harb


    Full Text Available Digital fountain codes are a new class of random error correcting codes designed for efficient and reliable data delivery over erasure channels such as the internet. These codes were developed to provide robustness against erasures in a way that resembles a fountain of water. A digital fountain is rateless in the sense that the sender can send a limitless number of encoded packets; the receiver does not care which packets are received or lost as long as it gets enough packets to recover the original data. In this paper, the design of fountain codes is explored, together with an implementation of the encoding and decoding algorithms, and the performance is studied in terms of encoding/decoding symbols, reception overhead, data length, and failure probability.
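
    A minimal LT-style encoder makes the fountain idea concrete: each packet is the XOR of a randomly chosen set of source blocks, with the set size drawn from a degree distribution. The sketch below uses the ideal soliton distribution and omits the peeling decoder; block values are hypothetical.

        # LT-style fountain encoding of k source blocks.
        import random

        def ideal_soliton(k):
            # P(1) = 1/k, P(d) = 1/(d*(d-1)) for d = 2..k
            return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

        def lt_encode_packet(source_blocks, rng):
            k = len(source_blocks)
            d = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
            chosen = rng.sample(range(k), d)
            packet = 0
            for i in chosen:
                packet ^= source_blocks[i]  # XOR the chosen blocks
            return chosen, packet           # indices travel in the header

        rng = random.Random(42)
        blocks = [0x3A, 0x7F, 0x11, 0xC4]   # hypothetical 1-byte blocks
        for _ in range(3):
            print(lt_encode_packet(blocks, rng))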

  6. The 2002 Revision of the American Psychological Association's Ethics Code: Implications for School Psychologists

    Flanagan, Rosemary; Miller, Jeffrey A.; Jacob, Susan


    The Ethical Principles for Psychologists and Code of Conduct has been recently revised. The organization of the code changed, and the language was made more specific. A number of points relevant to school psychology are explicitly stated in the code. A clear advantage of including these items in the code is the assistance to school psychologists…

  7. 76 FR 39039 - Establishment of a New Drug Code for Marihuana Extract


    ... Enforcement Administration 21 CFR Part 1308 RIN 1117-AB33 Establishment of a New Drug Code for Marihuana... Controlled Substances Code Number (``Code Number'' or ``drug code'') under 21 CFR 1308.11 for ``Marihuana... material separately from quantities of marihuana. This in turn will aid in complying with relevant treaty...

  8. Universal Rateless Codes From Coupled LT Codes

    Aref, Vahid


    It was recently shown that spatial coupling of individual low-density parity-check codes improves the belief-propagation threshold of the coupled ensemble essentially to the maximum a posteriori threshold of the underlying ensemble. We study the performance of spatially coupled low-density generator-matrix ensembles when used for transmission over binary-input memoryless output-symmetric channels. We show by means of density evolution that the threshold saturation phenomenon also takes place in this setting. Our motivation for studying low-density generator-matrix codes is that they can easily be converted into rateless codes. Although there are already several classes of excellent rateless codes known to date, rateless codes constructed via spatial coupling might offer some additional advantages. In particular, by the very nature of the threshold phenomenon one expects that codes constructed on this principle can be made to be universal, i.e., a single construction can uniformly approach capacity over the cl...

  9. Software Certification - Coding, Code, and Coders

    Havelund, Klaus; Holzmann, Gerard J.


    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  10. The algorithm of malicious code detection based on data mining

    Yang, Yubo; Zhao, Yang; Liu, Xiabi


    Traditional technology of malicious code detection has low accuracy and insufficient detection capability for new variants. As for malicious code detection technology based on data mining, its indicators are not accurate enough and its classification detection efficiency is relatively low. This paper proposes an information gain ratio indicator based on N-grams to choose signatures; this indicator can accurately reflect the detection weight of a signature, and a C4.5 decision tree is used to improve the classification detection algorithm.
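
    The sketch below illustrates the feature-selection step (hypothetical opcode strings, not the paper's dataset): candidate N-gram signatures are ranked by information gain ratio, the indicator the paper proposes.

        # Rank candidate N-gram signatures by information gain ratio.
        import math
        from collections import Counter

        def entropy(labels):
            n = len(labels)
            return -sum(c / n * math.log2(c / n)
                        for c in Counter(labels).values())

        def gain_ratio(present, labels):
            n, h = len(labels), entropy(labels)
            cond = split_info = 0.0
            for v in (True, False):
                subset = [y for f, y in zip(present, labels) if f == v]
                if subset:
                    p = len(subset) / n
                    cond += p * entropy(subset)
                    split_info -= p * math.log2(p)
            return (h - cond) / split_info if split_info else 0.0

        samples = ["mov push call", "push call nop",
                   "nop nop ret", "ret nop add"]
        labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign
        for gram in ["push call", "nop", "ret"]:  # candidate signatures
            present = [gram in s for s in samples]
            print(gram, round(gain_ratio(present, labels), 3))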

  11. Security classification of information

    Quist, A.S.


    This document is the second of a planned four-volume work that comprehensively discusses the security classification of information. The main focus of Volume 2 is on the principles for classification of information. Included herein are descriptions of the two major types of information that governments classify for national security reasons (subjective and objective information), guidance to use when determining whether information under consideration for classification is controlled by the government (a necessary requirement for classification to be effective), information disclosure risks and benefits (the benefits and costs of classification), standards to use when balancing information disclosure risks and benefits, guidance for assigning classification levels (Top Secret, Secret, or Confidential) to classified information, guidance for determining how long information should be classified (classification duration), classification of associations of information, classification of compilations of information, and principles for declassifying and downgrading information. Rules or principles of certain areas of our legal system (e.g., trade secret law) are sometimes mentioned to provide added support to some of those classification principles.

  12. Security classification of information

    Quist, A.S.


    Certain governmental information must be classified for national security reasons. However, the national security benefits from classifying information are usually accompanied by significant costs -- those due to a citizenry not fully informed on governmental activities, the extra costs of operating classified programs and procuring classified materials (e.g., weapons), the losses to our nation when advances made in classified programs cannot be utilized in unclassified programs. The goal of a classification system should be to clearly identify that information which must be protected for national security reasons and to ensure that information not needing such protection is not classified. This document was prepared to help attain that goal. This document is the first of a planned four-volume work that comprehensively discusses the security classification of information. Volume 1 broadly describes the need for classification, the basis for classification, and the history of classification in the United States from colonial times until World War 2. Classification of information since World War 2, under Executive Orders and the Atomic Energy Acts of 1946 and 1954, is discussed in more detail, with particular emphasis on the classification of atomic energy information. Adverse impacts of classification are also described. Subsequent volumes will discuss classification principles, classification management, and the control of certain unclassified scientific and technical information. 340 refs., 6 tabs.

  13. Coding for Electronic Mail

    Rice, R. F.; Lee, J. J.


    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  14. General regression and representation model for classification.

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g., the correlations between representation residuals and representation coefficients) and specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  15. Noisy Network Coding

    Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young


    A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress-forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward sch...

  16. Testing algebraic geometric codes

    CHEN Hao


    Property testing was initially studied from various motivations in the 1990s. A code C ⊆ GF(r)^n is locally testable if there is a randomized algorithm which can distinguish with high probability the codewords from a vector essentially far from the code by only accessing a very small (typically constant) number of the vector's coordinates. The problem of testing codes was first studied by Blum, Luby and Rubinfeld and is closely related to probabilistically checkable proofs (PCPs). How to characterize locally testable codes is a complex and challenging problem. Local tests have been studied for Reed-Solomon (RS), Reed-Muller (RM), cyclic, dual of BCH and the trace subcode of algebraic-geometric codes. In this paper we give testers for algebraic-geometric codes with linear parameters (as functions of dimensions). We also give a moderate condition under which the family of algebraic-geometric codes cannot be locally testable.

  17. Chinese remainder codes

    ZHANG Aili; LIU Xiufeng


    Chinese remainder codes are constructed by applying weak block designs and the Chinese remainder theorem of ring theory. The new type of linear codes takes the congruence class in the congruence class ring R/(I1 ∩ I2 ∩ … ∩ In) for the information bit, embeds R/Ji into R/(I1 ∩ I2 ∩ … ∩ In), and assigns the cosets of R/Ji as the subring of R/(I1 ∩ I2 ∩ … ∩ In) and the cosets of R/Ji in R/(I1 ∩ I2 ∩ … ∩ In) as check lines. Many code classes with high code rates exist among the Chinese remainder codes. Chinese remainder codes are an essential generalization of Sun Zi codes.
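
    The arithmetic heart of these codes is the Chinese remainder theorem itself; the sketch below (toy parameters, not the paper's construction) encodes an integer message as residues modulo pairwise coprime moduli, with an extra modulus supplying redundancy, and reconstructs it by CRT.

        # Residue encoding and CRT reconstruction (Python 3.8+ for pow(x,-1,m)).
        from math import prod

        def crt(residues, moduli):
            """Reconstruct x mod prod(moduli) from its residues."""
            M, x = prod(moduli), 0
            for r, m in zip(residues, moduli):
                Mi = M // m
                x += r * Mi * pow(Mi, -1, m)  # modular inverse of Mi mod m
            return x % M

        moduli = [3, 5, 7, 11]                # pairwise coprime; 11 redundant
        msg = 53                              # message < 3*5*7 = 105
        codeword = [msg % m for m in moduli]  # residue encoding
        print(codeword)                       # [2, 3, 4, 9]
        print(crt(codeword[:3], moduli[:3]))  # 53: three residues suffice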

  18. Chinese Remainder Codes

    张爱丽; 刘秀峰; 靳蕃


    Chinese Remainder Codes are constructed by applying weak block designs and the Chinese Remainder Theorem of ring theory. The new type of linear codes takes the congruence class in the congruence class ring R/(I1 ∩ I2 ∩ … ∩ In) for the information bit, embeds R/Ji into R/(I1 ∩ I2 ∩ … ∩ In), and assigns the cosets of R/Ji as the subring of R/(I1 ∩ I2 ∩ … ∩ In) and the cosets of R/Ji in R/(I1 ∩ I2 ∩ … ∩ In) as check lines. There exist many code classes in Chinese Remainder Codes, which have high code rates. Chinese Remainder Codes are the essential generalization of Sun Zi Codes.

  19. Code of Ethics

    Adelstein, Jennifer; Clegg, Stewart


    Ethical codes have been hailed as an explicit vehicle for achieving more sustainable and defensible organizational practice. Nonetheless, when legal compliance and corporate governance codes are conflated, codes can be used to define organizational interests ostentatiously by stipulating norms for employee ethics. Such codes have a largely cosmetic and insurance function, acting subtly and strategically to control organizational risk management and protection. In this paper, we conduct a genealogical discourse analysis of a representative code of ethics from an international corporation to understand how management frames expectations of compliance. Our contribution is to articulate the problems inherent in codes of ethics, and we make some recommendations to address these to benefit both an organization and its employees. In this way, we show how a code of ethics can provide a foundation...

  20. Defeating the coding monsters.

    Colt, Ross


    Accuracy in coding is rapidly becoming a required skill for military health care providers. Clinic staffing, equipment purchase decisions, and even reimbursement will soon be based on the coding data that we provide. Learning the complicated myriad of rules to code accurately can seem overwhelming. However, the majority of clinic visits in a typical outpatient clinic generally fall into two major evaluation and management codes, 99213 and 99214. If health care providers can learn the rules required to code a 99214 visit, then this will provide a 90% solution that can enable them to accurately code the majority of their clinic visits. This article demonstrates a step-by-step method to code a 99214 visit, by viewing each of the three requirements as a monster to be defeated.

  1. Testing algebraic geometric codes


    Property testing was initially studied from various motivations in the 1990s. A code C ⊆ GF(r)^n is locally testable if there is a randomized algorithm which can distinguish with high probability the codewords from a vector essentially far from the code by only accessing a very small (typically constant) number of the vector's coordinates. The problem of testing codes was first studied by Blum, Luby and Rubinfeld and is closely related to probabilistically checkable proofs (PCPs). How to characterize locally testable codes is a complex and challenging problem. Local tests have been studied for Reed-Solomon (RS), Reed-Muller (RM), cyclic, dual of BCH and the trace subcode of algebraic-geometric codes. In this paper we give testers for algebraic-geometric codes with linear parameters (as functions of dimensions). We also give a moderate condition under which the family of algebraic-geometric codes cannot be locally testable.

  2. Serially Concatenated IRA Codes

    Cheng, Taikun; Belzer, Benjamin J


    We address the error floor problem of low-density parity check (LDPC) codes on the binary-input additive white Gaussian noise (AWGN) channel, by constructing a serially concatenated code consisting of two systematic irregular repeat accumulate (IRA) component codes connected by an interleaver. The interleaver is designed to prevent stopping-set error events in one of the IRA codes from propagating into stopping set events of the other code. Simulations with two 128-bit rate 0.707 IRA component codes show that the proposed architecture achieves a much lower error floor at higher SNRs, compared to a 16384-bit rate 1/2 IRA code, but incurs an SNR penalty of about 2 dB at low to medium SNRs. Experiments indicate that the SNR penalty can be reduced at larger blocklengths.

  3. The Classification and Indexing of Imaginative Literature

    Eriksson, Rune


    With the indexing of imaginative literature included in an expanding number of bibliographic databases, the overall representation of this kind of literature has definitely been improved. Still, in terms of information retrieval and being able to judge the relevance of the titles, it seems that the usefulness of classification and indexing alike is still being restricted by some old romantic and objectivistic, or even positivistic, ideas and ideals. In order to argue that point the paper firstly re-examines the classification of imaginative literature in early editions of the Dewey Decimal...

  4. Clinical Relevance of Adipokines

    Matthias Blüher


    Full Text Available The incidence of obesity has increased dramatically during recent decades. Obesity increases the risk for metabolic and cardiovascular diseases and may therefore contribute to premature death. With increasing fat mass, secretion of adipose tissue derived bioactive molecules (adipokines changes towards a pro-inflammatory, diabetogenic and atherogenic pattern. Adipokines are involved in the regulation of appetite and satiety, energy expenditure, activity, endothelial function, hemostasis, blood pressure, insulin sensitivity, energy metabolism in insulin sensitive tissues, adipogenesis, fat distribution and insulin secretion in pancreatic β-cells. Therefore, adipokines are clinically relevant as biomarkers for fat distribution, adipose tissue function, liver fat content, insulin sensitivity, chronic inflammation and have the potential for future pharmacological treatment strategies for obesity and its related diseases. This review focuses on the clinical relevance of selected adipokines as markers or predictors of obesity related diseases and as potential therapeutic tools or targets in metabolic and cardiovascular diseases.

  5. Korrek, volledig, relevant

    Bergenholtz, Henning; Gouws, Rufus


    In explanatory dictionaries, both general language dictionaries and dictionaries dealing with languages for special purposes, the lexicographic definition is an important item to present the meaning of a given lemma. Due to a strong linguistic bias, resulting from an approach prevalent in the early phases of the development of theoretical lexicography, a distinction is often made between encyclopaedic information and semantic information in dictionary definitions, and dictionaries have often been criticized when their definitions were dominated by an encyclopaedic approach. This used to be seen as detrimental to the status of a dictionary as a container of linguistic knowledge. This paper shows that, from a lexicographic perspective, such a distinction is not relevant. What is important is that definitions should contain information that is relevant to and needed by the target users of that specific...

  6. An efficient adaptive arithmetic coding image compression technology

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei


    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the compression rate while ensuring the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of adaptive models for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
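
    The adaptive-model idea can be shown without the full interval coder: per-block symbol counts are updated as symbols are coded, and each symbol's ideal arithmetic-code length is -log2 of its current estimated probability. The block bytes below are hypothetical.

        # Ideal code length under an adaptive (Laplace-smoothed) model.
        import math
        from collections import defaultdict

        def adaptive_code_length(symbols, alphabet_size=256):
            counts = defaultdict(lambda: 1)     # every symbol starts at 1
            total, bits = alphabet_size, 0.0
            for s in symbols:
                bits += -math.log2(counts[s] / total)  # code with model
                counts[s] += 1                         # then update it
                total += 1
            return bits

        block = b"aaaaabbbaaaaaab"  # hypothetical image-block bytes
        print(f"{adaptive_code_length(block):.1f} bits "
              f"vs {8 * len(block)} bits uncoded")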

  7. Information Needs/Relevance

    Wildemuth, Barbara M.


    A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module.

  8. Ontologies vs. Classification Systems

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne


    What is an ontology compared to a classification system? Is a taxonomy a kind of classification system or a kind of ontology? These are questions that we meet when working with people from industry and public authorities, who need methods and tools for concept clarification, for developing meta data sets or for obtaining advanced search facilities. In this paper we will present an attempt at answering these questions. We will give a presentation of various types of ontologies and briefly introduce terminological ontologies. Furthermore we will argue that classification systems, e.g. product classification systems and meta data taxonomies, should be based on ontologies.

  9. Classification of Spreadsheet Errors

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian


    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  10. The Application of Code Switching in Private College English Teaching



    The paper presents an overview of code switching in terms of its definition, classification and functions on the part of both teachers and students. The appropriate use of code switching between the target language English and the native language Chinese in classroom teaching will help facilitate private college students' English proficiency, improve their learning efficiency, as well as achieve a better classroom teaching effect.

  11. Information gathering for CLP classification

    Ida Marcello; Felice Giordano; Francesca Marina Costamagna


    Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level and a list of harmonised classifications is included in the Annex VI of the classification, labelling and packaging Regulation (CLP). If a chemical substance is not included in the harmonised classification list it must be self-classified, based on available information, according to the requireme...

  12. Greedy vs. L1 Convex Optimization in Sparse Coding

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, obtaining sparse codes, which can be achieved through finding the L0-norm solution, is crucial. Despite research achievements in some classification fields and in action recognition, a comparative study of codes in abnormal event detection is less studied, and hence no conclusion has been reached on the effect of codes in detecting abnormalities. We restrict our comparison to two types of L0-norm solutions: greedy algorithms and convex L1-norm solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification applications may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate their performance...
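
    For readers who want to reproduce a comparison of this kind, the sketch below contrasts a greedy L0 solver (Orthogonal Matching Pursuit) with the convex L1 relaxation (Lasso) on a synthetic dictionary; it assumes NumPy and scikit-learn are available and is not the authors' evaluation pipeline.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))        # dictionary: 256 atoms of dimension 64
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    alpha_true = np.zeros(256)
    alpha_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
    y = D @ alpha_true + 0.01 * rng.standard_normal(64)   # noisy sparse signal

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, y)    # greedy L0
    lasso = Lasso(alpha=0.01, max_iter=10000).fit(D, y)             # convex L1

    for name, coef in [("greedy/OMP", omp.coef_), ("convex/L1", lasso.coef_)]:
        print(name,
              "nonzeros:", int(np.sum(np.abs(coef) > 1e-6)),
              "reconstruction error:", np.linalg.norm(y - D @ coef))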

  13. Studies on Relevance, Ranking and Results Display

    Gelernter, Judith; Carbonell, Jaime


    This study considers the extent to which users with the same query agree as to what is relevant, and how what is considered relevant may translate into a retrieval algorithm and results display. To combine user perceptions of relevance with algorithm rank and to present results, we created a prototype digital library of scholarly literature. We confine studies to one population of scientists (paleontologists), one domain of scholarly scientific articles (paleo-related), and a prototype system (PaleoLit) that we built for the purpose. Based on the principle that users do not pre-suppose answers to a given query but that they will recognize what they want when they see it, our system uses a rules-based algorithm to cluster results into fuzzy categories with three relevance levels. Our system matches at least 1/3 of our participants' relevancy ratings 87% of the time. Our subsequent usability study found that participants trusted our uncertainty labels but did not value our color-coded horizontal results layout ...

  14. Admire LVQ—adaptive distance measures in Relevance Learning Vector quantization

    Biehl, Michael


    The extension of Learning Vector Quantization by Matrix Relevance Learning is presented and discussed. The basic concept, essential properties, and several modifications of the scheme are outlined. A particularly successful application in the context of tumor classification highlights the usefulness...
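
    The core of matrix relevance learning is that distances to prototypes are measured with a learned positive semi-definite matrix Lambda = Omega^T Omega instead of the Euclidean metric. A minimal sketch with fixed, invented numbers (training of Omega and of the prototypes is omitted):

    import numpy as np

    omega = np.array([[1.0, 0.5],
                      [0.0, 0.1]])          # learned transformation (illustrative)
    lam = omega.T @ omega                   # relevance matrix, PSD by construction

    def adaptive_distance(x, w):
        # Generalized quadratic distance d(x, w) = (x - w)^T Lambda (x - w).
        d = x - w
        return d @ lam @ d

    x = np.array([1.0, 2.0])                # sample
    w = np.array([0.5, 1.0])                # prototype
    print(adaptive_distance(x, w))

    The diagonal of Lambda quantifies the relevance of each feature, which is what makes the trained metric interpretable in applications such as tumor classification.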

  15. Concepts of Classification and Taxonomy. Phylogenetic Classification

    Fraix-Burnet, Didier


    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works. 1 Why phylogenetic tools in astrophysics? 1.1 History of classification The need for classifying living organisms is very ancient, and the first classification system can be dated back to the Greeks. The goal was very practical, since it was intended to distinguish between edible and toxic foods, or harmless and dangerous animals. Simple resemblance was used and has been used for centuries. Basically, until the XVIIIth...

  16. Image Classification Workflow Using Machine Learning Methods

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.


    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free to use solutions that are currently available come bundled up as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy to use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. In order to test our classification software, we performed a K-means unsupervised classification, Gaussian Maximum Likelihood supervised classification, and a Mahalanobis Distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years of 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification using commercial software available without an expensive license.
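
    A minimal sketch of this kind of workflow, reading a multispectral raster with GDAL and running an unsupervised K-means classification with scikit-learn; the file name is a placeholder and this is not the authors' code (which the abstract indicates also uses Spectral Python):

    import numpy as np
    from osgeo import gdal
    from sklearn.cluster import KMeans

    ds = gdal.Open("landsat_scene.tif")           # hypothetical multiband raster
    bands = ds.ReadAsArray().astype(np.float64)   # shape: (n_bands, rows, cols)
    n_bands, rows, cols = bands.shape
    pixels = bands.reshape(n_bands, -1).T         # one row per pixel

    # Unsupervised classification: cluster pixels into 5 spectral classes.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pixels)
    class_map = labels.reshape(rows, cols)        # land-use class per pixel
    print(np.unique(class_map, return_counts=True))

    The supervised variants described above (Gaussian Maximum Likelihood, Mahalanobis Distance) differ only in replacing the clustering step with classifiers fitted to labeled training pixels.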

  17. The RCVS codes of conduct: what's in a word?

    Mcculloch, S.; Reiss, M.; Jinman, P.; Wathes, C.


    In 2012, the RCVS introduced a new Code of Professional Conduct for Veterinary Surgeons, replacing the Guide to Professional Conduct which had existed until then. Is a common Code relevant for the veterinarian's many roles? There's more to think about here than just the change of name, write Steven McCulloch, Michael Reiss, Peter Jinman and Christopher Wathes.

  18. Rewriting the Genetic Code.

    Mukai, Takahito; Lajoie, Marc J; Englert, Markus; Söll, Dieter


    The genetic code-the language used by cells to translate their genomes into proteins that perform many cellular functions-is highly conserved throughout natural life. Rewriting the genetic code could lead to new biological functions such as expanding protein chemistries with noncanonical amino acids (ncAAs) and genetically isolating synthetic organisms from natural organisms and viruses. It has long been possible to transiently produce proteins bearing ncAAs, but stabilizing an expanded genetic code for sustained function in vivo requires an integrated approach: creating recoded genomes and introducing new translation machinery that function together without compromising viability or clashing with endogenous pathways. In this review, we discuss design considerations and technologies for expanding the genetic code. The knowledge obtained by rewriting the genetic code will deepen our understanding of how genomes are designed and how the canonical genetic code evolved.

  19. On Polynomial Remainder Codes

    Yu, Jiun-Hung


    Polynomial remainder codes are a large class of codes derived from the Chinese remainder theorem that includes Reed-Solomon codes as a special case. In this paper, we revisit these codes and study them more carefully than in previous work. We explicitly allow the code symbols to be polynomials of different degrees, which leads to two different notions of weight and distance. Algebraic decoding is studied in detail. If the moduli are not irreducible, the notion of an error locator polynomial is replaced by an error factor polynomial. We then obtain a collection of gcd-based decoding algorithms, some of which are not quite standard even when specialized to Reed-Solomon codes.
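
    The integer analogue of a remainder code makes the Chinese-remainder construction concrete: a message is encoded as its residues modulo pairwise coprime moduli, and any sufficiently large subset of intact residues recovers it. A sketch in plain Python (the paper itself works with polynomial moduli, which this does not model):

    def crt(residues, moduli):
        """Reconstruct x (mod prod(moduli)) from x mod m_i, moduli coprime."""
        M = 1
        for m in moduli:
            M *= m
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)   # pow(a, -1, m): modular inverse
        return x % M

    moduli = [101, 103, 107, 109]          # redundant: 101*103 already exceeds the message
    message = 9973
    codeword = [message % m for m in moduli]
    # Decode from any two intact residues; the other two are redundancy.
    print(crt(codeword[:2], moduli[:2]))   # -> 9973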

  20. Plant names and classification

    This chapter updates one of the same title from Edition 12 of Stearn’s Introductory Biology published in 2011. It reviews binomial nomenclature, discusses three codes of plant nomenclature (the International Code of Botanical Nomenclature, the International Code of Nomenclature for Cultivated Plants...

  1. Generating code adapted for interlinking legacy scalar code and extended vector code

    Gschwind, Michael K


    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  2. The aeroelastic code FLEXLAST

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)]


    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling.

  3. Opening up codings?

    Steensig, Jakob; Heinemann, Trine


    We welcome Tanya Stivers's discussion (Stivers, 2015/this issue) of coding social interaction and find that her descriptions of the processes of coding open up important avenues for discussion, among other things of the precise ad hoc considerations that researchers need to bear in mind, both when... Instead we propose that the promise of coding-based research lies in its ability to open up new qualitative questions.

  4. Industrial Computer Codes

    Shapiro, Wilbur


    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow groove theory. The KTK labyrinth seal code handles straight or stepped seals. And DYSEAL provides dynamics for the seal geometry.

  5. Hazard classification assessment for the High Voltage Initiator

    Cogan, J.D.


    An investigation was conducted to determine whether the High Voltage Initiator (Sandia part number 395710; Navy NAVSEA No. 6237177) could be assigned a Department of Transportation (DOT) hazard classification of "IGNITERS, 1.4G, UN0325" under Code of Federal Regulations, 49 CFR 173.101, when packaged per Mound drawing NXB911442. A hazard classification test was performed, and the test data led to a recommended hazard classification of "IGNITERS, 1.4G, UN0325," based on guidance outlined in DOE Order 1540.2 and 49 CFR 173.56.

  6. Solving Classification Problems Using Genetic Programming Algorithms on GPUs

    Cano, Alberto; Zafra, Amelia; Ventura, Sebastián

    Genetic Programming is very efficient in problem solving compared to other proposals, but its performance degrades as the size of the data increases. This paper proposes a model for multi-threaded Genetic Programming classification evaluation using the NVIDIA CUDA GPU programming model to parallelize the evaluation phase and reduce computational time. Three different well-known Genetic Programming classification algorithms are evaluated using the proposed parallel evaluation model. Experimental results using UCI Machine Learning data sets compare the performance of the three classification algorithms in single- and multi-threaded Java, C and CUDA GPU code. Results show that our proposal is much more efficient.
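
    To make the idea of parallelizing the evaluation phase concrete, the sketch below farms fitness evaluation of a toy population out to worker processes with Python's concurrent.futures; the expression-string individuals and all names are illustrative stand-ins for GP trees and for the paper's CUDA kernels.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from functools import partial

    def fitness(expr, X, y):
        """Classification accuracy of a boolean expression over features x0, x1."""
        x0, x1 = X[:, 0], X[:, 1]
        pred = eval(expr)                   # toy evaluation of one individual
        return float(np.mean(pred == y))

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        X = rng.random((1000, 2))
        y = (X[:, 0] > 0.5).astype(int)
        population = ["x0 > 0.5", "x1 > 0.3", "(x0 + x1) > 1.0"]
        # The evaluation phase is embarrassingly parallel: each individual's
        # fitness is independent, which is exactly what the GPU model exploits.
        with ProcessPoolExecutor() as pool:
            scores = list(pool.map(partial(fitness, X=X, y=y), population))
        print(dict(zip(population, scores)))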

  7. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.


    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  8. ARC Code TI: ACCEPT

    National Aeronautics and Space Administration — ACCEPT consists of an overall software infrastructure framework and two main software components. The software infrastructure framework consists of code written to...

  9. QR codes for dummies

    Waters, Joe


    Find out how to effectively create, use, and track QR codes. QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.
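
    Code creation really is simple in practice; for instance, assuming the third-party Python package qrcode (with Pillow) is installed, a scannable code takes two lines (the URL is a placeholder):

    import qrcode

    # Encode a landing-page URL and save the symbol as an image file.
    img = qrcode.make("https://example.com/landing-page")
    img.save("campaign_qr.png")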

  10. Tokamak Systems Code

    Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.


    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  11. MORSE Monte Carlo code

    Cramer, S.N.


    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  12. Mapping a classification system to architectural education

    Hermund, Anders; Klint, Lars; Rostrup, Nicolai


    a questionnaire survey among 88 students in graduate school. Qualitative interviews with a handful of practicing architects, to be able to cross check the relevance of the education with the profession. The examination indicates the need of a new definition, in addition to the CCS’s scale, covering the earliest......This paper examines to what extent a new classification system, Cuneco Classification System, CCS, proves useful in the education of architects, and to what degree the aim of an architectural education, rather based on an arts and crafts approach than a polytechnic approach, benefits from...... the distinct terminology of the classification system. The method used to examine the relationship between education, practice and the CCS bifurcates in a quantitative and a qualitative exploration: Quantitative comparison of the curriculum with the students’ own descriptions of their studies through...

  13. Web Classification Using DYN FP Algorithm

    Bhanu Pratap Singh


    Web mining is the application of data mining techniques to extract knowledge from the Web. Web mining has been explored to a vast degree and different techniques have been proposed for a variety of applications that include Web search, classification and personalization. The primary goal of a web site is to provide relevant information to its users. Web mining techniques are used to categorize users and pages by analyzing user behavior, the content of pages and the order of URLs accessed. This paper proposes an auto-classification algorithm for web pages using data mining techniques. We address the problem of discovering association rules between terms in a set of web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the FP-growth algorithm.
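
    A sketch of the underlying idea, mining frequent term sets and association rules with FP-growth, using the third-party mlxtend package on toy page-term data; this illustrates the technique, not the proposed DYN FP algorithm itself:

    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import fpgrowth, association_rules

    # Each "transaction" is the set of terms appearing on one page of a category.
    pages = [["python", "code", "tutorial"],
             ["python", "code"],
             ["python", "tutorial"],
             ["java", "code"]]
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(pages).transform(pages), columns=te.columns_)

    itemsets = fpgrowth(onehot, min_support=0.5, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
    print(rules[["antecedents", "consequents", "support", "confidence"]])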

  14. Exploring different approaches for music genre classification

    Antonio Jose Homsi Goulart


    In this letter, we present different approaches for music genre classification. The proposed techniques, which are composed of a feature extraction stage followed by a classification procedure, explore both the variations of parameters used as input and the classifier architecture. Tests were carried out with three styles of music, namely blues, classical, and lounge, which are considered informally by some musicians as being “big dividers” among music genres, showing the efficacy of the proposed algorithms and establishing a relationship between the relevance of each set of parameters for each music style and each classifier. In contrast to other works, entropies and fractal dimensions are the features adopted for the classifications.
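
    The entropy side of such a feature set can be illustrated with the spectral entropy of an audio frame; the sketch below (NumPy/SciPy, fractal-dimension features omitted) shows one reasonable definition and is not the letter's exact feature:

    import numpy as np
    from scipy.stats import entropy

    def spectral_entropy(frame):
        # Entropy of the normalized power spectrum: flat spectra score high,
        # tonal spectra score low, which is discriminative across genres.
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        p = spectrum / spectrum.sum()
        return entropy(p, base=2)

    rng = np.random.default_rng(6)
    noise = rng.standard_normal(2048)                          # noise-like frame
    tone = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100)   # pure tone
    print(spectral_entropy(noise), spectral_entropy(tone))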

  15. Library Classification 2020

    Harris, Christopher


    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  16. Multiple sparse representations classification

    E. Plenge (Esben); S.K. Klein (Stefan); W.J. Niessen (Wiro); E. Meijering (Erik)


    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In t...

  17. Library Classification 2020

    Harris, Christopher


    In this article the author explores how a new library classification system might be designed using some aspects of the Dewey Decimal Classification (DDC) and ideas from other systems to create something that works for school libraries in the year 2020. By examining what works well with the Dewey Decimal System, what features should be carried…

  18. Classifier in Age classification

    B. Santhi


    The face is an important feature of human beings, and we can derive various properties of a human by analyzing the face. The objective of this study is to design a classifier for age using facial images. Age classification is essential in many applications like crime detection, employment and face detection. The proposed algorithm contains four phases: preprocessing, feature extraction, feature selection and classification. The classification employs two class labels, namely Child and Old. This study addresses the limitations of existing classifiers, as it uses the Grey Level Co-occurrence Matrix (GLCM) for feature extraction and a Support Vector Machine (SVM) for classification. This improves the accuracy of the classification, as it outperforms the existing methods.
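
    A sketch of a GLCM-plus-SVM pipeline of the kind described, using scikit-image and scikit-learn on random stand-in images (the co-occurrence functions are named graycomatrix/graycoprops in recent scikit-image versions, greycomatrix/greycoprops in older ones):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def glcm_features(img):
        # Texture descriptors from one grey-level co-occurrence matrix.
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        return [graycoprops(glcm, prop)[0, 0]
                for prop in ("contrast", "homogeneity", "energy", "correlation")]

    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(20, 32, 32), dtype=np.uint8)  # stand-in faces
    labels = rng.integers(0, 2, size=20)            # 0 = Child, 1 = Old (toy labels)

    X = np.array([glcm_features(im) for im in images])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:5]))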

  19. Can the genetic code be mathematically described?

    Gonzalez, Diego L


    From a mathematical point of view, the genetic code is a surjective mapping between the set of the 64 possible three-base codons and the set of 21 elements composed of the 20 amino acids plus the Stop signal. Redundancy and degeneracy therefore follow. In analogy with the genetic code, non-power integer-number representations are also surjective mappings between sets of different cardinality and, as such, also redundant. However, none of the non-power arithmetics studied so far nor other alternative redundant representations are able to match the actual degeneracy of the genetic code. In this paper we develop a slightly more general framework that leads to the following surprising results: i) the degeneracy of the genetic code is mathematically described, ii) a new symmetry is uncovered within this degeneracy, iii) by assigning a binary string to each of the codons, their classification into definite parity classes according to the corresponding sequence of bases is made possible. This last result is particularly appealing in connection with the fact that parity coding is the basis of the simplest strategies devised for error correction in man-made digital data transmission systems.
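
    The 64-to-21 surjective mapping and its degeneracy are easy to tabulate; the sketch below assumes Biopython is installed and counts how many codons map to each amino acid or to the Stop signal in the standard code:

    from collections import Counter
    from Bio.Data import CodonTable

    table = CodonTable.unambiguous_dna_by_id[1]      # the standard genetic code
    degeneracy = Counter(table.forward_table.values())
    degeneracy["Stop"] = len(table.stop_codons)

    # 64 codons map onto 20 amino acids plus Stop: redundancy made explicit.
    print(sum(degeneracy.values()), "codons ->", len(degeneracy), "outputs")
    for aa, n in sorted(degeneracy.items(), key=lambda kv: -kv[1]):
        print(aa, n)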

  20. Generating Best Features for Web Page Classification

    K. Selvakuberan


    As the Internet provides millions of web pages for each and every search term, getting interesting and required results quickly from the Web becomes very difficult. Automatic classification of web pages into relevant categories is a current research topic which helps the search engine to return relevant results. As web pages contain many irrelevant, infrequent and stop words that reduce the performance of the classifier, extracting or selecting representative features from the web page is an essential pre-processing step. The goal of this paper is to find a minimum number of highly qualitative features by integrating feature selection techniques. We conducted experiments with various numbers of features selected by different feature selection algorithms on a well-defined initial set of features, and show that the CFS subset evaluator combined with the term frequency method gives minimal qualitative features enough to attain considerable classification accuracy.
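
    A sketch of the general recipe, term-frequency features followed by a filter-style selector, using scikit-learn on toy pages; chi-squared selection stands in here for Weka's CFS subset evaluator, so this illustrates the idea rather than the paper's exact setup:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2

    pages = ["cheap flights and hotel deals", "latest football scores",
             "book flights online", "football transfer news"]
    labels = ["travel", "sport", "travel", "sport"]

    tf = CountVectorizer()
    X = tf.fit_transform(pages)                      # term-frequency features
    selector = SelectKBest(chi2, k=4).fit(X, labels) # keep the 4 best terms
    kept = selector.get_feature_names_out(tf.get_feature_names_out())
    print(kept)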

  1. Adult attachment interviews of women from low-risk, poverty, and maltreatment risk samples: comparisons between the hostile/helpless and traditional AAI coding systems.

    Frigerio, Alessandra; Costantino, Elisabetta; Ceppi, Elisa; Barone, Lavinia


    The main aim of this study was to investigate the correlates of a Hostile-Helpless (HH) state of mind among 67 women belonging to a community sample and two different at-risk samples matched on socio-economic indicators, including 20 women from low-SES population (poverty sample) and 15 women at risk for maltreatment being monitored by the social services for the protection of juveniles (maltreatment risk sample). The Adult Attachment Interview (AAI) protocols were reliably coded blind to the samples' group status. The rates of HH classification increased in relation to the risk status of the three samples, ranging from 9% for the low-risk sample to 60% for the maltreatment risk sample to 75% for mothers in the maltreatment risk sample who actually maltreated their infants. In terms of the traditional AAI classification system, 88% of the interviews from the maltreating mothers were classified Unresolved/Cannot Classify (38%) or Preoccupied (50%). Partial overlapping between the 2 AAI coding systems was found, and discussion concerns the relevant contributions of each AAI coding system to understanding of the intergenerational transmission of maltreatment.

  2. Research on universal combinatorial coding.

    Lu, Jun; Zhang, Zhuo; Mo, Juan


    The concept of universal combinatorial coding is proposed. Relations exist, more or less, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics theory. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability-statistic characteristics of the information source, and it has characteristics that span three coding branches. We analyze the relationship between universal combinatorial coding and a variety of coding methods, and investigate many application technologies of this coding method. In addition, the efficiency of universal combinatorial coding is analyzed theoretically. The multiple characteristics and applications of universal combinatorial coding are unique among existing coding methods. Universal combinatorial coding has theoretical research and practical application value.

  3. Chronobiology: relevance for tuberculosis.

    Santos, Lígia Gabrielle; Pires, Gabriel Natan; Azeredo Bittencourt, Lia Rita; Tufik, Sergio; Andersen, Monica Levy


    Despite the knowledge concerning the pathogenesis of tuberculosis, this disease remains one of the most important causes of mortality worldwide. Several risk factors are well known, such as poverty, HIV infection, and poor nutrition, among others. However, some issues that may influence tuberculosis warrant further investigation. In particular, the chronobiological aspects related to tuberculosis have garnered limited attention. In general, the interface between tuberculosis and chronobiology is manifested in four ways: variations in vitamin D bioavailability, winter conditions, associated infections, and circannual oscillations of lymphocyte activity. Moreover, tuberculosis is related to the following chronobiological factors: seasonality, latitude, photoperiod and radiation. Despite the relevance of these topics, the relationship between them has been only weakly reviewed. This review aims to synthesize the studies regarding the association between tuberculosis and chronobiology, as well as to encourage critical discussion and highlight its applicability to health policies for tuberculosis.

  4. Greedy vs. L1 convex optimization in sparse coding

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor;


    Obtaining sparse codes, which can be achieved through finding the L0-norm solution of the problem min_α ||Y − Dα||_2^2 + ||α||_0, is crucial. Note that D refers to the dictionary and α refers to the sparse codes. This L0-norm solution, however, is known to be an NP-hard problem. Despite the research achievements in some classification fields...

  5. Conservatism and Value Relevance: Evidence from the European Financial Sector


    Problem statement: We examine the levels of conservatism and value relevance existent in the financial sectors of three code-law European countries (Germany, France and Greece) and one common-law European country (UK). We investigate (a) whether conservatism exists during the last decade (1999-2008), (b) whether its level has changed over this period and (c) the impact of conservatism on the value relevance of earnings. Approach: We run regressions on two widely acclaimed models: Basu's mo...

  6. Safety Code A12

    SC Secretariat


    Please note that the Safety Code A12 (Code A12), entitled "THE SAFETY COMMISSION (SC)", is available on the web. Paper copies can also be obtained from the SC Unit Secretariat. SC Secretariat

  7. Dress Codes for Teachers?

    Million, June


    In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…

  8. Nuremberg code turns 60

    Thieren, Michel; Mauron, Alex


    This month marks sixty years since the Nuremberg code – the basic text of modern medical ethics – was issued. The principles in this code were articulated in the context of the Nuremberg trials in 1947. We would like to use this anniversary to examine its ability to address the ethical challenges of our time.

  9. Pseudonoise code tracking loop

    Laflame, D. T. (Inventor)


    A delay-locked loop is presented for tracking a pseudonoise (PN) reference code in an incoming communication signal. The loop is less sensitive to gain imbalances, which can otherwise introduce timing errors in the PN reference code formed by the loop.

  10. Scrum Code Camps

    Pries-Heje, Jan; Pries-Heje, Lene; Dahlgaard, Bente


    ...is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  11. Scrum Code Camps

    Pries-Heje, Jan; Pries-Heje, Lene; Dahlgaard, Bente


    ...is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...




    Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from

  13. The materiality of Code

    Soon, Winnie


    ...Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important...

  14. Kappa Coefficients for Circular Classifications

    Warrens, Matthijs J.; Pratiwi, Bunga C.


    Circular classifications are classification scales with categories that exhibit a certain periodicity. Since linear scales have endpoints, the standard weighted kappas used for linear scales are not appropriate for analyzing agreement between two circular classifications. A family of kappa coefficie...

  15. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My


    The study compared the Danish dichotomous mammographic density classification system used from 1991 to 2001 with the BI-RADS density classifications, in an attempt to validate the Danish classification system. MATERIAL AND METHODS: The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001, which tested false positive, and which were in 2012 re-assessed and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous mammographic classification as fatty or mixed/dense and the four-level BI-RADS classification by the linear weighted Kappa statistic. RESULTS: Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as mixed/dense mammographic density, according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having...
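
    The linear weighted kappa computation itself is a one-liner with scikit-learn; the ratings below are invented for illustration, with categories standing in for BI-RADS density codes 1-4:

    from sklearn.metrics import cohen_kappa_score

    rater_a = [1, 2, 2, 3, 4, 1, 2, 3]   # e.g. original assessment
    rater_b = [1, 2, 3, 3, 4, 2, 2, 3]   # e.g. BI-RADS re-assessment
    # Linear weights penalize disagreements by their ordinal distance.
    print(cohen_kappa_score(rater_a, rater_b, weights="linear"))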

  16. Transformation invariant sparse coding

    Mørup, Morten; Schmidt, Mikkel Nørgaard


    Sparse coding is a well established principle for unsupervised learning. Traditionally, features are extracted in sparse coding in specific locations; however, often we would prefer invariant representation. This paper introduces a general transformation invariant sparse coding (TISC) model. The model decomposes images into features invariant to location and general transformation by a set of specified operators, as well as a sparse coding matrix indicating where and to what degree in the original image these features are present. The TISC model is in general overcomplete and we therefore invoke sparse coding to estimate its parameters. We demonstrate how the model can correctly identify components of non-trivial artificial as well as real image data. Thus, the model is capable of reducing feature redundancies in terms of pre-specified transformations, improving the component identification.

  17. The SIFT Code Specification


    The specification of Software Implemented Fault Tolerance (SIFT) consists of two parts, the specifications of the SIFT models and the specifications of the SIFT PASCAL program which actually implements the SIFT system. The code specifications are the last of a hierarchy of models describing the operation of the SIFT system and are related to the SIFT models as well as the PASCAL program. These Specifications serve to link the SIFT models to the running program. The specifications are very large and detailed and closely follow the form and organization of the PASCAL code. In addition to describing each of the components of the SIFT code, the code specifications describe the assumptions of the upper SIFT models which are required to actually prove that the code will work as specified. These constraints are imposed primarily on the schedule tables.

  18. The Aesthetics of Coding

    Andersen, Christian Ulrik


    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself, not the generated output, has become the artwork (Perl Poetry, ASCII Art, obfuscated ... avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: "art-oriented programming needs to acknowledge the conditions of its own making - its poesis." By analysing the Live Coding performances of Slub (where they program computer music live), the presentation discusses code as the artist's material and, further, formulates a critique of Cramer. The seductive magic in computer-generated art does not lie in the magical expression, but nor does it lie in the code/material/text itself. It lies in the nature of code to do something - as if it was magic...

  19. Combustion chamber analysis code

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.


    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  20. Astrophysics Source Code Library

    Allen, Alice; Berriman, Bruce; Hanisch, Robert J; Mink, Jessica; Teuben, Peter J


    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed online. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  1. Classification of Scenes into Indoor/Outdoor

    R. Raja


    An effective model for scene classification is essential to access desired images from large-scale databases. This study presents an efficient scene classification approach by integrating low-level features to reduce the semantic gap between the visual features and the richness of human perception. The objective of the study is to categorize an image into an indoor or outdoor scene using relevant low-level features such as color and texture. The color feature from the HSV color model, the texture feature through the GLCM, and the entropy computed from the UV color space form the feature vector. To support automatic scene classification, a Support Vector Machine (SVM) is applied to the low-level features for categorizing a scene as indoor/outdoor. Since the combination of these image features exhibits a distinctive disparity between images containing indoor or outdoor scenes, the proposed method achieves better performance, with a classification accuracy of about 92.44%. The proposed method has been evaluated on IITM-SCID2 (Scene Classification Image Database) and a dataset of 3442 images collected from the web.
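
    A sketch of the colour part of such a pipeline, HSV statistics feeding an SVM, using scikit-image and scikit-learn; the random images and labels are placeholders, and the GLCM texture and UV entropy features of the actual study are omitted:

    import numpy as np
    from skimage.color import rgb2hsv
    from sklearn.svm import SVC

    def color_features(rgb):
        # Mean and spread of hue, saturation and value per image.
        hsv = rgb2hsv(rgb)                              # values in [0, 1]
        return np.concatenate([hsv.mean(axis=(0, 1)),
                               hsv.std(axis=(0, 1))])

    rng = np.random.default_rng(1)
    images = rng.random((30, 64, 64, 3))                # stand-in RGB images
    labels = rng.integers(0, 2, size=30)                # 0 = indoor, 1 = outdoor

    X = np.array([color_features(im) for im in images])
    clf = SVC().fit(X, labels)
    print(clf.score(X, labels))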

  2. On the Organizational Dynamics of the Genetic Code

    Zhang, Zhang


    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.

  3. On the organizational dynamics of the genetic code.

    Zhang, Zhang; Yu, Jun


    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides-adenine, thymine, guanine and cytosine-according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.
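
    GC content and codon usage, the quantities whose coupling the study reports, are straightforward to compute from a coding sequence; a toy example in plain Python:

    from collections import Counter

    seq = "ATGGCGTGCGGCAAATAA"                     # toy ORF, length divisible by 3
    gc_content = (seq.count("G") + seq.count("C")) / len(seq)
    codon_usage = Counter(seq[i:i + 3] for i in range(0, len(seq), 3))

    print(f"GC content: {gc_content:.2%}")
    print(codon_usage.most_common())

    Run over the 917 genomes mentioned above, counts of this kind are what get correlated with the code's classification schemes.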

  4. Classification of current anticancer immunotherapies

    Vacchelli, Erika; Pedro, José-Manuel Bravo-San; Buqué, Aitziber; Senovilla, Laura; Baracco, Elisa Elena; Bloy, Norma; Castoldi, Francesca; Abastado, Jean-Pierre; Agostinis, Patrizia; Apte, Ron N.; Aranda, Fernando; Ayyoub, Maha; Beckhove, Philipp; Blay, Jean-Yves; Bracci, Laura; Caignard, Anne; Castelli, Chiara; Cavallo, Federica; Celis, Estaban; Cerundolo, Vincenzo; Clayton, Aled; Colombo, Mario P.; Coussens, Lisa; Dhodapkar, Madhav V.; Eggermont, Alexander M.; Fearon, Douglas T.; Fridman, Wolf H.; Fučíková, Jitka; Gabrilovich, Dmitry I.; Galon, Jérôme; Garg, Abhishek; Ghiringhelli, François; Giaccone, Giuseppe; Gilboa, Eli; Gnjatic, Sacha; Hoos, Axel; Hosmalin, Anne; Jäger, Dirk; Kalinski, Pawel; Kärre, Klas; Kepp, Oliver; Kiessling, Rolf; Kirkwood, John M.; Klein, Eva; Knuth, Alexander; Lewis, Claire E.; Liblau, Roland; Lotze, Michael T.; Lugli, Enrico; Mach, Jean-Pierre; Mattei, Fabrizio; Mavilio, Domenico; Melero, Ignacio; Melief, Cornelis J.; Mittendorf, Elizabeth A.; Moretta, Lorenzo; Odunsi, Adekunke; Okada, Hideho; Palucka, Anna Karolina; Peter, Marcus E.; Pienta, Kenneth J.; Porgador, Angel; Prendergast, George C.; Rabinovich, Gabriel A.; Restifo, Nicholas P.; Rizvi, Naiyer; Sautès-Fridman, Catherine; Schreiber, Hans; Seliger, Barbara; Shiku, Hiroshi; Silva-Santos, Bruno; Smyth, Mark J.; Speiser, Daniel E.; Spisek, Radek; Srivastava, Pramod K.; Talmadge, James E.; Tartour, Eric; Van Der Burg, Sjoerd H.; Van Den Eynde, Benoît J.; Vile, Richard; Wagner, Hermann; Weber, Jeffrey S.; Whiteside, Theresa L.; Wolchok, Jedd D.; Zitvogel, Laurence; Zou, Weiping


    During the past decades, anticancer immunotherapy has evolved from a promising therapeutic option to a robust clinical reality. Many immunotherapeutic regimens are now approved by the US Food and Drug Administration and the European Medicines Agency for use in cancer patients, and many others are being investigated as standalone therapeutic interventions or combined with conventional treatments in clinical studies. Immunotherapies may be subdivided into “passive” and “active” based on their ability to engage the host immune system against cancer. Since the anticancer activity of most passive immunotherapeutics (including tumor-targeting monoclonal antibodies) also relies on the host immune system, this classification does not properly reflect the complexity of the drug-host-tumor interaction. Alternatively, anticancer immunotherapeutics can be classified according to their antigen specificity. While some immunotherapies specifically target one (or a few) defined tumor-associated antigen(s), others operate in a relatively non-specific manner and boost natural or therapy-elicited anticancer immune responses of unknown and often broad specificity. Here, we propose a critical, integrated classification of anticancer immunotherapies and discuss the clinical relevance of these approaches. PMID:25537519

  5. Migration to the ICD-10 coding system: A primer for spine surgeons (Part 1

    Gazanfar Rahmathulla


    Background: On 1 October 2015, a new federally mandated system goes into effect requiring the replacement of the International Classification of Disease, version 9, Clinical Modification (ICD-9-CM) with ICD-10-CM. These codes are required for reimbursement and to substantiate medical necessity. ICD-10 comprises as many as 141,000 codes, an increase of 712% compared to ICD-9. Methods: Execution of the ICD-10 system will require significant changes in clinical administrative and hospital-based practices. Through the transition, diminished productivity and practice revenue can be anticipated, the impacts of which the spine surgeon can minimize by appropriate education and planning. Results: The advantages of the new system include increased clarity and more accurate definitions reflecting patient condition, information relevant to ambulatory and managed care encounters, expanded injury codes, laterality, specificity, precise data for safety and compliance reporting, data mining for research, and finally, enabling pay-for-performance programs. The disadvantages include the cost per physician, training administrative staff, revenue loss during the learning curve, confusion, the need to upgrade hardware along with software, and the overall expense to the healthcare system. Conclusions: With the deadline rapidly approaching, gaps in implementation result in delayed billing, delayed or diminished reimbursements, and an absence of quality and outcomes data. It is thereby essential for spine surgeons to understand their role in transitioning to this new environment. Part I of this article discusses the background, coding changes, and costs, as well as reviews the salient features of ICD-10 in spine surgery.

  6. Rating Correlations Between Customs Codes and Export Control Lists: Assessing the Needs and Challenges

    Chatelus, Renaud; Heine, Pete


    Correlation tables are the linchpins between the customs codes used to classify commodities in international trade and the control lists used for strategic trade control (STC) purposes. While understanding the customs classification system can help the STC community better understand strategic trade flows, better identify which trade operations require permits, and more effectively detect illegal exports, the two systems are different in scope, philosophy, content, and objectives. Many indications point to the limitations of these correlation tables, and it is important to understand the nature of the limitations and the complex underlying reasons to conceive possible improvements. As part of its Strategic Trade and Supply Chain Analytics Initiative, Argonne National Laboratory supported a study of a subset of the European Union's TARIC correlation table. The study included development of a methodology and an approach to rating the quality and relevance of individual correlations. The study was intended as a first step to engage the STC community in reflections and initiatives to improve the conception and use of correlations, and its conclusions illustrate the scope and complex nature of the challenges to overcome. This paper presents the two classification systems, analyzes the needs for correlation tables and the complex challenges associated with them, summarizes key findings, and proposes possible ways forward.

  7. PhyloCSF: a comparative genomics method to distinguish protein coding and non-coding regions.

    Lin, Michael F; Jungreis, Irwin; Kellis, Manolis


    As high-throughput transcriptome sequencing provides evidence for novel transcripts in many species, there is a renewed need for accurate methods to classify small genomic regions as protein coding or non-coding. We present PhyloCSF, a novel comparative genomics method that analyzes a multispecies nucleotide sequence alignment to determine whether it is likely to represent a conserved protein-coding region, based on a formal statistical comparison of phylogenetic codon models. We show that PhyloCSF's classification performance in 12-species Drosophila genome alignments exceeds all other methods we compared in a previous study. We anticipate that this method will be widely applicable as the transcriptomes of many additional species, tissues and subcellular compartments are sequenced, particularly in the context of ENCODE and modENCODE, and as interest grows in long non-coding RNAs, often initially recognized by their lack of protein coding potential rather than conserved RNA secondary structures. The Objective Caml source code and executables for GNU/Linux and Mac OS X are freely available.

  8. Embedded foveation image coding.

    Wang, Z; Bovik, A C


    The human visual system (HVS) is highly space-variant in sampling, coding, processing, and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. By taking advantage of this fact, it is possible to remove considerable high-frequency information redundancy from the peripheral regions and still reconstruct a perceptually good quality image. Great success has been obtained previously by a class of embedded wavelet image coding algorithms, such as the embedded zerotree wavelet (EZW) and the set partitioning in hierarchical trees (SPIHT) algorithms. Embedded wavelet coding not only provides very good compression performance, but also has the property that the bitstream can be truncated at any point and still be decoded to recreate a reasonably good quality image. In this paper, we propose an embedded foveation image coding (EFIC) algorithm, which orders the encoded bitstream to optimize foveated visual quality at arbitrary bit-rates. A foveation-based image quality metric, namely, foveated wavelet image quality index (FWQI), plays an important role in the EFIC system. We also developed a modified SPIHT algorithm to improve the coding efficiency. Experiments show that EFIC integrates foveation filtering with foveated image coding and demonstrates very good coding performance and scalability in terms of foveated image quality measurement.
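
    A toy version of foveation filtering conveys the idea: keep full resolution near the fixation point and blend in progressively blurred copies as eccentricity grows. The sketch below (NumPy/SciPy, invented parameters) ignores the wavelet machinery and the FWQI metric of the actual system:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def foveate(img, fx, fy, radius=40.0):
        # Eccentricity of each pixel, in units of the foveal radius.
        rows, cols = img.shape
        yy, xx = np.mgrid[0:rows, 0:cols]
        ecc = np.hypot(yy - fy, xx - fx) / radius
        # Precompute sharper-to-blurrier copies and pick one per pixel.
        blurred = [img] + [gaussian_filter(img, s) for s in (2, 4, 8)]
        level = np.clip(ecc.astype(int), 0, len(blurred) - 1)
        return np.choose(level, blurred)

    img = np.random.default_rng(2).random((128, 128))
    print(foveate(img, 64, 64).shape)

    The high-frequency content removed far from fixation is exactly the redundancy that an embedded coder can then spend fewer bits on.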

  9. Fulcrum Network Codes


    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
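
    The low-field-size end of this trade-off can be illustrated with random linear network coding over GF(2), where coded packets are XOR combinations of source packets and decoding is Gaussian elimination modulo 2; the sketch below is plain NumPy and does not model the Fulcrum-specific high-field expansion:

    import numpy as np

    def gf2_rank(M):
        """Rank of a binary matrix over GF(2), by Gaussian elimination."""
        M = M.copy()
        rank = 0
        for col in range(M.shape[1]):
            rows = np.nonzero(M[rank:, col])[0]
            if rows.size == 0:
                continue
            M[[rank, rank + rows[0]]] = M[[rank + rows[0], rank]]  # pivot up
            for r in range(M.shape[0]):
                if r != rank and M[r, col]:
                    M[r] ^= M[rank]                                # XOR-cancel
            rank += 1
            if rank == M.shape[0]:
                break
        return rank

    def gf2_solve(G, P):
        """Solve G @ S = P over GF(2), assuming G is invertible."""
        A = np.concatenate([G, P], axis=1)
        k = G.shape[1]
        for col in range(k):
            pivot = col + np.nonzero(A[col:, col])[0][0]
            A[[col, pivot]] = A[[pivot, col]]
            for row in range(A.shape[0]):
                if row != col and A[row, col]:
                    A[row] ^= A[col]
        return A[:, k:]

    rng = np.random.default_rng(3)
    k, plen = 4, 16
    source = rng.integers(0, 2, size=(k, plen), dtype=np.uint8)  # source packets

    # The receiver keeps only "innovative" coded packets, i.e. random GF(2)
    # combinations that increase the rank of its coefficient matrix.
    G = np.empty((0, k), dtype=np.uint8)
    while G.shape[0] < k:
        cand = rng.integers(0, 2, size=(1, k), dtype=np.uint8)
        if gf2_rank(np.vstack([G, cand])) > G.shape[0]:
            G = np.vstack([G, cand])

    coded = (G @ source) % 2            # the packets seen on the wire
    decoded = gf2_solve(G, coded)
    assert np.array_equal(decoded, source)
    print("decoded", decoded.shape[0], "packets correctly")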

  10. Report number codes

    Nelson, R.N. (ed.)


    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  11. The use of diagnostic coding in chiropractic practice

    Testern, Cecilie D; Hestbæk, Lise; French, Simon D


    This study explored the views of chiropractors about diagnostic coding and the use of it in a chiropractic setting. A secondary aim was to compare the diagnostic coding undertaken by chiropractors and an independent coder. METHOD: A coding exercise based on the International Classification of Primary Care version 2, PLUS extension (ICPC-2 PLUS) was undertaken. The level of agreement between the chiropractors and the coder was determined, and Cohen's Kappa was used to determine the agreement beyond that expected by chance. RESULTS: From the interviews, the three emerging themes were: 1) Advantages and disadvantages of using a clinical coding system in chiropractic practice, 2) ICPC-2 PLUS terminology issues for chiropractic practice and 3) Implementation of a coding system into chiropractic practice. The participating chiropractors did not uniformly support or condemn the idea of using diagnostic coding. However, there was a strong agreement that the terminology in ICPC-2...

  12. Application of RS Codes in Decoding QR Code

    Zhu Suxia(朱素霞); Ji Zhenzhou; Cao Zhiyan


    The QR Code is a 2-dimensional matrix code with high error correction capability. It employs RS codes to generate error correction codewords in encoding and to recover from errors and damage in decoding. This paper presents several virtues of the QR Code, analyzes the RS decoding algorithm and gives a software flow chart for decoding the QR Code with the RS decoding algorithm.
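
    As a minimal demonstration of the RS encode/recover cycle the abstract describes, the sketch below uses the third-party reedsolo Python package (an assumption: its RSCodec API is taken from the package documentation, and the parity length here is illustrative, not the QR Code parameter tables).

      # pip install reedsolo
      from reedsolo import RSCodec

      rsc = RSCodec(10)                   # 10 parity bytes: corrects up to 5 byte errors
      codeword = rsc.encode(b"HELLO QR")  # data followed by RS error-correction bytes

      corrupted = bytearray(codeword)
      corrupted[0] ^= 0xFF                # simulate damage to the printed symbol
      corrupted[3] ^= 0x55

      out = rsc.decode(bytes(corrupted))
      msg = out[0] if isinstance(out, tuple) else out  # recent versions return a tuple
      print(bytes(msg))                   # b'HELLO QR'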

  13. Evaluation Codes from an Affine Variety Code Perspective

    Geil, Hans Olav


    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...

  14. Hyperspectral image classification based on NMF Features Selection Method

    Abe, Bolanle T.; Jordaan, J. A.


    Hyperspectral instruments are capable of collecting hundreds of images corresponding to wavelength channels for the same area on the earth's surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and pose a problem known as the curse of dimensionality. In addition, high correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is crucial in order to obtain good classification accuracy. This paper presents a new feature selection technique: a Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain reduced, relevant features in the input domain of each class label, with the aim of reducing both classification error and the dimensionality of the classification problem. The Indian Pines dataset from Northwest Indiana is used to evaluate the performance of the proposed method through feature selection and classification experiments. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is selected as the tool to implement the classification using Support Vector Machines and Neural Networks. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how the feature size affects classification accuracy. The results obtained show that the performances of the classifiers are significant. The study makes a positive contribution to the problems of hyperspectral imagery by exploring NMF, SVMs and NN to improve classification accuracy. The performances of the classifiers are valuable for decision makers weighing tradeoffs between method accuracy and method complexity.
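
    A minimal sketch of the pipeline's shape (NMF band reduction followed by SVM classification) using scikit-learn; the random data, component count and split are placeholders, not the paper's per-class feature selection or its WEKA setup.

      import numpy as np
      from sklearn.decomposition import NMF
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.random((1000, 200))        # stand-in cube: 1000 pixels x 200 bands
      y = rng.integers(0, 5, size=1000)  # 5 hypothetical land-cover classes

      # NMF compresses the band dimension; W holds the reduced features.
      W = NMF(n_components=20, init="nndsvda", max_iter=500,
              random_state=0).fit_transform(X)

      Xtr, Xte, ytr, yte = train_test_split(W, y, test_size=0.3, random_state=0)
      print("accuracy:", SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte))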

  15. 72 FR 59307 - Termination of Desert Land Entry and Carey Act Classifications


    ... Carey Act Classifications on 120 acres of land in Owyhee County as these classifications are no longer... Moore, BLM, Owyhee Field Office, 20 1st Avenue West, Marsing, Idaho 83639, 208-896-5917. SUPPLEMENTARY... Act (FLPMA). Dated: October 15, 2007. Mark A. Lane, Owyhee Field Manager. BILLING CODE 4310-GG-P...

  16. What is new in genetics and osteogenesis imperfecta classification?

    Eugênia R. Valadares; Carneiro, Túlio B.; Santos, Paula M.; Oliveira, Ana Cristina; Zabel, Bernhard


    OBJECTIVE: Literature review of new genes related to osteogenesis imperfecta (OI) and update of its classification. SOURCES: Literature review in the PubMed and OMIM databases, followed by selection of relevant references. SUMMARY OF THE FINDINGS: In 1979, Sillence et al. developed a classification of OI subtypes based on clinical features and disease severity: OI type I, mild, common, with blue sclera; OI type II, perinatal lethal form; OI type III, severe and progressively deforming...

  17. Identifying Codes on Directed De Bruijn Graphs


    Subject terms: Identifying Code; De Bruijn Network; Graph Theory. ...our assumption on T and our earlier reasoning. Since x ≠ y, this means that the directed graph B(d, n) contains both arcs x → y and y → x. This allows us to... If f(s) = g(s) for all s ∈ S, then f(v) = g(v) for all v ∈ V(G). That is, every automorphism is completely determined by its action on a...

  18. Bayesian anti-sparse coding

    Elvira, Clément; Dobigeon, Nicolas


    Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as digital communications. Anti-sparse regularization can be naturally expressed through an $\ell_{\infty}$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties as well as three random variate generators for this distribution are derived. Then this probability distribution is used as a prior to promote anti-sparsity in a Gaussian linear inverse problem, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo (MCMC) algorithms are proposed to generate samples according to the posterior distribution. The first one is a standard Gibbs sampler. The second...
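
    As a sketch of the criterion the abstract implies (assuming a Gaussian likelihood with noise variance sigma^2, which the record does not state explicitly), the democratic prior p(x) ∝ exp(-λ‖x‖_∞) turns MAP estimation into an ℓ∞-penalized least-squares problem:

      \hat{x} \;=\; \arg\min_{x}\; \frac{1}{2\sigma^{2}}\,\lVert y - Hx \rVert_{2}^{2} \;+\; \lambda\,\lVert x \rVert_{\infty}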

  19. Automatic classification of blank substrate defects

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati


    Mask preparation stages are crucial in mask manufacturing, since the mask is later to act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequent cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and of the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and distinguishing defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. This automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask...

  20. Distributed multiple description coding

    Bai, Huihui; Zhao, Yao


    This book examines distributed video coding (DVC) and multiple description coding (MDC), two novel techniques designed to address the problems of conventional image and video compression coding. Covering all fundamental concepts and core technologies, the chapters can also be read as independent and self-sufficient, describing each methodology in sufficient detail to enable readers to repeat the corresponding experiments easily. Topics and features: provides a broad overview of DVC and MDC, from the basic principles to the latest research; covers sub-sampling based MDC, quantization based MDC,

  1. Cryptography cracking codes


    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  2. Coded MapReduce

    Li, Songze; Maddah-Ali, Mohammad Ali; Avestimehr, A. Salman


    MapReduce is a commonly used framework for executing data-intensive jobs on distributed server clusters. We introduce a variant implementation of MapReduce, namely "Coded MapReduce", that substantially reduces the inter-server communication load of the shuffling phase of MapReduce, thus accelerating its execution. The proposed Coded MapReduce exploits the repetitive mapping of data blocks at different servers to create coding opportunities in the shuffling phase to exchange (key, value) pairs...
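
    A toy illustration of the shuffle-phase coding idea (names and values are hypothetical): when each of two servers already holds the block the other one is missing, a single XOR-coded broadcast replaces two unicast transmissions.

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      # Server 1 mapped block A and needs B; server 2 mapped block B and needs A.
      value_A = b"\x12\x34\x56\x78"   # known to server 1, wanted by server 2
      value_B = b"\x9a\xbc\xde\xf0"   # known to server 2, wanted by server 1

      coded = xor_bytes(value_A, value_B)           # one coded broadcast

      assert xor_bytes(coded, value_A) == value_B   # server 1 recovers B
      assert xor_bytes(coded, value_B) == value_A   # server 2 recovers A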

  3. Classical Holographic Codes

    Brehm, Enrico M


    In this work, we introduce classical holographic codes. These can be understood as concatenated probabilistic codes and can be represented as networks uniformly covering hyperbolic space. In particular, classical holographic codes can be interpreted as maps from bulk degrees of freedom to boundary degrees of freedom. Interestingly, they are shown to exhibit features similar to those expected from the AdS/CFT correspondence. Among these are a version of the Ryu-Takayanagi formula and intriguing properties regarding bulk reconstruction and boundary representations of bulk operations. We discuss the relation of our findings with expectations from AdS/CFT and, in particular, with recent results from quantum error correction.

  4. A complete electrical hazard classification system and its application

    Gordon, Lloyd B [Los Alamos National Laboratory; Cartelli, Laura [Los Alamos National Laboratory


    The Standard for Electrical Safety in the Workplace, NFPA 70E, and relevant OSHA electrical safety standards evolved to address the hazards of 60-Hz power that are faced primarily by electricians, linemen, and others performing facility and utility work. This leaves a substantial gap in the management of electrical hazards in Research and Development (R&D) and specialized high voltage and high power equipment. Examples include lasers, accelerators, capacitor banks, electroplating systems, induction and dielectric heating systems, etc. Although all such systems are fed by 50/60 Hz alternating current (ac) power, we find substantial use of direct current (dc) electrical energy, and the use of capacitors, inductors, batteries, and radiofrequency (RF) power. The electrical hazards of these forms of electricity and their systems are different than for 50/60 Hz power. Over the past 10 years there has been an effort to develop a method of classifying all of the electrical hazards found in all types of R&D and utilization equipment. Examples of the variation of these hazards from NFPA 70E include (a) high voltage can be harmless if the available current is sufficiently low, (b) low voltage can be harmful if the available current/power is high, (c) high voltage capacitor hazards are unique and include severe reflex action, effects on the heart, and tissue damage, and (d) arc flash hazard analysis for dc and capacitor systems is not provided in existing standards. This work has led to a comprehensive electrical hazard classification system that is based on various research conducted over the past 100 years, on analysis of such systems in R&D, and on decades of experience. Initially, national electrical safety codes required the qualified worker only to know the source voltage to determine the shock hazard. Later, as arc flash hazards were understood, the fault current and clearing time were needed. These items are still insufficient to fully characterize all types of...

  5. 77 FR 32010 - Applications (Classification, Advisory, and License) and Documentation


    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF COMMERCE Bureau of Industry and Security 15 CFR Part 748 Applications (Classification, Advisory, and License) and Documentation CFR Correction In Title 15 of the Code of Federal Regulations, Parts 300 to 799, revised as...

  6. A Deterministic Transport Code for Space Environment Electrons

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.


    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  7. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure


    ... process in March 2012 (77 FR 5379). When verified by a futures classification, Smith-Doxey data serves as... Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed... for the addition of an optional cotton futures classification procedure--identified and known...

  8. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U


    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as a quadratic one, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
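
    A minimal harmonic product spectrum in Python, illustrating the pitch extraction step named in the abstract (window length, harmonic count and the synthetic test tone are illustrative choices):

      import numpy as np

      def hps_pitch(signal, sample_rate, n_harmonics=4):
          """Multiply the magnitude spectrum by its downsampled copies so
          that energy accumulates at the fundamental frequency."""
          spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
          hps = spectrum.copy()
          for h in range(2, n_harmonics + 1):
              hps[: len(spectrum) // h] *= spectrum[::h][: len(spectrum) // h]
          peak = np.argmax(hps[1:]) + 1          # skip the DC bin
          return peak * sample_rate / len(signal)

      sr = 16000
      t = np.arange(sr) / sr                     # a 1 s analysis window
      tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 5))
      print(hps_pitch(tone, sr))                 # ~220 Hz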

  9. Learning Apache Mahout classification

    Gupta, Ashish


    If you are a data scientist who has some experience with the Hadoop ecosystem and machine learning methods and want to try out classification on large datasets using Mahout, this book is ideal for you. Knowledge of Java is essential.

  10. Update on diabetes classification.

    Thomas, Celeste C; Philipson, Louis H


    This article highlights the difficulties in creating a definitive classification of diabetes mellitus in the absence of a complete understanding of the pathogenesis of the major forms. This brief review shows the evolving nature of the classification of diabetes mellitus. No classification scheme is ideal, and all have some overlap and inconsistencies. The only form of diabetes that can be accurately diagnosed by DNA sequencing, monogenic diabetes, remains undiagnosed in more than 90% of the individuals who have diabetes caused by one of the known gene mutations. The point of classification, or taxonomy, of disease should be to give insight into both pathogenesis and treatment. It remains a source of frustration that all classification schemes of diabetes mellitus continue to fall short of this goal.

  11. [Classification of cardiomyopathy].

    Asakura, Masanori; Kitakaze, Masafumi


    Cardiomyopathy is a group of cardiovascular diseases with poor prognosis. Some patients with dilated cardiomyopathy need heart transplantation due to severe heart failure. Some patients with hypertrophic cardiomyopathy die unexpectedly due to malignant ventricular arrhythmias. The various phenotypes of cardiomyopathy reflect a heterogeneous group of diseases. The classification of cardiomyopathies is important and indispensable in the clinical situation. However, their classification has not been settled, because the causes of cardiomyopathies have not been fully elucidated. We usually use the definition and classification offered by the WHO/ISFC task force in 1995. Recently, several new definitions and classifications of the cardiomyopathies have been published by the American Heart Association, the European Society of Cardiology and the Japanese Circulation Society.

  12. Carbohydrate terminology and classification

    Cummings, J H; Stephen, A M


    ...) and polysaccharides (DP ≥ 10). Within this classification, a number of terms are used, such as mono- and disaccharides, polyols, oligosaccharides, starch, modified starch, non-starch polysaccharides, total carbohydrate, sugars, etc...

  13. Towards Multi Label Text Classification through Label Propagation

    Shweta C. Dharmadhikari


    Classifying text data has been an active area of research for a long time. A text document is a multifaceted object and often inherently ambiguous by nature. Multi-label learning deals with such ambiguous objects. Classification of such ambiguous text objects often makes the task of the classifier difficult while assigning relevant classes to an input document. Traditional single-label and multi-class text classification paradigms cannot efficiently classify such a multifaceted text corpus. In this paper we propose a novel label propagation approach based on semi-supervised learning for multi-label text classification. Our proposed approach models the relationship between class labels and also effectively represents input text documents. We use a semi-supervised learning technique for effective utilization of labeled and unlabeled data for classification. Our proposed approach promises better classification accuracy and handling of complexity, and is evaluated on standard datasets such as Enron, Slashdot and Bibtex.
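
    As a simplified sketch of semi-supervised label propagation for multi-label data (a binary-relevance reduction with one propagation per label; the authors' method additionally models relationships between labels), using scikit-learn:

      import numpy as np
      from sklearn.semi_supervised import LabelPropagation

      rng = np.random.default_rng(0)
      X = rng.random((200, 20))               # document feature vectors (stand-in)
      Y = rng.integers(0, 2, size=(200, 3))   # 3 binary topic labels per document
      Y[50:] = -1                             # sklearn convention: -1 = unlabeled

      predictions = np.column_stack([
          LabelPropagation(kernel="rbf").fit(X, Y[:, j]).transduction_
          for j in range(Y.shape[1])
      ])
      print(predictions.shape)                # (200, 3): labels for every document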

  14. A New Classification Method to Overcome Over-Branching

    ZHOU Aoying(周傲英); QIAN Weining(钱卫宁); QIAN Hailei(钱海蕾); JIN Wen(金文)


    Classification is an important technique in data mining. The decision trees built by most of the existing classification algorithms commonly feature over-branching, which leads to poor efficiency in the subsequent classification period. In this paper, we present a new value-oriented classification method, which aims at building accurate, proper-sized decision trees while reducing over-branching as much as possible, based on the concepts of frequent-pattern-node and exceptive-child-node. The experiments show that, using relevance analysis as pre-processing, our classification method, without loss of accuracy, can eliminate over-branching in decision trees more effectively and efficiently than other algorithms do.

  15. An Efficient Audio Classification Approach Based on Support Vector Machines

    Lhoucine Bahatti


    In order to achieve audio classification aimed at identifying the composer, the use of adequate and relevant features is important to improve performance, especially when the classification algorithm is based on support vector machines. As opposed to conventional approaches that often use timbral features based on a time-frequency representation of the musical signal with a constant window, this paper deals with a new audio classification method which improves feature extraction according to the Constant Q Transform (CQT) approach and includes original audio features related to the musical context in which the notes appear. This work also proposes an optimal feature selection procedure which combines filter and wrapper strategies. Experimental results show the accuracy and efficiency of the adopted approach in binary classification as well as in multi-class classification.

  16. Expected Classification Accuracy

    Lawrence M. Rudner


    Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
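
    A toy version of such a table (substituting a simple normal measurement-error model for the article's IRT-based procedure; the cut scores, true-score distribution and SEM below are hypothetical):

      import numpy as np
      from scipy.stats import norm

      cuts = [-np.inf, -0.5, 0.5, np.inf]   # three performance bands
      sem = 0.4                             # standard error of measurement

      true_scores = np.random.default_rng(0).standard_normal(100_000)
      table = np.zeros((3, 3))              # cell (i, j) = P(true band i, observed band j)
      for i in range(3):
          band = true_scores[(true_scores > cuts[i]) & (true_scores <= cuts[i + 1])]
          for j in range(3):
              p = norm.cdf(cuts[j + 1], loc=band, scale=sem) - \
                  norm.cdf(cuts[j], loc=band, scale=sem)
              table[i, j] = p.mean() * len(band) / len(true_scores)

      print(table.round(3))
      print("expected accuracy:", np.trace(table).round(3))  # mass on the diagonal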

  17. Completion of the classification

    Strade, Helmut


    This is the last of three volumes on "Simple Lie Algebras over Fields of Positive Characteristic" by Helmut Strade, presenting the state of the art of the structure and classification of Lie algebras over fields of positive characteristic. In this monograph the proof of the Classification Theorem presented in the first volume is concluded. It collects all the important results on the topic, which so far could be found only in scattered scientific literature.

  18. Twitter content classification


    This paper delivers a new Twitter content classification framework based on sixteen existing Twitter studies and a grounded theory analysis of a personal Twitter history. It expands the existing understanding of Twitter as a multifunction tool for personal, professional, commercial and phatic communications with a split-level classification scheme that offers broad categorization and specific subcategories for deeper insight into the real-world application of the service.

  19. Nonlinear network coding based on multiplication and exponentiation in GF(2^m)

    JIANG An-you; ZHU Jin-kang


    This article proposes a novel nonlinear network code in the GF(2^m) finite field. Different from previous linear network codes that linearly mix multiple input flows, the proposed nonlinear network code mixes input flows through both multiplication and exponentiation in GF(2^m). Three relevant rules for selecting proper parameters for the proposed nonlinear network code are discussed, and the relationship between the power parameter and the coding coefficient K is explored. Further analysis shows that the proposed nonlinear network code is equivalent to a linear network code with deterministic coefficients.
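
    Minimal GF(2^8) arithmetic illustrating mixing by multiplication and exponentiation rather than by linear combination (the field, reduction polynomial and exponents below are illustrative choices, not the parameters analyzed in the article):

      def gf_mul(a: int, b: int, poly: int = 0x11B) -> int:
          """Carry-less multiplication modulo an irreducible polynomial."""
          r = 0
          while b:
              if b & 1:
                  r ^= a
              a <<= 1
              if a & 0x100:
                  a ^= poly
              b >>= 1
          return r

      def gf_pow(x: int, k: int) -> int:
          """Square-and-multiply exponentiation in GF(2^8)."""
          r = 1
          while k:
              if k & 1:
                  r = gf_mul(r, x)
              x = gf_mul(x, x)
              k >>= 1
          return r

      x1, x2 = 0x57, 0x83             # two input flow symbols
      y = gf_mul(gf_pow(x1, 3), x2)   # nonlinear mix: y = x1^3 * x2 in GF(2^8)
      print(hex(y))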

  20. Nonterminals, homomorphisms and codings in different variations of OL-systems. II. Nondeterministic systems

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto


    Continuing the work begun in Part I of this paper, we consider now variations of nondeterministic OL-systems. The present Part II of the paper contains a systematic classification of the effect of nonterminals, codings, weak codings, nonerasing homomorphisms and homomorphisms for all basic variat...

  1. The Clawpack Community of Codes

    Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.


    Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws, but over the years it has extended well beyond this role. Today a community of open-source codes has been developed that addresses a multitude of different needs, including non-conservative balance laws, high-order accurate methods, and parallelism, while remaining extensible and easy to use, largely by the judicious use of Python around the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows, including storm surge and debris flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since been both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.

  2. The fast code

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)


    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  3. VT ZIP Code Areas

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  4. Fulcrum Network Codes


    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.

  5. Google Summer of Code


    Leslie Hawthorn


    This article examines the Google Summer of Code (GSoC) program, the world's first global initiative to introduce college and university students to free/libre open source software (F/LOSS) development...

  6. Importance of Building Code

    Reshmi Banerjee


    A building code, or building control, is a set of rules that specify the minimum standards for constructed objects such as buildings and non-building structures. The main purpose of building codes is to protect public health, safety and general welfare as they relate to the construction and occupancy of buildings and structures. A building code becomes law of a particular jurisdiction when formally enacted by the appropriate governmental or private authority. Building codes are generally intended to be applied by architects, engineers, constructors and regulators, but are also used for various purposes by safety inspectors, environmental scientists, real estate developers, subcontractors, manufacturers of building products and materials, insurance companies, facility managers, tenants and others.

  7. Bandwidth efficient coding

    Anderson, John B


    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  8. Coded Random Access

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi


    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable of storing the collision slots and using them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to a structure of an erasure-correcting code defined on a graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
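
    A toy peeling decoder for the slotted setting the abstract describes: a slot with exactly one remaining transmission reveals that user, whose replicas are then cancelled elsewhere, exactly as in iterative erasure decoding on a graph (the slot contents below are hypothetical):

      def sic_peel(slots):
          """slots: list of sets of user ids transmitting in each slot."""
          slots = [set(s) for s in slots]
          resolved, progress = set(), True
          while progress:
              progress = False
              for s in slots:
                  if len(s) == 1:              # singleton slot: decode this user
                      user = s.pop()
                      resolved.add(user)
                      for t in slots:          # cancel the user's other replicas
                          t.discard(user)
                      progress = True
          return resolved

      # Users 1-3 each transmit two replicas across four slots.
      print(sic_peel([{1, 2}, {2, 3}, {3}, {1}]))   # {1, 2, 3}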

  9. Code Disentanglement: Initial Plan

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)


    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
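
    A small sketch of computing such a levelization (package names and dependencies are hypothetical; each package's level is one more than the highest level it uses, and a cycle means the set cannot be levelized):

      deps = {                       # package -> packages it uses
          "app":     {"physics", "io"},
          "physics": {"math"},
          "io":      {"math"},
          "math":    set(),
      }

      def levelize(deps):
          levels, visiting = {}, set()
          def level(p):
              if p in levels:
                  return levels[p]
              if p in visiting:      # a cycle would break levelization
                  raise ValueError(f"dependency cycle through {p}")
              visiting.add(p)
              levels[p] = 1 + max((level(d) for d in deps[p]), default=0)
              visiting.discard(p)
              return levels[p]
          for p in deps:
              level(p)
          return levels

      print(levelize(deps))   # {'math': 1, 'physics': 2, 'io': 2, 'app': 3}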

  10. Annotated Raptor Codes

    Mahdaviani, Kaveh; Tellambura, Chintha


    In this paper, an extension of raptor codes is introduced which keeps all the desirable properties of raptor codes, including the linear complexity of encoding and decoding per information bit, unchanged. The new design, however, improves the performance in terms of the reception rate. Our simulations show a 10% reduction in the needed overhead at the benchmark block length of 64,520 bits and with the same complexity per information bit.

  11. An upper bound on the number of errors corrected by a convolutional code

    Justesen, Jørn


    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
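
    One way to make the counting explicit (an illustrative bound assuming a rate-k/n convolutional code observed over a window of N blocks; the record itself states only the qualitative result): a decoder that corrects every pattern of up to t errors must map those patterns to distinct syndrome sequences, of which there are at most 2^{(n-k)N}, so

      \sum_{i=0}^{t} \binom{nN}{i} \;\le\; 2^{(n-k)N}.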

  12. Sandia National Laboratories analysis code data base

    Peterson, C.W.


    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  13. Vulnerabilities Classification for Safe Development on Android

    Ricardo Luis D. M. Ferreira


    The global sales market is currently led by devices with the Android operating system. In 2015, more than 1 billion smartphones were sold, of which 81.5% were operated by the Android platform. In 2017, it is estimated that 267.78 billion applications will be downloaded from Google Play. According to Qian, 90% of applications are vulnerable, despite the recommendations of rules and standards for safe software development. This study presents a classification of vulnerabilities, indicating for each one the vulnerability itself, the safety aspect defined by the Brazilian Association of Technical Standards (Associação Brasileira de Normas Técnicas, ABNT) norm NBR ISO/IEC 27002 that is violated, which lines of code generate the vulnerability and what should be done to avoid it, and the threat agent used by each of them. This classification allows the identification of possible points of vulnerability, allowing the developer to correct the identified gaps.

  14. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Stefan Dech


    We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree approach (DT), for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ascii-file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC's usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built up using open source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user's perspective.
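
    As a rough stand-in for the C5.0-based step (scikit-learn's decision tree with the entropy criterion grows splits by the same information-entropy concept; the iris data here merely replaces remote sensing imagery):

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_iris(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      dt = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(Xtr, ytr)
      print("validation accuracy:", dt.score(Xte, yte))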

  15. Robust Nonlinear Neural Codes

    Yang, Qianli; Pitkow, Xaq


    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  16. Scalable motion vector coding

    Barbarien, Joeri; Munteanu, Adrian; Verdicchio, Fabio; Andreopoulos, Yiannis; Cornelis, Jan P.; Schelkens, Peter


    Modern video coding applications require transmission of video data over variable-bandwidth channels to a variety of terminals with different screen resolutions and available computational power. Scalable video coding is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial domain motion compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of the state-of-the-art non-scalable H.264-codec. These codecs require scalable coding of the motion vectors in order to support a large range of bit-rates with optimal compression efficiency. Scalable motion vector coding algorithms based on the integer wavelet transform followed by embedded coding of the wavelet coefficients were recently proposed. In this paper, a new and fundamentally different scalable motion vector codec (MVC) using median-based motion vector prediction is proposed. Extensive experimental results demonstrate that the proposed MVC systematically outperforms the wavelet-based state-of-the-art solutions. To be able to take advantage of the proposed scalable MVC, a rate allocation mechanism capable of optimally dividing the available rate among texture and motion information is required. Two rate allocation strategies are proposed and compared. The proposed MVC and rate allocation schemes are incorporated into an SDMCTF-based video codec and the benefits of scalable motion vector coding are experimentally demonstrated.
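
    A minimal median-based motion vector predictor of the kind named in the abstract (the three causal neighbours and the sample vectors are illustrative; only the prediction residual would then be entropy-coded):

      def median_mv(left, top, top_right):
          """Component-wise median of three neighbouring motion vectors."""
          return tuple(sorted(c)[1] for c in zip(left, top, top_right))

      left, top, top_right = (4, -1), (3, 0), (5, -2)
      current = (4, -1)

      pred = median_mv(left, top, top_right)                   # (4, -1)
      residual = (current[0] - pred[0], current[1] - pred[1])
      print(pred, residual)                                    # (4, -1) (0, 0)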

  17. On Expanded Cyclic Codes

    Wu, Yingquan


    The paper has a threefold purpose. The first purpose is to present an explicit description of expanded cyclic codes defined in $\mathrm{GF}(q^m)$. The proposed explicit construction of expanded generator matrix and expanded parity check matrix maintains the symbol-wise algebraic structure and thus keeps many important original characteristics. The second purpose of this paper is to identify a class of constant-weight cyclic codes. Specifically, we show that a well-known class of $q$-ary BCH codes excluding the all-zero codeword are constant-weight cyclic codes. Moreover, we show this class of codes achieve the Plotkin bound. The last purpose of the paper is to characterize expanded cyclic codes utilizing the proposed expanded generator matrix and parity check matrix. We analyze the properties of component codewords of a codeword and particularly establish the precise conditions under which a codeword can be represented by a subbasis. With the new insights, we present an improved lower bound on the minimum distance of...

  18. Quantification of artistic style through sparse coding analysis in the drawings of Pieter Bruegel the Elder

    Hughes, James M.; Graham, Daniel J.; Rockmore, Daniel N.


    Recently, statistical techniques have been used to assist art historians in the analysis of works of art. We present a novel technique for the quantification of artistic style that utilizes a sparse coding model. Originally developed in vision research, sparse coding models can be trained to represent any image space by maximizing the kurtosis of a representation of an arbitrarily selected image from that space. We apply such an analysis to successfully distinguish a set of authentic drawings by Pieter Bruegel the Elder from another set of well-known Bruegel imitations. We show that our approach, which involves a direct comparison based on a single relevant statistic, offers a natural and potentially more germane alternative to wavelet-based classification techniques that rely on more complicated statistical frameworks. Specifically, we show that our model provides a method capable of discriminating between authentic and imitation Bruegel drawings that numerically outperforms well-known existing approaches. Finally, we discuss the applications and constraints of our technique. PMID:20080588

  19. IR-360 nuclear power plant safety functions and component classification

    Yousefpour, F., E-mail: [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of); Shokri, F.; Soltani, H. [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of)


    The IR-360 nuclear power plant, a 2-loop PWR with 360 MWe power generation capacity, is under design in the MASNA Company. For the design of the IR-360 structures, systems and components (SSCs), the applicable codes and standards and their design requirements must be determined. It is a prerequisite to correctly classify the IR-360 safety functions and the safety grade of structures, systems and components for selecting and adopting suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as the USNRC standard system to determine the IR-360 safety functions and to formulate the principles of IR-360 component classification in accordance with the safety philosophy and features of the IR-360. By implementing the defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of specific codes and standards are used in the design process of IR-360 SSCs by design engineers of the MASNA Company. In this paper, determination of the IR-360 safety functions and definition of the classification procedures and rules are presented. Implementation of this work, which is described with examples, ensures the safety and reliability of the IR-360 nuclear power plant.

  20. MS4 - Multi-Scale Selector of Sequence Signatures: An alignment-free method for classification of biological sequences

    Grasseau Gilles


    Abstract. Background: While multiple alignment is the first step of usual classification schemes for biological sequences, alignment-free methods are being increasingly used as alternatives when multiple alignments fail. Subword-based combinatorial methods are popular for their low algorithmic complexity (suffix trees...) or exhaustivity (motif search), in general with fixed word length and/or number of mismatches. We previously developed a method to detect local similarities (the N-local decoding) based on the occurrences of repeated subwords of fixed length, which does not impose a fixed number of mismatches. The resulting similarities are, for some "good" values of N, sufficiently relevant to form the basis of a reliable alignment-free classification. The aim of this paper is to develop a method that uses the similarities detected by N-local decoding while not imposing a fixed value of N. We present a procedure that selects for every position in the sequences an adaptive value of N, and we implement it as the MS4 classification tool. Results: Among the equivalence classes produced by the N-local decodings for all N, we select a (relatively small) number of "relevant" classes corresponding to variable-length subwords that carry enough information to perform the classification. The parameter N, for which correct values are data-dependent and thus hard to guess, is here replaced by the average repetitivity κ of the sequences. We show that our approach yields classifications of several sets of HIV/SIV sequences that agree with the accepted taxonomy, even on usually discarded repetitive regions (like the non-coding part of LTR). Conclusions: The method MS4 satisfactorily classifies a set of sequences that are notoriously hard to align. This suggests that our approach forms the basis of a reliable alignment-free classification tool. The only parameter κ of MS4 seems to give reasonable results even for its default value, which can be a great advantage for...

  1. Non-Binary Polar Codes using Reed-Solomon Codes and Algebraic Geometry Codes

    Mori, Ryuhei


    Polar codes, introduced by Arikan, achieve the symmetric capacity of any discrete memoryless channel under low encoding and decoding complexity. Recently, non-binary polar codes have been investigated. In this paper, we calculate the error probability of non-binary polar codes constructed on the basis of Reed-Solomon matrices by numerical simulations. It is confirmed that 4-ary polar codes have significantly better performance than binary polar codes on the binary-input AWGN channel. We also discuss an interpretation of polar codes in terms of algebraic geometry codes, and further show that polar codes using Hermitian codes have asymptotically good performance.
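
    For contrast with the non-binary construction, a minimal recursive binary polar encoder with Arikan's 2x2 kernel [[1,0],[1,1]] (the paper replaces this kernel with a Reed-Solomon matrix); it computes the codeword for the n-fold Kronecker power of the kernel, for a block length that is a power of two:

      def polar_encode(u):
          """Binary polar transform: recursively emit (u1 XOR u2, u2)."""
          n = len(u)                  # must be a power of two
          if n == 1:
              return u[:]
          half = n // 2
          left = [u[i] ^ u[i + half] for i in range(half)]
          return polar_encode(left) + polar_encode(u[half:])

      print(polar_encode([1, 0, 1, 1]))   # [1, 1, 0, 1]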

  2. Distributed Video Coding: Iterative Improvements

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...

  3. Polynomial weights and code constructions

    Massey, J; Costello, D; Justesen, Jørn


    polynomial included. This fundamental property is then used as the key to a variety of code constructions, including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes"; 2) a new class of "repeated-root" cyclic codes that are subcodes of the binary Reed-Muller codes and can be very simply instrumented; 3) a new class of constacyclic codes that are subcodes of the p-ary "Reed-Muller codes"; 4) two new classes of binary convolutional codes with large "free distance" derived from known binary cyclic codes; 5) two new classes of long-constraint-length binary convolutional codes derived from 2^r-ary Reed-Solomon codes; and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.

  4. Examining Different Regions of Relevance: From Highly Relevant to Not Relevant.

    Spink, Amanda; Greisdorf, Howard; Bateman, Judy


    Proposes a useful concept of relevance as a relationship and an effect on the movement of a user through the iterative stages of their information seeking process, and that users' relevance judgments can be plotted on a Three-Dimensional Spatial Model of Relevance Level, Degree and Time. Discusses implications for the development of information…

  5. Classification system adopted for fixed cutter bits

    Winters, W.J.; Doiron, H.H.


    The drilling industry has begun adopting the 1987 International Association of Drilling Contractors' (IADC) method for classifying fixed cutter drill bits. By studying the classification codes on bit records and properly applying the new IADC fixed cutter dull grading system to recently run bits, the end-user should be able to improve the selection and usage of fixed cutter bits. Several users are developing databases for fixed cutter bits in an effort to relate field performance to some of the more prominent bit design characteristics.

  6. Product Codes for Optical Communication

    Andersen, Jakob Dahl


    Many optical communication systems might benefit from forward error correction. We present a hard-decision decoding algorithm for the "Block Turbo Codes", suitable for optical communication, which makes this coding scheme an alternative to Reed-Solomon codes.

  7. Some new ternary linear codes

    Rumen Daskalov


    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].

  8. Algebraic geometric codes with applications

    CHEN Hao


    The theory of linear error-correcting codes from algebraic geometric curves (algebraic geometric (AG) codes or geometric Goppa codes) has been well developed since the work of Goppa and Tsfasman, Vladut, and Zink in 1981-1982. In this paper we introduce to readers some recent progress in algebraic geometric codes and their applications in quantum error-correcting codes, secure multi-party computation and the construction of good binary codes.

  9. Concepts of Classification and Taxonomy Phylogenetic Classification

    Fraix-Burnet, D.


    Phylogenetic approaches to classification have been heavily developed in biology by bioinformaticians. But these techniques have applications in other fields, in particular in linguistics. Their main characteristic is to search for relationships between the objects or species under study, instead of grouping them by similarity. They are thus rather well suited for any kind of evolutionary objects. For nearly fifteen years, astrocladistics has explored the use of Maximum Parsimony (or cladistics) for astronomical objects like galaxies or globular clusters. In this lesson we will learn how it works.

  10. Origin and evolution of the genetic code: the universal enigma.

    Koonin, Eugene V; Novozhilov, Artem S


    The genetic code is nearly universal, and the arrangement of the codons in the standard codon table is highly nonrandom. The three main concepts on the origin and evolution of the code are the stereochemical theory, according to which codon assignments are dictated by physicochemical affinity between amino acids and the cognate codons (anticodons); the coevolution theory, which posits that the code structure coevolved with amino acid biosynthesis pathways; and the error minimization theory under which selection to minimize the adverse effect of point mutations and translation errors was the principal factor of the code's evolution. These theories are not mutually exclusive and are also compatible with the frozen accident hypothesis, that is, the notion that the standard code might have no special properties but was fixed simply because all extant life forms share a common ancestor, with subsequent changes to the code, mostly, precluded by the deleterious effect of codon reassignment. Mathematical analysis of the structure and possible evolutionary trajectories of the code shows that it is highly robust to translational misreading but there are numerous more robust codes, so the standard code potentially could evolve from a random code via a short sequence of codon series reassignments. Thus, much of the evolution that led to the standard code could be a combination of frozen accident with selection for error minimization although contributions from coevolution of the code with metabolic pathways and weak affinities between amino acids and nucleotide triplets cannot be ruled out. However, such scenarios for the code evolution are based on formal schemes whose relevance to the actual primordial evolution is uncertain. A real understanding of the code origin and evolution is likely to be attainable only in conjunction with a credible scenario for the evolution of the coding principle itself and the translation system.

  11. Interobserver variation in classification of malleolar fractures

    Verhage, S.M.; Hoogendoorn, J.M. [MC Haaglanden, Department of Surgery, The Hague (Netherlands); Secretariaat Heelkunde, MC Haaglanden, locatie Westeinde, Postbus 432, CK, The Hague (Netherlands); Rhemrev, S.J. [MC Haaglanden, Department of Surgery, The Hague (Netherlands); Keizer, S.B. [MC Haaglanden, Department of Orthopaedic Surgery, The Hague (Netherlands); Quarles van Ufford, H.M.E. [MC Haaglanden, Department of Radiology, The Hague (Netherlands)


    Classification of malleolar fractures is a matter of debate. In the ideal situation, a classification system is easy to use, shows good inter- and intraobserver agreement, and has implications for treatment or research. In this interobserver study, four observers assigned 100 X-rays to the Weber, AO and Lauge-Hansen classifications. In case of a trimalleolar fracture, the size of the posterior fragment was measured. Interobserver agreement was calculated with Cohen's kappa. Agreement on the size of the posterior fragment was calculated with the intraclass correlation coefficient. Moderate agreement was found with all classification systems: Weber (K = 0.49), AO (K = 0.45) and Lauge-Hansen (K = 0.47). Interobserver agreement on the presence of a posterior fracture was substantial (K = 0.63). Estimation of the size of the fragment showed moderate agreement (ICC = 0.57). Classification according to the classical systems showed moderate interobserver agreement, probably due to an unclear trauma mechanism or the difficult relation between the level of the fibular fracture and the syndesmosis. Substantial agreement on posterior malleolar fractures is mostly due to small (<5 %) posterior fragments. A classification system that describes the presence and location of fibular fractures, the presence of medial malleolar fractures or deep deltoid ligament injury, and the presence of relevant and dislocated posterior malleolar fractures is more useful in the daily setting than the traditional systems. In case of a trimalleolar fracture, a CT scan is in our opinion very useful for the detection of small posterior fragments and for preoperative planning. (orig.)
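
    For illustration, Cohen's kappa for two raters can be computed directly with scikit-learn (the ratings below are toy data, not the study's):

      from sklearn.metrics import cohen_kappa_score

      observer_1 = ["A", "B", "B", "C", "B", "A", "C", "B", "A", "B"]
      observer_2 = ["A", "B", "C", "C", "B", "B", "C", "B", "A", "A"]

      print(round(cohen_kappa_score(observer_1, observer_2), 2))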

  12. The Efficient Coding of Speech: Cross-Linguistic Differences.

    Guevara Erra, Ramon; Gervain, Judit


    Neural coding in the auditory system has been shown to obey the principle of efficient neural coding. The statistical properties of speech appear to be particularly well matched to the auditory neural code. However, only English has so far been analyzed from an efficient coding perspective. It thus remains unknown whether such an approach is able to capture differences between the sound patterns of different languages. Here, we use independent component analysis to derive information theoretically optimal, non-redundant codes (filter populations) for seven typologically distinct languages (Dutch, English, Japanese, Marathi, Polish, Spanish and Turkish) and relate the statistical properties of these filter populations to documented differences in the speech rhythms (Analysis 1) and consonant inventories (Analysis 2) of these languages. We show that consonant class membership plays a particularly important role in shaping the statistical structure of speech in different languages, suggesting that acoustic transience, a property that discriminates consonant classes from one another, is highly relevant for efficient coding.
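
    A minimal sketch of the analysis pipeline's core step (ICA over short waveform windows yields a non-redundant filter population; random data stands in for speech here, and the window and component counts are illustrative):

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)
      speech = rng.standard_normal(80_000)      # placeholder waveform

      win = 128                                 # samples per analysis window
      windows = speech[: len(speech) // win * win].reshape(-1, win)

      ica = FastICA(n_components=32, random_state=0, max_iter=500)
      ica.fit(windows)
      print(ica.components_.shape)              # (32, 128): rows are learned filters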

  13. Optical coding theory with Prime

    Kwong, Wing C


    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book had been specifically dedicated to optical coding theory until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct ...
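
    For a taste of the book's subject, here is a hedged sketch of the classic prime-sequence construction over GF(p), p prime: sequence i is s_i(j) = i*j mod p, and each sequence is time-spread into a binary codeword of length p^2 and weight p. The function name is illustrative, not from the book.

        import numpy as np

        def prime_codes(p):
            """Binary prime codes of length p*p built from the prime
            sequences s_i(j) = i*j mod p (p must be prime)."""
            codes = np.zeros((p, p * p), dtype=int)
            for i in range(p):
                for j in range(p):
                    codes[i, j * p + (i * j) % p] = 1   # one pulse per block of p chips
            return codes

        C = prime_codes(5)      # 5 codewords, each of length 25 and weight 5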

  14. Algebraic and stochastic coding theory

    Kythe, Dave K


    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
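
    To make the linear-code material concrete, the sketch below encodes a 4-bit message with the (7,4) Hamming code and corrects a single flipped bit via syndrome decoding; the matrices follow the standard systematic form, not necessarily the book's notation.

        import numpy as np

        # Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]
        P = np.array([[1, 1, 0],
                      [1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 1]])
        G = np.hstack([np.eye(4, dtype=int), P])
        H = np.hstack([P.T, np.eye(3, dtype=int)])

        msg = np.array([1, 0, 1, 1])
        codeword = msg @ G % 2

        received = codeword.copy()
        received[2] ^= 1                           # channel flips one bit

        syndrome = H @ received % 2                # equals the column of H at
        if syndrome.any():                         # the error position
            error_pos = int(np.argmax((H.T == syndrome).all(axis=1)))
            received[error_pos] ^= 1
        assert np.array_equal(received, codeword)  # single error corrected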

  15. Golden Coded Multiple Beamforming

    Li, Boyu


    The Golden Code is a full-rate full-diversity space-time code, which achieves maximum coding gain for Multiple-Input Multiple-Output (MIMO) systems with two transmit and two receive antennas. Since four information symbols taken from an M-QAM constellation are selected to construct one Golden Code codeword, a maximum likelihood decoder using sphere decoding has the worst-case complexity of O(M^4), when the Channel State Information (CSI) is available at the receiver. Previously, this worst-case complexity was reduced to O(M^(2.5)) without performance degradation. When the CSI is known by the transmitter as well as the receiver, beamforming techniques that employ singular value decomposition are commonly used in MIMO systems. In the absence of channel coding, when a single symbol is transmitted, these systems achieve the full diversity order provided by the channel. However, this property is lost when multiple symbols are transmitted simultaneously. However, uncoded multiple beamforming can achieve the full div...
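
    A hedged numpy sketch of the codeword construction described above, following the standard Belfiore-Rekaya-Viterbo form; the symbol values are arbitrary 4-QAM points chosen for illustration.

        import numpy as np

        def golden_codeword(a, b, c, d):
            """2x2 Golden Code codeword from four QAM symbols (standard form)."""
            theta = (1 + np.sqrt(5)) / 2               # golden ratio
            theta_c = (1 - np.sqrt(5)) / 2             # its algebraic conjugate
            alpha = 1 + 1j * (1 - theta)
            alpha_c = 1 + 1j * (1 - theta_c)
            X = np.array([
                [alpha * (a + b * theta),          alpha * (c + d * theta)],
                [1j * alpha_c * (c + d * theta_c), alpha_c * (a + b * theta_c)],
            ])
            return X / np.sqrt(5)                      # energy normalization

        # One codeword carries four 4-QAM symbols over two antennas / two channel uses
        X = golden_codeword(1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j)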

  16. Coded source neutron imaging

    Bingham, Philip R [ORNL]; Santos-Villalobos, Hector J [ORNL]


    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  17. Coded source neutron imaging

    Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken


    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100μm and 10μm aperture hole diameters show resolutions matching the hole diameters.
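
    The MTF computation mentioned in both abstracts is, in essence, the normalized magnitude of the Fourier transform of the line spread function. A minimal sketch with a synthetic Gaussian LSF (widths and sampling steps are illustrative, not the paper's values):

        import numpy as np

        def mtf_from_lsf(lsf, dx):
            """MTF as the normalized magnitude spectrum of the line spread
            function, sampled every dx (e.g. mm per sample)."""
            lsf = lsf / lsf.sum()                      # unit area
            mtf = np.abs(np.fft.rfft(lsf))
            freqs = np.fft.rfftfreq(len(lsf), d=dx)    # cycles per unit length
            return freqs, mtf / mtf[0]                 # normalize to DC

        # Synthetic LSF: Gaussian of ~0.1 mm sigma sampled every 0.01 mm
        x = np.arange(-2.0, 2.0, 0.01)
        freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.1**2)), dx=0.01)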

  18. Speech coding code- excited linear prediction

    Bäckström, Tom


    This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students and professionals with a background in speech, audio, and/or digital signal processing. It provides a clear connection between the whys, hows, and whats, thus enabling a clear view of the necessity, purpose, and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: What do we want to achieve, and especially why is this goal important? Resource Information: What information is available, and how can it be useful? Resource Platform: What kind of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory, and acoustic properties, and the transmission capacity of devices used. The book goes on to address Solutions: Which solutions have been proposed, and how can they be used to reach the stated goals, and ...

  19. Phase-coded pulse aperiodic transmitter coding

    I. I. Virtanen


    Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers – about seven milliseconds – whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.

  20. Myoelectric walking mode classification for transtibial amputees.

    Miller, Jason D; Beazer, Mahyo Seyedali; Hahn, Michael E


    Myoelectric control algorithms have the potential to detect an amputee's motion intent and allow the prosthetic to adapt to changes in walking mode. The development of a myoelectric walking mode classifier for transtibial amputees is outlined. Myoelectric signals from four muscles (tibialis anterior, medial gastrocnemius (MG), vastus lateralis, and biceps femoris) were recorded for five nonamputee subjects and five transtibial amputees over a variety of walking modes: level ground at three speeds, ramp ascent/descent, and stair ascent/descent. These signals were decomposed into relevant features (mean absolute value, variance, waveform length, number of slope sign changes, number of zero crossings) over three subwindows from the gait cycle and used to test classification algorithms for transtibial amputees, namely linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Detection of all seven walking modes had an accuracy of 97.9% for the amputee group and 94.7% for the nonamputee group. Misclassifications occurred most frequently between different walking speeds due to the similar nature of the gait pattern. Stair ascent/descent had the best classification accuracy with 99.8% for the amputee group and 100.0% for the nonamputee group. Stability of the developed classifier was explored using an electrode shift disturbance for each muscle. Shifting the electrode placement of the MG had the most pronounced effect on the classification accuracy for both groups. No increase in classification accuracy was observed when using SVM compared to LDA for the current dataset.
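
    The five time-domain features listed above are simple to compute per subwindow. A sketch on synthetic data (the threshold eps and the window length are illustrative choices, not the paper's):

        import numpy as np

        def emg_features(x, eps=1e-8):
            """Time-domain features commonly used in myoelectric control."""
            dx = np.diff(x)
            return {
                "mav": float(np.mean(np.abs(x))),            # mean absolute value
                "var": float(np.var(x)),                     # variance
                "wl": float(np.sum(np.abs(dx))),             # waveform length
                "ssc": int(np.sum(dx[:-1] * dx[1:] < -eps)), # slope sign changes
                "zc": int(np.sum(x[:-1] * x[1:] < 0)),       # zero crossings
            }

        rng = np.random.default_rng(0)
        window = rng.standard_normal(200)   # one subwindow of one EMG channel
        features = emg_features(window)     # would feed an LDA or SVM classifier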

  1. Blanket-relevant liquid metal MHD channel flows: Data base and optimization simulation development

    Evtushenko, I.A.; Kirillov, I.R.; Sidorenkov, S.I. [D.V. Efremov Inst. of Electrophysical Apparatus, St Petersburg (Russian Federation)


    The problems of generalization and integration of test, theoretical and design data relevant to the liquid metal (LM) blanket are discussed in the present work. First results on the MHD database and LM blanket optimization codes are presented.

  2. Nested Quantum Error Correction Codes

    Wang, Zhuo; Fan, Heng; Vedral, Vlatko


    The theory of quantum error correction was established more than a decade ago as the primary tool for fighting decoherence in quantum information processing. Although great progress has already been made in this field, limited methods are available for constructing new quantum error correction codes from old ones. Here we exhibit a simple and general method to construct new quantum error correction codes by nesting certain quantum codes together. The problem of finding long quantum error correction codes is reduced to that of searching for several short quantum codes with certain properties. Our method works for codes of any length and distance, and is quite efficient for constructing optimal or near-optimal codes. The two main known methods for constructing new codes from old ones in quantum error correction theory, concatenating and pasting, can be understood in the framework of nested quantum error correction codes.

  3. Supernova Photometric Classification Challenge

    Kessler, Richard; Jha, Saurabh; Kuhlmann, Stephen


    We have publicly released a blinded mix of simulated SNe, with types (Ia, Ib, Ic, II) selected in proportion to their expected rate. The simulation is realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point spread function and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia type SNe are based on spectroscopically confirmed light curves that include unpublished non-Ia samples donated from the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). We challenge scientists to run their classification algorithms and report a type for each SN. A spectroscopically confirmed subset is provided for training. The goals of this challenge are to (1) learn the relative strengths and weaknesses of the different classification algorithms, (2) use the results to improve classification algorithms, and (3) understand what spectroscopically confirmed sub-...

  4. Classification in Medical Imaging

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition, a good metric is required to measure distance or similarity between feature points so that the classification becomes feasible. Furthermore, in order to build a successful classifier, one needs to deeply understand how classifiers work. This thesis focuses on these three aspects of classification ... to segment breast tissue and pectoral muscle area from the background in mammograms. The second focus is the choice of metric and its influence on the feasibility of a classifier, especially on the k-nearest neighbors (k-NN) algorithm, with medical applications on breast cancer prediction and calcification ...

  5. Classification of hand eczema

    Agner, T; Aalto-Korte, K; Andersen, K E;


    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were ... HE, protein contact dermatitis/contact urticaria, hyperkeratotic endogenous eczema and vesicular endogenous eczema, respectively. An additional diagnosis was given if symptoms indicated that factors additional to the main diagnosis were of importance for the disease. RESULTS: Four hundred and twenty ...%) could not be classified. 38% had one additional diagnosis and 26% had two or more additional diagnoses. Eczema on feet was found in 30% of the patients, statistically significantly more frequently associated with hyperkeratotic and vesicular endogenous eczema. CONCLUSION: We find that the classification ...

  6. Acoustic classification of dwellings

    Berardi, Umberto; Rasmussen, Birgit


    Schemes for the classification of dwellings according to different building performances have been proposed in the last years worldwide. The general idea behind these schemes relates to the positive impact a higher label, and thus a better performance, should have. In particular, focusing on sound insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms ... exchanging experiences about constructions fulfilling different classes, reducing trade barriers, and finally increasing the sound insulation of dwellings.

  7. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results of fuzzy rule-based classification in our CBIR. Furthermore, these results are used to construct a search engine taking into account data mining.

  8. Cellular image classification

    Xu, Xiang; Lin, Feng


    This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use the antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations for the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, the reading of imaging results is significantly influenced by one’s qualification and reading systems, causing high intra- and inter-laboratory variance. The authors present a low-order LP21 fiber mode for optical single cell manipulation and imaging staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical...

  9. MHD Generation Code

    Frutos-Alfaro, Francisco


    A program that generates Fortran and C code for the full magnetohydrodynamic (MHD) equations is presented. The program uses the free computer algebra system REDUCE, whose EXCALC package implements exterior calculus. The advantage of this program is that it can be modified to include other complex metrics or spacetimes. Its output is post-processed by a Linux script which creates a new REDUCE program to manipulate the MHD equations, yielding code that can be used as a seed for a numerical MHD code. As an example, we present part of the output of our programs for Cartesian coordinates and show how to do the discretization.
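
    The same generate-then-discretize idea can be sketched with Python's sympy in place of REDUCE/EXCALC; the expression below is a toy stand-in for the authors' MHD system, and the function name is invented for illustration.

        from sympy import symbols
        from sympy.utilities.codegen import codegen

        # Toy stand-in for one symbolic right-hand side of the MHD equations:
        # the z-component of v x B, chosen only for illustration
        vx, vy, Bx, By = symbols("vx vy Bx By")
        rhs = vx * By - vy * Bx

        # Emit compilable C (a .c/.h pair) from the symbolic expression
        (c_name, c_code), (h_name, h_code) = codegen(("dBz_dt", rhs), "C")
        print(c_code)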

  10. Autocatalysis, information and coding.

    Wills, P R


    Autocatalytic self-construction in macromolecular systems requires the existence of a reflexive relationship between structural components and the functional operations they perform to synthesise themselves. The possibility of reflexivity depends on formal, semiotic features of the catalytic structure-function relationship, that is, the embedding of catalytic functions in the space of polymeric structures. Reflexivity is a semiotic property of some genetic sequences. Such sequences may serve as the basis for the evolution of coding as a result of autocatalytic self-organisation in a population of assignment catalysts. Autocatalytic selection is a mechanism whereby matter becomes differentiated in primitive biochemical systems. In the case of coding self-organisation, it corresponds to the creation of symbolic information. Prions are present-day entities whose replication through autocatalysis reflects aspects of biological semiotics less obvious than genetic coding.

  11. Coded Splitting Tree Protocols

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar


    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity.

  12. Adjoint code generator

    CHENG Qiang; CAO JianWen; WANG Bin; ZHANG HaiBin


    The adjoint code generator (ADG) is developed to produce adjoint codes, which analytically calculate gradients and Hessian-vector products at a cost independent of the number of independent variables. Unlike other automatic differentiation tools, the implementation of ADG has the advantages of using the least-program-behavior decomposition method and several static dependence analysis techniques. In this paper we first address the relevant concepts and fundamentals, and then introduce the functionality and features of ADG. In particular, we also discuss the design architecture of ADG and implementation details, including the recomputation and storing strategy and several techniques for code optimization. Some experimental results from several applications are presented at the end.
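
    The core idea behind an adjoint (reverse-mode) code is that each operation records a local derivative, and gradients flow backwards along these records at a cost independent of the number of inputs. A minimal, self-contained sketch of that idea (not ADG's actual implementation):

        class Var:
            """Tiny reverse-mode AD node: a value plus recorded local derivatives."""
            def __init__(self, value):
                self.value, self.grad, self.parents = value, 0.0, ()

            def __add__(self, other):
                out = Var(self.value + other.value)
                out.parents = ((self, 1.0), (other, 1.0))
                return out

            def __mul__(self, other):
                out = Var(self.value * other.value)
                out.parents = ((self, other.value), (other, self.value))
                return out

            def backward(self, seed=1.0):
                # Accumulate the chain-rule contribution of every path;
                # fine for small expression trees (real tools topo-sort the graph).
                self.grad += seed
                for parent, local in self.parents:
                    parent.backward(seed * local)

        x, y = Var(2.0), Var(3.0)
        f = x * y + x                            # f = xy + x
        f.backward()
        assert (x.grad, y.grad) == (4.0, 2.0)    # df/dx = y + 1, df/dy = x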

  13. Code query by example

    Vaucouleur, Sebastien


    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrades of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis to automatically detect potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: programmers working in this field are often not computer science specialists but rather domain experts. Hence, they require a simple language to express custom rules.

  14. Decision forests for machine learning classification of large, noisy seafloor feature sets

    Lawson, Ed; Smith, Denson; Sofge, Donald; Elmore, Paul; Petry, Frederick


    Extremely randomized trees (ET) classifiers, an extension of random forests (RF), are applied to the classification of features such as seamounts derived from bathymetry data. These data are characterized by sparse training sets and large, noisy feature sets, as often found in other geospatial data. A variety of feature metrics may be useful for this task, and we use a large number of metrics relevant to finding seamounts. The major results include: an outstanding seamount classification accuracy of 97%; an automated process to produce the classification features most useful to geophysical scientists (as represented by the feature metrics); and a demonstration that topography provides the most important data representation for classification. Besides the good classification accuracy, the human-understandable set of metrics that the classifier found most relevant is discussed.
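
    A hedged scikit-learn sketch of this kind of classifier together with the feature-ranking step; the data here are synthetic stand-ins for the bathymetry-derived terrain metrics.

        import numpy as np
        from sklearn.ensemble import ExtraTreesClassifier
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in: rows are candidate features (e.g. bathymetric peaks),
        # columns are terrain metrics; labels mark seamount vs. non-seamount
        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 20))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

        clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())   # classification accuracy

        clf.fit(X, y)
        ranking = np.argsort(clf.feature_importances_)[::-1]  # most useful metrics first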

  15. Spread codes and spread decoding in network coding

    Manganiello, F; Gorla, E.; Rosenthal, J.


    In this paper we introduce the class of spread codes for use in random network coding. Spread codes are based on the construction of spreads in finite projective geometry. The major contribution of the paper is an efficient decoding algorithm of spread codes up to half the minimum distance.

  16. Graph Codes with Reed-Solomon Component Codes

    Høholdt, Tom; Justesen, Jørn


    We treat a specific case of codes based on bipartite expander graphs coming from finite geometries. The code symbols are associated with the branches and the symbols connected to a given node are restricted to be codewords in a Reed-Solomon code. We give results on the parameters of the codes...

  17. User perspectives on relevance criteria

    Maglaughlin, Kelly L.; Sonnenwald, Diane H.


    ... matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments, and that most criteria can have either a positive or negative contribution to the relevance of a document. The criteria most frequently mentioned by study participants were content, followed by criteria characterizing the full text document. These findings may have implications for relevance feedback in information retrieval systems, suggesting that systems accept and utilize multiple positive and negative relevance criteria from users. Systems designers may want to focus on supporting content criteria followed by full text criteria as these may provide the greatest cost ...

  18. Relevance-driven Pragmatic Inferences



    Relevance theory, an inferential approach to pragmatics, claims that the hearer is expected to pick out the input of optimal relevance from a mass of alternative inputs produced by the speaker in order to interpret the speaker's intentions. The degree of the relevance of an input can be assessed in terms of cognitive effects and the processing effort. The input of optimal relevance is the one yielding the greatest positive cognitive effect and requiring the least processing effort. This paper attempts to assess the degrees of the relevance of a mass of alternative inputs produced by an imaginary speaker from the perspective of her corresponding hearer in terms of cognitive effects and the processing effort, with a view to justifying the feasibility of the principle of relevance in pragmatic inferences.

  19. Automated Classification of Selected Data Elements from Free-text Diagnostic Reports for Clinical Research.

    Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra


    In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze whether it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, together with manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnoses paragraph was extracted from the clinical reports of one third of the patients, selected at random, from the multiple myeloma research database of Heidelberg University Hospital (737 patients in total). An EDC system was set up, and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist, and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data, and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total 15 different pipelines were examined and assessed by a ten-fold cross-validation, reiterated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing the approximate randomization test was used. The created annotated corpus consists of 737 different diagnoses paragraphs with a ...
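
    A hedged sketch of such a multiclass pipeline with scikit-learn, using TF-IDF features, a logistic-regression (maximum entropy) classifier and a linear SVM under ten-fold cross-validation; the snippets and labels are invented stand-ins for the German diagnosis paragraphs.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Invented stand-ins for diagnosis paragraphs and their target class
        texts = ["multiples Myelom Typ IgG kappa, Stadium III",
                 "multiples Myelom Typ IgA lambda, Stadium II",
                 "Plasmozytom, komplette Remission",
                 "multiples Myelom Typ IgG kappa, Rezidiv"] * 25
        labels = ["IgG", "IgA", "other", "IgG"] * 25

        for name, clf in [("MEC", LogisticRegression(max_iter=1000)),
                          ("SVM", LinearSVC())]:
            pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
            scores = cross_val_score(pipe, texts, labels, cv=10)  # ten-fold CV
            print(name, scores.mean())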

  20. Deep Recurrent Neural Networks for Supernovae Classification

    Charnock, Tom; Moss, Adam


    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at ...). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, however the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernovae type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
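
    A minimal sketch of this kind of recurrent classifier, using PyTorch as an assumed framework (the paper's own code and hyperparameters may differ): a bidirectional LSTM reads the light-curve sequence and a linear head classifies from the final step.

        import torch
        import torch.nn as nn

        class LightCurveRNN(nn.Module):
            def __init__(self, n_inputs=5, hidden=16, n_classes=2):
                super().__init__()
                self.lstm = nn.LSTM(n_inputs, hidden,
                                    batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, n_classes)

            def forward(self, x):              # x: (batch, time, n_inputs)
                out, _ = self.lstm(x)
                return self.head(out[:, -1])   # logits from the last time step

        model = LightCurveRNN()
        batch = torch.randn(8, 40, 5)   # 8 curves, 40 epochs, time + griz fluxes
        logits = model(batch)           # (8, 2): type-Ia vs non-type-Ia scores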

  1. The paradox of atheoretical classification

    Hjørland, Birger


    A distinction can be made between “artificial classifications” and “natural classifications,” where artificial classifications may adequately serve some limited purposes, but natural classifications are overall most fruitful by allowing inference and thus many different purposes. There is strong support for the view that a natural classification should be based on a theory (and, of course, that the most fruitful theory provides the most fruitful classification). Nevertheless, atheoretical (or “descriptive”) classifications are often produced. Paradoxically, atheoretical classifications may be very successful. The best example of a successful “atheoretical” classification is probably the prestigious Diagnostic and Statistical Manual of Mental Disorders (DSM) since its third edition from 1980. Based on such successes one may ask: Should the claim that classifications ideally are natural ...

  2. Goal relevance as a quantitative model of human task relevance.

    Tanner, James; Itti, Laurent


    The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When new data are observed, its goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal.
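
    The definition is easy to operationalize: goal relevance is the KL divergence from the prior belief distribution to the posterior after the observation. A sketch with invented numbers:

        import numpy as np

        def kl_divergence(p, q):
            """D(p || q) in bits, assuming q > 0 wherever p > 0."""
            p, q = np.asarray(p, float), np.asarray(q, float)
            m = p > 0
            return float(np.sum(p[m] * np.log2(p[m] / q[m])))

        prior = np.array([0.25, 0.25, 0.25, 0.25])      # beliefs over four routes
        posterior = np.array([0.70, 0.10, 0.10, 0.10])  # after seeing an obstacle

        goal_relevance = kl_divergence(posterior, prior)  # ~0.64 bits here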

  3. Principles of speech coding

    Ogunfunmi, Tokunbo


    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the ...

  4. Securing mobile code.

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik


    If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called ...

  5. Bosniak Classification system

    Graumann, Ole; Osther, Susanne Sloth; Karstoft, Jens;


    Purpose: To investigate the inter- and intra-observer agreement among experienced uroradiologists when categorizing complex renal cysts according to the Bosniak classification. Material and Methods: The original categories of 100 cystic renal masses were chosen as “Gold Standard” (GS), established ... According to the calculated weighted κ, all readers performed “very good” for both inter-observer and intra-observer variation. Most variation was seen in cysts categorized as Bosniak II, IIF, and III. These results show that radiologists who evaluate complex renal cysts routinely may apply the Bosniak classification ...

  6. Acoustic classification of dwellings

    Berardi, Umberto; Rasmussen, Birgit


    ... insulation performance, national schemes for sound classification of dwellings have been developed in several European countries. These schemes define acoustic classes according to different levels of sound insulation. Due to the lack of coordination among countries, a significant diversity in terms of descriptors, number of classes, and class intervals occurred between national schemes. However, a proposal for an “acoustic classification scheme for dwellings” has been developed recently in the European COST Action TU0901 with 32 member countries. This proposal has been accepted as an ISO work item. This paper ...

  7. Classification of iconic images

    Zrianina, Mariia; Kopf, Stephan


    Iconic images represent an abstract topic and use a presentation that is intuitively understood within a certain cultural context. For example, the abstract topic “global warming” may be represented by a polar bear standing alone on an ice floe. Such images are widely used in media and their automatic classification can help to identify high-level semantic concepts. This paper presents a system for the classification of iconic images. It uses a variation of the Bag of Visual Words approach wi...

  8. Classification problem in CBIR

    Tatiana Jaworska


    At present a great deal of research is being done in different aspects of Content-Based Image Retrieval (CBIR). Image classification is one of the most important tasks in image retrieval that must be dealt with. The primary issue we have addressed is: how can fuzzy set theory be used to handle crisp image data? We propose fuzzy rule-based classification of image objects. To achieve this goal we have built fuzzy rule-based classifiers for crisp data. In this paper we present the results ...

  9. Latent classification models

    Langseth, Helge; Nielsen, Thomas Dyhre


    One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.

  10. Minimum Error Entropy Classification

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A


    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.

  11. Constructing criticality by classification

    Machacek, Erika


    This paper explores the role of expertise, the nature of criticality, and their relationship to securitisation as mineral raw materials are classified. It works with the construction of risk along the liberal logic of security to explore how "key materials" are turned into "critical materials", legitimizing a criticality discourse. Specifically, the paper introduces a typology delineating the inferences made by the experts from their produced recommendations in the classification of rare earth element criticality. The paper argues that the classification is a specific process of constructing risk ...

  12. Towards automatic classification of all WISE sources

    Kurcz, A.; Bilicki, M.; Solarz, A.; Krupa, M.; Pollo, A.; Małek, K.


    Context. The Wide-field Infrared Survey Explorer (WISE) has detected hundreds of millions of sources over the entire sky. Classifying them reliably is, however, a challenging task owing to degeneracies in WISE multicolour space and low levels of detection in its two longest-wavelength bandpasses. Simple colour cuts are often not sufficient; for satisfactory levels of completeness and purity, more sophisticated classification methods are needed. Aims: Here we aim to obtain comprehensive and reliable star, galaxy, and quasar catalogues based on automatic source classification in full-sky WISE data. This means that the final classification will employ only parameters available from WISE itself, in particular those which are reliably measured for the majority of sources. Methods: For the automatic classification we applied a supervised machine learning algorithm, support vector machines (SVM). It requires a training sample with relevant classes already identified, and we chose to use the SDSS spectroscopic dataset (DR10) for that purpose. We tested the performance of two kernels used by the classifier, and determined the minimum number of sources in the training set required to achieve stable classification, as well as the minimum dimension of the parameter space. We also tested SVM classification accuracy as a function of extinction and apparent magnitude. Thus, the calibrated classifier was finally applied to all-sky WISE data, flux-limited to 16 mag (Vega) in the 3.4 μm channel. Results: By calibrating on the test data drawn from SDSS, we first established that a polynomial kernel is preferred over a radial one for this particular dataset. Next, using three classification parameters (W1 magnitude, W1-W2 colour, and a differential aperture magnitude) we obtained very good classification efficiency in all the tests. At the bright end, the completeness for stars and galaxies reaches ~95%, deteriorating to ~80% at W1 = 16 mag, while for quasars it stays at a level of
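
    A hedged sketch of the calibrated setup: a polynomial-kernel SVM on the three parameters named above. The values and class labels here are synthetic and random, so the outputs mean nothing; only the plumbing is shown.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic stand-ins for W1 magnitude, W1-W2 colour and a
        # differential aperture magnitude; labels 0/1/2 = star/galaxy/quasar
        rng = np.random.default_rng(0)
        X = rng.standard_normal((600, 3))
        y = rng.integers(0, 3, 600)

        clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
        clf.fit(X, y)
        predictions = clf.predict(X)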

  13. Application of the International Classification of Functioning, Disability and Health (ICF) to people with dysphagia following non-surgical head and neck cancer management.

    Nund, Rebecca L; Scarinci, Nerina A; Cartmill, Bena; Ward, Elizabeth C; Kuipers, Pim; Porceddu, Sandro V


    The International Classification of Functioning, Disability, and Health (ICF) is an internationally recognized framework which allows its user to describe the consequences of a health condition on an individual in the context of their environment. With growing recognition that dysphagia can have broad ranging physical and psychosocial impacts, the aim of this paper was to identify the ICF domains and categories that describe the full functional impact of dysphagia following non-surgical head and neck cancer (HNC) management, from the perspective of the person with dysphagia. A secondary analysis was conducted on previously published qualitative study data which explored the lived experiences of dysphagia of 24 individuals with self-reported swallowing difficulties following HNC management. Categories and sub-categories identified by the qualitative analysis were subsequently mapped to the ICF using the established linking rules to develop a set of ICF codes relevant to the impact of dysphagia following HNC management. The 69 categories and sub-categories that had emerged from the qualitative analysis were successfully linked to 52 ICF codes. The distribution of these codes across the ICF framework revealed that the components of Body Functions, Activities and Participation, and Environmental Factors were almost equally represented. The findings confirm that the ICF is a valuable framework for representing the complexity and multifaceted impact of dysphagia following HNC. This list of ICF codes, which reflect the diverse impact of dysphagia associated with HNC on the individual, can be used to guide more holistic assessment and management for this population.

  14. Prediction and classification of respiratory motion

    Lee, Suk Jin


    This book describes recent radiotherapy technologies including tools for measuring target position during radiotherapy and tracking-based delivery systems. This book presents a customized prediction of respiratory motion with clustering from multiple patient interactions. The proposed method contributes to the improvement of patient treatments by considering breathing pattern for the accurate dose calculation in radiotherapy systems. Real-time tumor-tracking, where the prediction of irregularities becomes relevant, has yet to be clinically established. The statistical quantitative modeling for irregular breathing classification, in which commercial respiration traces are retrospectively categorized into several classes based on breathing pattern are discussed as well. The proposed statistical classification may provide clinical advantages to adjust the dose rate before and during the external beam radiotherapy for minimizing the safety margin. In the first chapter following the Introduction  to this book, we...

  15. Automated classification of Hipparcos unsolved variables

    Rimoldini, L; Süveges, M; López, M; Sarro, L M; Blomme, J; De Ridder, J; Cuypers, J; Guy, L; Mowlavi, N; Lecoeur-Taïbi, I; Beck, M; Jan, A; Nienartowicz, K; Ordóñez-Blanco, D; Lebzelter, T; Eyer, L; 10.1111/j.1365-2966.2012.21752.x


    We present an automated classification of stars exhibiting periodic, non-periodic and irregular light variations. The Hipparcos catalogue of unsolved variables is employed to complement the training set of periodic variables of Dubath et al. with irregular and non-periodic representatives, leading to 3881 sources in total which describe 24 variability types. The attributes employed to characterize light-curve features are selected according to their relevance for classification. Classifier models are produced with random forests and a multistage methodology based on Bayesian networks, achieving overall misclassification rates under 12 per cent. Both classifiers are applied to predict variability types for 6051 Hipparcos variables associated with uncertain or missing types in the literature.

  16. Musical Instrument Timbres Classification with Spectral Features

    Agostini, Giulio


    A set of features is evaluated for recognition of musical instruments out of monophonic musical signals. Aiming to achieve a compact representation, the adopted features regard only spectral characteristics of sound and are limited in number. On top of these descriptors, various classification methods are implemented and tested. Over a dataset of 1007 tones from 27 musical instruments, support vector machines and quadratic discriminant analysis show comparable results with success rates close to 70% of successful classifications. Canonical discriminant analysis never had momentous results, while nearest neighbours performed on average among the employed classifiers. Strings have been the most misclassified instrument family, while very satisfactory results have been obtained with brass and woodwinds. The most relevant features are demonstrated to be the inharmonicity, the spectral centroid, and the energy contained in the first partial.

  17. Musical Instrument Timbres Classification with Spectral Features

    Agostini, Giulio; Longari, Maurizio; Pollastri, Emanuele


    A set of features is evaluated for recognition of musical instruments out of monophonic musical signals. Aiming to achieve a compact representation, the adopted features regard only spectral characteristics of sound and are limited in number. On top of these descriptors, various classification methods are implemented and tested. Over a dataset of 1007 tones from 27 musical instruments, support vector machines and quadratic discriminant analysis show comparable results with success rates close to 70% of successful classifications. Canonical discriminant analysis never had momentous results, while nearest neighbours performed on average among the employed classifiers. Strings have been the most misclassified instrument family, while very satisfactory results have been obtained with brass and woodwinds. The most relevant features are demonstrated to be the inharmonicity, the spectral centroid, and the energy contained in the first partial.
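
    Among the features named in these two abstracts, the spectral centroid is the simplest to compute: the amplitude-weighted mean frequency of the spectrum. A sketch on a synthetic two-partial tone (sampling rate and durations are illustrative):

        import numpy as np

        def spectral_centroid(x, sr):
            """Amplitude-weighted mean frequency (Hz) of a windowed signal."""
            mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
            return float(np.sum(freqs * mag) / np.sum(mag))

        sr = 44100
        t = np.arange(0, 0.1, 1.0 / sr)
        tone = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
        print(spectral_centroid(tone, sr))   # lands between 440 and 880 Hz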

  18. New code match strategy for wideband code division multiple access code tree management


    Orthogonal variable spreading factor (OVSF) channelization codes are widely used to provide variable data rates for supporting different bandwidth requirements in wideband code division multiple access (WCDMA) systems. A new code match scheme for WCDMA code tree management is proposed. The code match scheme is similar to the existing crowded-first scheme, but when choosing a code for a user it compares only the layer immediately above the allocated codes, whereas the crowded-first scheme may compare all upper layers. The operation of the code match scheme is therefore simple, and the average time delay is decreased by 5.1%. The simulation results also show that the code match strategy can decrease the average code blocking probability by 8.4%.
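
    For orientation, the OVSF code tree itself is easy to generate: each code c spawns children (c, c) and (c, -c), and a code is assignable only if no ancestor or descendant is in use. A sketch of the tree structure (the paper's allocation-strategy comparison is not reproduced here):

        def ovsf_children(code):
            """Each OVSF code c yields children (c, c) and (c, -c)."""
            return [code + code, code + [-v for v in code]]

        def ovsf_layer(sf):
            """All OVSF codes of spreading factor sf (sf a power of two)."""
            layer = [[1]]
            while len(layer[0]) < sf:
                layer = [child for code in layer for child in ovsf_children(code)]
            return layer

        codes = ovsf_layer(8)   # eight mutually orthogonal length-8 codes
        # Allocation rule behind both schemes: a code may be assigned only if
        # none of its ancestors or descendants in the tree is already assigned.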

  19. Reed-Solomon convolutional codes

    Gluesing-Luerssen, H; Schmale, W


    In this paper we will introduce a specific class of cyclic convolutional codes. The construction is based on Reed-Solomon block codes. The algebraic parameters as well as the distance of these codes are determined. This shows that some of these codes are optimal or near optimal.

  20. An Integrated Approach to Battery Health Monitoring using Bayesian Regression, Classification and State Estimation

    National Aeronautics and Space Administration — The application of the Bayesian theory of managing uncertainty and complexity to regression and classification in the form of Relevance Vector Machine (RVM), and to...

  1. Derivation and validation of the Systemic Lupus International Collaborating Clinics classification criteria for systemic lupus erythematosus

    Petri, Michelle; Orbai, Ana-Maria; Alarcón, Graciela S


    The Systemic Lupus International Collaborating Clinics (SLICC) group revised and validated the American College of Rheumatology (ACR) systemic lupus erythematosus (SLE) classification criteria in order to improve clinical relevance, meet stringent methodology requirements, and incorporate new kno...

  2. Motivations of Code-switching among People of Different English Proficiency: A Sociolinguistics Survey

    GUAN Hui


    Code-switching is a linguistic behavior that arises as a result of languages coming into contact. The notion of code-switching was proposed in the 1970s and has been discussed intensively since. This study focuses on the motivations for code-switching on campus, particularly among college students and teachers, who are frequent users. The study aims to find out whether there is any relevance between one's English proficiency and one's motivation for code-switching.

  3. New code of conduct

    Laëtitia Pedroso


    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  4. Physical layer network coding

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki


    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...
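
    At bit level, the PLNC idea in the two-way relay channel reduces to an XOR: the relay broadcasts a XOR b, and each end node strips its own message. A toy sketch (message lengths are arbitrary):

        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.integers(0, 2, 16)   # bits node A sends
        b = rng.integers(0, 2, 16)   # bits node B sends

        # Multiple-access phase: the relay decodes the XOR of the two messages
        # directly from the superimposed signal (the essence of PLNC)
        relay_msg = a ^ b

        # Broadcast phase: each node removes its own bits to recover the other's
        assert np.array_equal(relay_msg ^ a, b)
        assert np.array_equal(relay_msg ^ b, a)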

  5. Corporate governance through codes

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.


    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and compositi

  6. Polar Code Validation


    Summary of POLAR achievements; POLAR code physical models (plasma models). ... The Charge-2 Rocket: the Charge-2 payload was launched on a Black Brant VB from White Sands Missile Range in New Mexico in ...

  7. Corporate governance through codes

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.


    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  8. (Almost) practical tree codes

    Khina, Anatoly


    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  9. Corner neutronic code

    V.P. Bereznev


    An iterative solution process is used, including external iterations for the fission source and internal iterations for the scattering source. The paper presents the results of a cross-verification against the Monte Carlo MMK code [3] on a model of the BN-800 reactor core.

  10. Ready, steady… Code!

    Anaïs Schaeffer


    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  11. Focusing Automatic Code Inspections

    Boogerd, C.J.


    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  12. The Improved Relevance Voxel Machine

    Ganz, Melanie; Sabuncu, Mert; Van Leemput, Koen

    The concept of sparse Bayesian learning has received much attention in the machine learning literature as a means of achieving parsimonious representations of features used in regression and classification. It is an important family of algorithms for sparse signal recovery and compressed sensing...

  13. Mirror neurons and their clinical relevance.

    Rizzolatti, Giacomo; Fabbri-Destro, Maddalena; Cattaneo, Luigi


    One of the most exciting events in neurosciences over the past few years has been the discovery of a mechanism that unifies action perception and action execution. The essence of this 'mirror' mechanism is as follows: whenever individuals observe an action being done by someone else, a set of neurons that code for that action is activated in the observers' motor system. Since the observers are aware of the outcome of their motor acts, they also understand what the other individual is doing without the need for intermediate cognitive mediation. In this Review, after discussing the most pertinent data concerning the mirror mechanism, we examine the clinical relevance of this mechanism. We first discuss the relationship between mirror mechanism impairment and some core symptoms of autism. We then outline the theoretical principles of neurorehabilitation strategies based on the mirror mechanism. We conclude by examining the relationship between the mirror mechanism and some features of the environmental dependency syndromes.

  14. Comparability between the ninth and tenth revisions of the International Classification of Diseases applied to coding causes of death in Spain

    M. Ruiz


    Objective: To analyze comparability between the ninth and tenth revisions of the International Classification of Diseases (ICD) applied to coding causes of death in Spain. Methods: According to the ninth and tenth revisions of the ICD, 80,084 statistical bulletins of mortality registered in 1999 were assigned the basic cause of death. The statistical bulletins corresponded to the Autonomous Communities of Andalusia, Cantabria, Murcia, Navarre and the Basque Country, and the city of Barcelona. The underlying causes of death were classified into 17 groups. Simple correspondence, the kappa index and the comparability ratio for major causes were calculated. Results: A total of 3.6% of deaths changed group due to an increase (36.4%) in infectious and parasitic diseases, mainly because of the inclusion of AIDS, and a corresponding decrease due to the exclusion of endocrine, nutritional and metabolic disorders. Furthermore, myelodysplastic syndrome was moved to the category of neoplasm. The group including nervous system diseases, eye and related diseases, and ear and mastoid apophysis diseases increased (14.7%) at the expense of mental and behavior disorders, due to the inclusion of senile and presenile organic psychosis. Poorly-defined entities increased (14.1%) due to the inclusion of cardiac arrest and its synonyms, together with heart failure, to the detriment of diseases of the vascular system. Diseases of the respiratory system increased (4.8%) due to the inclusion of respiratory failure, previously considered a poorly defined cause. The correspondence for all causes was 96.4% and the kappa index was 94.9%. Conclusions: The introduction of ICD-10 affects the comparability of statistical series of mortality according to cause. The results of this study allow us to identify the main modifications and to quantify the changes in the major causes of mortality in Spain.
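
    For readers unfamiliar with the two agreement statistics used above, the sketch below computes a Cohen-style kappa and per-group comparability ratios (ICD-10 count over ICD-9 count) on made-up group assignments for ten dual-coded deaths:

      from collections import Counter

      icd9 = ["infectious", "circulatory", "circulatory", "respiratory",
              "neoplasm", "ill-defined", "circulatory", "neoplasm",
              "respiratory", "infectious"]
      icd10 = ["infectious", "circulatory", "ill-defined", "respiratory",
               "neoplasm", "ill-defined", "circulatory", "neoplasm",
               "respiratory", "infectious"]

      n = len(icd9)
      p_obs = sum(a == b for a, b in zip(icd9, icd10)) / n   # observed agreement
      c9, c10 = Counter(icd9), Counter(icd10)
      p_exp = sum(c9[g] * c10[g] for g in set(c9) | set(c10)) / n**2   # chance agreement
      kappa = (p_obs - p_exp) / (1 - p_exp)
      print(f"kappa = {kappa:.3f}")

      for g in sorted(set(c9) | set(c10)):
          print(f"{g:12s} comparability ratio = {c10[g] / c9[g]:.2f}")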

  15. 32 CFR 1645.6 - Considerations relevant to granting or denying a claim for Class 4-D.


    ... Defense SELECTIVE SERVICE SYSTEM CLASSIFICATION OF MINISTERS OF RELIGION § 1645.6 Considerations relevant... registrant is requesting classification in Class 4-D because he is a regular minister of religion or because he is a duly ordained minister of religion. (b) If the registrant claims to be a duly...

  16. Shark Teeth Classification

    Brown, Tom; Creel, Sally; Lee, Velda


    On a recent autumn afternoon at Harmony Leland Elementary in Mableton, Georgia, students in a fifth-grade science class investigated the essential process of classification--the act of putting things into groups according to some common characteristics or attributes. While they may have honed these skills earlier in the week by grouping their own…

  17. Sandwich classification theorem

    Alexey Stepanov


    The present note arises from the author's talk at the conference "Ischia Group Theory 2014". For subgroups F ≤ N of a group G, denote by Lat(F,N) the set of all subgroups of N containing F. Let D be a subgroup of G. In this note we study the lattice L = Lat(D,G) and the lattice L′ of subgroups of G normalized by D. We say that L satisfies the sandwich classification theorem if L splits into a disjoint union of sandwiches Lat(F, N_G(F)) over all subgroups F such that the normal closure of D in F coincides with F, where N_G(F) denotes the normalizer of F in G. A similar notion of sandwich classification is introduced for the lattice L′. If D is perfect, i.e. coincides with its commutator subgroup, it turns out that the sandwich classification theorems for L and L′ are equivalent. We also show how to find the basic subgroup F of the sandwiches for L′ and review sandwich classification theorems in algebraic groups over rings.
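
    In display form, the decomposition described in the abstract reads as follows (our transcription of the notation, not a quotation from the paper):

      % D^F denotes the normal closure of D in F, and N_G(F) the
      % normalizer of F in G, as in the abstract above.
      \[
        \operatorname{Lat}(D, G) \;=\;
        \bigsqcup_{\substack{F \le G \\ D^{F} = F}}
        \operatorname{Lat}\bigl(F,\, N_G(F)\bigr)
      \]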

  18. Dynamic Latent Classification Model

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    … as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics in the process as well as to model dependencies between attributes.
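
    A rough static analogue of the per-time-point building block (our sketch, which drops both the temporal dynamics and the mixture aspect of the model) is to project the observed attributes to factor-analyzer latents and classify the latents with naive Bayes:

      from sklearn.datasets import make_classification
      from sklearn.decomposition import FactorAnalysis
      from sklearn.naive_bayes import GaussianNB
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                                 random_state=0)
      # factor-analyzer latents feed a naive Bayes classifier
      model = make_pipeline(FactorAnalysis(n_components=5, random_state=0),
                            GaussianNB())
      print("5-fold accuracy:", cross_val_score(model, X, y, cv=5).mean())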

  19. Classifications in popular music

    van Venrooij, A.; Schmutz, V.; Wright, J.D.


    The categorical system of popular music, exemplified by its genre categories, is a highly differentiated and dynamic classification system. In this article we present work that studies different aspects of these categorical systems in popular music. Following the work of Paul DiMaggio, we focus on four questio…

  20. Nearest convex hull classification

    G.I. Nalbantov (Georgi); P.J.F. Groenen (Patrick); J.C. Bioch (Cor)


    Consider the classification task of assigning a test object to one of two or more possible groups, or classes. An intuitive way to proceed is to assign the object to the class to which the distance is minimal. As a distance measure to a class, we propose here to use the distance to the …
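
    The construction is concrete enough to sketch: the distance from a test point to a class is the distance to the convex hull of that class's training points, which reduces to a small quadratic program over the simplex of convex-combination weights. The sketch below is our illustration, using SLSQP rather than a dedicated QP solver:

      import numpy as np
      from scipy.optimize import minimize

      def dist_to_hull(points, x):
          """min ||points.T @ w - x|| over the simplex {w >= 0, sum(w) = 1}."""
          n = len(points)
          res = minimize(lambda w: np.sum((points.T @ w - x) ** 2),
                         np.full(n, 1.0 / n),
                         bounds=[(0.0, 1.0)] * n,
                         constraints=({"type": "eq",
                                       "fun": lambda w: np.sum(w) - 1.0},),
                         method="SLSQP")
          return np.sqrt(res.fun)

      def classify(train, labels, x):
          """Assign x to the class whose convex hull is nearest."""
          labels = np.asarray(labels)
          return min(set(labels), key=lambda c: dist_to_hull(train[labels == c], x))

      rng = np.random.default_rng(1)
      A = rng.normal([0.0, 0.0], 0.5, (20, 2))   # class "a" around the origin
      B = rng.normal([3.0, 3.0], 0.5, (20, 2))   # class "b" around (3, 3)
      train = np.vstack([A, B])
      labels = ["a"] * 20 + ["b"] * 20
      print(classify(train, labels, np.array([2.5, 2.4])))  # expect "b"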