WorldWideScience

Sample records for relevant classification codes

  1. A systematic literature review of automated clinical coding and classification systems.

    Science.gov (United States)

    Stanfill, Mary H; Williams, Margaret; Fenton, Susan H; Jenders, Robert A; Hersh, William R

    2010-01-01

    Clinical coding and classification processes transform natural language descriptions in clinical text into data that can subsequently be used for clinical care, research, and other purposes. This systematic literature review examined studies that evaluated all types of automated coding and classification systems to determine the performance of such systems. Studies indexed in Medline or other relevant databases prior to March 2009 were considered. The 113 studies included in this review show that automated tools exist for a variety of coding and classification purposes, focus on various healthcare specialties, and handle a wide variety of clinical document types. Automated coding and classification systems themselves are not generalizable, nor are the results of the studies evaluating them. Published research shows these systems hold promise, but these data must be considered in context, with performance relative to the complexity of the task and the desired outcome.

  2. Conceptual-driven classification for coding advice in health insurance reimbursement.

    Science.gov (United States)

    Li, Sheng-Tun; Chen, Chih-Chuan; Huang, Fernando

    2011-01-01

    With the non-stop increases in medical treatment fees, the economic survival of a hospital in Taiwan relies on the reimbursements received from the Bureau of National Health Insurance, which in turn depend on the accuracy and completeness of the content of the discharge summaries as well as the correctness of their International Classification of Diseases (ICD) codes. The purpose of this research is to reinforce the entire disease classification framework by supporting disease classification specialists in the coding process. This study developed an ICD code advisory system (ICD-AS) that performed knowledge discovery from discharge summaries and suggested ICD codes. Natural language processing and information retrieval techniques based on Zipf's Law were applied to process the content of discharge summaries, and fuzzy formal concept analysis was used to analyze and represent the relationships between the medical terms identified by MeSH. In addition, a certainty factor used as reference during the coding process was calculated to account for uncertainty and strengthen the credibility of the outcome. Two sets of 360 and 2579 textual discharge summaries of patients suffering from cerebrovascular disease were processed to build up ICD-AS and to evaluate the prediction performance. A number of experiments were conducted to investigate the impact of system parameters on accuracy and to compare the proposed model to traditional classification techniques, including linear-kernel support vector machines. The comparison results showed that the proposed system achieves better overall performance in terms of several measures. In addition, some useful implication rules were obtained, which improve comprehension of the field of cerebrovascular disease and give insight into the relationships between relevant medical terms. Our system contributes valuable guidance to disease classification specialists in the process of coding discharge summaries, which consequently brings benefits in …
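
    The Zipf-based term selection step lends itself to a short sketch. The Python fragment below ranks tokens by frequency and keeps a middle band of the rank list; the tokenizer, the cut-off fractions and the function name are our assumptions for illustration, not the authors' implementation, which identified terms via MeSH.

      import re
      from collections import Counter

      def zipf_candidate_terms(documents, head=0.05, tail=0.5):
          # Crude tokenizer; the actual system matched terms against MeSH.
          tokens = [t for doc in documents for t in re.findall(r"[a-z]+", doc.lower())]
          ranked = [t for t, _ in Counter(tokens).most_common()]
          n = len(ranked)
          # Under Zipf's law the highest-ranked terms are mostly function words
          # and the lowest-ranked ones are noise, so keep the middle band.
          return ranked[int(n * head):int(n * tail)]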

  3. Quantitative software-reliability analysis of computer codes relevant to nuclear safety

    International Nuclear Information System (INIS)

    Mueller, C.J.

    1981-12-01

    This report presents the results of the first year of an ongoing research program to determine the probability of failure characteristics of computer codes relevant to nuclear safety. An introduction to both qualitative and quantitative aspects of nuclear software is given. A mathematical framework is presented which will enable the a priori prediction of the probability of failure characteristics of a code given the proper specification of its properties. The framework consists of four parts: (1) a classification system for software errors and code failures; (2) probabilistic modeling for selected reliability characteristics; (3) multivariate regression analyses to establish predictive relationships among reliability characteristics and generic code property and development parameters; and (4) the associated information base. Preliminary data of the type needed to support the modeling and the predictions of this program are described. Illustrations of the use of the modeling are given but the results so obtained, as well as all results of code failure probabilities presented herein, are based on data which at this point are preliminary, incomplete, and possibly non-representative of codes relevant to nuclear safety.

  4. Blind Signal Classification via Sparse Coding

    Science.gov (United States)

    2016-04-10

    Blind Signal Classification via Sparse Coding. Youngjune Gwon, MIT Lincoln Laboratory (gyj@ll.mit.edu); Siamak Dastangoo, MIT Lincoln Laboratory (sia…) … achieve blind signal classification with no prior knowledge about signals (e.g., MCS, pulse shaping) in an arbitrary RF channel. Since modulated RF … classification method. Our results indicate that we can separate different classes of digitally modulated signals from blind sampling with 70.3% recall and 24.6…

  5. 49 CFR 173.52 - Classification codes and compatibility groups of explosives.

    Science.gov (United States)

    2010-10-01

    … containing both an explosive substance and a flammable liquid or gel: compatibility group J (classification codes 1.1J, 1.2J, 1.3J). Article containing both an … classification codes for substances and articles described in the first column of Table 1. Table 2 shows the … possible classification codes for explosives. Table 1—Classification Codes: Description of substances or …

  6. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
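
    As a rough illustration of the objective LRSC optimizes, the sketch below (Python/NumPy) alternates a gradient step on the reconstruction error with soft-thresholding (the l1 prox) and singular-value thresholding (the nuclear-norm prox). The step sizes, the ordering of the two proximal steps and the function names are our assumptions; the paper's actual solver differs.

      import numpy as np

      def soft_threshold(M, t):
          return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

      def sv_threshold(M, t):
          # Shrink singular values to encourage a low-rank code matrix.
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

      def lrsc_like_encode(X, D, lam=0.1, gam=0.1, step=1e-2, iters=300):
          # X: d x n descriptors from one spatial neighborhood, D: d x k codebook.
          # Heuristic proximal gradient on ||X - D A||_F^2 + lam*||A||_1 + gam*||A||_*.
          A = np.zeros((D.shape[1], X.shape[1]))
          for _ in range(iters):
              A -= step * (D.T @ (D @ A - X))    # gradient of the fit term
              A = soft_threshold(A, step * lam)  # sparsity prox
              A = sv_threshold(A, step * gam)    # low-rank prox
          return A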

  7. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  8. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    Science.gov (United States)

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging a hypergraph, which has been proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  9. Breathing (and Coding?) a Bit Easier: Changes to International Classification of Disease Coding for Pulmonary Hypertension.

    Science.gov (United States)

    Mathai, Stephen C; Mathew, Sherin

    2018-04-20

    The International Classification of Disease (ICD) coding system is broadly utilized by healthcare providers, hospitals, healthcare payers, and governments to track health trends and statistics at the global, national, and local levels and to provide a reimbursement framework for medical care based upon diagnosis and severity of illness. The current iteration of the ICD system, ICD-10, was implemented in 2015. While many changes to the prior ICD-9 system were included in the ICD-10 system, the newer revision failed to adequately reflect advances in the clinical classification of certain diseases such as pulmonary hypertension (PH). Recently, a proposal to modify the ICD-10 codes for PH was considered and ultimately adopted for inclusion as updates to the ICD-10 coding system. While these revisions better reflect the current clinical classification of PH, further changes should be considered in the future to improve the accuracy and ease of coding for all forms of PH. Copyright © 2018. Published by Elsevier Inc.

  10. Classification of first branchial cleft anomalies: is it clinically relevant ...

    African Journals Online (AJOL)

    Background: There are three classification systems for first branchial cleft anomalies currently in use. The Arnot, Work and Olsen classifications describe these lesions on the basis of morphology, tissue of origin and clinical appearance. However, the clinical relevance of these classifications is debated, as they may not be ...

  11. Improving the coding and classification of ambulance data through the application of International Classification of Disease 10th revision.

    Science.gov (United States)

    Cantwell, Kate; Morgans, Amee; Smith, Karen; Livingston, Michael; Dietze, Paul

    2014-02-01

    This paper aims to examine whether an adaptation of the International Classification of Disease (ICD) coding system can be applied retrospectively to final paramedic assessment data in an ambulance dataset, with a view to developing more fine-grained, clinically relevant case definitions than are available through point-of-call data. Over 1.2 million case records were extracted from the Ambulance Victoria data warehouse. Data fields included dispatch code, cause (CN) and final primary assessment (FPA). Each FPA was converted to an ICD-10-AM code using word matching or best fit. ICD-10-AM codes were then converted into Major Diagnostic Categories (MDC). CN was aligned with the ICD-10-AM codes for external cause of morbidity and mortality. The most accurate results were obtained when ICD-10-AM codes were assigned using information from both FPA and CN. Comparison of cases coded as unconscious at point-of-call with the associated paramedic assessment highlighted the extra clinical detail obtained when paramedic assessment data are used. Ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Coding of ambulance data using ICD-10-AM allows comparison not only among ambulance service users but also with other population groups. WHAT IS KNOWN ABOUT THE TOPIC? There is no reliable and standard coding and categorising system for paramedic assessment data contained in ambulance service databases. WHAT DOES THIS PAPER ADD? This study demonstrates that ambulance paramedic assessment data can be aligned with ICD-10-AM and MDC with relative ease, allowing retrospective coding of large datasets. Representation of ambulance case types using ICD-10-AM-coded information obtained after paramedic assessment is more fine-grained and clinically relevant than point-of-call data, which use caller information before ambulance attendance. WHAT ARE THE IMPLICATIONS FOR PRACTITIONERS? This paper describes …
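
    The word-matching step has a compact illustration. The Python sketch below maps final primary assessment strings to ICD-10 codes with an exact match and a crude shared-word fallback; the lookup table, helper names and fallback rule are our assumptions, and the real study used ICD-10-AM with a far larger mapping.

      # Hypothetical fragment of the word-matching step; illustrative codes only.
      FPA_TO_ICD10 = {
          "chest pain": "R07.4",   # chest pain, unspecified
          "stroke": "I64",         # stroke, not specified as haemorrhage or infarction
          "unconscious": "R40.2",  # coma, unspecified
      }

      def assign_icd(final_primary_assessment):
          key = final_primary_assessment.strip().lower()
          if key in FPA_TO_ICD10:                    # exact word match
              return FPA_TO_ICD10[key]
          for phrase, code in FPA_TO_ICD10.items():  # crude "best fit" fallback
              if set(phrase.split()) & set(key.split()):
                  return code
          return None                                # left uncoded for manual review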

  12. The relevance of the International Classification of Functioning, Disability and Health (ICF) in monitoring and evaluating Community-based Rehabilitation (CBR).

    Science.gov (United States)

    Madden, Rosamond H; Dune, Tinashe; Lukersmith, Sue; Hartley, Sally; Kuipers, Pim; Gargett, Alexandra; Llewellyn, Gwynnyth

    2014-01-01

    To examine the relevance of the International Classification of Functioning, Disability and Health (ICF) to CBR monitoring and evaluation by investigating the relationship between the ICF and information in published CBR monitoring and evaluation reports. A three-stage literature search and analysis method was employed. Studies were identified via online database searches for peer-reviewed journal articles, and hand-searching of CBR network resources, NGO websites and specific journals. From each study "information items" were extracted; extraction consistency among authors was established. Finally, the resulting information items were coded to ICF domains and categories, with consensus on coding being achieved. Thirty-six articles relating to monitoring and evaluating CBR were selected for analysis. Approximately one third of the 2495 information items identified in these articles (788 or 32%) related to concepts of functioning, disability and environment, and could be coded to the ICF. These information items were spread across the entire ICF classification with a concentration on Activities and Participation (49% of the 788 information items) and Environmental Factors (42%). The ICF is a relevant and potentially useful framework and classification, providing building blocks for the systematic recording of information pertaining to functioning and disability, for CBR monitoring and evaluation. Implications for Rehabilitation The application of the ICF, as one of the building blocks for CBR monitoring and evaluation, is a constructive step towards an evidence-base on the efficacy and outcomes of CBR programs. The ICF can be used to provide the infrastructure for functioning and disability information to inform service practitioners and enable national and international comparisons.

  13. Pathohistological classification systems in gastric cancer: diagnostic relevance and prognostic value.

    Science.gov (United States)

    Berlth, Felix; Bollschweiler, Elfriede; Drebber, Uta; Hoelscher, Arnulf H; Moenig, Stefan

    2014-05-21

    Several pathohistological classification systems exist for the diagnosis of gastric cancer. Many studies have investigated the correlation between the pathohistological characteristics in gastric cancer and patient characteristics, disease specific criteria and overall outcome. It is still controversial as to which classification system imparts the most reliable information, and therefore, the choice of system may vary in clinical routine. In addition to the most common classification systems, such as the Laurén and the World Health Organization (WHO) classifications, other authors have tried to characterize and classify gastric cancer based on the microscopic morphology and in reference to the clinical outcome of the patients. In more than 50 years of systematic classification of the pathohistological characteristics of gastric cancer, there is no sole classification system that is consistently used worldwide in diagnostics and research. However, several national guidelines for the treatment of gastric cancer refer to the Laurén or the WHO classifications regarding therapeutic decision-making, which underlines the importance of a reliable classification system for gastric cancer. The latest results from gastric cancer studies indicate that it might be useful to integrate DNA- and RNA-based features of gastric cancer into the classification systems to establish prognostic relevance. This article reviews the diagnostic relevance and the prognostic value of different pathohistological classification systems in gastric cancer.

  14. Classification of working processes to facilitate occupational hazard coding on industrial trawlers

    DEFF Research Database (Denmark)

    Jensen, Olaf C; Stage, Søren; Noer, Preben

    2003-01-01

    BACKGROUND: Commercial fishing is an extremely dangerous economic activity. In order to more accurately describe the risks involved, a specific injury coding based on the working process was developed. METHOD: Observation on six different types of vessels was conducted and allowed a description and a classification of the principal working processes on all kinds of vessels and a detailed classification for industrial trawlers. In industrial trawling, fish are landed for processing purposes, for example, for the production of fish oil and fish meal. The classification was subsequently used to code the injuries reported to the Danish Maritime Authority over a 5-year period. RESULTS: On industrial trawlers, 374 of 394 (95%) injuries were captured by the classification. Setting out and hauling in the gear and nets were the processes with the most injuries and accounted for 58.9% of all injuries…

  15. On the relevance of spectral features for instrument classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Sigurdsson, Sigurdur; Hansen, Lars Kai

    2007-01-01

    Automatic knowledge extraction from music signals is a key component for most music organization and music information retrieval systems. In this paper, we consider the problem of instrument modelling and instrument classification from the rough audio data. Existing systems for automatic instrument classification operate normally on a relatively large number of features, from which those related to the spectrum of the audio signal are particularly relevant. In this paper, we confront two different models about the spectral characterization of musical instruments. The first assumes a constant envelope…

  16. T-ray relevant frequencies for osteosarcoma classification

    Science.gov (United States)

    Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.

    2006-01-01

    We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells, grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features as inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one fifth of the original features is retained in the data set.
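
    The select-then-classify pipeline described above can be sketched in a few lines of Python with scikit-learn; the synthetic arrays stand in for the T-ray magnitude and phase features, and the mutual-information scorer, subset size and RBF kernel are our assumptions rather than the paper's exact feature subset selection method.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 200))    # 60 cultures x 200 spectral features (synthetic)
      y = rng.integers(0, 2, size=60)   # normal bone cells vs osteosarcoma (synthetic)

      # Retain a small subset of relevant frequencies to blunt the curse of
      # dimensionality, then train the support vector machine on that subset.
      model = make_pipeline(SelectKBest(mutual_info_classif, k=40), SVC(kernel="rbf"))
      print(cross_val_score(model, X, y, cv=5).mean())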

  17. On the classification of long non-coding RNAs

    KAUST Repository

    Ma, Lina

    2013-06-01

    Long non-coding RNAs (lncRNAs) have been found to perform various functions in a wide variety of important biological processes. To make interpretation of lncRNA functionality easier and to enable deep mining of these transcribed sequences, it is convenient to classify lncRNAs into different groups. Here, we summarize classification methods of lncRNAs according to their four major features, namely, genomic location and context, effect exerted on DNA sequences, mechanism of functioning, and targeting mechanism. In combination with the presently available function annotations, we explore potential relationships between different classification categories, and generalize and compare biological features of different lncRNAs within each category. Finally, we present our view on potential further studies. We believe that the classifications of lncRNAs as indicated above are of fundamental importance for lncRNA studies, helpful for further investigation of specific lncRNAs, for formulation of new hypotheses based on different features of lncRNAs, and for exploration of the underlying lncRNA functional mechanisms. © 2013 Landes Bioscience.

  18. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    Science.gov (United States)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  19. Relevance of the formal red meat classification system to the South ...

    African Journals Online (AJOL)

    Relevance of the formal red meat classification system to the South African ... to market information make them less willing to sell their animals through the formal market. ... Keywords: Communal farmers, marketing system, meat industry ...

  20. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    Science.gov (United States)

    2016-05-01

    Subject to code matrices that follow the structure given by (113):

    $$\begin{bmatrix} \vec{y}_R \\ \vec{y}_I \end{bmatrix} = \sqrt{\frac{E_s}{2L}} \begin{bmatrix} G_{R1} & -G_{I1} \\ G_{I2} & G_{R2} \end{bmatrix} \begin{bmatrix} Q_R & -Q_I \\ Q_I & Q_R \end{bmatrix} \begin{bmatrix} \vec{b}_R \\ \vec{b}_I \end{bmatrix} + \begin{bmatrix} \vec{n}_R \\ \vec{n}_I \end{bmatrix} \cdots \qquad \cdots \begin{bmatrix} \cdots & Q_R \end{bmatrix} \begin{bmatrix} \vec{b}_+ \\ \vec{b}_- \end{bmatrix} + \begin{bmatrix} \vec{n}_+ \\ \vec{n}_- \end{bmatrix} \quad (115)$$

    The average likelihood for type 4 CDMA (116) is a special case of type 1 CDMA with twice the code length and … AVERAGE LIKELIHOOD METHODS OF CLASSIFICATION OF CODE DIVISION MULTIPLE ACCESS (CDMA). MAY 2016. FINAL TECHNICAL REPORT. APPROVED FOR PUBLIC RELEASE.

  1. Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data (SCED). NFES 2011-801

    Science.gov (United States)

    National Forum on Education Statistics, 2011

    2011-01-01

    In this handbook, "Prior-to-Secondary School Course Classification System: School Codes for the Exchange of Data" (SCED), the National Center for Education Statistics (NCES) and the National Forum on Education Statistics have extended the existing secondary course classification system with codes and descriptions for courses offered at…

  2. Searching bioremediation patents through Cooperative Patent Classification (CPC).

    Science.gov (United States)

    Prasad, Rajendra

    2016-03-01

    Patent classification systems have traditionally evolved independently at each patent jurisdiction to classify the patents handled by their examiners, so that previous patents can be searched when dealing with new patent applications. As the patent databases they maintain went online, for free public access as well as for global prior-art searches by examiners, the need arose for a common platform and a uniform structure of patent databases. The diversity of the different classification systems, however, posed problems for integrating and searching relevant patents across patent jurisdictions. To address this problem of comparability of data from different sources and of searching patents, WIPO in the recent past developed what is known as the International Patent Classification (IPC) system, which most countries readily adopted, coding their patents with IPC codes alongside their own codes. The Cooperative Patent Classification (CPC) is the latest patent classification system, based on the IPC/European Classification (ECLA) system and developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO), and it is likely to become a global standard. This paper discusses this new classification system with reference to patents on bioremediation.
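
    Once patents carry CPC symbols, screening for a technology reduces to prefix matching on those symbols. The Python fragment below filters toy records by a CPC prefix; the record layout and function name are our assumptions, and B09C (reclamation of contaminated soil) is only one plausible class under which bioremediation patents may fall.

      # Illustrative records only; real CPC data would come from a patent database.
      patents = [
          {"id": "EP0001", "cpc": ["B09C1/10", "C12N1/20"]},  # microbiological soil treatment
          {"id": "US0002", "cpc": ["H04L9/08"]},              # unrelated (key management)
      ]

      def by_cpc_prefix(records, prefix):
          return [r["id"] for r in records
                  if any(code.startswith(prefix) for code in r["cpc"])]

      print(by_cpc_prefix(patents, "B09C"))  # -> ['EP0001']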

  3. Automatic Parallelization Tool: Classification of Program Code for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Mustafa Basthikodi

    2016-04-01

    Performance growth of single-core processors came to a halt in the past decade, but was re-enabled by the introduction of parallelism in processors. Multicore frameworks, along with Graphical Processing Units, have broadened the reach of parallelism. Several compilers have been updated to address the resulting challenges of synchronization and threading. Appropriate program and algorithm classifications are of great advantage to software engineers seeking opportunities for effective parallelization. In the present work we investigate current species for the classification of algorithms; related work on classification is discussed, along with a comparison of the issues that challenge classification. A set of algorithms was chosen whose structure matches the different issues and performs the given tasks. We tested these algorithms utilizing existing automatic species extraction tools along with the Bones compiler. We added functionalities to the existing tool, providing a more detailed characterization. The contributions of our work include support for pointer arithmetic, conditional and incremental statements, user-defined types, constants and mathematical functions. With this, we can retain significant data which is not captured by the original species of algorithms. We implemented these new capabilities in the tool, enabling automatic characterization of program code.

  4. ColorPhylo: A Color Code to Accurately Display Taxonomic Classifications.

    Science.gov (United States)

    Lespinats, Sylvain; Fertil, Bernard

    2011-01-01

    Color may be very useful to visualise complex data. As far as taxonomy is concerned, color may help in observing various species' characteristics in correlation with classification. However, choosing the number of subclasses to display is often a complex task: on the one hand, assigning a limited number of colors to taxa of interest hides the structure embedded in the subtrees of the taxonomy; on the other hand, differentiating a high number of taxa by giving them specific colors, without considering the underlying taxonomy, may lead to unreadable results since relationships between displayed taxa would not be supported by the color code. In the present paper, an automatic color coding scheme is proposed to visualise the levels of taxonomic relationships displayed as an overlay on any kind of data plot. To achieve this goal, a dimensionality reduction method displays taxonomic "distances" in a Euclidean two-dimensional space. The resulting map is projected onto a 2D color space (the Hue, Saturation, Brightness colorimetric space with brightness set to 1). Proximity in the taxonomic classification corresponds to proximity on the map and is therefore materialised by color proximity. As a result, each species is related to a color code showing its position in the taxonomic tree. The so-called ColorPhylo displays taxonomic relationships intuitively and can be combined with any biological result. A Matlab version of ColorPhylo is available at http://sy.lespi.free.fr/ColorPhylo-homepage.html. Meanwhile, an ad-hoc distance in case of taxonomy with unknown edge lengths is proposed.
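
    The core mapping, from a 2D embedding of taxonomic distances to hue and saturation with brightness fixed at 1, fits in a few lines. ColorPhylo itself is distributed as Matlab code; the Python sketch below, with a polar hue/saturation assignment of our own choosing, only illustrates the idea that nearby points receive similar colors.

      import colorsys
      import numpy as np

      def embedding_to_colors(xy):
          # xy: (n, 2) coordinates from any 2D reduction of taxonomic distances.
          centered = xy - xy.mean(axis=0)
          hue = (np.arctan2(centered[:, 1], centered[:, 0]) + np.pi) / (2 * np.pi)
          sat = np.linalg.norm(centered, axis=1)
          sat = sat / sat.max()  # angle becomes hue, radius becomes saturation
          return [colorsys.hsv_to_rgb(h, s, 1.0) for h, s in zip(hue, sat)]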

  5. Re-enacting the relevance of the moral codes of the un-shrined ...

    African Journals Online (AJOL)

    They also maintain social justice, confer cultural identity and encourage patriotism. The work will identify the moral codes and analyze the methods of enforcing them. It will also delve into how the gods are appeased. The socio-cultural significance would be treated for a clearer view of the relevance of the moral codes of the ...

  6. Comparison study on flexible pavement design using FAA (Federal Aviation Administration) and LCN (Load Classification Number) code in Ahmad Yani international airport’s runway

    Science.gov (United States)

    Santoso, S. E.; Sulistiono, D.; Mawardi, A. F.

    2017-11-01

    The FAA code for airport design has been broadly used by the Indonesian Ministry of Aviation for decades. However, there has been little comprehensive study of its relevance and efficiency in the current Indonesian situation. Therefore, a comparison study on flexible pavement design for airport runways using comparable methods has become essential. The main focus of this study is to compare which of the two methods, FAA and LCN, offers the more efficient and effective approach to runway pavement planning. The two methods differ mainly in the variables they use: the FAA code works from the aircraft's maximum take-off weight and annual departures, whilst the LCN code uses the equivalent single wheel load and tire pressure. Based on the variables mentioned above, classification and rating were then used to determine which code is better suited for implementation. According to the analysis, the FAA method is the most effective way to plan runway design in Indonesia, giving a total pavement thickness of 127 cm against 70 cm for the LCN method. Although the FAA total pavement is thicker than the LCN one, its relevance to sustainable and pristine conditions in the future is an essential aspect to consider in design and planning.

  7. Call for consistent coding in diabetes mellitus using the Royal College of General Practitioners and NHS pragmatic classification of diabetes

    Directory of Open Access Journals (Sweden)

    Simon de Lusignan

    2013-03-01

    Background The prevalence of diabetes is increasing with growing levels of obesity and an aging population. New practical guidelines for diabetes provide an applicable classification. Inconsistent coding of diabetes hampers the use of computerised disease registers for quality improvement, and limits the monitoring of disease trends. Objective To develop a consensus set of codes that should be used when recording diabetes diagnostic data. Methods The consensus approach was hierarchical, with a preference for diagnostic/disorder codes, to define each type of diabetes and non-diabetic hyperglycaemia, which were listed as being completely, partially or not readily mapped to available codes. The practical classification divides diabetes into type 1 (T1DM), type 2 (T2DM), genetic, other, unclassified and non-diabetic fasting hyperglycaemia. We mapped the classification to Read version 2, Clinical Terms version 3 and SNOMED CT. Results T1DM and T2DM were completely mapped to appropriate codes. However, in other areas only partial mapping is possible. Genetics is a fast-moving field and there were considerable gaps in the available labels for genetic conditions; what the classification calls 'other' the coding system labels 'secondary' diabetes. The biggest gap was the lack of a code for diabetes where the type of diabetes was uncertain. Notwithstanding these limitations we were able to develop a consensus list. Conclusions It is a challenge to develop codes that readily map to contemporary clinical concepts. However, clinicians should adopt the standard recommended codes and audit the quality of their existing records.

  8. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and complicated coding strategies, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by unsupervised feature learning techniques and binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal "filters" from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
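
    The filter-binarize-hash-histogram chain has a compact NumPy rendering. In the sketch below, random kernels stand in for the learned "filters", and the function name and bit-weight hashing scheme are our assumptions; the point is only that k binary feature maps collapse into one integer-valued map whose histogram describes the scene.

      import numpy as np
      from scipy.signal import convolve2d

      def fbc_histogram(image, filters):
          # One binary map per filter: the sign of the filter response.
          bits = [convolve2d(image, f, mode="valid") > 0 for f in filters]
          # Hashing step: stack the k binary maps into one integer-valued map.
          weights = 2 ** np.arange(len(bits)).reshape(-1, 1, 1)
          codes = (np.stack(bits).astype(int) * weights).sum(axis=0)
          hist, _ = np.histogram(codes, bins=np.arange(2 ** len(bits) + 1))
          return hist / hist.sum()   # global representation of the scene

      rng = np.random.default_rng(1)
      scene = rng.normal(size=(64, 64))
      filters = [rng.normal(size=(5, 5)) for _ in range(8)]
      print(fbc_histogram(scene, filters).shape)  # (256,)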

  9. 78 FR 21612 - Medical Device Classification Product Codes; Guidance for Industry and Food and Drug...

    Science.gov (United States)

    2013-04-11

    ... driving force for CDRH's internal organizational structure as well. These Panels were established with the... guidance represents the Agency's current thinking on medical device classification product codes. It does...

  10. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10) for encoding video data and a corresponding video decoding device, wherein PPG-relevant information shall be preserved during decoding. For this purpose the video coding device (10) comprises a first encoder (20) for encoding input video…

  11. An Analysis of the Relationship between IFAC Code of Ethics and CPI

    Directory of Open Access Journals (Sweden)

    Ayşe İrem Keskin

    2015-11-01

    Codes of ethics have become a significant concept in the business world, which is why professional organizations have developed their own codes of ethics over time. In this study, a compatibility classification of the accounting code of ethics of IFAC (The International Federation of Accountants) is first carried out on the basis of the action plans assessing the levels of usage by the 175 IFAC national accounting organizations. The classification shows that 60.6% of the member organizations apply the IFAC code in general, while the remaining 39.4% do not apply the code at all. With this classification, the hypothesis that "The national accounting organizations in highly corrupt countries would be less likely to adopt the IFAC ethic code than those in very clean countries" is tested using the "Corruption Perception Index (CPI)" data. The findings are determined to support this hypothesis.

  12. A physiologically-inspired model of numerical classification based on graded stimulus coding

    Directory of Open Access Journals (Sweden)

    John Pearson

    2010-01-01

    In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber's Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP) of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP) signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber's Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
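
    The claim that graded (monotonic) codes plus a reward-tuned criterion suffice for bisection can be mimicked in a toy simulation. Below, a single noisy log-scaled drive stands in for the difference of two opposed monotonic LIP-like responses, and a criterion is nudged by a simple reward rule; all parameters and the anchor numerosities are our assumptions, not fits to the paper's data.

      import numpy as np

      rng = np.random.default_rng(0)
      log_theta, lr = np.log(2.0), 0.05   # decision criterion and learning rate (assumed)

      def choose(n, noise=0.3):
          # Difference of two opposed graded rate codes ~ log(n), plus noise;
          # noise on a log scale yields Weber-like errors.
          return "large" if np.log(n) + rng.normal(0, noise) > log_theta else "small"

      # Reward-driven criterion learning on a 2-vs-8 bisection task (anchors assumed).
      for _ in range(2000):
          n = rng.choice([2, 3, 4, 6, 8])
          correct = "large" if n > 4 else "small"
          if choose(n) != correct:
              # Shift the criterion toward the side that was missed.
              log_theta += lr if correct == "small" else -lr

      print(np.exp(log_theta))  # criterion settles between the middle stimuli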

  13. Classification and coding of commercial fishing injuries by work processes: an experience in the Danish fresh market fishing industry

    DEFF Research Database (Denmark)

    Jensen, Olaf Chresten; Stage, Søren; Noer, Preben

    2005-01-01

    BACKGROUND: Work-related injuries in commercial fishing are of concern internationally. To better identify the causes of injury, this study coded occupational injuries by working processes in commercial fishing for fresh market fish. METHODS: A classification system of the work processes was developed… Injuries related to working with the gear and nets vary greatly in the different fishing methods. Coding of the injuries to the specific working processes allows for targeted prevention efforts.

  14. Defining pediatric traumatic brain injury using International Classification of Diseases Version 10 Codes: a systematic review.

    Science.gov (United States)

    Chan, Vincy; Thurairajah, Pravheen; Colantonio, Angela

    2015-02-04

    Although healthcare administrative data are commonly used for traumatic brain injury (TBI) research, there is currently no consensus or consistency on the International Classification of Diseases Version 10 (ICD-10) codes used to define TBI among children and youth internationally. This study systematically reviewed the literature to explore the range of ICD-10 codes that are used to define TBI in this population. The identification of the range of ICD-10 codes to define this population in administrative data is crucial, as it has implications for policy, resource allocation, planning of healthcare services, and prevention strategies. The databases MEDLINE, MEDLINE In-Process, Embase, PsycINFO, CINAHL, SPORTDiscus, and Cochrane Database of Systematic Reviews were systematically searched. Grey literature was searched using Grey Matters and Google. Reference lists of included articles were also searched for relevant studies. Two reviewers independently screened all titles and abstracts using pre-defined inclusion and exclusion criteria. A full text screen was conducted on articles that met the first screen inclusion criteria. All full text articles that met the pre-defined inclusion criteria were included for analysis in this systematic review. A total of 1,326 publications were identified through the predetermined search strategy and 32 articles/reports met all eligibility criteria for inclusion in this review. Five articles specifically examined children and youth aged 19 years or under with TBI. ICD-10 case definitions ranged from the broad injuries to the head codes (ICD-10 S00 to S09) to concussion only (S06.0). There was overwhelming consensus on the inclusion of ICD-10 code S06, intracranial injury, while codes S00 (superficial injury of the head), S03 (dislocation, sprain, and strain of joints and ligaments of head), and S05 (injury of eye and orbit) were only used by articles that examined head injury, none of which specifically examined children and…
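
    The competing case definitions in the reviewed studies amount to simple code-range tests. The Python helper below contrasts the broad S00–S09 definition with the narrow S06-only one; the function names are ours, and only the code ranges quoted in the abstract are used.

      def in_s_block(code, lo, hi):
          stem = code[:3].upper()
          return stem.startswith("S") and lo <= int(stem[1:]) <= hi

      def is_tbi(code, definition="broad"):
          if definition == "broad":    # any injury-to-the-head code, S00-S09
              return in_s_block(code, 0, 9)
          if definition == "narrow":   # intracranial injury only
              return code.upper().startswith("S06")
          raise ValueError(definition)

      print(is_tbi("S06.0"), is_tbi("S05.1"), is_tbi("S05.1", "narrow"))  # True True False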

  15. Stratification and prognostic relevance of Jass’s molecular classification of colorectal cancer

    OpenAIRE

    Zlobec, Inti; Bihl, Michel P; Foerster, Anja; Rufle, Alex; Terracciano, Luigi; Lugli, Alessandro

    2012-01-01

    Background: The current proposed model of colorectal tumorigenesis is based primarily on CpG island methylator phenotype (CIMP), microsatellite instability (MSI), KRAS, BRAF, and methylation status of O-6-Methylguanine DNA Methyltransferase (MGMT), and classifies tumors into 5 subgroups. The aim of this study is to validate this molecular classification and test its prognostic relevance. Methods: 302 patients were included in this study. Molecular analysis was performed for 5 CIMP-related pro…

  16. Population-based evaluation of a suggested anatomic and clinical classification of congenital heart defects based on the International Paediatric and Congenital Cardiac Code

    Directory of Open Access Journals (Sweden)

    Goffinet François

    2011-10-01

    Abstract Background Classification of the overall spectrum of congenital heart defects (CHD) has always been challenging, in part because of the diversity of the cardiac phenotypes, but also because of the oft-complex associations. The purpose of our study was to establish a comprehensive and easy-to-use classification of CHD for clinical and epidemiological studies based on the long list of the International Paediatric and Congenital Cardiac Code (IPCCC). Methods We coded each individual malformation using six-digit codes from the long list of IPCCC. We then regrouped all lesions into 10 categories and 23 subcategories according to a multi-dimensional approach encompassing anatomic, diagnostic and therapeutic criteria. This anatomic and clinical classification of congenital heart disease (ACC-CHD) was then applied to data acquired from a population-based cohort of patients with CHD in France, made up of 2867 cases (82% live births, 1.8% stillbirths and 16.2% pregnancy terminations). Results The majority of cases (79.5%) could be identified with a single IPCCC code. The category "Heterotaxy, including isomerism and mirror-imagery" was the only one that typically required more than one code for identification of cases. The two largest categories were "ventricular septal defects" (52%) and "anomalies of the outflow tracts and arterial valves" (20% of cases). Conclusion Our proposed classification is not new, but rather a regrouping of the known spectrum of CHD into a manageable number of categories based on anatomic and clinical criteria. The classification is designed to use the code numbers of the long list of IPCCC but can accommodate ICD-10 codes. Its exhaustiveness, simplicity, and anatomic basis make it useful for clinical and epidemiologic studies, including those aimed at assessment of risk factors and outcomes.

  17. nRC: non-coding RNA Classifier based on structural features.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to developing methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms in regulative processes, together with the development of high-throughput technologies, has required the help of bioinformatics tools in providing biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach on the classification of 13 different ncRNA classes and report classification scores using the most common statistical measures. In particular, we reach an accuracy and sensitivity of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.

  18. LAMOST OBSERVATIONS IN THE KEPLER FIELD: SPECTRAL CLASSIFICATION WITH THE MKCLASS CODE

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R. O. [Department of Physics and Astronomy, Appalachian State University, Boone, NC 28608 (United States); Corbally, C. J. [Vatican Observatory Research Group, Steward Observatory, Tucson, AZ 85721-0065 (United States); Cat, P. De [Royal Observatory of Belgium, Ringlaan 3, B-1180 Brussel (Belgium); Fu, J. N.; Ren, A. B. [Department of Astronomy, Beijing Normal University, 19 Avenue Xinjiekouwai, Beijing 100875 (China); Shi, J. R.; Luo, A. L.; Zhang, H. T.; Wu, Y.; Cao, Z.; Li, G. [Key Laboratory for Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China); Zhang, Y.; Hou, Y.; Wang, Y. [Nanjing Institute of Astronomical Optics and Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042 (China)

    2016-01-15

    The LAMOST-Kepler project was designed to obtain high-quality, low-resolution spectra of many of the stars in the Kepler field with the Large Sky Area Multi Object Fiber Spectroscopic Telescope (LAMOST) spectroscopic telescope. To date 101,086 spectra of 80,447 objects over the entire Kepler field have been acquired. Physical parameters, radial velocities, and rotational velocities of these stars will be reported in other papers. In this paper we present MK spectral classifications for these spectra determined with the automatic classification code MKCLASS. We discuss the quality and reliability of the spectral types and present histograms showing the frequency of the spectral types in the main table organized according to luminosity class. Finally, as examples of the use of this spectral database, we compute the proportion of A-type stars that are Am stars, and identify 32 new barium dwarf candidates.

  19. Classification of perimenstrual headache: clinical relevance.

    Science.gov (United States)

    MacGregor, E Anne

    2012-10-01

    Although more than 50% of women with migraine report an association between migraine and menstruation, menstruation has generally been considered to be no more than one of a variety of different migraine triggers. In 2004, the second edition of the International Classification of Headache Disorders introduced specific diagnostic criteria for menstrual migraine. Results from research undertaken subsequently lend support to the clinical impression that menstrual migraine should be seen as a distinct clinical entity. This paper reviews the recent research and provides specific recommendations for consideration in future editions of the classification.

  20. Feature-selective Attention in Frontoparietal Cortex: Multivoxel Codes Adjust to Prioritize Task-relevant Information.

    Science.gov (United States)

    Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra

    2017-02-01

    Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.

  1. New Site Coefficients and Site Classification System Used in Recent Building Seismic Code Provisions

    Science.gov (United States)

    Dobry, R.; Borcherdt, R.D.; Crouse, C.B.; Idriss, I.M.; Joyner, W.B.; Martin, G.R.; Power, M.S.; Rinne, E.E.; Seed, R.B.

    2000-01-01

    Recent code provisions for buildings and other structures (1994 and 1997 NEHRP Provisions, 1997 UBC) have adopted new site amplification factors and a new procedure for site classification. Two amplitude-dependent site amplification factors are specified: Fa for short periods and Fv for longer periods. Previous codes included only a long period factor S and did not provide for a short period amplification factor. The new site classification system is based on definitions of five site classes in terms of a representative average shear wave velocity to a depth of 30 m (V̄s). This definition permits sites to be classified unambiguously. When the shear wave velocity is not available, other soil properties such as standard penetration resistance or undrained shear strength can be used. The new site classes, denoted by letters A - E, replace site classes in previous codes denoted by S1 - S4. Site classes A and B correspond to hard rock and rock, Site Class C corresponds to soft rock and very stiff / very dense soil, and Site Classes D and E correspond to stiff soil and soft soil. A sixth site class, F, is defined for soils requiring site-specific evaluations. Both Fa and Fv are functions of the site class, and also of the level of seismic hazard on rock, defined by parameters such as Aa and Av (1994 NEHRP Provisions), Ss and S1 (1997 NEHRP Provisions) or Z (1997 UBC). The values of Fa and Fv decrease as the seismic hazard on rock increases due to soil nonlinearity. The greatest impact of the new factors Fa and Fv as compared with the old S factors occurs in areas of low-to-medium seismic hazard. This paper summarizes the new site provisions, explains the basis for them, and discusses ongoing studies of site amplification in recent earthquakes that may influence future code developments.
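
    Because the classification is defined by fixed velocity bands, assigning a site class from V̄s is a one-line lookup, as the Python sketch below shows using the velocity boundaries (in m/s) of the NEHRP/UBC scheme described above; Site Class F is omitted since it is defined by soil conditions requiring site-specific evaluation rather than by velocity.

      def nehrp_site_class(vs30_m_per_s):
          # Boundaries from the 1994/1997 NEHRP Provisions and 1997 UBC scheme.
          if vs30_m_per_s > 1500: return "A"  # hard rock
          if vs30_m_per_s > 760:  return "B"  # rock
          if vs30_m_per_s > 360:  return "C"  # very dense soil / soft rock
          if vs30_m_per_s > 180:  return "D"  # stiff soil
          return "E"                          # soft soil

      print(nehrp_site_class(540))  # -> 'C'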

  2. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (the neighborhood dependency measure based algorithm, genetic algorithm and uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy. (paper)
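
    The two MRMR scores drive a greedy forward search that can be sketched compactly. The Python fragment below substitutes a plain histogram estimate of mutual information for the paper's neighborhood mutual information; the bin count, helper names and scoring details are our assumptions.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def mi(a, b, bins=16):
          # Histogram-based mutual information between two 1-D numeric variables.
          return mutual_info_score(None, None, contingency=np.histogram2d(a, b, bins)[0])

      def mrmr_select(X, y, k, quotient=False):
          # Greedy forward search: maximize relevance to the numeric class labels y
          # while penalizing (difference) or dividing by (quotient) the mean
          # redundancy with the bands already chosen.
          chosen, rest = [], list(range(X.shape[1]))
          while len(chosen) < k:
              def score(j):
                  rel = mi(X[:, j], y)
                  red = np.mean([mi(X[:, j], X[:, s]) for s in chosen]) if chosen else 0.0
                  return rel / (red + 1e-9) if quotient else rel - red
              best = max(rest, key=score)
              chosen.append(best)
              rest.remove(best)
          return chosen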

  3. Classification of radiological procedures

    International Nuclear Information System (INIS)

    1989-01-01

    A classification for departments in Danish hospitals that use radiological procedures. The classification codes consist of 4 digits, of which the first 2 are the codes for the main groups: the first digit represents the procedure's topographical object and the second the technique. The last 2 digits identify individual procedures. (CLS)

  4. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    OpenAIRE

    Cohen, Aaron M.

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2...

  5. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.
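
    The paper's C2G-SSIM/SVD algorithm is not reproduced here; the sketch below is only a toy decolorization baseline (contrast-maximizing convex channel weights) that illustrates how chrominance can influence a grayscale rendering beyond fixed luminance weights.

        import itertools
        import numpy as np

        def decolorize(rgb):
            """rgb: (H, W, 3) floats in [0, 1]; returns a grayscale array."""
            grid = np.linspace(0.0, 1.0, 11)
            best_gray, best_var = None, -1.0
            for wr, wg in itertools.product(grid, grid):
                wb = 1.0 - wr - wg
                if wb < 0:
                    continue  # keep the weights a convex combination
                gray = wr * rgb[..., 0] + wg * rgb[..., 1] + wb * rgb[..., 2]
                if gray.var() > best_var:
                    best_gray, best_var = gray, gray.var()
            return best_gray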

  6. ncRNA-class Web Tool: Non-coding RNA feature extraction and pre-miRNA classification web tool

    KAUST Repository

    Kleftogiannis, Dimitrios A.; Theofilatos, Konstantinos A.; Papadimitriou, Stergios; Tsakalidis, Athanasios K.; Likothanassis, Spiridon D.; Mavroudi, Seferina P.

    2012-01-01

    Until recently, it was commonly accepted that most genetic information is transacted by proteins. Recent evidence suggests that the majority of the genomes of mammals and other complex organisms are in fact transcribed into non-coding RNAs (ncRNAs), many of which are alternatively spliced and/or processed into smaller products. Analysis of non-coding RNA genes requires the calculation of several sequential, thermodynamic and structural features. Many independent tools have already been developed for the efficient calculation of such features, but to the best of our knowledge no integrative approach for this task exists. Most of the existing work relates to the miRNA class of non-coding RNAs. MicroRNAs (miRNAs) are small non-coding RNAs that play a significant role in gene regulation and their prediction is a challenging bioinformatics problem. The Non-coding RNA feature extraction and pre-miRNA classification Web Tool (ncRNA-class Web Tool) is a publicly available web tool ( http://150.140.142.24:82/Default.aspx ) which provides a user friendly and efficient environment for the effective calculation of a set of 58 sequential, thermodynamic and structural features of non-coding RNAs, plus a tool for the accurate prediction of miRNAs. © 2012 IFIP International Federation for Information Processing.

  7. Ontological function annotation of long non-coding RNAs through hierarchical multi-label classification.

    Science.gov (United States)

    Zhang, Jingpu; Zhang, Zuping; Wang, Zixiang; Liu, Yuting; Deng, Lei

    2018-05-15

    Long non-coding RNAs (lncRNAs) are an enormous collection of functional non-coding RNAs. Over the past decades, a large number of novel lncRNA genes have been identified. However, most lncRNAs remain functionally uncharacterized at present. Computational approaches provide new insight into the potential functional implications of lncRNAs. Considering that each lncRNA may have multiple functions and a function may be further specialized into sub-functions, here we describe NeuraNetL2GO, a computational ontological function prediction approach for lncRNAs using a hierarchical multi-label classification strategy based on multiple neural networks. The neural networks are incrementally trained level by level, each performing the prediction of gene ontology (GO) terms belonging to a given level. In NeuraNetL2GO, we use topological features of the lncRNA similarity network as the input of the neural networks and employ the output results to annotate the lncRNAs. We show that NeuraNetL2GO achieves the best performance and the overall advantage in maximum F-measure and coverage on the manually annotated lncRNA2GO-55 dataset compared to other state-of-the-art methods. The source code and data are available at http://denglab.org/NeuraNetL2GO/. leideng@csu.edu.cn. Supplementary data are available at Bioinformatics online.
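
    A minimal sketch of the level-wise training strategy, assuming precomputed topological features and per-level GO indicator matrices; the hierarchy-consistency post-processing of NeuraNetL2GO is elided and all variable names are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def train_per_level(X, Y_by_level):
            """X: (n_lncRNAs, n_topological_features);
            Y_by_level: one binary GO-term indicator matrix per level."""
            models = []
            for Y in Y_by_level:  # incrementally, level by level
                clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
                clf.fit(X, Y)     # multi-label fit on this GO level
                models.append(clf)
            return models

        def annotate(models, X):
            """Per-level GO term calls for new lncRNAs."""
            return [m.predict(X) for m in models]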

  8. Validity of International Classification of Diseases (ICD) coding for dengue infections in hospital discharge records in Malaysia.

    Science.gov (United States)

    Woon, Yuan-Liang; Lee, Keng-Yee; Mohd Anuar, Siti Fatimah Zahra; Goh, Pik-Pin; Lim, Teck-Onn

    2018-04-20

    Hospitalization due to dengue illness is an important measure of dengue morbidity. However, few studies are based on administrative databases, because the validity of the diagnosis codes is unknown. We validated the International Classification of Diseases, 10th revision (ICD) diagnosis coding for dengue infections in the Malaysian Ministry of Health's (MOH) hospital discharge database. This validation study involves retrospective review of available hospital discharge records and hand-searching of medical records for the years 2010 and 2013. We randomly selected 3219 hospital discharge records coded with dengue and non-dengue infections as their discharge diagnoses from the national hospital discharge database. We then randomly sampled 216 and 144 records for patients with and without codes for dengue respectively, in keeping with their relative frequency in the MOH database, for chart review. The ICD codes for dengue were validated against a lab-based diagnostic standard (NS1 or IgM). The ICD-10-CM codes for dengue had a sensitivity of 94%, a modest specificity of 83%, a positive predictive value of 87% and a negative predictive value of 92%. These results were stable between 2010 and 2013. However, specificity decreased substantially when patients manifested with bleeding or low platelet count. The diagnostic performance of the ICD codes for dengue in the MOH's hospital discharge database is adequate for use in health services research on dengue.
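
    For readers who want to re-derive such validity measures, the formulas are straightforward; the 2x2 counts below are invented for illustration and merely echo the reported magnitudes.

        def validity(tp, fp, fn, tn):
            """Standard 2x2 validity measures for a diagnostic code."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Invented counts chosen only to echo the reported magnitudes:
        print(validity(tp=127, fp=19, fn=8, tn=95))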

  9. Munitions Classification Library

    Science.gov (United States)

    2016-04-04

    Final Report, August 2014 - August 2015 (dated 04/04/2016); Craig Murray (Parsons) and Thomas H. Bell (Leidos). ... members of the community to make their own additions to any, or all, of the classification libraries. The next phase entailed data collection over less...

  10. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method: it is symmetric, positive, always yields 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
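
    The underlying idea can be sketched briefly: parse each sequence with LZW to collect its code words, then score pairs by normalized code-word overlap so that self-similarity is exactly 1.0. The published kernel's exact formula differs, so treat this as a conceptual sketch only.

        def lzw_words(seq: str) -> set:
            """Code words produced by an LZW parse of the sequence."""
            dictionary = {c for c in seq}  # initial alphabet
            w = ""
            for c in seq:
                if w + c in dictionary:
                    w += c
                else:
                    dictionary.add(w + c)  # new code word
                    w = c
            return dictionary

        def lzw_similarity(a: str, b: str) -> float:
            """Normalized code-word overlap; 1.0 for self-similarity."""
            wa, wb = lzw_words(a), lzw_words(b)
            return len(wa & wb) / (len(wa) * len(wb)) ** 0.5

        print(lzw_similarity("MKTAYIAKQR", "MKTAYIAKQR"))  # 1.0
        print(lzw_similarity("MKTAYIAKQR", "MSTNPKPQRK"))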

  11. Use of the Coding Causes of Death in HIV in the classification of deaths in Northeastern Brazil.

    Science.gov (United States)

    Alves, Diana Neves; Bresani-Salvi, Cristiane Campello; Batista, Joanna d'Arc Lyra; Ximenes, Ricardo Arraes de Alencar; Miranda-Filho, Demócrito de Barros; Melo, Heloísa Ramos Lacerda de; Albuquerque, Maria de Fátima Pessoa Militão de

    2017-01-01

    Describe the coding process of death causes for people living with HIV/AIDS, and classify deaths as related or unrelated to immunodeficiency by applying the Coding Causes of Death in HIV (CoDe) system. A cross-sectional study that codifies and classifies the causes of deaths occurring in a cohort of 2,372 people living with HIV/AIDS, monitored between 2007 and 2012, in two specialized HIV care services in Pernambuco. The causes of death already codified according to the International Classification of Diseases were recoded and classified as deaths related and unrelated to immunodeficiency by the CoDe system. We calculated the frequencies of the CoDe codes for the causes of death in each classification category. There were 315 (13%) deaths during the study period; 93 (30%) were caused by an AIDS-defining illness on the Centers for Disease Control and Prevention list. A total of 232 deaths (74%) were related to immunodeficiency after application of the CoDe. Infections were the most common cause, both related (76%) and unrelated (47%) to immunodeficiency, followed by malignancies (5%) in the first group and external causes (16%), malignancies (12%) and cardiovascular diseases (11%) in the second group. Tuberculosis comprised 70% of the immunodeficiency-defining infections. Opportunistic infections and aging diseases were the most frequent causes of death, adding multiple disease burdens on health services. The CoDe system increases the probability of classifying deaths more accurately in people living with HIV/AIDS.

  12. Revision, uptake and coding issues related to the open access Orchard Sports Injury Classification System (OSICS versions 8, 9 and 10.1

    Directory of Open Access Journals (Sweden)

    John Orchard

    2010-10-01

    Full Text Available. John Orchard (Sports Medicine at Sydney University, Sydney, NSW, Australia); Katherine Rae (Sports Medicine at Sydney University, Sydney, NSW, Australia); John Brooks (Rugby Football Union, Twickenham, England, UK); Martin Hägglund (Department of Medical and Health Sciences, Linköping University, Linköping, Sweden); Lluis Til (FC Barcelona, Barcelona, Catalonia, Spain); David Wales (Arsenal FC, Highbury, England, UK); Tim Wood (Tennis Australia, Melbourne, Vic, Australia). Abstract: The Orchard Sports Injury Classification System (OSICS) is one of the world’s most commonly used systems for coding injury diagnoses in sports injury surveillance systems. Its major strengths are that it has wide usage, has codes specific to sports medicine and that it is free to use. Literature searches and stakeholder consultations were made to assess the uptake of OSICS and to develop new versions. OSICS was commonly used in the sports of football (soccer), Australian football, rugby union, cricket and tennis. It is referenced in international papers in three sports and used in four commercially available computerised injury management systems. Suggested injury categories for the major sports are presented. New versions OSICS 9 (three digit codes) and OSICS 10.1 (four digit codes) are presented. OSICS is a potentially helpful component of a comprehensive sports injury surveillance system, but many other components are required. Choices made in developing these components should ideally be agreed upon by groups of researchers in consensus statements. Keywords: sports injury classification, epidemiology, surveillance, coding

  13. Stratification and Prognostic Relevance of Jass’s Molecular Classification of Colorectal Cancer

    International Nuclear Information System (INIS)

    Zlobec, Inti; Bihl, Michel P.; Foerster, Anja; Rufle, Alex; Terracciano, Luigi; Lugli, Alessandro

    2012-01-01

    Background: The current proposed model of colorectal tumorigenesis is based primarily on CpG island methylator phenotype (CIMP), microsatellite instability (MSI), KRAS, BRAF, and methylation status of O-6-Methylguanine DNA Methyltransferase (MGMT), and classifies tumors into five subgroups. The aim of this study is to validate this molecular classification and test its prognostic relevance. Methods: Three hundred two patients were included in this study. Molecular analysis was performed for five CIMP-related promoters (CRABP1, MLH1, p16INK4a, CACNA1G, NEUROG1), MGMT, MSI, KRAS, and BRAF. Methylation in at least 4 promoters or in one to three promoters was considered CIMP-high and CIMP-low (CIMP-H/L), respectively. Results: CIMP-H, CIMP-L, and CIMP-negative were found in 7.1%, 43%, and 49.9% of cases, respectively. One hundred twenty-three tumors (41%) could not be classified into any one of the proposed molecular subgroups, including 107 CIMP-L, 14 CIMP-H, and two CIMP-negative cases. The 10-year survival rate for CIMP-high patients [22.6% (95%CI: 7–43)] was significantly lower than for CIMP-L or CIMP-negative (p = 0.0295). Only the combined analysis of BRAF and CIMP (negative versus L/H) led to distinct prognostic subgroups. Conclusion: Although CIMP status has an effect on outcome, our results underline the need for standardized definitions of low- and high-level CIMP, the absence of which clearly hinders an effective prognostic and molecular classification of colorectal cancer.

  14. Stratification and Prognostic Relevance of Jass’s Molecular Classification of Colorectal Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Zlobec, Inti [Institute of Pathology, University of Bern, Bern (Switzerland); Institute for Pathology, University Hospital Basel, Basel (Switzerland); Bihl, Michel P.; Foerster, Anja; Rufle, Alex; Terracciano, Luigi [Institute for Pathology, University Hospital Basel, Basel (Switzerland); Lugli, Alessandro, E-mail: inti.zlobec@pathology.unibe.ch [Institute of Pathology, University of Bern, Bern (Switzerland); Institute for Pathology, University Hospital Basel, Basel (Switzerland)

    2012-02-27

    Background: The current proposed model of colorectal tumorigenesis is based primarily on CpG island methylator phenotype (CIMP), microsatellite instability (MSI), KRAS, BRAF, and methylation status of O-6-Methylguanine DNA Methyltransferase (MGMT), and classifies tumors into five subgroups. The aim of this study is to validate this molecular classification and test its prognostic relevance. Methods: Three hundred two patients were included in this study. Molecular analysis was performed for five CIMP-related promoters (CRABP1, MLH1, p16INK4a, CACNA1G, NEUROG1), MGMT, MSI, KRAS, and BRAF. Methylation in at least 4 promoters or in one to three promoters was considered CIMP-high and CIMP-low (CIMP-H/L), respectively. Results: CIMP-H, CIMP-L, and CIMP-negative were found in 7.1%, 43%, and 49.9% of cases, respectively. One hundred twenty-three tumors (41%) could not be classified into any one of the proposed molecular subgroups, including 107 CIMP-L, 14 CIMP-H, and two CIMP-negative cases. The 10-year survival rate for CIMP-high patients [22.6% (95%CI: 7–43)] was significantly lower than for CIMP-L or CIMP-negative (p = 0.0295). Only the combined analysis of BRAF and CIMP (negative versus L/H) led to distinct prognostic subgroups. Conclusion: Although CIMP status has an effect on outcome, our results underline the need for standardized definitions of low- and high-level CIMP, the absence of which clearly hinders an effective prognostic and molecular classification of colorectal cancer.

  15. Stratification and prognostic relevance of Jass’s molecular classification of colorectal cancer

    Directory of Open Access Journals (Sweden)

    Inti Zlobec

    2012-02-01

    Full Text Available Background: The current proposed model of colorectal tumorigenesis is based primarily on CpG island methylator phenotype (CIMP), microsatellite instability (MSI), KRAS, BRAF, and methylation status of O-6-Methylguanine DNA Methyltransferase (MGMT), and classifies tumors into 5 subgroups. The aim of this study is to validate this molecular classification and test its prognostic relevance. Methods: 302 patients were included in this study. Molecular analysis was performed for 5 CIMP-related promoters (CRABP1, MLH1, p16INK4a, CACNA1G, NEUROG1), MGMT, MSI, KRAS and BRAF. Tumors were CIMP-high or CIMP-low if ≥4 or 1-3 promoters were methylated, respectively. Results: CIMP-high, CIMP-low and CIMP-negative were found in 7.1%, 43% and 49.9% of cases, respectively. 123 tumors (41%) could not be classified into any one of the proposed molecular subgroups, including 107 CIMP-low, 14 CIMP-high and 2 CIMP-negative cases. The 10-year survival rate for CIMP-high patients [22.6% (95%CI: 7-43)] was significantly lower than for CIMP-low or CIMP-negative (p=0.0295). Only the combined analysis of BRAF and CIMP (negative versus low/high) led to distinct prognostic subgroups. Conclusion: Although CIMP status has an effect on outcome, our results underline the need for standardized definitions of low- and high-level CIMP, the absence of which clearly hinders an effective prognostic and molecular classification of colorectal cancer.

  16. Should International Classification of Diseases codes be used to survey hospital-acquired pneumonia?

    Science.gov (United States)

    Wolfensberger, A; Meier, A H; Kuster, S P; Mehra, T; Meier, M-T; Sax, H

    2018-05-01

    As surveillance of hospital-acquired pneumonia (HAP) is very resource intensive, alternatives for HAP surveillance are needed urgently. This study compared HAP rates according to routine discharge diagnostic codes of the International Classification of Diseases, 10th Revision (ICD-10; ICD-HAP) with HAP rates according to the validated surveillance definitions of the Hospitals in Europe Link for Infection Control through Surveillance (HELICS/IPSE; HELICS-HAP) by manual retrospective re-evaluation of patient records. The positive predictive value of ICD-HAP for HELICS-HAP was 0.35, and sensitivity was 0.59. Therefore, the currently available ICD-10-based routine discharge data do not allow reliable identification of patients with HAP. Copyright © 2018 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  17. SWeRF--A method for estimating the relevant fine particle fraction in bulk materials for classification and labelling purposes.

    Science.gov (United States)

    Pensis, Ingeborg; Luetzenkirchen, Frank; Friede, Bernd

    2014-05-01

    In accordance with the European regulation for classification, labelling and packaging of substances and mixtures (CLP), as well as the criteria set out in the Globally Harmonized System (GHS), the fine fraction of crystalline silica (CS) has been classified for specific target organ toxicity, the specific organ in this case being the lung. Generic cut-off values for products containing a fine fraction of CS trigger the need for a method for the quantification of the fine fraction of CS in bulk materials. This article describes the so-called SWeRF method, the size-weighted relevant fine fraction. The SWeRF method combines the particle size distribution of a powder with probability factors from the EN 481 standard and allows the relevant fine fraction of a material to be calculated. The SWeRF method has been validated with a number of industrial minerals. This will enable manufacturers and blenders to apply the CLP and GHS criteria for the classification of mineral products containing a fine fraction of CS.
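
    A sketch of the computation under stated simplifications: the EN 481 respirable probability is approximated here by a complementary lognormal CDF (median 4.25 um, GSD 1.5), the inhalability term is ignored, and the particle size distribution arrives as (diameter, mass fraction) bins. The numbers are illustrative, not a validated implementation of the published method.

        import math

        def respirable_probability(d_um: float) -> float:
            """Approximate EN 481 respirable convention: complementary
            lognormal CDF with median 4.25 um and GSD 1.5 (simplified)."""
            z = math.log(d_um / 4.25) / math.log(1.5)
            return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

        def swerf(psd):
            """psd: iterable of (diameter_um, mass_fraction), fractions sum to 1."""
            return sum(frac * respirable_probability(d) for d, frac in psd)

        psd = [(1.0, 0.05), (3.0, 0.10), (8.0, 0.25), (20.0, 0.60)]  # illustrative
        print(f"SWeRF = {swerf(psd):.3f}")  # size-weighted relevant fine fraction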

  18. Coding update of the SMFM definition of low risk for cesarean delivery from ICD-9-CM to ICD-10-CM.

    Science.gov (United States)

    Armstrong, Joanne; McDermott, Patricia; Saade, George R; Srinivas, Sindhu K

    2017-07-01

    In 2015, the Society for Maternal-Fetal Medicine developed a definition of low risk for cesarean delivery based on administrative claims-based diagnosis codes described by the International Classification of Diseases, Ninth Revision, Clinical Modification. The Society for Maternal-Fetal Medicine definition is a clinical enrichment of 2 available measures, one from the Joint Commission and one from the Agency for Healthcare Research and Quality. The Society for Maternal-Fetal Medicine measure excludes diagnosis codes that represent clinically relevant risk factors that are absolute or relative contraindications to vaginal birth, while retaining diagnosis codes, such as labor disorders, that are discretionary risk factors for cesarean delivery. The introduction of the International Statistical Classification of Diseases, 10th Revision, Clinical Modification in October 2015 expanded the number of available diagnosis codes and enabled a greater depth and breadth of clinical description. These coding improvements further enhance the clinical validity of the Society for Maternal-Fetal Medicine definition and its potential utility in tracking progress toward the goal of safely lowering the US cesarean delivery rate. This report updates the Society for Maternal-Fetal Medicine definition of low risk for cesarean delivery using International Statistical Classification of Diseases, 10th Revision, Clinical Modification coding. Copyright © 2017. Published by Elsevier Inc.

  19. Use of a New International Classification of Health Interventions for Capturing Information on Health Interventions Relevant to People with Disabilities.

    Science.gov (United States)

    Fortune, Nicola; Madden, Richard; Almborg, Ann-Helene

    2018-01-17

    Development of the World Health Organization's International Classification of Health Interventions (ICHI) is currently underway. Once finalised, ICHI will provide a standard basis for collecting, aggregating, analysing, and comparing data on health interventions across all sectors of the health system. In this paper, we introduce the classification, describing its underlying tri-axial structure, organisation and content. We then discuss the potential value of ICHI for capturing information on met and unmet need for health interventions relevant to people with a disability, with a particular focus on interventions to support functioning and health promotion interventions. Early experiences of use of the Swedish National Classification of Social Care Interventions and Activities, which is based closely on ICHI, illustrate the value of a standard classification to support practice and collect statistical data. Testing of the ICHI beta version in a wide range of countries and contexts is now needed so that improvements can be made before it is finalised. Input from those with an interest in the health of people with disabilities and health promotion more broadly is welcomed.
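
    The tri-axial structure lends itself to a simple record type. The axis names follow ICHI; the example values are invented placeholders, not actual ICHI codes.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class ICHIIntervention:
            target: str   # entity on which the action is carried out
            action: str   # deed done by an actor to the target
            means: str    # processes and methods by which the action is done

        # Invented placeholder values for illustration:
        example = ICHIIntervention(
            target="mobility of joint functions",
            action="training",
            means="exercises",
        )
        print(example)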

  20. A New Coding System for Metabolic Disorders Demonstrates Gaps in the International Disease Classifications ICD-10 and SNOMED-CT, Which Can Be Barriers to Genotype-Phenotype Data Sharing

    NARCIS (Netherlands)

    Sollie, Annet; Sijmons, Rolf H.; Lindhout, Dick; van der Ploeg, Ans T.; Gozalbo, M. Estela Rubio; Smit, G. Peter A.; Verheijen, Frans; Waterham, Hans R.; van Weely, Sonja; Wijburg, Frits A.; Wijburg, Rudolph; Visser, Gepke

    Data sharing is essential for a better understanding of genetic disorders. Good phenotype coding plays a key role in this process. Unfortunately, the two most widely used coding systems in medicine, ICD-10 and SNOMED-CT, lack information necessary for the detailed classification and annotation of metabolic disorders.

  1. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Science.gov (United States)

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
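
    Error-correcting output codes are available off the shelf in scikit-learn, so the approach can be sketched compactly. The i2b2 corpus itself is access-controlled, so the five-way training snippets and labels below are placeholders.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OutputCodeClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Placeholder snippets standing in for discharge-summary hot spots:
        docs = ["smokes one pack per day", "quit smoking in 2001",
                "patient is a smoker", "denies tobacco use",
                "no mention of tobacco habits"]
        labels = ["current smoker", "past smoker", "smoker",
                  "non-smoker", "unknown"]

        clf = make_pipeline(
            TfidfVectorizer(),
            OutputCodeClassifier(LinearSVC(), code_size=4.0, random_state=0),
        )
        clf.fit(docs, labels)
        print(clf.predict(["patient stopped smoking years ago"]))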

  2. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.
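
    A simplified stand-in for class-conditioned codebooks: learn one sparse dictionary per class and assign test samples to the class whose codebook reconstructs them best. The manifold partitioning and margin-maximizing objective of the paper are not reproduced here.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning

        def fit_codebooks(X, y, n_atoms=32):
            """One dictionary (codebook) per class."""
            books = {}
            for c in np.unique(y):
                dl = MiniBatchDictionaryLearning(
                    n_components=n_atoms, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5, random_state=0)
                dl.fit(X[y == c])
                books[c] = dl
            return books

        def classify(books, X):
            """Assign each sample to the class with lowest reconstruction error."""
            classes = sorted(books)
            errs = []
            for c in classes:
                codes = books[c].transform(X)          # sparse codes
                recon = codes @ books[c].components_   # reconstruction
                errs.append(((X - recon) ** 2).sum(axis=1))
            return np.asarray(classes)[np.argmin(np.stack(errs), axis=0)]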

  3. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.

    2013-09-26

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding), learn codebooks and codes in an unsupervised manner and neglect class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds, that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate the sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutations identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. 2013 The Authors. All rights reserved.

  4. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled by the Nuclear Code Committee to exchange information on nuclear code developments among members of the committee. Enlarging the collection, the present edition includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of the code abstracts are the same as those used in the library. (auth.)

  5. [Differentiation of coding quality in orthopaedics by special, illustration-oriented case group analysis in the G-DRG System 2005].

    Science.gov (United States)

    Schütz, U; Reichel, H; Dreinhöfer, K

    2007-01-01

    We introduce a grouping system for clinical practice which allows the separation of DRG coding into specific orthopaedic groups based on anatomic regions, operative procedures, therapeutic interventions and morbidity-equivalent diagnosis groups. With this, a differentiated, goal-oriented analysis of internally mapped DRG data becomes possible. The group-specific difference in coding quality between the DRG groups following primary coding by the orthopaedic surgeon and final coding by medical controlling is analysed. In a consecutive series of 1600 patients, parallel documentation and group-specific comparison of the relevant DRG parameters were carried out in every case after primary and final coding. Analysing the group-specific share in the additional CaseMix coding, the group "spine surgery" dominated, closely followed by the groups "arthroplasty" and "surgery due to infection, tumours, diabetes". Altogether, additional cost-weight-relevant coding was necessary most frequently in the latter group (84%), followed by the group "spine surgery" (65%). In DRGs representing conservative orthopaedic treatment, documented procedures had almost no influence on the cost weight. The introduced system of case group analysis in internal DRG documentation can lead to the detection of specific problems in primary coding and cost-weight-relevant changes of the case mix. As an instrument for internal process control in the orthopaedic field, it can serve as a communicative interface between an economically oriented classification of the hospital performance and a specific problem solution of the medical staff involved in the department management.

  6. A novel risk classification system for 30-day mortality in children undergoing surgery

    Science.gov (United States)

    Walter, Arianne I.; Jones, Tamekia L.; Huang, Eunice Y.; Davis, Robert L.

    2018-01-01

    A simple, objective and accurate way of grouping children undergoing surgery into clinically relevant risk groups is needed. The purpose of this study is to develop and validate a preoperative risk classification system for postsurgical 30-day mortality for children undergoing a wide variety of operations. The National Surgical Quality Improvement Project-Pediatric participant use file data for calendar years 2012–2014 were analyzed to determine the preoperative variables most associated with death within 30 days of operation (D30). Risk groups were created using classification tree analysis based on these preoperative variables. The resulting risk groups were validated using 2015 data, and applied to neonates and higher-risk CPT codes to determine validity in high-risk subpopulations. A five-level risk classification was found to be most accurate. The preoperative need for ventilation, oxygen support, inotropic support, sepsis, the need for emergent surgery and a do-not-resuscitate order defined non-overlapping groups with observed rates of D30 that vary from 0.075% (Very Low Risk) to 38.6% (Very High Risk). When CPT codes where death was never observed are eliminated, or when the system is applied to neonates, the groupings remained predictive of death in an ordinal manner. PMID:29351327
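
    In the same spirit, a classification tree with five leaves yields five non-overlapping risk groups; the sketch below uses synthetic data and illustrative variable names, not the NSQIP-Pediatric fields.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.integers(0, 2, n),   # preoperative ventilation
            rng.integers(0, 2, n),   # oxygen support
            rng.integers(0, 2, n),   # inotropic support
            rng.integers(0, 2, n),   # sepsis
            rng.integers(0, 2, n),   # emergent surgery
            rng.integers(0, 2, n),   # do-not-resuscitate order
        ])
        p = 0.001 + 0.08 * X.sum(axis=1)   # synthetic mortality risk
        died = rng.random(n) < p

        tree = DecisionTreeClassifier(max_leaf_nodes=5, min_samples_leaf=50)
        tree.fit(X, died)                  # 5 leaves ~ 5 risk groups
        print(export_text(tree, feature_names=[
            "ventilation", "oxygen", "inotropes", "sepsis", "emergent", "DNR"]))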

  7. Statistical analysis of coding for molecular properties in the olfactory bulb

    Directory of Open Access Journals (Sweden)

    Benjamin eAuffarth

    2011-07-01

    Full Text Available The relationship between molecular properties of odorants and neural activities is arguably one of the most important issues in olfaction, and the rules governing this relationship are still not clear. In the olfactory bulb (OB), glomeruli relay olfactory information to second-order neurons, which in turn project to cortical areas. We investigate the relevance of odorant properties, the spatial localization of glomerular coding sites, and the size of coding zones in a dataset of 2-deoxyglucose images of glomeruli over the entire OB of the rat. We relate molecular properties to activation of glomeruli in the OB using a nonparametric statistical test and a support-vector machine classification study. Our method permits a systematic mapping of the topographic representation of various classes of odorants in the OB. Our results suggest many localized coding sites for particular molecular properties and some molecular properties that could form the basis for a spatial map of olfactory information. We found that alkynes, alkanes, alkenes, and amines affect activation maps very strongly as compared to other properties, and that amines, sulfur-containing compounds, and alkynes have small zones and high relevance to activation changes, while aromatics, alkanes, and carboxylic acids recruit very large zones in the dataset. The results suggest a local spatial encoding for molecular properties.

  8. KWIC Index of nuclear codes (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-01-01

    It is a KWIC Index for 254 nuclear codes in the Nuclear Code Abstracts (1975 edition). The classification of nuclear codes and the form of index are the same as those in the Computer Programme Library at Ispra, Italy. (auth.)

  9. Causes of death and associated conditions (Codac): a utilitarian approach to the classification of perinatal deaths.

    Science.gov (United States)

    Frøen, J Frederik; Pinar, Halit; Flenady, Vicki; Bahrin, Safiah; Charles, Adrian; Chauke, Lawrence; Day, Katie; Duke, Charles W; Facchinetti, Fabio; Fretts, Ruth C; Gardener, Glenn; Gilshenan, Kristen; Gordijn, Sanne J; Gordon, Adrienne; Guyon, Grace; Harrison, Catherine; Koshy, Rachel; Pattinson, Robert C; Petersson, Karin; Russell, Laurie; Saastad, Eli; Smith, Gordon C S; Torabi, Rozbeh

    2009-06-10

    A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction of conditions relevant only to the neonatal period, and the remaining conditions arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death as well as two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions, so that scenarios of combined conditions and events are captured. The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions associated with them, and the most

  10. Using Administrative Mental Health Indicators in Heart Failure Outcomes Research: Comparison of Clinical Records and International Classification of Disease Coding.

    Science.gov (United States)

    Bender, Miriam; Smith, Tyler C

    2016-01-01

    Use of mental health indicators in health outcomes research is of growing interest to researchers. This study, as part of a larger research program, quantified agreement between administrative International Classification of Diseases (ICD-9) coding for, and "gold standard" clinician documentation of, mental health issues (MHIs) in hospitalized heart failure (HF) patients, to determine the validity of mental health administrative data for use in HF outcomes research. A 13% random sample (n = 504) was selected from all unique patients (n = 3,769) hospitalized with a primary HF diagnosis at 4 San Diego County community hospitals during 2009-2012. MHI was defined as ICD-9 discharge diagnostic coding 290-319. Records were audited for clinician documentation of MHI. A total of 43% (n = 216) had mental health clinician documentation; 33% (n = 164) had ICD-9 coding for MHI. ICD-9 code bundle 290-319 had 0.70 sensitivity, 0.97 specificity, and kappa 0.69 (95% confidence interval 0.61-0.79). More specific ICD-9 MHI code bundles had kappas ranging from 0.44 to 0.82 and sensitivities ranging from 42% to 82%. Agreement between ICD-9 coding and clinician documentation for a broadly defined MHI is substantial, and can validly "rule in" MHI for hospitalized patients with heart failure. More specific MHI code bundles had fair to almost perfect agreement, with a wide range of sensitivities for identifying patients with an MHI. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Using the International Classification of Functioning, Disability, and Health to identify outcome domains for a core outcome set for aphasia: a comparison of stakeholder perspectives.

    Science.gov (United States)

    Wallace, Sarah J; Worrall, Linda; Rose, Tanya; Le Dorze, Guylaine

    2017-11-12

    This study synthesised the findings of three separate consensus processes exploring the perspectives of key stakeholder groups about important aphasia treatment outcomes. This process was conducted to generate recommendations for outcome domains to be included in a core outcome set for aphasia treatment trials. International Classification of Functioning, Disability, and Health codes were examined to identify where the groups of: (1) people with aphasia, (2) family members, (3) aphasia researchers, and (4) aphasia clinicians/managers, demonstrated congruence in their perspectives regarding important treatment outcomes. Codes were contextualized using qualitative data. Congruence across three or more stakeholder groups was evident for ICF chapters: Mental functions; Communication; and Services, systems, and policies. Quality of life was explicitly identified by clinicians/managers and researchers, while people with aphasia and their families identified outcomes known to be determinants of quality of life. Core aphasia outcomes include: language, emotional wellbeing, communication, patient-reported satisfaction with treatment and impact of treatment, and quality of life. International Classification of Functioning, Disability, and Health coding can be used to compare stakeholder perspectives and identify domains for core outcome sets. Pairing coding with qualitative data may ensure important nuances of meaning are retained. Implications for rehabilitation: The outcomes measured in treatment research should be relevant to stakeholders and support health care decision making. Core outcome sets (agreed, minimum set of outcomes, and outcome measures) are increasingly being used to ensure the relevancy and consistency of the outcomes measured in treatment studies. Important aphasia treatment outcomes span all components of the International Classification of Functioning, Disability, and Health. Stakeholders demonstrated congruence in the identification of important

  12. Revision, uptake and coding issues related to the open access Orchard Sports Injury Classification System (OSICS) versions 8, 9 and 10.1

    Science.gov (United States)

    Orchard, John; Rae, Katherine; Brooks, John; Hägglund, Martin; Til, Lluis; Wales, David; Wood, Tim

    2010-01-01

    The Orchard Sports Injury Classification System (OSICS) is one of the world’s most commonly used systems for coding injury diagnoses in sports injury surveillance systems. Its major strengths are that it has wide usage, has codes specific to sports medicine and that it is free to use. Literature searches and stakeholder consultations were made to assess the uptake of OSICS and to develop new versions. OSICS was commonly used in the sports of football (soccer), Australian football, rugby union, cricket and tennis. It is referenced in international papers in three sports and used in four commercially available computerised injury management systems. Suggested injury categories for the major sports are presented. New versions OSICS 9 (three digit codes) and OSICS 10.1 (four digit codes) are presented. OSICS is a potentially helpful component of a comprehensive sports injury surveillance system, but many other components are required. Choices made in developing these components should ideally be agreed upon by groups of researchers in consensus statements. PMID:24198559

  13. The accuracy of International Classification of Diseases coding for dental problems not associated with trauma in a hospital emergency department.

    Science.gov (United States)

    Figueiredo, Rafael L F; Singhal, Sonica; Dempster, Laura; Hwang, Stephen W; Quinonez, Carlos

    2015-01-01

    Emergency department (ED) visits for nontraumatic dental conditions (NTDCs) may be a sign of unmet need for dental care. The objective of this study was to determine the accuracy of the International Classification of Diseases codes (ICD-10-CA) for ED visits for NTDC. ED visits in 2008-2009 at one hospital in Toronto were identified if the discharge diagnosis in the administrative database system was an ICD-10-CA code for a NTDC (K00-K14). A random sample of 100 visits was selected, and the medical records for these visits were reviewed by a dentist. The description of the clinical signs and symptoms was evaluated, and a diagnosis was assigned. This diagnosis was compared with the diagnosis assigned by the physician and the code assigned to the visit. The 100 ED visits reviewed were associated with 16 different ICD-10-CA codes for NTDC. Only 2 percent of these visits were clearly caused by trauma. The code K0887 (toothache) was the most frequent diagnostic code (31 percent). We found 43.3 percent disagreement on the discharge diagnosis reported by the physician, and 58.0 percent disagreement on the code in the administrative database assigned by the abstractor, compared with what was suggested by the dentist reviewing the chart. There are substantial discrepancies between the ICD-10-CA diagnosis assigned in administrative databases and the diagnosis assigned by a dentist reviewing the chart retrospectively. However, ICD-10-CA codes can be used to accurately identify ED visits for NTDC. © 2015 American Association of Public Health Dentistry.

  14. Identifying Adverse Events Using International Classification of Diseases, Tenth Revision Y Codes in Korea: A Cross-sectional Study

    Directory of Open Access Journals (Sweden)

    Minsu Ock

    2018-01-01

    Full Text Available Objectives: The use of administrative data is an affordable alternative to conducting a difficult large-scale medical-record review to estimate the scale of adverse events. We identified adverse events from 2002 to 2013 on the national level in Korea, using International Classification of Diseases, tenth revision (ICD-10) Y codes. Methods: We used data from the National Health Insurance Service-National Sample Cohort (NHIS-NSC). We relied on medical treatment databases to extract information on ICD-10 Y codes from each participant in the NHIS-NSC. We classified adverse events in the ICD-10 Y codes into 6 types: those related to drugs, transfusions, and fluids; those related to vaccines and immunoglobulin; those related to surgery and procedures; those related to infections; those related to devices; and others. Results: Over 12 years, a total of 20 817 adverse events were identified using ICD-10 Y codes, and the estimated total adverse event rate was 0.20%. Between 2002 and 2013, the total number of such events increased by 131.3%, from 1366 in 2002 to 3159 in 2013. The total rate increased by 103.9%, from 0.17% in 2002 to 0.35% in 2013. Events related to drugs, transfusions, and fluids were the most common (19 446; 93.4%), followed by those related to surgery and procedures (1209; 5.8%) and those related to vaccines and immunoglobulin (72; 0.3%). Conclusions: Based on a comparison with the results of other studies, the total adverse event rate in this study was significantly underestimated. Improving coding practices for ICD-10 Y codes is necessary to precisely monitor the scale of adverse events in Korea.
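
    Extracting Y-coded events from a claims table is mechanically simple; the sketch below uses invented column names and simplified code-range buckets, not the study's exact grouping rules.

        import pandas as pd

        # Invented minimal claims table with ICD-10 external-cause codes:
        claims = pd.DataFrame({
            "patient_id": [1, 2, 3],
            "dx_code": ["Y40.5", "Y65.0", "Y83.1"],
        })

        def event_type(code: str) -> str:
            """Coarse buckets; boundaries simplified for illustration."""
            if code.startswith(("Y4", "Y5")):
                return "drugs, transfusions, and fluids"
            if code.startswith("Y6"):
                return "misadventures during care"
            if code.startswith(("Y7", "Y80", "Y81", "Y82")):
                return "devices"
            if code.startswith(("Y83", "Y84")):
                return "surgery and procedures"
            return "other"

        y_events = claims[claims["dx_code"].str.startswith("Y")].copy()
        y_events["type"] = y_events["dx_code"].map(event_type)
        print(y_events["type"].value_counts())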

  15. Classification, disease, and diagnosis.

    Science.gov (United States)

    Jutel, Annemarie

    2011-01-01

    Classification shapes medicine and guides its practice. Understanding classification must be part of the quest to better understand the social context and implications of diagnosis. Classifications are part of the human work that provides a foundation for the recognition and study of illness: deciding how the vast expanse of nature can be partitioned into meaningful chunks, stabilizing and structuring what is otherwise disordered. This article explores the aims of classification, their embodiment in medical diagnosis, and the historical traditions of medical classification. It provides a brief overview of the aims and principles of classification and their relevance to contemporary medicine. It also demonstrates how classifications operate as social framing devices that enable and disable communication, assert and refute authority, and are important items for sociological study.

  16. HCPB TBM thermo-mechanical design: Assessment with respect to codes and standards and DEMO relevancy

    International Nuclear Information System (INIS)

    Cismondi, F.; Kecskes, S.; Aiello, G.

    2011-01-01

    In the frame of the activities of the European TBM Consortium of Associates, the Helium Cooled Pebble Bed Test Blanket Module (HCPB-TBM) is developed at the Karlsruhe Institute of Technology (KIT). After performing detailed thermal and fluid-dynamic analyses of the preliminary HCPB TBM design, the thermo-mechanical behaviour of the TBM under typical ITER loads has to be assessed. A synthesis of the different design options proposed has been realized by building two different assemblies of the HCPB-TBM: these two assemblies and the analyses performed on them are presented in this paper. Finite element thermo-mechanical analyses of two detailed 1/4-scale models of the proposed HCPB-TBM assemblies have been performed, with the aim of verifying the accordance of the mechanical behaviour with the criteria of the design codes and standards. The structural design limits specified in the codes and standards are discussed in relation to the available EUROFER data and possible damage modes. Solutions to improve the weak structural points of the present design are identified, and the DEMO relevancy of the present thermal and structural design parameters is discussed.

  17. Causes of death and associated conditions (Codac – a utilitarian approach to the classification of perinatal deaths

    Directory of Open Access Journals (Sweden)

    Harrison Catherine

    2009-06-01

    Full Text Available A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction of conditions relevant only to the neonatal period, and the remaining conditions arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death as well as two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions, so that scenarios of combined conditions and events are captured. The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions

  18. Causes of death and associated conditions (Codac) – a utilitarian approach to the classification of perinatal deaths

    Science.gov (United States)

    Frøen, J Frederik; Pinar, Halit; Flenady, Vicki; Bahrin, Safiah; Charles, Adrian; Chauke, Lawrence; Day, Katie; Duke, Charles W; Facchinetti, Fabio; Fretts, Ruth C; Gardener, Glenn; Gilshenan, Kristen; Gordijn, Sanne J; Gordon, Adrienne; Guyon, Grace; Harrison, Catherine; Koshy, Rachel; Pattinson, Robert C; Petersson, Karin; Russell, Laurie; Saastad, Eli; Smith, Gordon CS; Torabi, Rozbeh

    2009-01-01

    A carefully classified dataset of perinatal mortality will retain the most significant information on the causes of death. Such information is needed for health care policy development, surveillance and international comparisons, clinical services and research. For comparability purposes, we propose a classification system that could serve all these needs, and be applicable in both developing and developed countries. It is developed to adhere to basic concepts of underlying cause in the International Classification of Diseases (ICD), although gaps in ICD prevent classification of perinatal deaths solely on existing ICD codes. We tested the Causes of Death and Associated Conditions (Codac) classification for perinatal deaths in seven populations, including two developing country settings. We identified areas of potential improvements in the ability to retain existing information, ease of use and inter-rater agreement. After revisions to address these issues we propose Version II of Codac with detailed coding instructions. The ten main categories of Codac consist of three key contributors to global perinatal mortality (intrapartum events, infections and congenital anomalies), two crucial aspects of perinatal mortality (unknown causes of death and termination of pregnancy), a clear distinction of conditions relevant only to the neonatal period and the remaining conditions are arranged in the four anatomical compartments (fetal, cord, placental and maternal). For more detail there are 94 subcategories, further specified in 577 categories in the full version. Codac is designed to accommodate both the main cause of death as well as two associated conditions. We suggest reporting not only the main cause of death, but also the associated relevant conditions so that scenarios of combined conditions and events are captured. The appropriately applied Codac system promises to better manage information on causes of perinatal deaths, the conditions associated with them, and the

  19. A Crosswalk of Mineral Commodity End Uses and North American Industry Classification System (NAICS) codes

    Science.gov (United States)

    Barry, James J.; Matos, Grecia R.; Menzie, W. David

    2015-09-14

    This crosswalk is based on the premise that there is a connection between the way mineral commodities are used and how this use is reflected in the economy. Raw mineral commodities are the basic materials from which goods, finished products, or intermediate materials are manufactured or made. Mineral commodities are vital to the development of the U.S. economy and they impact nearly every industrial segment of the economy, representing 12.2 percent of the U.S. gross domestic product (GDP) in 2010 (U.S. Bureau of Economic Analysis, 2014). In an effort to better understand the distribution of mineral commodities in the economy, the U.S. Geological Survey (USGS) attempts to link the end uses of mineral commodities to the corresponding North American Industry Classification System (NAICS) codes.

  20. Benchmark of coupling codes (ALOHA, TOPLHA and GRILL3D) with ITER-relevant Lower Hybrid antenna

    International Nuclear Information System (INIS)

    Milanesio, D.; Hillairet, J.; Panaccione, L.; Maggiora, R.; Artaud, J.F.; Bae, Y.S.; Barbera, A.M.A.; Belo, J.; Berger-By, G.; Bernard, J.M.; Cara, Ph.; Cardinali, A.; Castaldo, C.; Ceccuzzi, S.; Cesario, R.; Decker, J.; Delpech, L.; Ekedahl, A.; Garcia, J.; Garibaldi, P.

    2011-01-01

    In order to assist the design of the future ITER Lower Hybrid launcher, the coupling codes ALOHA (from CEA/IRFM), TOPLHA (from Politecnico di Torino) and GRILL3D (developed by Dr. Mikhail Irzak, A.F. Ioffe Physico-Technical Institute, St. Petersburg, Russia, and operated by ENEA Frascati) have been compared using the updated (six modules with four active waveguides per module) Passive-Active Multi-junction (PAM) Lower Hybrid antenna. Both ALOHA and GRILL3D formulate the problem in terms of rectangular-waveguide modes, while TOPLHA is based on a boundary-value problem with the adoption of a triangular cell mesh to represent the relevant waveguide surfaces. Several plasma profiles, with varying edge density and density increase, have been adopted to provide a complete description of the simulated launcher in terms of the reflection coefficient, computed at the beginning of each LH module, and of the power spectra. Good agreement is observed among the codes for all the simulated profiles.

  1. Comparison of Danish dichotomous and BI-RADS classifications of mammographic density.

    Science.gov (United States)

    Hodge, Rebecca; Hellmann, Sophie Sell; von Euler-Chelpin, My; Vejborg, Ilse; Andersen, Zorana Jovanovic

    2014-06-01

    In the Copenhagen mammography screening program from 1991 to 2001, mammographic density was classified either as fatty or mixed/dense. This dichotomous mammographic density classification system is unique internationally and has not been validated before. The aim was to compare the Danish dichotomous mammographic density classification used from 1991 to 2001 with the BI-RADS density classification, in an attempt to validate the Danish system. The study sample consisted of 120 mammograms taken in Copenhagen in 1991-2001 that tested false positive and that were re-assessed in 2012 and classified according to the BI-RADS classification system. We calculated inter-rater agreement between the Danish dichotomous classification (fatty or mixed/dense) and the four-level BI-RADS classification using the linearly weighted kappa statistic. Of the 120 women, 32 (26.7%) were classified as having fatty and 88 (73.3%) as having mixed/dense mammographic density according to the Danish dichotomous classification. According to the BI-RADS density classification, 12 (10.0%) women were classified as having predominantly fatty (BI-RADS code 1), 46 (38.3%) as having scattered fibroglandular (BI-RADS code 2), 57 (47.5%) as having heterogeneously dense (BI-RADS code 3), and five (4.2%) as having extremely dense (BI-RADS code 4) mammographic density. The weighted kappa statistic showed substantial inter-rater agreement (0.75). The dichotomous mammographic density classification system used in the early years of Copenhagen's mammographic screening program (1991-2001) agreed well with the BI-RADS density classification system.
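
    The agreement statistic reported above is a linearly weighted Cohen's kappa. A minimal sketch of that computation follows, using made-up ratings rather than the study data; placing the dichotomous grades on the ordinal BI-RADS scale for weighting is an assumption made for illustration.

    ```python
    # Linearly weighted Cohen's kappa between two ordinal ratings.
    # The ratings below are toy values, not the study data.
    from sklearn.metrics import cohen_kappa_score

    # Danish scale mapped onto ordinal grades (1 = fatty, 3 = mixed/dense);
    # BI-RADS density grades run 1-4. This mapping is an illustrative assumption.
    danish  = [1, 1, 3, 3, 3, 3, 1, 3, 3, 3]
    bi_rads = [1, 2, 2, 3, 3, 4, 1, 3, 2, 3]

    kappa = cohen_kappa_score(danish, bi_rads, weights="linear")
    print(f"linear weighted kappa = {kappa:.2f}")
    ```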

  2. Tumor taxonomy for the developmental lineage classification of neoplasms

    International Nuclear Information System (INIS)

    Berman, Jules J

    2004-01-01

    The new 'Developmental lineage classification of neoplasms' was described in a prior publication. The classification is simple (the entire hierarchy is described with just 39 classifiers), comprehensive (providing a place for every tumor of man), and consistent with recent attempts to characterize tumors by cytogenetic and molecular features. A taxonomy is a list of the instances that populate a classification. The taxonomy of neoplasia attempts to list every known term for every known tumor of man. The taxonomy provides each concept with a unique code and groups synonymous terms under the same concept. A Perl script validated successive drafts of the taxonomy, ensuring that: 1) each term occurs only once in the taxonomy; 2) each term occurs in only one tumor class; 3) each concept code occurs in one and only one hierarchical position in the classification; and 4) the file containing the classification and taxonomy is a well-formed XML (eXtensible Markup Language) document. The taxonomy currently contains 122,632 different terms encompassing 5,376 neoplasm concepts. Each concept has, on average, 23 synonyms. The taxonomy populates 'The developmental lineage classification of neoplasms' and is available as an XML file, currently 9+ megabytes in length. A representation of the classification/taxonomy listing each term followed by its code, followed by its full ancestry, is available as a flat file, 19+ megabytes in length. The taxonomy is the largest nomenclature of neoplasms, with more than twice the number of neoplasm names found in other medical nomenclatures, including the 2004 version of the Unified Medical Language System, the Systematized Nomenclature of Medicine Clinical Terminology, the National Cancer Institute's Thesaurus, and the International Classification of Diseases for Oncology. This manuscript describes a comprehensive taxonomy of neoplasia that collects synonymous terms under a unique code number and assigns each
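
    The four validation checks attributed to the authors' Perl script are easy to restate in code. The sketch below does so in Python against a hypothetical taxonomy XML layout (the file name, class/concept/term nesting and attribute names are assumptions, not the published file format).

    ```python
    # Sketch of the four validation checks described in the abstract, run
    # against a hypothetical file of the form
    # <class name="..."><concept code="..."><term>...</term></concept></class>.
    import xml.etree.ElementTree as ET
    from collections import Counter

    tree = ET.parse("neoplasm_taxonomy.xml")   # check 4: parse fails if not well-formed XML

    terms, term_class, codes = Counter(), {}, Counter()
    for cls in tree.getroot().iter("class"):
        for concept in cls.iter("concept"):
            codes[concept.get("code")] += 1    # check 3: one hierarchical position per code
            for term in concept.iter("term"):
                terms[term.text] += 1          # check 1: each term occurs only once
                term_class.setdefault(term.text, set()).add(cls.get("name"))

    dup_terms   = [t for t, n in terms.items() if n > 1]
    multi_class = [t for t, c in term_class.items() if len(c) > 1]  # check 2
    dup_codes   = [c for c, n in codes.items() if n > 1]
    print(dup_terms, multi_class, dup_codes)   # all three lists should be empty
    ```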

  3. A full computation-relevant topological dynamics classification of elementary cellular automata.

    Science.gov (United States)

    Schüle, Martin; Stoop, Ruedi

    2012-12-01

    Cellular automata are both computational and dynamical systems. We give a complete classification of the dynamic behaviour of elementary cellular automata (ECA) in terms of fundamental dynamical systems notions such as sensitivity and chaoticity. The "complex" ECA turn out to be sensitive, but neither chaotic nor eventually weakly periodic. Based on this classification, we conjecture that elementary cellular automata capable of carrying out complex computations, such as those needed for Turing-universality, are at the "edge of chaos."
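
    For readers unfamiliar with ECA, a one-line update rule is all there is: each cell's next state is read from an 8-entry table indexed by its 3-cell neighborhood. A minimal sketch follows; rule 110 is chosen here because it is the canonical Turing-universal rule the abstract alludes to.

    ```python
    # Minimal elementary cellular automaton stepper on a ring of cells.
    import numpy as np

    def eca_step(state: np.ndarray, rule: int) -> np.ndarray:
        """Apply one synchronous update of an ECA rule (Wolfram numbering)."""
        left, right = np.roll(state, 1), np.roll(state, -1)
        neighborhood = 4 * left + 2 * state + right   # values 0..7
        table = (rule >> np.arange(8)) & 1            # bit k = output for neighborhood k
        return table[neighborhood]

    state = np.zeros(64, dtype=int)
    state[32] = 1                                     # single seed cell
    for _ in range(5):
        state = eca_step(state, 110)
    print("".join("#" if c else "." for c in state))
    ```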

  4. Relevant test set using feature selection algorithm for early detection ...

    African Journals Online (AJOL)

    The objective of feature selection is to find the most relevant features for classification. The dimensionality of the information is thus reduced, which may improve classification accuracy. This paper proposes a minimum set of relevant questions that can be used for early detection of dyslexia. In this research, we ...

  5. Classification and prediction of river network ephemerality and its relevance for waterborne disease epidemiology

    Science.gov (United States)

    Perez-Saez, Javier; Mande, Theophile; Larsen, Joshua; Ceperley, Natalie; Rinaldo, Andrea

    2017-12-01

    The transmission of waterborne diseases hinges on the interactions between the hydrology and ecology of hosts, vectors and parasites, with the long-term absence of water constituting a strict lower bound. However, the link between spatio-temporal patterns of hydrological ephemerality and waterborne disease transmission is poorly understood and difficult to account for. The use of limited biophysical and hydroclimate information from otherwise data-scarce regions is therefore needed to characterize, classify, and predict river network ephemerality in a spatially explicit framework. Here, we develop a novel large-scale ephemerality classification and prediction methodology based on monthly discharge data, water and energy availability, and remote-sensing measures of vegetation, that is relevant to epidemiology and maintains a mechanistic link to catchment hydrologic processes. Specifically, with reference to the context of Burkina Faso in sub-Saharan Africa, we extract a relevant set of catchment covariates that include the aridity index, annual runoff estimation using the Budyko framework, and hysteretic relations between precipitation and vegetation. Five ephemerality classes, from permanent to strongly ephemeral, are defined from the duration of zero-flow periods, which also accounts for the sensitivity of river discharge to the long-lasting drought of the 1970s-80s in West Africa. Using these classes, a gradient-boosted tree-based prediction yielded three distinct geographic regions of ephemerality. Importantly, we observe a strong epidemiological association between our predictions of hydrologic ephemerality and the known spatial patterns of schistosomiasis, an endemic parasitic waterborne disease in which infection occurs with human-water contact and which requires aquatic snails as an intermediate host. The general nature of our approach and its relevance for predicting the hydrologic controls on schistosomiasis occurrence provides a pathway for the explicit inclusion of
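
    The prediction step described above reduces, in the simplest reading, to multi-class supervised learning over catchment covariates. A hedged sketch follows, with synthetic data standing in for the Burkina Faso covariates and the five ephemerality classes.

    ```python
    # Gradient-boosted classification of ephemerality classes from catchment
    # covariates. Feature columns are illustrative stand-ins for the aridity
    # index, Budyko runoff and vegetation-hysteresis metrics; data are synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.random((300, 3))        # aridity, runoff, hysteresis (made up)
    y = rng.integers(0, 5, 300)     # classes: permanent .. strongly ephemeral

    model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    print(cross_val_score(model, X, y, cv=5).mean())
    ```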

  6. Classification Using Markov Blanket for Feature Selection

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Luo, Jian

    2009-01-01

    Selecting relevant features is in demand when a large data set is of interest in a classification task. It produces a tractable number of features that are sufficient and possibly improve the classification performance. This paper studies a statistical method, the Markov blanket induction algorithm, for filtering features, and then applies a classifier using the Markov blanket predictors. The Markov blanket contains a minimal subset of relevant features that yields optimal classification performance. We experimentally demonstrate the improved performance of several classifiers using Markov blanket induction as a feature selection method. In addition, we point out an important assumption behind the Markov blanket induction algorithm and show its effect on the classification performance.

  7. Classification of quantum phases and topology of logical operators in an exactly solved model of quantum codes

    International Nuclear Information System (INIS)

    Yoshida, Beni

    2011-01-01

    Searches for possible new quantum phases and classifications of quantum phases have been central problems in physics. Yet, they are indeed challenging problems due to the computational difficulties in analyzing quantum many-body systems and the lack of a general framework for classifications. While frustration-free Hamiltonians, which appear as fixed-point Hamiltonians of renormalization group transformations, may serve as representatives of quantum phases, it is still difficult to analyze and classify quantum phases of arbitrary frustration-free Hamiltonians exhaustively. Here, we address these problems by sharpening our considerations to a certain subclass of frustration-free Hamiltonians, called stabilizer Hamiltonians, which have been actively studied in quantum information science. We propose a model of frustration-free Hamiltonians which covers a large class of physically realistic stabilizer Hamiltonians, constrained by only three physical conditions: the locality of interaction terms, translation symmetries, and scale symmetries, meaning that the number of ground states does not grow with the system size. We show that quantum phases arising in two-dimensional models can be classified exactly through certain quantum coding theoretical operators, called logical operators, by proving that two models with topologically distinct shapes of logical operators are always separated by quantum phase transitions.

  8. The structure of dual Grassmann codes

    DEFF Research Database (Denmark)

    Beelen, Peter; Pinero, Fernando

    2016-01-01

    In this article we study the duals of Grassmann codes, certain codes coming from the Grassmannian variety. Exploiting their structure, we are able to count and classify all their minimum weight codewords. In this classification the lines lying on the Grassmannian variety play a central role. Rela...

  9. Remote-Handled Transuranic Content Codes

    International Nuclear Information System (INIS)

    2001-01-01

    The Remote-Handled Transuranic (RH-TRU) Content Codes (RH-TRUCON) document represents the development of a uniform content code system for RH-TRU waste to be transported in the 72-B cask. It will be used to convert existing waste form numbers, content codes, and site-specific identification codes into a system that is uniform across the U.S. Department of Energy (DOE) sites. The existing waste codes at the sites can be grouped under uniform content codes without any loss of waste characterization information. The RH-TRUCON document provides an all-encompassing description for each content code and compiles this information for all DOE sites. Compliance with waste generation, processing, and certification procedures at the sites (outlined in this document for each content code) ensures that prohibited waste forms are not present in the waste. The content code gives an overall description of the RH-TRU waste material in terms of processes and packaging, as well as the generation location. This helps to provide cradle-to-grave traceability of the waste material so that the various actions required to assess its qualification as payload for the 72-B cask can be performed. The content codes also impose restrictions and requirements on the manner in which a payload can be assembled. The RH-TRU Waste Authorized Methods for Payload Control (RH-TRAMPAC), Appendix 1.3.7 of the 72-B Cask Safety Analysis Report (SAR), describes the current governing procedures applicable for the qualification of waste as payload for the 72-B cask. The logic for this classification is presented in the 72-B Cask SAR. Together, these documents (RH-TRUCON, RH-TRAMPAC, and relevant sections of the 72-B Cask SAR) present the foundation and justification for classifying RH-TRU waste into content codes. Only content codes described in this document can be considered for transport in the 72-B cask. Revisions to this document will be made as additional waste qualifies for transport. Each content code uniquely

  10. System for selecting relevant information for decision support.

    Science.gov (United States)

    Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule that is guaranteed to be suitable also for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional set containing both clinical and gene expression data.

  11. Multi-information fusion sparse coding with preserving local structure for hyperspectral image classification

    Science.gov (United States)

    Wei, Xiaohui; Zhu, Wen; Liao, Bo; Gu, Changlong; Li, Weibiao

    2017-10-01

    The key question in sparse coding (SC) is how to exploit existing information to acquire robust sparse representations (SRs) that distinguish different objects for hyperspectral image (HSI) classification. We propose a multi-information fusion SC framework, which fuses spectral, spatial, and label information at the same level, to address this question. In particular, pixels from disjoint spatial clusters, obtained by cutting the given HSI in space, are individually and sparsely encoded. Then, owing to the importance of spatial structure, graph- and hypergraph-based regularizers are enforced to promote smoothness of the obtained representations and to preserve the local consistency of each spatial cluster. The latter simultaneously considers the spectral, spatial, and label information of multiple pixels that have a high probability of sharing the same label. Finally, a linear support vector machine is selected as the final classifier, with the learned SRs as input. Experiments conducted on three frequently used real HSIs show that our methods achieve satisfactory results compared with other state-of-the-art methods.
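
    As a rough illustration of the final pipeline stage (sparse codes fed to a linear SVM), the sketch below uses plain dictionary learning and OMP encoding from scikit-learn on synthetic data; it omits the authors' graph and hypergraph regularizers and fusion model.

    ```python
    # Sparse-code-then-classify baseline on synthetic "pixels".
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.random((200, 50))      # 200 pixels x 50 spectral bands (synthetic)
    y = rng.integers(0, 4, 200)    # 4 land-cover classes (synthetic)

    dico = MiniBatchDictionaryLearning(n_components=32, random_state=0).fit(X)
    codes = sparse_encode(X, dico.components_, algorithm="omp",
                          n_nonzero_coefs=5)   # sparse representations

    clf = LinearSVC().fit(codes, y)
    print(clf.score(codes, y))
    ```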

  12. Cracking the code of oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Philippe G Schyns

    2011-05-01

    Full Text Available Neural oscillations are ubiquitous measurements of cognitive processes and dynamic routing and gating of information. The fundamental and so far unresolved problem for neuroscience remains to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information theory analysis to quantify how the three parameters of oscillations (i.e., power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points: First, we demonstrate that phase codes considerably more information (2.4 times) relating to the cognitive task than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential role of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
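
    The quantities compared above, information carried by phase versus power about a stimulus feature, can be estimated with off-the-shelf tools. A toy sketch follows: white noise stands in for band-limited EEG, and a simple binned plug-in estimator replaces the authors' information-theoretic machinery.

    ```python
    # Binned mutual information between a binary stimulus feature and the
    # instantaneous phase/power of a signal at one time point. All data are
    # synthetic placeholders.
    import numpy as np
    from scipy.signal import hilbert
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    signal = rng.standard_normal((200, 512))   # 200 trials x 512 samples
    stimulus = rng.integers(0, 2, 200)         # binary stimulus feature per trial

    analytic = hilbert(signal, axis=1)
    phase = np.angle(analytic[:, 256])         # phase at one time point
    power = np.abs(analytic[:, 256]) ** 2

    phase_bins = np.digitize(phase, np.linspace(-np.pi, np.pi, 9)[1:-1])
    power_bins = np.digitize(power, np.quantile(power, [0.25, 0.5, 0.75]))

    print(mutual_info_score(stimulus, phase_bins))   # MI(stimulus; phase)
    print(mutual_info_score(stimulus, power_bins))   # MI(stimulus; power)
    ```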

  13. Administrative database concerns: accuracy of International Classification of Diseases, Ninth Revision coding is poor for preoperative anemia in patients undergoing spinal fusion.

    Science.gov (United States)

    Golinvaux, Nicholas S; Bohl, Daniel D; Basques, Bryce A; Grauer, Jonathan N

    2014-11-15

    Cross-sectional study. To objectively evaluate the ability of International Classification of Diseases, Ninth Revision (ICD-9) codes, which are used as the foundation for administratively coded national databases, to identify preoperative anemia in patients undergoing spinal fusion. National database research in spine surgery continues to rise. However, the validity of studies based on administratively coded data, such as the Nationwide Inpatient Sample, is dependent on the accuracy of ICD-9 coding. Such coding has previously been found to have poor sensitivity to conditions such as obesity and infection. A cross-sectional study was performed at an academic medical center. Hospital-reported anemia ICD-9 codes (those used for administratively coded databases) were directly compared with the chart-documented preoperative hematocrits (true laboratory values). A patient was deemed to have preoperative anemia if the preoperative hematocrit was less than the lower end of the normal range (36.0% for females and 41.0% for males). The study included 260 patients. Of these, 37 patients (14.2%) were anemic; however, only 10 patients (3.8%) received an "anemia" ICD-9 code. Of the 10 patients coded as anemic, 7 were anemic by definition, whereas 3 were not, and thus were miscoded. This equates to an ICD-9 code sensitivity of 0.19, with a specificity of 0.99, and positive and negative predictive values of 0.70 and 0.88, respectively. This study uses preoperative anemia to demonstrate the potential inaccuracies of ICD-9 coding. These results have implications for publications using databases that are compiled from ICD-9 coding data. Furthermore, the findings of the current investigation raise concerns regarding the accuracy of additional comorbidities. Although administrative databases are powerful resources that provide large sample sizes, it is crucial that we further consider the quality of the data source relative to its intended purpose.
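
    The reported test characteristics follow directly from the counts in the abstract, as the short check below shows.

    ```python
    # Reproducing the reported metrics from the abstract's counts:
    # 260 patients, 37 truly anemic, 10 coded as anemic (7 of them correctly).
    tp, fp = 7, 3
    fn = 37 - tp                  # anemic but not coded
    tn = 260 - tp - fp - fn       # neither anemic nor coded

    sensitivity = tp / (tp + fn)  # 7/37    ~ 0.19
    specificity = tn / (tn + fp)  # 220/223 ~ 0.99
    ppv = tp / (tp + fp)          # 7/10    = 0.70
    npv = tn / (tn + fn)          # 220/250 = 0.88
    print(f"{sensitivity:.2f} {specificity:.2f} {ppv:.2f} {npv:.2f}")
    ```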

  14. Application of Cocktail method in vegetation classification

    Directory of Open Access Journals (Sweden)

    Hamed Asadi

    2016-09-01

    Full Text Available This study intends to assess the application of the Cocktail method in the classification of large vegetation databases. For this purpose, a Buxus hyrcana dataset consisting of 442 relevés with 89 species was used. A preliminary classification was first produced with modified TWINSPAN; phi analysis of the resulting groups then selected the five species with the highest fidelity values. Sociological species groups were formed by examining the co-occurrence of these 5 species with the other species in the database. By assigning 379 relevés to the sociological species groups using logical formulas, 21 plant communities belonging to 6 variants, 17 subassociations, 11 associations, 4 alliances, 1 order and 1 class were recognized. The 63 relevés that were not assigned to any sociological species group by the logical formulas were assigned, using the FPFI index, to the group with the highest index value. Given the 91% agreement between the Cocktail and Braun-Blanquet classifications, we suggest the Cocktail method to vegetation scientists as an efficient alternative to the Braun-Blanquet method for classifying large vegetation databases.

  15. Diagnosis of periodontal diseases using different classification ...

    African Journals Online (AJOL)

    The codes created for risk factors, periodontal data, and radiographic bone loss were formed into a matrix structure and regarded as inputs for the classification unit. A total of six periodontal conditions were the outputs of the classification unit. The accuracy of the suggested methods was compared according to their ...

  16. FACET CLASSIFICATIONS OF E-LEARNING TOOLS

    Directory of Open Access Journals (Sweden)

    Olena Yu. Balalaieva

    2013-12-01

    Full Text Available The article deals with the classification of e-learning tools based on the facet method, which separates a parallel set of objects into independent classification groups; a rigid classification structure with pre-built finite groups is not assumed, and classification groups are formed by combining values taken from the relevant facets. An attempt to systematize the existing classifications of e-learning tools from the standpoint of classification theory is made for the first time. Modern Ukrainian and foreign facet classifications of e-learning tools are described, and their positive and negative features compared to classifications based on the hierarchical method are analyzed. The author's original facet classification of e-learning tools is proposed.

  17. Computer codes used in particle accelerator design: First edition

    International Nuclear Information System (INIS)

    1987-01-01

    This paper contains a listing of more than 150 programs that have been used in the design and analysis of accelerators. Given in each citation are the person to contact, the classification of the computer code, publications describing the code, the computer and language it runs on, and a short description of the code. Codes are indexed by subject, person to contact, and code acronym.

  18. Information gathering for CLP classification

    Directory of Open Access Journals (Sweden)

    Ida Marcello

    2011-01-01

    Full Text Available Regulation 1272/2008 includes provisions for two types of classification: harmonised classification and self-classification. The harmonised classification of substances is decided at Community level, and a list of harmonised classifications is included in Annex VI of the classification, labelling and packaging (CLP) Regulation. If a chemical substance is not included in the harmonised classification list, it must be self-classified, based on available information, according to the requirements of Annex I of the CLP Regulation. CLP stipulates that harmonised classification be performed for substances that are carcinogenic, mutagenic or toxic to reproduction (CMR substances) and for respiratory sensitisers category 1, and for other hazard classes on a case-by-case basis. The first step of classification is the gathering of available and relevant information. This paper presents the procedure for gathering information and obtaining data. Data quality is also discussed.

  19. Positive Predictive Values of International Classification of Diseases, 10th Revision Coding Algorithms to Identify Patients With Autosomal Dominant Polycystic Kidney Disease

    Directory of Open Access Journals (Sweden)

    Vinusha Kalatharan

    2016-12-01

    Full Text Available Background: International Classification of Diseases, 10th Revision (ICD-10) codes for autosomal dominant polycystic kidney disease (ADPKD) are used within several administrative health care databases. It is unknown whether these codes identify patients who meet strict clinical criteria for ADPKD. Objective: The objective of this study is (1) to determine whether different ICD-10 coding algorithms identify adult patients who meet strict clinical criteria for ADPKD as assessed through medical chart review and (2) to assess the number of patients identified with different ADPKD coding algorithms in Ontario. Design: Validation study of health care database codes, and prevalence. Setting: Ontario, Canada. Patients: For the chart review, 201 adult patients with hospital encounters between April 1, 2002, and March 31, 2014, assigned either ICD-10 code Q61.2 or Q61.3. Measurements: This study measured the positive predictive value of the ICD-10 coding algorithms and the number of Ontarians identified with different coding algorithms. Methods: We manually reviewed a random sample of medical charts in London, Ontario, Canada, and determined whether or not ADPKD was present according to strict clinical criteria. Results: The presence of either ICD-10 code Q61.2 or Q61.3 in a hospital encounter had a positive predictive value of 85% (95% confidence interval [CI], 79%-89%) and identified 2981 Ontarians (0.02% of the Ontario adult population). The presence of ICD-10 code Q61.2 in a hospital encounter had a positive predictive value of 97% (95% CI, 86%-100%) and identified 394 adults in Ontario (0.003% of the Ontario adult population). Limitations: (1) We could not calculate other measures of validity; (2) the coding algorithms do not identify patients without hospital encounters; and (3) coding practices may differ between hospitals. Conclusions: Most patients with ICD-10 code Q61.2 or Q61.3 assigned during their hospital encounters have ADPKD according to the clinical

  20. Supernova Photometric Lightcurve Classification

    Science.gov (United States)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves, and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data, created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data, and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis to reduce the dimensionality of the dataset, for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
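
    A hedged sketch of the classification stage follows: a Random Forest over fixed-length feature vectors, with random placeholders standing in for the non-parametric light-curve representation described above.

    ```python
    # Random Forest binary supernova-type classifier on placeholder features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.random((1000, 20))     # light-curve features (synthetic placeholder)
    y = rng.integers(0, 2, 1000)   # 0 = Type I, 1 = Type II (binary task)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300).fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))
    ```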

  1. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research on iris liveness detection.

  2. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by the Codon Structure Factor (CSF) and by a method that we call the Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified, with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
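
    Several of the UFM ingredients are simple per-codon statistics. The sketch below computes position-wise purine fractions, GC2, GC3, and stop-codon frequency for an ORF; the thresholds and regression-line distance of the full method are omitted.

    ```python
    # Per-codon composition statistics of the kind UFM scores.
    def codon_stats(orf: str):
        codons = [orf[i:i + 3] for i in range(0, len(orf) - len(orf) % 3, 3)]
        # purine (A/G) fraction at each of the three codon positions
        purine = [sum(c[p] in "AG" for c in codons) / len(codons) for p in range(3)]
        gc2 = sum(c[1] in "GC" for c in codons) / len(codons)   # GC at position 2
        gc3 = sum(c[2] in "GC" for c in codons) / len(codons)   # GC at position 3
        stops = sum(c in ("TAA", "TAG", "TGA") for c in codons) / len(codons)
        return purine, gc2, gc3, stops

    print(codon_stats("ATGGCCGAGAAGGTGTGCTAA"))   # toy ORF
    ```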

  3. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding.
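
    One way to realize the co-occurrence idea, not necessarily the authors' exact model, is to estimate smoothed conditional probabilities from training code sets and damp the scores of codes that rarely co-occur with the current top code. The code strings below are toy ICD-10-PCS-like placeholders.

    ```python
    # Co-occurrence-aware rescoring of auto-coder output (illustrative sketch).
    from collections import Counter
    from itertools import permutations

    train = [{"0DB60ZZ", "0DJ08ZZ"}, {"0DB60ZZ"}, {"B244ZZZ"}]   # toy code sets
    single, pair = Counter(), Counter()
    for codes in train:
        single.update(codes)
        pair.update(permutations(codes, 2))

    def cond_prob(a: str, b: str) -> float:
        """P(a assigned | b assigned), with add-one smoothing."""
        return (pair[(a, b)] + 1) / (single[b] + 2)

    scores = {"0DB60ZZ": 0.9, "0DJ08ZZ": 0.6, "B244ZZZ": 0.55}   # auto-coder output
    top = max(scores, key=scores.get)
    rescored = {c: (s if c == top else s * cond_prob(c, top))
                for c, s in scores.items()}
    print(rescored)
    ```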

  4. Land Cover and Land Use Classification with TWOPAC: towards Automated Processing for Pixel- and Object-Based Image Classification

    Directory of Open Access Journals (Sweden)

    Stefan Dech

    2012-09-01

    Full Text Available We present a novel and innovative automated processing environment for the derivation of land cover (LC) and land use (LU) information. This processing framework, named TWOPAC (TWinned Object and Pixel based Automated classification Chain), enables the standardized, independent, user-friendly, and comparable derivation of LC and LU information, with minimized manual classification labor. TWOPAC allows classification of multi-spectral and multi-temporal remote sensing imagery from different sensor types. TWOPAC enables not only pixel-based classification, but also allows classification based on object-based characteristics. Classification is based on a Decision Tree (DT) approach, for which the well-known C5.0 code has been implemented, which builds decision trees based on the concept of information entropy. TWOPAC enables automatic generation of the decision tree classifier based on a C5.0-retrieved ASCII file, as well as fully automatic validation of the classification output via sample-based accuracy assessment. Envisaging the automated generation of standardized land cover products, as well as area-wide classification of large amounts of data in preferably a short processing time, standardized interfaces for process control, Web Processing Services (WPS), as introduced by the Open Geospatial Consortium (OGC), are utilized. TWOPAC's functionality to process geospatial raster or vector data via web resources (server, network) enables TWOPAC's usability independent of any commercial client or desktop software and allows for large-scale data processing on servers. Furthermore, the components of TWOPAC were built using open-source code components and are implemented as a plug-in for Quantum GIS software for easy handling of the classification process from the user's perspective.
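
    TWOPAC wraps the C5.0 learner; a freely available stand-in with the same entropy-based splitting criterion is scikit-learn's CART, sketched here on synthetic per-pixel features.

    ```python
    # Entropy-based decision tree classification, as a stand-in for C5.0.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(7)
    X = rng.random((500, 6))       # per-pixel or per-object features (synthetic)
    y = rng.integers(0, 3, 500)    # land-cover classes (synthetic)

    tree = DecisionTreeClassifier(criterion="entropy", max_depth=8).fit(X, y)
    print(tree.score(X, y))
    ```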

  5. The Importance of Classification to Business Model Research

    OpenAIRE

    Susan Lambert

    2015-01-01

    Purpose: To bring to the fore the scientific significance of classification and its role in business model theory building. To propose a method by which existing classifications of business models can be analyzed and new ones developed. Design/Methodology/Approach: A review of the scholarly literature relevant to classifications of business models is presented along with a brief overview of classification theory applicable to business model research. Existing business model classification...

  6. Classification differences and maternal mortality

    DEFF Research Database (Denmark)

    Salanave, B; Bouvier-Colle, M H; Varnoux, N

    1999-01-01

    OBJECTIVES: To compare the ways maternal deaths are classified in national statistical offices in Europe and to evaluate the ways classification affects published rates. METHODS: Data on pregnancy-associated deaths were collected in 13 European countries. Cases were classified by a European panel of experts into obstetric or non-obstetric causes. An ICD-9 code (International Classification of Diseases) was attributed to each case. These were compared to the codes given in each country. Correction indices were calculated, giving new estimates of maternal mortality rates. SUBJECTS: There were... This change was substantial in three countries (P ...), where statistical offices appeared to attribute fewer deaths to obstetric causes. In the other countries, no differences were detected. According to official published data, the aggregated maternal mortality rate for participating countries was 7.7 per...

  7. The Classification of Complementary Information Set Codes of Lengths 14 and 16

    OpenAIRE

    Freibert, Finley

    2012-01-01

    In the paper "A new class of codes for Boolean masking of cryptographic computations," Carlet, Gaborit, Kim, and Sol\\'{e} defined a new class of rate one-half binary codes called \\emph{complementary information set} (or CIS) codes. The authors then classified all CIS codes of length less than or equal to 12. CIS codes have relations to classical Coding Theory as they are a generalization of self-dual codes. As stated in the paper, CIS codes also have important practical applications as they m...

  8. Compilation of the nuclear codes available in CTA

    International Nuclear Information System (INIS)

    D'Oliveira, A.B.; Moura Neto, C. de; Amorim, E.S. do; Ferreira, W.J.

    1979-07-01

    The present work is a compilation of some of the nuclear codes available in the Divisao de Estudos Avancados of the Instituto de Atividades Espaciais (EAV/IAE/CTA). The codes are organized according to the classification used by the Argonne National Laboratory. For each code the following are given: author, institution of origin, abstract, programming language, and existing bibliography. (Author) [pt

  9. On the Organizational Dynamics of the Genetic Code

    KAUST Repository

    Zhang, Zhang

    2011-06-07

    The organization of the canonical genetic code needs to be thoroughly illuminated. Here we reorder the four nucleotides—adenine, thymine, guanine and cytosine—according to their emergence in evolution, and apply the organizational rules to devising an algebraic representation for the canonical genetic code. Under a framework of the devised code, we quantify codon and amino acid usages from a large collection of 917 prokaryotic genome sequences, and associate the usages with its intrinsic structure and classification schemes as well as amino acid physicochemical properties. Our results show that the algebraic representation of the code is structurally equivalent to a content-centric organization of the code and that codon and amino acid usages under different classification schemes were correlated closely with GC content, implying a set of rules governing composition dynamics across a wide variety of prokaryotic genome sequences. These results also indicate that codons and amino acids are not randomly allocated in the code, where the six-fold degenerate codons and their amino acids have important balancing roles for error minimization. Therefore, the content-centric code is of great usefulness in deciphering its hitherto unknown regularities as well as the dynamics of nucleotide, codon, and amino acid compositions.
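
    The quantification step described above starts from simple counts. A minimal sketch of codon usage and GC content for a toy coding sequence:

    ```python
    # Codon usage and GC content of a (toy) prokaryotic coding sequence.
    from collections import Counter

    def codon_usage(seq: str):
        codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
        usage = Counter(codons)                            # codon -> count
        gc = sum(seq.count(b) for b in "GC") / len(seq)    # overall GC fraction
        return usage, gc

    usage, gc = codon_usage("ATGGGCAAAGGCTGCTGA")
    print(usage.most_common(3), f"GC = {gc:.2f}")
    ```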

  11. On application of CFD codes to problems of nuclear reactor safety

    International Nuclear Information System (INIS)

    Muehlbauer, Petr

    2005-01-01

    The 'Exploratory Meeting of Experts to Define an Action Plan on the Application of Computational Fluid Dynamics (CFD) Codes to Nuclear Reactor Safety Problems', held in May 2002 at Aix-en-Provence, France, recommended the formation of writing groups to report on the need for guidelines for the use and assessment of CFD in single-phase nuclear reactor safety problems, and on recommended extensions to CFD codes to meet the needs of two-phase problems in nuclear reactor safety. This recommendation was also supported by the Working Group on the Analysis and Management of Accidents and led to the formation of three Writing Groups. The first Writing Group prepared a summary of existing best-practice guidelines for single-phase CFD analysis and made a recommendation on the need for nuclear-reactor-safety-specific guidelines. The second Writing Group selected those nuclear reactor safety applications for which understanding requires, or is significantly enhanced by, single-phase CFD analysis, and proposed a methodology for establishing assessment matrices relevant to nuclear reactor safety applications. The third Writing Group performed a classification of nuclear reactor safety problems where extension of CFD to two-phase flow may bring real benefit, a classification of different modeling approaches, and a specification and analysis of needs in terms of physical and numerical assessments. This presentation provides a review of these activities with the most important conclusions and recommendations (Authors)

  12. Injuries of the Medial Clavicle: A Cohort Analysis in a Level-I-Trauma-Center. Concomitant Injuries. Management. Classification.

    Science.gov (United States)

    Bakir, Mustafa Sinan; Merschin, David; Unterkofler, Jan; Guembel, Denis; Langenbach, Andreas; Ekkernkamp, Axel; Schulz-Drost, Stefan

    2017-01-01

    Introduction: Although shoulder girdle injuries are frequent, those of the medial clavicle are widely unexplored. An applied classification is rarely used, just as a standard management is lacking. Methods: A retrospective analysis of medial clavicle injuries (MCI) over a 5-year term in a Level-1 trauma center. We analyzed, among other things, concomitant injuries, therapy strategies and the classification following the AO standards. Results: 19 (2.5%) of 759 clavicle injuries were medial ones (11 A-, 6 B- and 2 C-type fractures); 27.8% of these were displaced and thus operatively treated. Locked plate osteosynthesis was employed in unstable fractures, and reconstruction of the ligaments at the sternoclavicular joint (SCJ) in case of their disruption. 84.2% of the patients sustained relevant concomitant injuries. Numerous midshaft fractures were miscoded as medial fractures, which limited the study population. Conclusions: MCI resulted from high-impact mechanisms of injury, often with relevant dislocation and concomitant injuries. Given the complexity of medial injuries, treatment should occur in specialized hospitals. Unstable fractures and injuries of the SCJ ligaments should be considered for operative treatment. Midshaft fractures should be clearly distinguished from medial ones in ICD-10 coding. Further studies are required, also regarding a subtyping of the AO classification for medial clavicle fractures including ligamentous injuries.

  13. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alphanumeric codes for accurately billing dental and medically related services to private-pay or third-party insurance carriers. In the United States, Current Dental Terminology (CDT) codes are most commonly used by dentists to submit claims, whereas Current Procedural Terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  14. IR-360 nuclear power plant safety functions and component classification

    International Nuclear Information System (INIS)

    Yousefpour, F.; Shokri, F.; Soltani, H.

    2010-01-01

    The IR-360 nuclear power plant, a 2-loop PWR with a power generation capacity of 360 MWe, is under design in the MASNA Company. For the design of the IR-360 structures, systems and components (SSCs), the codes and standards and their design requirements must be determined. Correct classification of the IR-360 safety functions and of the safety grade of structures, systems and components is a prerequisite for selecting and adopting suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as the USNRC standard system to determine the IR-360 safety functions and to formulate the principles of IR-360 component classification in accordance with the safety philosophy and features of the IR-360. By implementing the defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of specific codes and standards are used in the design process of IR-360 SSCs by design engineers of the MASNA Company. In this paper, the determination of the IR-360 safety functions and the definition of the classification procedures and rules are presented. Implementation of this work, which is described with examples, ensures the safety and reliability of the IR-360 nuclear power plant.


  16. On the classification of structures, systems and components of nuclear research and test reactors

    International Nuclear Information System (INIS)

    Mattar Neto, Miguel

    2009-01-01

    The classification of structures, systems and components of nuclear reactors is a relevant design issue because it is directly associated with their safety functions. An important statement regarding quality standards and records says: "Structures, systems, and components important to safety shall be designed, fabricated, erected, and tested to quality standards commensurate with the importance of the safety functions to be performed." The definition of the codes, standards and technical requirements applied to nuclear reactor design, fabrication, inspection and tests may be seen as the main consequence of this statement. There are well-established guides for classifying structures, systems and components of nuclear power reactors such as pressurized water reactors, but one cannot say the same for nuclear research and test reactors. The nuclear reactor safety functions are those required for safe reactor operation, safe reactor shutdown and continued safe conditions, response to anticipated transients, response to potential accidents, and control of radioactive material. This paper therefore proposes an approach to the classification of structures, systems and components of these reactors based on their intended safety functions, in order to define the applicable set of codes, standards and technical requirements. (author)

  17. 75 FR 78213 - Proposed Information Collection; Comment Request; 2012 Economic Census Classification Report for...

    Science.gov (United States)

    2010-12-15

    ... 8-digit North American Industry Classification System (NAICS) based code for use in the 2012... classification due to changes in NAICS for 2012. Collecting this classification information will ensure the... the reporting burden on sampled sectors. Proper NAICS classification data ensures high quality...

  18. Interval prediction for graded multi-label classification

    CERN Document Server

    Lastra, Gerardo; Bahamonde, Antonio

    2014-01-01

    Multi-label classification was introduced as an extension of multi-class classification. The aim is to predict a set of classes (called labels in this context) instead of a single one, namely the set of relevant labels. If membership in the set of relevant labels is defined to a certain degree, the learning task is called graded multi-label classification. These learning tasks can be seen as a set of ordinal classifications; hence, recommender systems can be considered multi-label classification tasks. In this paper, we present a new type of nondeterministic learner that, for each instance, tries to predict the true grade for each label at the same time. When the classification is uncertain for a label, however, the hypotheses predict a set of consecutive grades, i.e., an interval. The goal is to keep the set of predicted grades as small as possible, while still containing the true grade. We shall see that these classifiers take advantage of the interrelations of labels. The result is that, with quite narrow intervals, i...
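
    The interval-prediction rule lends itself to a compact sketch: given per-grade probabilities for one label (from any probabilistic classifier), return the narrowest run of consecutive grades reaching a chosen coverage. The function and threshold below are illustrative assumptions, not the paper's exact learner.

    ```python
    # Narrowest consecutive-grade interval reaching a coverage threshold.
    import numpy as np

    def grade_interval(probs: np.ndarray, coverage: float = 0.8) -> tuple[int, int]:
        best = (0, len(probs) - 1)                    # fall back to the full range
        for lo in range(len(probs)):
            for hi in range(lo, len(probs)):
                if (probs[lo:hi + 1].sum() >= coverage
                        and hi - lo < best[1] - best[0]):
                    best = (lo, hi)
        return best                                   # inclusive grade indices

    print(grade_interval(np.array([0.05, 0.15, 0.55, 0.20, 0.05])))  # -> (1, 3)
    ```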

  19. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such a case is that, in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real-world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such a possibility and show how a small number of labeled data in the target domain can significantly leverage the classification accuracy of state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  20. A Proposal for Cardiac Arrhythmia Classification using Complexity Measures

    Directory of Open Access Journals (Sweden)

    AROTARITEI, D.

    2017-08-01

    Full Text Available Cardiovascular diseases are one of the major problems of humanity, and therefore one of their components, arrhythmia detection and classification, has drawn increased attention worldwide. The presence of randomness in discrete time series, like those arising in electrophysiology, is firmly connected with computational complexity measures. This connection can be used, for instance, in the analysis of the RR intervals of the electrocardiographic (ECG) signal, coded as a binary string, to detect and classify arrhythmia. Our approach uses three algorithms (Lempel-Ziv, Sample Entropy and T-Code) to compute the information complexity and a classification tree to detect 13 types of arrhythmia, with encouraging results. To overcome the computational effort required for the complexity calculus, a cloud computing solution with executable code deployment is also proposed.
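
    Of the three complexity measures named, Lempel-Ziv is the most self-contained to sketch: count the phrases in the LZ76 parsing of the binary-coded RR series. The binary coding rule below (1 = interval above the median) is an illustrative assumption, not necessarily the authors' coding.

    ```python
    # LZ76 phrase-counting complexity of a binary string.
    def lz_complexity(s: str) -> int:
        """Count LZ76 phrases: each phrase is the shortest continuation that
        has not yet occurred in the previously seen part of the string."""
        i, c = 0, 0
        while i < len(s):
            j = i + 1
            # grow the candidate phrase while it still occurs earlier
            while j <= len(s) and s[i:j] in s[:j - 1]:
                j += 1
            c += 1
            i = j
        return c

    rr_binary = "0101101001101011"   # toy RR coding: 1 = interval above median
    print(lz_complexity(rr_binary))
    ```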

  1. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g., the correlations between representation residuals and representation coefficients) and specific information (a weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K nearest neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
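
    GRR is presented as a generalization of CRC; the CRC-style baseline itself fits in a few lines: represent the test sample over all training samples with Tikhonov (ridge) regularization, then pick the class whose training columns give the smallest reconstruction residual. A toy sketch, not the authors' GRR model:

    ```python
    # CRC-style collaborative-representation classifier with ridge regularization.
    import numpy as np

    def crc_classify(X_train, y_train, x, lam=0.01):
        A = X_train.T                                   # columns = training samples
        coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ x)
        residuals = {}
        for c in np.unique(y_train):
            mask = (y_train == c)
            residuals[c] = np.linalg.norm(x - A[:, mask] @ coef[mask])
        return min(residuals, key=residuals.get)        # smallest class residual

    rng = np.random.default_rng(3)
    X = rng.random((20, 10))                            # 20 samples x 10 features (toy)
    y = np.repeat([0, 1], 10)
    print(crc_classify(X, y, rng.random(10)))
    ```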

  2. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  3. Quality improvement of International Classification of Diseases, 9th revision, diagnosis coding in radiation oncology: single-institution prospective study at University of California, San Francisco.

    Science.gov (United States)

    Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E

    2015-01-01

    Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy. This quality assurance project highlights the potential problem…

  4. Evaluation of the Waste Isolation Pilot Plant classification of systems, structures and components

    International Nuclear Information System (INIS)

    1985-07-01

    A review of the classification system for systems, structures, and components at the Waste Isolation Pilot Plant (WIPP) was performed using the WIPP Safety Analysis Report (SAR) and Bechtel document D-76-D-03 as primary source documents. The regulations of the US Nuclear Regulatory Commission (NRC) covering ''Disposal of High-Level Radioactive Wastes in Geologic Repositories,'' 10 CFR 60, and the regulations relevant to nuclear power plant siting and construction (10 CFR 50, 51, 100) were used as standards to evaluate the WIPP design classification system, although it is recognized that the US Department of Energy (DOE) is not required to comply with these NRC regulations in the design and construction of WIPP. The DOE General Design Criteria Manual (DOE Order 6430.1) and the Safety Analysis and Review System for AL Operations document (AL 5481.1A) were reviewed in part. This report includes a discussion of the historical basis for nuclear power plant requirements, a review of WIPP and nuclear power plant classification bases, and a comparison of the codes and standards applicable to each quality level. Observations made during the review of the WIPP SAR are noted in the text of this report. The conclusions reached by this review are: WIPP classification methodology is comparable to corresponding nuclear power procedures; the classification levels assigned to WIPP systems are qualitatively the same as those assigned to nuclear power plant systems.

  5. Association of Postoperative Readmissions With Surgical Quality Using a Delphi Consensus Process to Identify Relevant Diagnosis Codes.

    Science.gov (United States)

    Mull, Hillary J; Graham, Laura A; Morris, Melanie S; Rosen, Amy K; Richman, Joshua S; Whittle, Jeffery; Burns, Edith; Wagner, Todd H; Copeland, Laurel A; Wahl, Tyler; Jones, Caroline; Hollis, Robert H; Itani, Kamal M F; Hawn, Mary T

    2018-04-18

    Postoperative readmission data are used to measure hospital performance, yet the extent to which these readmissions reflect surgical quality is unknown. To establish expert consensus on whether reasons for postoperative readmission are associated with the quality of surgery in the index admission. In a modified Delphi process, a panel of 14 experts in medical and surgical readmissions comprising physicians and nonphysicians from Veterans Affairs (VA) and private-sector institutions reviewed 30-day postoperative readmissions from fiscal years 2008 through 2014 associated with inpatient surgical procedures performed at a VA medical center between October 1, 2007, and September 30, 2014. The consensus process was conducted from January through May 2017. Reasons for readmission were grouped into categories based on International Classification of Diseases, Ninth Revision (ICD-9) diagnosis codes. Panelists were given the proportion of readmissions coded by each reason and median (interquartile range) days to readmission. They answered the question, "Does the readmission reason reflect possible surgical quality of care problems in the index admission?" on a scale of 1 (never related) to 5 (directly related) in 3 rounds of consensus building. The consensus process was completed in May 2017 and data were analyzed in June 2017. Consensus on proportion of ICD-9-coded readmission reasons that reflected quality of surgical procedure. In 3 Delphi rounds, the 14 panelists achieved consensus on 50 reasons for readmission; 12 panelists also completed group telephone calls between rounds 1 and 2. Readmissions with diagnoses of infection, sepsis, pneumonia, hemorrhage/hematoma, anemia, ostomy complications, acute renal failure, fluid/electrolyte disorders, or venous thromboembolism were considered associated with surgical quality and accounted for 25 521 of 39 664 readmissions (64% of readmissions; 7.5% of 340 858 index surgical procedures). The proportion of readmissions

  6. On the classification of long non-coding RNAs

    KAUST Repository

    Ma, Lina; Bajic, Vladimir B.; Zhang, Zhang

    2013-01-01

    Long non-coding RNAs (lncRNAs) have been found to perform various functions in a wide variety of important biological processes. To make easier interpretation of lncRNA functionality and conduct deep mining on these transcribed sequences

  7. Validation of a new classification for periprosthetic shoulder fractures.

    Science.gov (United States)

    Kirchhoff, Chlodwig; Beirer, Marc; Brunner, Ulrich; Buchholz, Arne; Biberthaler, Peter; Crönlein, Moritz

    2018-06-01

    Successful treatment of periprosthetic shoulder fractures depends on the right strategy, starting with a well-structured classification of the fracture. Unfortunately, clinically relevant factors for treatment planning are missing from the pre-existing classifications. Therefore, the aim of the present study was to describe a new specific classification system for periprosthetic shoulder fractures, including a structured treatment algorithm for this important fragility fracture issue. The classification was established focussing on five relevant items, namely the prosthesis type, the fracture localisation, the rotator cuff status, the anatomical fracture region and the stability of the implant. After considering each single item, the individual treatment concept can be assessed in one last step. To evaluate the introduced classification, a retrospective analysis of pre- and post-operative data of patients treated for periprosthetic shoulder fractures was conducted by two board-certified trauma surgery consultants. The data of 19 patients (8 male, 11 female) with a mean age of 74 ± 5 years were analysed in our study. The suggested treatment algorithm proved to be reliable, as reflected by good clinical outcomes in 15 of 16 (94%) cases where the suggested treatment was maintained. Only one case resulted in poor outcome due to post-operative wound infection and had to be revised. The newly developed six-step classification is easy to utilise and extends the pre-existing classification systems in terms of clinically relevant information. This classification should serve as a simple tool for the surgeon to determine the optimal treatment for his patients.

  8. Comparison and analysis for item classifications between AP1000 and traditional PWR

    International Nuclear Information System (INIS)

    Luo Shuiyun; Liu Xiaoyan

    2012-01-01

    The comparison and analysis of the safety classification, seismic category, code classification and QA classification between AP1000 and traditional PWRs are presented. Owing to these AP1000 classifications, safety can be guaranteed while construction and manufacturing costs are reduced. It is suggested that a QA classification and QA requirements corresponding to national conditions be drafted in the process of AP1000 domestication. (authors)

  9. Land Cover - Minnesota Land Cover Classification System

    Data.gov (United States)

    Minnesota Department of Natural Resources — Land cover data set based on the Minnesota Land Cover Classification System (MLCCS) coding scheme. This data was produced using a combination of aerial photograph...

  10. Changing patient classification system for hospital reimbursement in Romania.

    Science.gov (United States)

    Radu, Ciprian-Paul; Chiriac, Delia Nona; Vladescu, Cristian

    2010-06-01

    To evaluate the effects of the change in the diagnosis-related group (DRG) system on patient morbidity and hospital financial performance in the Romanian public health care system. Three variables were assessed before and after the classification switch in July 2007: clinical outcomes, the case-mix index, and hospital budgets, using the database of the National School of Public Health and Health Services Management, which contains data regularly received from hospitals reimbursed through the Romanian DRG scheme (291 in 2009). The lack of a Romanian system for the calculation of cost-weights made it necessary to use an imported system, which was criticized by some clinicians for not accurately reflecting resource consumption in Romanian hospitals. The new DRG classification system allowed a more accurate clinical classification. However, it also exposed a lack of physicians' knowledge of diagnosing and coding procedures, which led to incorrect coding. Consequently, the reported hospital morbidity changed after the DRG switch, reflecting an increase in the national case-mix index of 25% in 2009 (compared with 2007). Since hospitals received the same reimbursement over the first two years after the classification switch, the new DRG system sometimes led them to change patients' diagnoses in order to receive more funding. Lack of oversight of hospital coding and reporting to the national reimbursement scheme allowed the increase in the case-mix index. The complexity of the new classification system requires more resources (human and financial), better monitoring and evaluation, and improved legislation in order to achieve better hospital resource allocation and more efficient patient care.

  11. A European classification of services for long-term care—the EU-project eDESDE-LTC

    Science.gov (United States)

    Weber, Germain; Brehmer, Barbara; Zeilinger, Elisabeth; Salvador-Carulla, Luis

    2009-01-01

    Purpose and theory The eDESDE-LTC project aims at developing an operational system for coding, mapping and comparing services for long-term care (LTC) across the EU. The project's strategy is to improve EU listing of and access to relevant sources of healthcare information via the development of semantic interoperability in eHealth (coding and listing of services for LTC); to increase access to relevant sources of information on LTC services and to improve linkages between national and regional websites; and to foster cooperation with international organizations (OECD). Methods This operational system will include a standard classification of the main types of care for persons with LTC needs and an instrument for mapping and standard description of services. These instruments are based on previous classification systems for mental health services (ESMS), disability services (DESDE) and ageing services (DESDAE). A Delphi panel made up of seven partners developed a DESDE-LTC beta version, which was translated into six languages. The feasibility of DESDE-LTC is being tested in six countries using national focal groups. The final version will then be developed by the Delphi panel, and a webpage, training material and a training course will be produced. Results and conclusions The eDESDE-LTC system will be piloted in two EU countries (Spain and Bulgaria). Evaluation will focus primarily on usability and impact analysis. Discussion The added value of this project is related to the right of EU citizens to “having access to high-quality healthcare when and where it is needed”. Due to semantic variability and service complexity, existing national listings of services do not provide an adequate framework for patient mobility.

  12. Research on quality assurance classification methodology for domestic AP1000 nuclear power projects

    International Nuclear Information System (INIS)

    Bai Jinhua; Jiang Huijie; Li Jingyan

    2012-01-01

    To meet the quality assurance classification requirements of domestic nuclear safety codes and standards, this paper analyzes the quality assurance classification methodology of domestic AP1000 nuclear power projects at present, and proposes the quality assurance classification methodology for subsequent AP1000 nuclear power projects. (authors)

  13. Identification of ICD Codes Suggestive of Child Maltreatment

    Science.gov (United States)

    Schnitzer, Patricia G.; Slusher, Paula L.; Kruse, Robin L.; Tarleton, Molly M.

    2011-01-01

    Objective: In order to be reimbursed for the care they provide, hospitals in the United States are required to use a standard system to code all discharge diagnoses: the International Classification of Disease, 9th Revision, Clinical Modification (ICD-9). Although ICD-9 codes specific for child maltreatment exist, they do not identify all…

  14. Visual search asymmetries within color-coded and intensity-coded displays.

    Science.gov (United States)

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  15. Lossless Compression of Classification-Map Data

    Science.gov (United States)

    Hua, Xie; Klimesh, Matthew

    2009-01-01

    A lossless image-data-compression algorithm intended specifically for application to classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, so as to be capable of compressing classification-map data more effectively than do general-purpose image-data-compression algorithms. Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing image(s) of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data: for example, a type of vegetation, a mineral, or a body of water at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.
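
    As a toy illustration of why prediction helps on classification maps (not the NASA algorithm itself): replacing each pixel by a "same as left neighbor" flag concentrates the symbol distribution, lowering the zero-order entropy on maps with large homogeneous regions.

```python
# Toy sketch of the prediction step; entropies are zero-order, in bits/symbol.
import numpy as np
from collections import Counter

def entropy_bits(symbols):
    """Zero-order entropy of a symbol sequence."""
    counts = Counter(symbols)
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

def predict_left(cmap):
    """Map each pixel to 0 if it repeats its left neighbor, else to class index + 1."""
    out = []
    for row in cmap:
        prev = None
        for v in row:
            out.append(0 if v == prev else int(v) + 1)
            prev = v
    return out

cmap = np.array([[1] * 8 + [2] * 8,
                 [1] * 8 + [2] * 8,
                 [3] * 16])                         # map with large homogeneous runs
raw = [int(v) for v in cmap.ravel()]
print(entropy_bits(raw), entropy_bits(predict_left(cmap)))  # ~1.58 vs ~0.64 bits
```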

  16. Classification of Strawberry Fruit Shape by Machine Learning

    Science.gov (United States)

    Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; Isobe, S.; Kochi, N.

    2018-05-01

    Shape is one of the most important traits of agricultural products due to its relationship with the quality, quantity, and value of the products. For strawberries, nine types of fruit shape were defined and classified by humans based on sample patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification, and we introduce the concept of computerization into this field. Four types of descriptors were extracted from the digital images of strawberries: (1) the Measured Values (MVs), including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs); and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test along with the random forest approach, and eight of the nine shape types were classified with combinations of MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to the chain codes, and it showed higher robustness in classification than the other descriptors. Our results suggest machine learning's high ability to classify fruit shapes accurately. We will attempt to increase the classification accuracy and apply the machine learning methods to other plant species.
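
    Of the four descriptor families, the chain code is the simplest to illustrate. Below is a hedged sketch of a Freeman chain code for an 8-connected closed contour; the paper's Chain Code Subtraction adds expert knowledge on top of this and is not reproduced here.

```python
# Freeman chain code of an 8-connected closed contour given as (x, y) points.
# Direction indices: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Direction code of each step around the closed contour."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

# a small square contour, traversed counter-clockwise
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(chain_code(square))  # [0, 0, 2, 2, 4, 4, 6, 6]
```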

  17. A new coding system for metabolic disorders demonstrates gaps in the international disease classifications ICD-10 and SNOMED-CT, which can be barriers to genotype-phenotype data sharing.

    Science.gov (United States)

    Sollie, Annet; Sijmons, Rolf H; Lindhout, Dick; van der Ploeg, Ans T; Rubio Gozalbo, M Estela; Smit, G Peter A; Verheijen, Frans; Waterham, Hans R; van Weely, Sonja; Wijburg, Frits A; Wijburg, Rudolph; Visser, Gepke

    2013-07-01

    Data sharing is essential for a better understanding of genetic disorders. Good phenotype coding plays a key role in this process. Unfortunately, the two most widely used coding systems in medicine, ICD-10 and SNOMED-CT, lack information necessary for the detailed classification and annotation of rare and genetic disorders. This prevents the optimal registration of such patients in databases and thus data-sharing efforts. To improve care and to facilitate research for patients with metabolic disorders, we developed a new coding system for metabolic diseases with a dedicated group of clinical specialists. Next, we compared the resulting codes with those in ICD and SNOMED-CT. No matches were found in 76% of cases in ICD-10 and in 54% in SNOMED-CT. We conclude that there are sizable gaps in the SNOMED-CT and ICD coding systems for metabolic disorders. There may be similar gaps for other classes of rare and genetic disorders. We have demonstrated that expert groups can help in addressing such coding issues. Our coding system has been made available to the ICD and SNOMED-CT organizations as well as to the Orphanet and HPO organizations for further public application and updates will be published online (www.ddrmd.nl and www.cineas.org). © 2013 WILEY PERIODICALS, INC.

  18. Results from the Veterans Health Administration ICD-10-CM/PCS Coding Pilot Study.

    Science.gov (United States)

    Weems, Shelley; Heller, Pamela; Fenton, Susan H

    2015-01-01

    The Veterans Health Administration (VHA) of the US Department of Veterans Affairs has been preparing for the October 1, 2015, conversion to the International Classification of Diseases, Tenth Revision, Clinical Modification and Procedural Coding System (ICD-10-CM/PCS) for more than four years. The VHA's Office of Informatics and Analytics ICD-10 Program Management Office established an ICD-10 Learning Lab to explore expected operational challenges. This study was conducted to determine the effects of the classification system conversion on coding productivity. ICD codes are integral to VHA business processes and are used for purposes such as clinical studies, performance measurement, workload capture, cost determination, Veterans Equitable Resource Allocation (VERA) determination, morbidity and mortality classification, indexing of hospital records by disease and operations, data storage and retrieval, research purposes, and reimbursement. The data collection for this study occurred in multiple VHA sites across several months using standardized methods. It is commonly accepted that coding productivity will decrease with the implementation of ICD-10-CM/PCS. The findings of this study suggest that the decrease will be more significant for inpatient coding productivity (64.5 percent productivity decrease) than for ambulatory care coding productivity (6.7 percent productivity decrease). This study reveals the following important points regarding ICD-10-CM/PCS coding productivity: 1. Ambulatory care ICD-10-CM coding productivity is not expected to decrease as significantly as inpatient ICD-10-CM/PCS coding productivity. 2. Coder training and type of record (inpatient versus outpatient) affect coding productivity. 3. Inpatient coding productivity is decreased when a procedure requiring ICD-10-PCS coding is present. It is highly recommended that organizations perform their own analyses to determine the effects of ICD-10-CM/PCS implementation on coding productivity.

  19. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification.

    Science.gov (United States)

    Taghanaki, Saeid Asgari; Kawahara, Jeremy; Miles, Brandon; Hamarneh, Ghassan

    2017-07-01

    Feature reduction is an essential stage in computer aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives: MRE and mean classification error (MCE) for Pareto-optimal solutions, rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features. Copyright © 2017 Elsevier B.V. All rights reserved.
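
    The core of the proposed training is the Pareto-dominance comparison between candidate networks on the two objectives (MRE, MCE); a minimal sketch follows, with the surrounding non-dominated sorting genetic algorithm machinery assumed rather than shown.

```python
# Pareto dominance and front extraction for (MRE, MCE) pairs, both minimized.
def dominates(a, b):
    """True if candidate a is at least as good as b on both objectives and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep the candidates no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

nets = [(0.10, 0.20), (0.12, 0.15), (0.11, 0.25), (0.09, 0.30)]  # made-up (MRE, MCE)
print(pareto_front(nets))  # [(0.1, 0.2), (0.12, 0.15), (0.09, 0.3)]
```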

  20. Types of Lexicographical Information Needs and their Relevance for Information Science

    Directory of Open Access Journals (Sweden)

    Bergenholtz, Henning

    2017-09-01

    Full Text Available In some situations, you need information in order to solve a problem that has occurred. In information science, user needs are often described through very specific examples rather than through a classification of the types of situations in which information needs occur. Furthermore, information science often describes general human needs, typically with a reference to Maslow's classification of needs (1954), instead of actual information needs. Lexicography has also focused on information needs, but has developed a more abstract classification of types of information needs, though (until more recent research into lexicographical functions) with a particular interest in linguistic uncertainties and the lack of knowledge and skills in relation to one or several languages. In this article, we suggest a classification of information needs in which a tripartition has been made according to the different types of situations: communicative needs, cognitive needs, and operative needs. This is a classification that is relevant and useful in general in our modern information society and therefore also relevant for information science, including lexicography.

  1. A Computer Oriented Scheme for Coding Chemicals in the Field of Biomedicine.

    Science.gov (United States)

    Bobka, Marilyn E.; Subramaniam, J.B.

    The chemical coding scheme of the Medical Coding Scheme (MCS), developed for use in the Comparative Systems Laboratory (CSL), is outlined and evaluated in this report. The chemical coding scheme provides a classification scheme and encoding method for drugs and chemical terms. Using the scheme, complicated chemical structures may be expressed…

  2. [Coding in general practice-Will the ICD-11 be a step forward?]

    Science.gov (United States)

    Kühlein, Thomas; Virtanen, Martti; Claus, Christoph; Popert, Uwe; van Boven, Kees

    2018-07-01

    Primary care physicians in Germany don't benefit from coding diagnoses: they are coding for the needs of others. For coding, they mostly use either the thesaurus of the German Institute of Medical Documentation and Information (DIMDI) or self-made cheat-sheets. Coding quality is low but seems to be sufficient for the main use case of the resulting data, which is the morbidity-adjusted risk compensation scheme that distributes financial resources between the many German health insurance companies. Neither the International Classification of Diseases and Related Health Problems (ICD-10) nor the German thesaurus as an interface terminology is adequate for coding in primary care. The ICD-11 itself will not recognizably be a step forward from the perspective of primary care. At least the browser database format will be advantageous. An implementation into the 182 different electronic health records (EHRs) on the German market would probably standardize the coding process and make code finding easier. This method of coding would still be more cumbersome than the current coding with self-made cheat-sheets. The first steps towards a useful official cheat-sheet for primary care have been taken, awaiting implementation and evaluation. The International Classification of Primary Care (ICPC-2) already provides an adequate classification standard for primary care that can also be used in combination with ICD-10. A new version of ICPC (ICPC-3) is under development. As the ICPC-2 has already been integrated into the foundation layer of ICD-11, it might easily become the future standard for coding in primary care. Improving communication between the different EHRs would make it possible to take over codes from other healthcare providers. Another opportunity to improve coding quality might be creating use cases for the resulting data for the primary care physicians themselves.

  3. An algorithm for the arithmetic classification of multilattices.

    Science.gov (United States)

    Indelicato, Giuliana

    2013-01-01

    A procedure for the construction and the classification of monoatomic multilattices in arbitrary dimension is developed. The algorithm allows one to determine the location of the points of all monoatomic multilattices with a given symmetry, or to determine whether two assigned multilattices are arithmetically equivalent. This approach is based on ideas from integral matrix theory, in particular the reduction to the Smith normal form, and can be coded to provide a classification software package.
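
    The reduction to Smith normal form that the algorithm relies on is available off the shelf; below is a minimal sketch using sympy (an assumed library choice) on a standard example matrix. Two integer matrices are unimodularly equivalent exactly when their Smith normal forms coincide, which is what makes the reduction useful as an equivalence test.

```python
# Smith normal form of an integer matrix via sympy.
from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, -4, -16]])
print(smith_normal_form(M))  # Matrix([[2, 0, 0], [0, 6, 0], [0, 0, 12]])
```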

  4. Classifications of Patterned Hair Loss: A Review.

    Science.gov (United States)

    Gupta, Mrinal; Mysore, Venkataram

    2016-01-01

    Patterned hair loss is the most common cause of hair loss seen in both the sexes after puberty. Numerous classification systems have been proposed by various researchers for grading purposes. These systems vary from the simpler systems based on recession of the hairline to the more advanced multifactorial systems based on the morphological and dynamic parameters that affect the scalp and the hair itself. Most of these preexisting systems have certain limitations. Currently, the Hamilton-Norwood classification system for males and the Ludwig system for females are most commonly used to describe patterns of hair loss. In this article, we review the various classification systems for patterned hair loss in both the sexes. Relevant articles were identified through searches of MEDLINE and EMBASE. Search terms included but were not limited to androgenic alopecia classification, patterned hair loss classification, male pattern baldness classification, and female pattern hair loss classification. Further publications were identified from the reference lists of the reviewed articles.

  5. Classifications of patterned hair loss: a review

    Directory of Open Access Journals (Sweden)

    Mrinal Gupta

    2016-01-01

    Full Text Available Patterned hair loss is the most common cause of hair loss seen in both the sexes after puberty. Numerous classification systems have been proposed by various researchers for grading purposes. These systems vary from the simpler systems based on recession of the hairline to the more advanced multifactorial systems based on the morphological and dynamic parameters that affect the scalp and the hair itself. Most of these preexisting systems have certain limitations. Currently, the Hamilton-Norwood classification system for males and the Ludwig system for females are most commonly used to describe patterns of hair loss. In this article, we review the various classification systems for patterned hair loss in both the sexes. Relevant articles were identified through searches of MEDLINE and EMBASE. Search terms included but were not limited to androgenic alopecia classification, patterned hair loss classification, male pattern baldness classification, and female pattern hair loss classification. Further publications were identified from the reference lists of the reviewed articles.

  6. Developing a contributing factor classification scheme for Rasmussen's AcciMap: Reliability and validity evaluation.

    Science.gov (United States)

    Goode, N; Salmon, P M; Taylor, N Z; Lenné, M G; Finch, C F

    2017-10-01

    One factor potentially limiting the uptake of Rasmussen's (1997) AcciMap method by practitioners is the lack of a contributing factor classification scheme to guide accident analyses. This article evaluates the intra- and inter-rater reliability and criterion-referenced validity of a classification scheme developed to support the use of AcciMap by led outdoor activity (LOA) practitioners. The classification scheme has two levels: the system level describes the actors, artefacts and activity context in terms of 14 codes; the descriptor level breaks the system-level codes down into 107 specific contributing factors. The study involved 11 LOA practitioners using the scheme on two separate occasions to code a pre-determined list of contributing factors identified from four incident reports. Criterion-referenced validity was assessed by comparing the codes selected by LOA practitioners to those selected by the method creators. Mean intra-rater reliability scores at the system (M = 83.6%) and descriptor (M = 74%) levels were acceptable. Mean inter-rater reliability scores were not consistently acceptable for both coding attempts at the system level (M T1 = 68.8%; M T2 = 73.9%), and were poor at the descriptor level (M T1 = 58.5%; M T2 = 64.1%). Mean criterion-referenced validity scores at the system level were acceptable (M T1 = 73.9%; M T2 = 75.3%). However, they were not consistently acceptable at the descriptor level (M T1 = 67.6%; M T2 = 70.8%). Overall, the results indicate that the classification scheme does not currently satisfy reliability and validity requirements and that further work is required. The implications for the design and development of contributing factor classification schemes are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Formalization of the classification pattern: survey of classification modeling in information systems engineering.

    Science.gov (United States)

    Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James

    2018-01-01

    Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus

  8. A search for symmetries in the genetic code

    International Nuclear Information System (INIS)

    Hornos, J.E.M.; Hornos, Y.M.M.

    1991-01-01

    A search for symmetries based on the classification theorem of Cartan for the compact simple Lie algebras is performed to verify to what extent the genetic code is a manifestation of some underlying symmetry. An exact continuous symmetry group cannot be found to reproduce the present, universal code. However a unique approximate symmetry group is compatible with codon assignment for the fundamental amino acids and the termination codon. In order to obtain the actual genetic code, the symmetry must be slightly broken. (author). 27 refs, 3 figs, 6 tabs

  9. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life

    DEFF Research Database (Denmark)

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    AIM: To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. METHOD: Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). RESULTS: The mean infit MNSQ was 1.01 and 1.00, and the mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after…

  10. Survey of Codes Employing Nuclear Damage Assessment

    Science.gov (United States)

    1977-10-01

    surveyed codes were com- … (battalion level and above); TALLEY/TOTEM: not nuclear; TARTARUS: too highly aggregated (battalion level and above); UNICORN: highly aggregated force allocation code. Vulnerability data can be input by the user as he receives them, and there is the ability to replay any situation using hindsight. The age of target…

  11. An Incremental Classification Algorithm for Mining Data with Feature Space Heterogeneity

    Directory of Open Access Journals (Sweden)

    Yu Wang

    2014-01-01

    Full Text Available Feature space heterogeneity often exists in many real-world data sets, such that some features are of different importance for classification over different subsets. Moreover, the pattern of feature space heterogeneity might dynamically change over time as more and more data are accumulated. In this paper, we develop an incremental classification algorithm, Supervised Clustering for Classification with Feature Space Heterogeneity (SCCFSH), to address this problem. In our approach, supervised clustering is implemented to obtain a number of clusters such that samples in each cluster are from the same class. After the removal of outliers, the relevance of features in each cluster is calculated based on their variations in this cluster. The feature relevance is incorporated into distance calculation for classification. The main advantage of SCCFSH lies in the fact that it is capable of solving a classification problem with feature space heterogeneity in an incremental way, which is favorable for online classification tasks with continuously changing data. Experimental results on a series of data sets and an application to a database marketing problem show the efficiency and effectiveness of the proposed approach.
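
    A hedged sketch of the relevance-weighted distance idea: within each single-class cluster, features with low variance are treated as more relevant and receive a larger weight in the distance. The inverse-variance weighting below is an illustrative choice, not necessarily the paper's exact formula.

```python
# Relevance-weighted nearest-cluster classification (illustrative weighting).
import numpy as np

def cluster_weights(cluster, eps=1e-9):
    """Higher weight for features that vary little within the cluster."""
    return 1.0 / (np.var(cluster, axis=0) + eps)

def weighted_distance(x, cluster):
    """Weighted Euclidean distance from x to the cluster centroid."""
    w = cluster_weights(cluster)
    return float(np.sqrt(np.sum(w * (x - cluster.mean(axis=0)) ** 2)))

def classify(x, clusters, labels):
    """Assign x the label of the nearest cluster under the weighted distance."""
    d = [weighted_distance(x, c) for c in clusters]
    return labels[int(np.argmin(d))]
```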

  12. Beyond crosswalks: reliability of exposure assessment following automated coding of free-text job descriptions for occupational epidemiology.

    Science.gov (United States)

    Burstyn, Igor; Slutsky, Anton; Lee, Derrick G; Singer, Alison B; An, Yuan; Michael, Yvonne L

    2014-05-01

    Epidemiologists typically collect narrative descriptions of occupational histories because these are less prone than self-reported exposures to recall bias of exposure to a specific hazard. However, the task of coding these narratives can be daunting and prohibitively time-consuming in some settings. The aim of this manuscript is to evaluate the performance of a computer algorithm that translates narrative descriptions of occupations into a standard classification of jobs (2010 Standard Occupational Classification) in an epidemiological context. The fundamental question we address is whether exposure assignment resulting from manual (presumed gold standard) coding of the narratives is materially different from that arising from the application of automated coding. We pursued our work through three motivating examples: assessment of physical demands in the Women's Health Initiative observational study, evaluation of predictors of exposure to coal tar pitch volatiles in the US Occupational Safety and Health Administration's (OSHA) Integrated Management Information System, and assessment of exposure to agents known to cause occupational asthma in a pregnancy cohort. In these diverse settings, we demonstrate that automated coding of occupations results in assignment of exposures that are in reasonable agreement with results that can be obtained through manual coding. The correlation between physical demand scores based on manual and automated job classification schemes was reasonable (r = 0.5). The agreement between predictive probability of exceeding the OSHA's permissible exposure level for polycyclic aromatic hydrocarbons, using coal tar pitch volatiles as a surrogate, based on manual and automated coding of jobs was modest (Kendall rank correlation = 0.29). In the case of binary assignment of exposure to asthmagens, we observed that fair to excellent agreement in classifications can be reached, depending on presence of ambiguity in assigned job classification…
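
    The agreement statistic quoted above for the coal tar pitch volatiles example, Kendall rank correlation, is a one-liner with scipy; the scores below are made-up placeholders.

```python
# Kendall rank correlation between manual and automated exposure scores.
from scipy.stats import kendalltau

manual    = [0.9, 0.2, 0.6, 0.4, 0.8]   # placeholder exposure probabilities
automated = [0.7, 0.1, 0.5, 0.6, 0.9]
tau, p = kendalltau(manual, automated)
print(round(tau, 2), round(p, 3))
```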

  13. Classification of high-resolution multi-swath hyperspectral data using Landsat 8 surface reflectance data as a calibration target and a novel histogram based unsupervised classification technique to determine natural classes from biophysically relevant fit parameters

    Science.gov (United States)

    McCann, C.; Repasky, K. S.; Morin, M.; Lawrence, R. L.; Powell, S. L.

    2016-12-01

    Compact, cost-effective, flight-based hyperspectral imaging systems can provide scientifically relevant data over large areas for a variety of applications such as ecosystem studies, precision agriculture, and land management. To fully realize this capability, unsupervised classification techniques based on radiometrically-calibrated data that cluster based on biophysical similarity rather than simply spectral similarity are needed. An automated technique to produce high-resolution, large-area, radiometrically-calibrated hyperspectral data sets, based on the Landsat surface reflectance data product as a calibration target, was developed and applied to three subsequent years of data covering approximately 1850 hectares. The radiometrically-calibrated data allow inter-comparison of the temporal series. Advantages of the radiometric calibration technique include the need for minimal site access, no ancillary instrumentation, and automated processing. Fitting the reflectance spectra of each pixel using a set of biophysically relevant basis functions reduces the data from 80 spectral bands to 9 parameters, providing noise reduction and data compression. Examination of histograms of these parameters allows for determination of natural splitting into biophysically similar clusters. This method creates clusters that are similar in terms of biophysical parameters, not simply spectral proximity. Furthermore, this method can be applied to other data sets, such as urban scenes, by developing other physically meaningful basis functions. The ability to use hyperspectral imaging for a variety of important applications requires the development of data processing techniques that can be automated. The radiometric calibration combined with the histogram-based unsupervised classification technique presented here provides one potential avenue for managing the big data associated with hyperspectral imaging.
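
    The 80-bands-to-9-parameters step can be sketched as an ordinary linear least-squares fit of each spectrum to a small basis. The Gaussian-plus-linear basis below is an assumption for illustration; the paper's biophysically relevant basis functions are not specified in the abstract.

```python
# Fit each pixel's reflectance spectrum to a small basis; keep the coefficients.
import numpy as np

wl = np.linspace(400, 900, 80)                       # band centers in nm (assumed)

def gauss(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

B = np.column_stack([np.ones_like(wl),               # offset
                     wl / 1000.0,                    # linear slope
                     gauss(550, 40),                 # green peak (assumed)
                     gauss(680, 30),                 # red absorption (assumed)
                     gauss(800, 60)])                # NIR plateau (assumed)

def reduce_spectrum(r):
    """80 bands -> 5 basis coefficients via linear least squares."""
    coef, *_ = np.linalg.lstsq(B, r, rcond=None)
    return coef

r = 0.1 + 0.4 * gauss(680, 30) + 0.3 * gauss(800, 60)  # synthetic spectrum
print(reduce_spectrum(r).round(3))
```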

  14. Hyperspectral Image Classification Based on the Combination of Spatial-spectral Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    YANG Zhaoxia

    2015-07-01

    Full Text Available In order to avoid the problem of being over-dependent on high-dimensional spectral features in traditional hyperspectral image classification, a novel approach based on the combination of spatial-spectral features and sparse representation is proposed in this paper. Firstly, we extract the spatial-spectral feature by reorganizing the local image patch with the first d principal components (PCs) into a vector representation, followed by a sorting scheme to make the vector invariant to local image rotation. Secondly, we learn the dictionary through a supervised method, and use it to code the features from test samples afterwards. Finally, we embed the resulting sparse feature coding into a support vector machine (SVM) for hyperspectral image classification. Experiments using three hyperspectral data sets show that the proposed method can effectively improve the classification accuracy compared with traditional classification methods.
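
    A condensed sketch of the first step, the spatial-spectral feature: project the pixels of a local patch onto their first d principal components, flatten, and sort so the resulting vector does not depend on the patch's orientation. The SVD-based PCA and the global sort are simplifications of the paper's scheme.

```python
# Rotation-invariant spatial-spectral feature for one hyperspectral patch.
import numpy as np

def spatial_spectral_feature(patch, d=3):
    """patch: (h, w, bands) array; returns a sorted (h*w*d,) feature vector."""
    h, w, b = patch.shape
    X = patch.reshape(-1, b)
    X = X - X.mean(axis=0)                    # center pixels across bands
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    pcs = X @ Vt[:d].T                        # (h*w, d) local PC scores
    return np.sort(pcs.ravel())               # sorting removes orientation info

rng = np.random.default_rng(0)
patch = rng.random((5, 5, 20))                # 5x5 window, 20 bands (synthetic)
print(spatial_spectral_feature(patch, d=3).shape)  # (75,)
```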

  15. Classification of parotidectomy: a proposed modification to the European Salivary Gland Society classification system.

    Science.gov (United States)

    Wong, Wai Keat; Shetty, Subhaschandra

    2017-08-01

    Parotidectomy remains the mainstay of treatment for both benign and malignant lesions of the parotid gland. There exists a wide range of possible surgical options in parotidectomy in terms of the extent of parotid tissue removed. There is an increasing need for uniformity of terminology resulting from growing interest in modifications of the conventional parotidectomy. A standardized classification system for describing the extent of parotidectomy is therefore of paramount importance. Recently, the European Salivary Gland Society (ESGS) proposed a novel classification system for parotidectomy. The aim of this study is to evaluate this system. The classification system proposed by the ESGS was critically re-evaluated and modified to increase its accuracy and its acceptability. Modifications mainly focused on subdividing Levels I and II into IA, IB, IIA, and IIB. From June 2006 to June 2016, 126 patients underwent 130 parotidectomies at our hospital. The classification system was tested in that cohort of patients. While the ESGS classification system is comprehensive, it does not cover all possibilities. The addition of Sublevels IA, IB, IIA, and IIB may help to address some of the clinical situations seen and is clinically relevant. We aim to test the modified classification system for partial parotidectomy to address some of the challenges mentioned.

  16. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    Science.gov (United States)

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images and benign/malignant clusters of microcalcifications (MCs) classification in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, area under curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (with p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (with p-value = 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity were obtained (with p-value 0.001). We demonstrated that classification based on informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
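
    The task-driven word selection at the heart of this approach can be approximated with an off-the-shelf mutual-information ranking, as in this sketch (sklearn's estimator; the word histograms are random placeholders, and the paper's exact MI criterion may differ).

```python
# Rank BoVW visual words by mutual information with the class label, keep top-k.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
H = rng.poisson(2.0, size=(200, 500)).astype(float)   # 200 images x 500 word counts
y = rng.integers(0, 2, size=200)                      # e.g. benign / malignant labels
mi = mutual_info_classif(H, y, discrete_features=True, random_state=0)
top_words = np.argsort(mi)[::-1][:100]                # keep the 100 most relevant words
print(top_words[:10])
```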

  17. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images…

  18. Nomenclature for congenital and paediatric cardiac disease: the International Paediatric and Congenital Cardiac Code (IPCCC) and the Eleventh Iteration of the International Classification of Diseases (ICD-11).

    Science.gov (United States)

    Franklin, Rodney C G; Béland, Marie J; Colan, Steven D; Walters, Henry L; Aiello, Vera D; Anderson, Robert H; Bailliard, Frédérique; Boris, Jeffrey R; Cohen, Meryl S; Gaynor, J William; Guleserian, Kristine J; Houyel, Lucile; Jacobs, Marshall L; Juraszek, Amy L; Krogmann, Otto N; Kurosawa, Hiromi; Lopez, Leo; Maruszewski, Bohdan J; St Louis, James D; Seslar, Stephen P; Srivastava, Shubhika; Stellin, Giovanni; Tchervenkov, Christo I; Weinberg, Paul M; Jacobs, Jeffrey P

    2017-12-01

    An internationally approved and globally used classification scheme for the diagnosis of CHD has long been sought. The International Paediatric and Congenital Cardiac Code (IPCCC), which was produced and has been maintained by the International Society for Nomenclature of Paediatric and Congenital Heart Disease (the International Nomenclature Society), is used widely, but has spawned many "short list" versions that differ in content depending on the user. Thus, efforts to have a uniform identification of patients with CHD using a single up-to-date and coordinated nomenclature system continue to be thwarted, even if a common nomenclature has been used as a basis for composing various "short lists". In an attempt to solve this problem, the International Nomenclature Society has linked its efforts with those of the World Health Organization to obtain a globally accepted nomenclature tree for CHD within the 11th iteration of the International Classification of Diseases (ICD-11). The International Nomenclature Society has submitted a hierarchical nomenclature tree for CHD to the World Health Organization that is expected to serve increasingly as the "short list" for all communities interested in coding for congenital cardiology. This article reviews the history of the International Classification of Diseases and of the IPCCC, and outlines the process used in developing the ICD-11 congenital cardiac disease diagnostic list and the definitions for each term on the list. An overview of the content of the congenital heart anomaly section of the Foundation Component of ICD-11, published herein in its entirety, is also included. Future plans for the International Nomenclature Society include linking again with the World Health Organization to tackle procedural nomenclature as it relates to cardiac malformations. By doing so, the Society will continue its role in standardising nomenclature for CHD across the globe, thereby promoting research and better outcomes for fetuses

  19. Towards Product Lining Model-Driven Development Code Generators

    OpenAIRE

    Roth, Alexander; Rumpe, Bernhard

    2015-01-01

    A code generator systematically transforms compact models into detailed code. Today, code generation is regarded as an integral part of model-driven development (MDD). Despite its relevance, the development of code generators is an inherently complex task, and common methodologies and architectures are lacking. Additionally, reuse and extension of existing code generators exist only for individual parts. A systematic development and reuse based on a code generator product line is still in its inf...

  20. ICF-based classification and measurement of functioning.

    Science.gov (United States)

    Stucki, G; Kostanjsek, N; Ustün, B; Cieza, A

    2008-09-01

    If we aim towards a comprehensive understanding of human functioning and the development of comprehensive programs to optimize the functioning of individuals and populations, we need to develop suitable measures. The approval of the International Classification of Functioning, Disability and Health (ICF) in 2001 by the 54th World Health Assembly as the first universally shared model and classification of functioning, disability and health therefore marks an important step in the development of measurement instruments and, ultimately, in our understanding of functioning, disability and health. The acceptance and use of the ICF as a reference framework and classification has been facilitated by its development in a worldwide, comprehensive consensus process and by the increasing evidence regarding its validity. However, the broad acceptance and use of the ICF as a reference framework and classification will also depend on the resolution of conceptual and methodological challenges relevant to the classification and measurement of functioning. This paper therefore first describes how the ICF categories can serve as building blocks for the measurement of functioning, and then the current state of the development of ICF-based practical tools and international standards such as the ICF Core Sets. Finally, it illustrates how to map the world of measures to the ICF and vice versa, and the methodological principles relevant to the transformation of information obtained with a clinical test or a patient-oriented instrument to the ICF, as well as the development of ICF-based clinical and self-reported measurement instruments.

  1. Relevance and Effectiveness of the WHO Global Code Practice on the International Recruitment of Health Personnel--Ethical and Systems Perspectives.

    Science.gov (United States)

    Brugha, Ruairí; Crowe, Sophie

    2015-05-20

    The relevance and effectiveness of the World Health Organization's (WHO's) Global Code of Practice on the International Recruitment of Health Personnel is being reviewed in 2015. The Code, a set of ethical norms and principles adopted by the World Health Assembly (WHA) in 2010, urges member states to train and retain the health personnel they need, thereby limiting demand for international migration, especially from the under-staffed health systems in low- and middle-income countries. Most countries failed to submit a first report in 2012 on implementation of the Code, including those source countries whose health systems are most under threat from the recruitment of their doctors and nurses, often to work in 4 major destination countries: the United States, United Kingdom, Canada and Australia. Political commitment by source country Ministers of Health needs to have been achieved at the May 2015 WHA to ensure better reporting by these countries on Code implementation, for it to be effective. This paper uses ethics and health systems perspectives to analyse some of the drivers of international recruitment. The balance of competing ethics principles, which are contained in the Code's articles, reflects a tension that was evident during the drafting of the Code between 2007 and 2010. In 2007-2008, the right of health personnel to migrate was seen as a preeminent principle by US representatives on the Global Council which co-drafted the Code. Consensus on how to balance the competing ethical principles, giving due recognition on the one hand to the obligations of health workers to the countries that trained them and the need for distributive justice given the global inequities of health workforce distribution in relation to need, and on the other hand to the right to migrate, was only possible after President Obama took office in January 2009. It is in the interests of all countries to implement the Global Code and not just those that are losing their health…

  2. Classification of Polarimetric SAR Data Using Dictionary Learning

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg; Dahl, Anders Lindbjerg

    2012-01-01

    This contribution deals with classification of multilook fully polarimetric synthetic aperture radar (SAR) data by learning a dictionary of crop types present in the Foulum test site. The Foulum test site contains a large number of agricultural fields, as well as lakes, forests, natural vegetation, grasslands and urban areas, which make it ideally suited for evaluation of classification algorithms. Dictionary learning centers around building a collection of image patches typical for the classification problem at hand. This requires initial manual labeling of the classes present in the data and is thus a method for supervised classification. Sparse coding of these image patches aims to maintain a proficient number of typical patches and associated labels. Data is consecutively classified by a nearest neighbor search of the dictionary elements and labeled with probabilities of each class. Each dictionary ...
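
    For illustration, a minimal sketch in Python of the nearest-neighbour labelling step described above; the dictionary, patch size, number of classes and the choice of k are invented placeholders rather than the paper's actual setup:

        # Toy patch-dictionary classification by k-nearest-neighbour search,
        # with synthetic stand-ins for labelled SAR image patches.
        import numpy as np

        rng = np.random.default_rng(0)
        dictionary = rng.normal(size=(100, 25))        # 100 flattened 5x5 patches
        labels = rng.integers(0, 4, size=100)          # four hypothetical crop classes

        def classify_patch(patch, dictionary, labels, k=5):
            """Label a patch by majority vote among its k nearest dictionary atoms."""
            d = np.linalg.norm(dictionary - patch.ravel(), axis=1)
            nearest = labels[np.argsort(d)[:k]]
            counts = np.bincount(nearest, minlength=labels.max() + 1)
            return counts.argmax(), counts / counts.sum()  # label, class probabilities

        label, probs = classify_patch(rng.normal(size=(5, 5)), dictionary, labels)
        print(label, probs)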

  3. Positive predictive values of the International Classification of Disease, 10th edition diagnoses codes for diverticular disease in the Danish National Registry of Patients

    Directory of Open Access Journals (Sweden)

    Rune Erichsen

    2010-10-01

    Full Text Available Rune Erichsen1, Lisa Strate2, Henrik Toft Sørensen1, John A Baron3. 1Department of Clinical Epidemiology, Aarhus University Hospital, Denmark; 2Division of Gastroenterology, University of Washington, Seattle, WA, USA; 3Departments of Medicine and of Community and Family Medicine, Dartmouth Medical School, NH, USA. Objective: To investigate the accuracy of diagnostic coding for diverticular disease in the Danish National Registry of Patients (NRP). Study design and setting: At Aalborg Hospital, Denmark, with a catchment area of 640,000 inhabitants, we identified 100 patients recorded in the NRP with a diagnosis of diverticular disease (International Classification of Disease codes, 10th revision [ICD-10], K572–K579) during the 1999–2008 period. We assessed the positive predictive value (PPV) as a measure of the accuracy of discharge codes for diverticular disease, using information from discharge abstracts and outpatient notes as the reference standard. Results: Of the 100 patients coded with diverticular disease, 49 had complicated diverticular disease, whereas 51 had uncomplicated diverticulosis. For the overall diagnosis of diverticular disease (K57), the PPV was 0.98 (95% confidence intervals [CIs]: 0.93, 0.99). For the more detailed subgroups of diagnosis indicating the presence or absence of complications (K573–K579), the PPVs ranged from 0.67 (95% CI: 0.09, 0.99) to 0.92 (95% CI: 0.52, 1.00). The diagnosis codes did not allow accurate identification of uncomplicated disease or any specific complication. However, the combined ICD-10 codes K572, K574, and K578 had a PPV of 0.91 (95% CI: 0.71, 0.99) for any complication. Conclusion: The diagnosis codes in the NRP can be used to identify patients with diverticular disease in general; however, they do not accurately discern patients with uncomplicated diverticulosis or with specific diverticular complications. Keywords: diverticulum, colon, diverticulitis, validation studies
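
    Such PPV confidence limits are commonly exact binomial intervals; a short sketch with a Clopper-Pearson 95% CI (the 98-of-100 count below is chosen only to echo the overall K57 figure, since the abstract does not give the exact numerators, and the study's own interval method is not stated):

        # PPV with an exact (Clopper-Pearson) binomial confidence interval.
        from scipy.stats import beta

        def ppv_exact_ci(confirmed, coded, alpha=0.05):
            k, n = confirmed, coded
            lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return k / n, (lower, upper)

        ppv, (lo, hi) = ppv_exact_ci(confirmed=98, coded=100)
        print(f"PPV = {ppv:.2f} (95% CI {lo:.2f}, {hi:.2f})")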

  4. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes can be obtained either by greedy algorithms or by L1 convex optimization, which may yield quite different solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification applications may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection ...
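
    The contrast between the two families of coders can be reproduced in a few lines with scikit-learn; the random dictionary and the 5-sparse signal below are synthetic stand-ins for the learned dictionaries and video features in the paper:

        # Greedy coding (orthogonal matching pursuit) vs. L1 convex coding (lasso).
        import numpy as np
        from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 256))                  # 256 atoms in 64 dimensions
        D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
        x = D[:, rng.choice(256, 5, replace=False)] @ rng.normal(size=5)  # 5-sparse

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, x)
        lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, x)

        for name, c in [("OMP", omp.coef_), ("Lasso", lasso.coef_)]:
            print(name, "nonzeros:", int(np.sum(c != 0)),
                  "reconstruction error:", round(float(np.linalg.norm(x - D @ c)), 4))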

  5. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    Science.gov (United States)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large data sets into respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, optimization tools can be applied to those data. Classification and optimization have been used by researchers throughout their works, and the outcomes were very encouraging. Here, the authors share what they have experienced in three different areas of applied research.

  6. The First AO Classification System for Fractures of the Craniomaxillofacial Skeleton: Rationale, Methodological Background, Developmental Process, and Objectives.

    Science.gov (United States)

    Audigé, Laurent; Cornelius, Carl-Peter; Di Ieva, Antonio; Prein, Joachim

    2014-12-01

    Validated trauma classification systems are the sole means to provide the basis for reliable documentation and evaluation of patient care, which will open the gateway to evidence-based procedures and healthcare in the coming years. With the support of AO Investigation and Documentation, a classification group was established to develop and evaluate a comprehensive classification system for craniomaxillofacial (CMF) fractures. Blueprints for fracture classification in the major constituents of the human skull were drafted and then evaluated by a multispecialty group of experienced CMF surgeons and a radiologist in a structured process during iterative agreement sessions. At each session, surgeons independently classified the radiological imaging of up to 150 consecutive cases with CMF fractures. During subsequent review meetings, all discrepancies in the classification outcome were critically appraised for clarification and improvement until consensus was reached. The resulting CMF classification system is structured in a hierarchical fashion with three levels of increasing complexity. The most elementary level 1 simply distinguishes four fracture locations within the skull: mandible (code 91), midface (code 92), skull base (code 93), and cranial vault (code 94). Levels 2 and 3 focus on further defining the fracture locations and fracture morphology, achieving an almost individual mapping of the fracture pattern. This introductory article describes the rationale for the comprehensive AO CMF classification system, discusses the methodological framework, and provides insight into the experiences and interactions during the evaluation process within the core groups. The details of this system in terms of anatomy and levels are presented in a series of focused tutorials illustrated with case examples in this special issue of the Journal.
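
    The level-1 codes quoted above map naturally onto a small lookup structure; a toy sketch (the level-2/3 refinements are only hinted at, since the abstract does not enumerate them):

        # Level-1 fracture locations of the AO CMF system, as listed in the abstract.
        CMF_LEVEL1 = {"91": "mandible", "92": "midface",
                      "93": "skull base", "94": "cranial vault"}

        def level1_location(code):
            """Resolve a fracture code to its level-1 location; further digits,
            if present, would refine location and morphology (levels 2 and 3)."""
            location = CMF_LEVEL1.get(code[:2])
            if location is None:
                raise ValueError(f"unknown level-1 code: {code!r}")
            return location

        print(level1_location("92"))  # -> midface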

  7. Monte Carlo codes use in neutron therapy; Application de codes Monte Carlo en neutrontherapie

    Energy Technology Data Exchange (ETDEWEB)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D. [Hopital Pasteur, 06 - Nice (France); Pignol, J.P. [Hopital du Hasenrain, 68 - Mulhouse (France); Cuendet, P. [CEA Centre d' Etudes de Saclay, 91 - Gif-sur-Yvette (France). Direction des Reacteurs Nucleaires; Fares, G.; Hachem, A. [Faculte des Sciences, 06 - Nice (France); Iborra, N. [Centre Antoine-Lacassagne, 06 - Nice (France)

    1998-04-01

    Monte Carlo calculation codes make it possible to study accurately all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and Neutron Capture Enhancement of fast neutron irradiations. (authors)

  8. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Directory of Open Access Journals (Sweden)

    Yidong Tang

    2016-01-01

    Full Text Available The sparse representation based classifier (SRC and its kernel version (KSRC have been employed for hyperspectral image (HSI classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scene and assumes that the number of classes is given. Considering the small target with complex background, a sparse representation based binary hypothesis (SRBBH model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by background dictionary and by union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using kernel-based orthogonal matching pursuit (KOMP algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. It enhances the discrimination and hence improves the performance.
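
    A linear-space sketch conveys the binary-hypothesis idea: code the query pixel with the background dictionary alone and with the union dictionary, then compare residuals. The paper works in kernel space with KOMP; plain OMP, synthetic spectra and an arbitrary decision margin are used here instead:

        # Binary-hypothesis test via reconstruction residuals (linear OMP sketch).
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(2)
        background = rng.normal(size=(50, 30))     # atoms from the local dual window
        target = rng.normal(size=(50, 10))         # atoms of the desired class
        union = np.hstack([background, target])
        query = target @ rng.exponential(size=10)  # pixel dominated by the target

        def residual(D, y, k=5):
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                            fit_intercept=False).fit(D, y)
            return np.linalg.norm(y - D @ omp.coef_)

        r_bg, r_union = residual(background, query), residual(union, query)
        # If adding target atoms shrinks the residual markedly, accept the target.
        print("target present" if r_bg - r_union > 0.1 * r_bg else "background only")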

  9. Nonterminals, homomorphisms and codings in different variations of OL-systems. II. Nondeterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    Continuing the work begun in Part I of this paper, we now consider variations of nondeterministic OL-systems. The present Part II of the paper contains a systematic classification of the effect of nonterminals, codings, weak codings, nonerasing homomorphisms and homomorphisms for all basic variations ...

  10. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of task (such as discrimination for classification task) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computation complexity when dealing with large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  11. Consideration of the Construction Code for TBM-body in ASME BPVC

    International Nuclear Information System (INIS)

    Kim, Dongjun; Kim, Yunjae; Kim, Suk Kwon; Park, Sung Dae; Lee, Dong Won

    2016-01-01

    In this paper, the ASME code is briefly introduced, and the TBM-body is classified for selecting the ASME section. With the classification of the TBM-body, the appropriate section is determined. The Helium Cooled Ceramic Reflector (HCCR) Test Blanket System (TBS) has been designed by the KO TBM team to research the functions of a breeding blanket. These functions comprise three subjects: 1) tritium breeding, 2) heat conversion and extraction, and 3) neutron and gamma-ray shielding. For the design process, the appropriate construction code needs to be selected as the design criterion. The ITER Organization (IO) has proposed that the RCC-MR Edition 2007 shall be used for the TBM-shield, because the TBM-shield is connected to the vacuum boundary. For the other part of the TBM-set, the TBM-body, there is no constraint on the selected code, and the manufacturer can select an appropriate construction code to apply to design and fabrication. The KO TBM team has considered which code is appropriate for the TBM-body; one candidate is the ASME code. The advantage of choosing ASME is its suitability to the domestic situation: in domestic nuclear plants, the ASME or KEPIC code is used to meet regulatory requirements. Based on this, it would be possible to prepare domestic fusion plant regulations. In this paper, the construction code for the TBM-body was determined within the ASME BPVC. For this determination, the structure of the ASME BPVC was introduced, the classification of the TBM-body was conducted according to the ITER criteria, and the operating conditions of the TBM-body, including creep and irradiation effects, were considered.

  13. An analysis of feature relevance in the classification of astronomical transients with machine learning methods

    Science.gov (United States)

    D'Isanto, A.; Cavuoti, S.; Brescia, M.; Donalek, C.; Longo, G.; Riccio, G.; Djorgovski, S. G.

    2016-04-01

    The exploitation of present and future synoptic (multiband and multi-epoch) surveys requires an extensive use of automatic methods for data processing and data interpretation. In this work, using data extracted from the Catalina Real Time Transient Survey (CRTS), we investigate the classification performance of some well-tested methods: Random Forest, MultiLayer Perceptron with Quasi-Newton Algorithm and K-Nearest Neighbours, paying special attention to the feature selection phase. To this end, several classification experiments were performed, namely: identification of cataclysmic variables, separation between galactic and extragalactic objects, and identification of supernovae.
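
    The feature-selection phase can be mimicked with a random forest's impurity-based importances; the features, labels and names below are synthetic placeholders rather than actual CRTS light-curve attributes:

        # Rank candidate features by random-forest importance.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(3)
        X = rng.normal(size=(500, 6))
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        names = ["amplitude", "period", "skew", "colour", "kurtosis", "mad"]
        forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        for name, score in sorted(zip(names, forest.feature_importances_),
                                  key=lambda t: -t[1]):
            print(f"{name:10s} {score:.3f}")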

  14. Exploring different approaches for music genre classification

    Directory of Open Access Journals (Sweden)

    Antonio Jose Homsi Goulart

    2012-07-01

    Full Text Available In this letter, we present different approaches for music genre classification. The proposed techniques, which are composed of a feature extraction stage followed by a classification procedure, explore both the variations of parameters used as input and the classifier architecture. Tests were carried out with three styles of music, namely blues, classical, and lounge, which are considered informally by some musicians as being “big dividers” among music genres, showing the efficacy of the proposed algorithms and establishing a relationship between the relevance of each set of parameters for each music style and each classifier. In contrast to other works, entropies and fractal dimensions are the features adopted for the classifications.
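
    One of the adopted feature families is easy to sketch: a normalised spectral entropy per audio frame. The frames below are a pure tone and white noise rather than real blues, classical or lounge excerpts, and the fractal-dimension features are omitted:

        # Normalised Shannon entropy of a frame's power spectrum: low for a pure
        # tone, high for noise-like signals.
        import numpy as np

        def spectral_entropy(frame, eps=1e-12):
            power = np.abs(np.fft.rfft(frame)) ** 2
            p = power / (power.sum() + eps)
            return float(-np.sum(p * np.log2(p + eps)) / np.log2(len(p)))

        rng = np.random.default_rng(4)
        tone = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100)
        noise = rng.normal(size=2048)
        print(spectral_entropy(tone), spectral_entropy(noise))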

  15. Diagnosis of periodontal diseases using different classification ...

    African Journals Online (AJOL)

    2014-11-29

    Nigerian Journal of Clinical Practice, May-Jun 2015, Vol 18, Issue 3. Materials and Methods: A total of 150 patients was divided into two groups, training and ... functions from training data, and DT learning is one of ... were represented as numerical codings for classification ... tool within dentistry.

  16. Subordinate-level object classification reexamined.

    Science.gov (United States)

    Biederman, I; Subramaniam, S; Bar, M; Kalocsai, P; Fiser, J

    1999-01-01

    The classification of a table as round rather than square, a car as a Mazda rather than a Ford, a drill bit as 3/8-inch rather than 1/4-inch, and a face as Tom have all been regarded as a single process termed "subordinate classification." Despite the common label, the considerable heterogeneity of the perceptual processing required to achieve such classifications requires, minimally, a more detailed taxonomy. Perceptual information relevant to subordinate-level shape classifications can be presumed to vary on continua of (a) the type of distinctive information that is present, nonaccidental or metric, (b) the size of the relevant contours or surfaces, and (c) the similarity of the to-be-discriminated features, such as whether a straight contour has to be distinguished from a contour of low curvature versus high curvature. We consider three, relatively pure cases. Case 1 subordinates may be distinguished by a representation, a geon structural description (GSD), specifying a nonaccidental characterization of an object's large parts and the relations among these parts, such as a round table versus a square table. Case 2 subordinates are also distinguished by GSDs, except that the distinctive GSDs are present at a small scale in a complex object so the location and mapping of the GSDs are contingent on an initial basic-level classification, such as when we use a logo to distinguish various makes of cars. Expertise for Cases 1 and 2 can be easily achieved through specification, often verbal, of the GSDs. Case 3 subordinates, which have furnished much of the grist for theorizing with "view-based" template models, require fine metric discriminations. Cases 1 and 2 account for the overwhelming majority of shape-based basic- and subordinate-level object classifications that people can and do make in their everyday lives. These classifications are typically made quickly, accurately, and with only modest costs of viewpoint changes. Whereas the activation of an array of

  17. Nevada Administrative Code for Special Education Programs.

    Science.gov (United States)

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  18. EXTRACELLULAR VESICLES: CLASSIFICATION, FUNCTIONS AND CLINICAL RELEVANCE

    Directory of Open Access Journals (Sweden)

    A. V. Oberemko

    2014-12-01

    Full Text Available This review presents a generalized definition of vesicles as bilayer extracellular organelles of all cellular forms of life: not only eukaryotic, but also prokaryotic. The structure and composition of extracellular vesicles, the history of their research, their nomenclature, and their impact on life processes in health and disease are discussed. Moreover, vesicles may be useful as clinical biomarkers, and they are promising as biotechnological drugs. However, many questions in this area are still unresolved and will need to be addressed in the future. The most interesting direction from the point of view of practical health care is the study of the effect of exosomes and microvesicles on the development and progression of a particular disease, and of the possibility of adjusting the pathological process by means of extracellular vesicles of a particular type acting as an active ingredient. Also relevant is the further elucidation of the role and importance of exosomes for surrounding cells, tissues and organs at the molecular level, and of the prospects for the use of extracellular vesicles as biomarkers of disease.

  19. Psychological and social problems in primary care patients - general practitioners' assessment and classification.

    Science.gov (United States)

    Rosendal, Marianne; Vedsted, Peter; Christensen, Kaj Sparle; Moth, Grete

    2013-03-01

    To estimate the frequency of psychological and social classification codes employed by general practitioners (GPs) and to explore the extent to which GPs ascribed health problems to biomedical, psychological, or social factors. Design: A cross-sectional survey based on questionnaire data from GPs. Setting: Danish primary care. Subjects: 387 GPs and their face-to-face contacts with 5543 patients. GPs registered consecutive patients on registration forms including reason for encounter, diagnostic classification of main problem, and a GP assessment of biomedical, psychological, and social factors' influence on the contact. The GP-stated reasons for encounter largely overlapped with their classification of the managed problem. Using the International Classification of Primary Care (ICPC-2-R), GPs classified 600 (11%) patients with psychological problems and 30 (0.5%) with social problems. Both codes for problems/complaints and specific disorders were used as the GP's diagnostic classification of the main problem. Two problems (depression and acute stress reaction/adjustment disorder) accounted for 51% of all psychological classifications made. GPs generally emphasized biomedical aspects of the contacts. Psychological aspects were given greater importance in follow-up consultations than in first-episode consultations, whereas social factors were rarely seen as essential to the consultation. Psychological problems are frequently seen and managed in primary care and most are classified within a few diagnostic categories. Social matters are rarely considered or classified.

  20. Biogeographic classification of the Caspian Sea

    DEFF Research Database (Denmark)

    Fendereski, F.; Vogt, M.; Payne, Mark

    2014-01-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions. The ecoregions differ in phytoplankton phenology, with differences in the date of bloom initiation, its duration and amplitude between ecoregions. A first qualitative evaluation of differences in community composition, based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrates, also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.

  1. Cost reducing code implementation strategies

    International Nuclear Information System (INIS)

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

    Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; and assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciates the opportunity provided by the Korea Atomic Industrial Forum and the Korean Nuclear Society to share our ideas and enhance global cooperation through the exchange of information and views on relevant topics

  2. Biogeographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-11-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the hierarchical agglomerative clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow nearshore waters from those offshore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from river, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (one-way ANOVA, P < 0.05). A first qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 25 different species of plankton, fish and benthic invertebrates also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.
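
    The two-step procedure can be sketched with the third-party MiniSom package and scikit-learn; the grid size, the number of ecoregions and the random data standing in for the six selected variables are all illustrative choices:

        # Step (i): SOM data reduction; step (ii): HAC on the SOM codebook.
        import numpy as np
        from minisom import MiniSom
        from sklearn.cluster import AgglomerativeClustering

        rng = np.random.default_rng(5)
        pixels = rng.normal(size=(2000, 6))  # SST, bathymetry, ice, DSSS, TSM, DTSM

        som = MiniSom(8, 8, 6, sigma=1.0, learning_rate=0.5, random_seed=0)
        som.train_random(pixels, 5000)
        codebook = som.get_weights().reshape(-1, 6)      # 64 reduced prototypes

        hac = AgglomerativeClustering(n_clusters=4).fit(codebook)
        node_region = hac.labels_                        # ecoregion per SOM node

        def ecoregion(x):
            """Assign a pixel to an ecoregion via its best-matching SOM node."""
            i, j = som.winner(x)
            return node_region[i * 8 + j]

        print(ecoregion(pixels[0]))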

  3. ViCTree: An automated framework for taxonomic classification from protein sequences.

    Science.gov (United States)

    Modha, Sejal; Thanki, Anil; Cotmore, Susan F; Davison, Andrew J; Hughes, Joseph

    2018-02-20

    The increasing rate of submission of genetic sequences into public databases is providing a growing resource for classifying the organisms that these sequences represent. To aid viral classification, we have developed ViCTree, which automatically integrates the relevant sets of sequences in NCBI GenBank and transforms them into an interactive maximum likelihood phylogenetic tree that can be updated automatically. ViCTree incorporates ViCTreeView, which is a JavaScript-based visualisation tool that enables the tree to be explored interactively in the context of pairwise distance data. To demonstrate utility, ViCTree was applied to subfamily Densovirinae of family Parvoviridae. This led to the identification of six new species of insect virus. ViCTree is open-source and can be run on any Linux- or Unix-based computer or cluster. A tutorial, the documentation and the source code are available under a GPL3 license, and can be accessed at http://bioinformatics.cvr.ac.uk/victree_web/. sejal.modha@glasgow.ac.uk.

  4. Classification of non-simple C*-algebras of real rank zero

    DEFF Research Database (Denmark)

    Arklint, Sara Esther

    This thesis deals with the classification of nonsimple C*-algebras of real rank zero, and with whether filtered K-theory is a suitable invariant for this purpose. As a consequence of the result of E. Kirchberg for purely infinite, nuclear C*-algebras with a finite primitive ideal space, it suffices to lift ... A range result is established for real rank zero graph algebras over primitive ideal spaces that admit classification; as a consequence of completeness of filtered K-theory combined with this range result, one can conclude that real rank zero extensions of stabilized Cuntz-Krieger algebras are stabilized Cuntz-Krieger algebras. The question considered throughout the thesis is the following: is it possible to achieve the desired classification result for arbitrary finite primitive ideal spaces by restricting to C*-algebras of real rank zero that possibly satisfy further restrictions on K-theory? The thesis consists of an account of the relevant theory and the relevant ...

  5. Acute pancreatitis: international classification and nomenclature

    International Nuclear Information System (INIS)

    Bollen, T.L.

    2016-01-01

    The incidence of acute pancreatitis (AP) is increasing, and the disease represents a major healthcare concern. New insights into the pathophysiology, better imaging techniques, and novel treatment options for complicated AP prompted the update of the 1992 Atlanta Classification. Updated nomenclature for pancreatic collections based on imaging criteria is proposed. Adoption of the Revised Classification of Acute Pancreatitis 2012 by radiologists should help standardise reports and facilitate accurate conveyance of relevant findings to referring physicians involved in the care of patients with AP. This review will clarify the nomenclature of pancreatic collections in the setting of AP.

  6. Recommendations for the classification of HIV associated neuromanifestations in the German DRG system.

    Science.gov (United States)

    Evers, Stefan; Fiori, W; Brockmeyer, N; Arendt, G; Husstedt, I-W

    2005-09-12

    HIV-associated neuromanifestations are of growing importance in the in-patient treatment of HIV-infected patients. In Germany, all in-patients have to be coded according to the ICD-10 classification and the German DRG system. We present recommendations on how to code the different primary and secondary neuromanifestations of HIV infection. These recommendations are based on the commentary to the German DRG procedures and aim to establish uniform coding of neuromanifestations.

  7. On the automated assessment of nuclear reactor systems code accuracy

    International Nuclear Information System (INIS)

    Kunz, Robert F.; Kasmala, Gerald F.; Mahaffy, John H.; Murray, Christopher J.

    2002-01-01

    An automated code assessment program (ACAP) has been developed to provide quantitative comparisons between nuclear reactor systems (NRS) code results and experimental measurements. The tool provides a suite of metrics for quality of fit to specific data sets, and the means to produce one or more figures of merit (FOM) for a code, based on weighted averages of results from the batch execution of a large number of code-experiment and code-code data comparisons. Accordingly, this tool has the potential to significantly streamline the verification and validation (V and V) processes in NRS code development environments which are characterized by rapidly evolving software, many contributing developers and a large and growing body of validation data. In this paper, a survey of data conditioning and analysis techniques is summarized, focusing on their relevance to NRS code accuracy assessment. A number of methods are considered for their applicability to the automated assessment of the accuracy of NRS code simulations. A variety of data types and computational modeling methods are considered from a spectrum of mathematical and engineering disciplines. The goal of the survey was to identify needs, issues and techniques to be considered in the development of an automated code assessment procedure, to be used in United States Nuclear Regulatory Commission (NRC) advanced thermal-hydraulic (T/H) code consolidation efforts. The ACAP software was designed based in large measure on the findings of this survey. An overview of this tool is summarized and several NRS data applications are provided. The paper is organized as follows: The motivation for this work is first provided by background discussion that summarizes the relevance of this subject matter to the nuclear reactor industry. Next, the spectrum of NRS data types is classified into categories, in order to provide a basis for assessing individual comparison methods. Then, a summary of the survey is provided, where each ...
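
    The FOM construction itself reduces to a weighted average of per-comparison metrics; a toy sketch with invented metric values and weights (ACAP's actual metric suite is considerably richer):

        # One figure of merit from weighted per-comparison accuracy metrics.
        comparisons = [
            # (metric value in [0, 1], weight chosen for the data set's importance)
            (0.92, 2.0),   # e.g. pressure trace vs. experiment A
            (0.78, 1.0),   # e.g. void fraction vs. experiment B
            (0.85, 1.5),   # e.g. code-to-code temperature comparison
        ]
        fom = sum(v * w for v, w in comparisons) / sum(w for _, w in comparisons)
        print(f"figure of merit: {fom:.3f}")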

  8. Hyperspectral Image Classification Using Discriminative Dictionary Learning

    International Nuclear Information System (INIS)

    Zongze, Y; Hao, S; Kefeng, J; Huanxin, Z

    2014-01-01

    The hyperspectral image (HSI) processing community has witnessed a surge of papers focusing on the utilization of sparse priors for effective HSI classification. In sparse representation based HSI classification, there are two phases: sparse coding with an over-complete dictionary, and classification. In this paper, we first apply a novel Fisher discriminative dictionary learning method, which captures the relative differences between classes. The competitive selection strategy ensures that atoms in the resulting over-complete dictionary are the most discriminative. Secondly, motivated by the assumption that spatially adjacent samples are statistically related and even belong to the same materials (same class), we propose a majority voting scheme incorporating contextual information to predict the category label. Experimental results show that the proposed method can effectively strengthen the relative discrimination of the constructed dictionary, and incorporating the majority voting scheme generally achieves improved prediction performance.
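
    The contextual majority-voting step is simple to sketch on a synthetic label image (the discriminative dictionary learning itself is not reproduced here):

        # Relabel each pixel with the most frequent label in its spatial window.
        import numpy as np

        def majority_vote(labels, radius=1):
            h, w = labels.shape
            out = labels.copy()
            for i in range(h):
                for j in range(w):
                    window = labels[max(i - radius, 0):i + radius + 1,
                                    max(j - radius, 0):j + radius + 1]
                    out[i, j] = np.bincount(window.ravel()).argmax()
            return out

        rng = np.random.default_rng(6)
        noisy = rng.integers(0, 3, size=(8, 8))   # per-pixel sparse-code labels
        print(majority_vote(noisy))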

  9. Benchmark problems for radiological assessment codes. Final report

    International Nuclear Information System (INIS)

    Mills, M.; Vogt, D.; Mann, B.

    1983-09-01

    This report describes benchmark problems to test computer codes used in the radiological assessment of high-level waste repositories. The problems presented in this report will test two types of codes. The first type of code calculates the time-dependent heat generation and radionuclide inventory associated with a high-level waste package. Five problems have been specified for this code type. The second code type addressed in this report involves the calculation of radionuclide transport and dose-to-man. For these codes, a comprehensive problem and two subproblems have been designed to test the relevant capabilities of these codes for assessing a high-level waste repository setting.

  10. A comparative evaluation of sequence classification programs

    Directory of Open Access Journals (Sweden)

    Bazinet Adam L

    2012-05-01

    Full Text Available Abstract Background A fundamental problem in modern genomics is to taxonomically or functionally classify DNA sequence fragments derived from environmental sampling (i.e., metagenomics. Several different methods have been proposed for doing this effectively and efficiently, and many have been implemented in software. In addition to varying their basic algorithmic approach to classification, some methods screen sequence reads for ’barcoding genes’ like 16S rRNA, or various types of protein-coding genes. Due to the sheer number and complexity of methods, it can be difficult for a researcher to choose one that is well-suited for a particular analysis. Results We divided the very large number of programs that have been released in recent years for solving the sequence classification problem into three main categories based on the general algorithm they use to compare a query sequence against a database of sequences. We also evaluated the performance of the leading programs in each category on data sets whose taxonomic and functional composition is known. Conclusions We found significant variability in classification accuracy, precision, and resource consumption of sequence classification programs when used to analyze various metagenomics data sets. However, we observe some general trends and patterns that will be useful to researchers who use sequence classification programs.

  11. Coding the Complexity of Activity in Video Recordings

    DEFF Research Database (Denmark)

    Harter, Christopher Daniel; Otrel-Cass, Kathrin

    2017-01-01

    This paper presents a theoretical approach to coding and analyzing video data on human interaction and activity, using principles found in cultural historical activity theory. The systematic classification or coding of information contained in video data on activity can be arduous and time-consuming. Revisiting the method developed by Susanne Bødker in 1996, three possible areas of expansion to Bødker's method for analyzing video data were found. Firstly, a technological expansion, due to contemporary developments in sophisticated analysis software since the mid 1990's. Secondly, a conceptual expansion, where the applicability of using Activity Theory outside of the context of human–computer interaction is assessed. Lastly, a temporal expansion, facilitating an organized method for tracking the development of activities over time within the coding and analysis of video data. To expand on the above areas, a prototype coding ...

  12. Partial Least Squares with Structured Output for Modelling the Metabolomics Data Obtained from Complex Experimental Designs: A Study into the Y-Block Coding

    Directory of Open Access Journals (Sweden)

    Yun Xu

    2016-10-01

    Full Text Available Partial least squares (PLS is one of the most commonly used supervised modelling approaches for analysing multivariate metabolomics data. PLS is typically employed as either a regression model (PLS-R or a classification model (PLS-DA. However, in metabolomics studies it is common to investigate multiple, potentially interacting, factors simultaneously following a specific experimental design. Such data often cannot be considered as a “pure” regression or a classification problem. Nevertheless, these data have often still been treated as a regression or classification problem and this could lead to ambiguous results. In this study, we investigated the feasibility of designing a hybrid target matrix Y that better reflects the experimental design than simple regression or binary class membership coding commonly used in PLS modelling. The new design of Y coding was based on the same principle used by structural modelling in machine learning techniques. Two real metabolomics datasets were used as examples to illustrate how the new Y coding can improve the interpretability of the PLS model compared to classic regression/classification coding.
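
    The idea of a design-aware Y block can be sketched for a hypothetical two-factor experiment: plain PLS-DA would one-hot the six design cells, whereas the structured Y below codes each factor in its own column block (all numbers are toy, not taken from the paper's datasets):

        # Structured Y coding for a 3-dose x 2-time-point design, fed to PLS.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(7)
        dose = np.repeat([0, 1, 2], 10)            # factor A: three dose levels
        time = np.tile(np.repeat([0, 1], 5), 3)    # factor B: two time points
        X = rng.normal(size=(30, 50)) + 0.5 * dose[:, None] + 0.3 * time[:, None]

        def one_hot(v):
            Y = np.zeros((len(v), v.max() + 1))
            Y[np.arange(len(v)), v] = 1.0
            return Y

        Y = np.hstack([one_hot(dose), one_hot(time)])   # one block per factor
        pls = PLSRegression(n_components=3).fit(X, Y)
        print(round(pls.score(X, Y), 3))                # overall R^2 on Y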

  13. Tax reliefs in the Russian Federation, their definition, types and classification

    OpenAIRE

    Natalia Soloveva

    2012-01-01

    The present article analyzes the definition of tax allowances that is fixed in the Tax Code of the Russian Federation and the classification of tax allowances into tax exceptions, tax abatements and tax discharges. The article also covers the author's classification of tax allowances into direct and indirect ones, according to economic benefits obtained by taxpayers as a result of using tax allowances. In the conclusion, the author determines an exhaustive list of tax allowances in the Russian tax legislation.

  14. Laryngeal Cysts in Adults: Simplifying Classification and Management.

    Science.gov (United States)

    Heyes, Richard; Lott, David G

    2017-12-01

    Objective Laryngeal cysts may occur at any mucosa-lined location within the larynx and account for 5% to 10% of nonmalignant laryngeal lesions. A number of proposed classifications for laryngeal cysts exist; however, no previously published classification aims to guide management. This review analyzes contemporary laryngeal cyst management and proposes a framework for the terminology and management of cystic lesions in the larynx. Data Sources PubMed/Medline. Review Methods A primary literature search of the entire Medline database was performed for all titles of publications pertaining to laryngeal cysts and reviewed for relevance. Full manuscripts were reviewed per the relevance of their titles and abstracts, and selection into this review was according to their clinical and scientific relevance. Conclusion Laryngeal cysts have been associated with rapid-onset epiglottitis, dyspnea, stridor, and death; therefore, they should not be considered of little significance. Symptoms are varied and nonspecific. Laryngoscopy is the primary initial diagnostic tool. Cross-sectional imaging may be required, and future use of endolaryngeal ultrasound and optical coherence tomography may revolutionize practice. Where possible, cysts should be completely excised, and there is growing evidence that a transoral approach is superior to transcervical excision for nearly all cysts. Histology provides definitive diagnosis, and oncocytic cysts require close follow-up. Implications for Practice A new classification system is proposed that increases clarity in terminology, with the aim of better preparing surgeons and authors for future advances in the understanding and management of laryngeal cysts.

  15. Development and ITER relevant application of a user friendly interface (TEM) for use with the TMAP4 code

    International Nuclear Information System (INIS)

    Tanaka, M.R.; Fong, C.; Kalyanam, K.M.; Sood, S.K.; Delisle, M.; Natalizio, A.

    1995-01-01

    The Tritium Enclosure Model (TEM) has been developed as a user-friendly interface to facilitate the application of the previously validated, verified and ITER-approved TMAP4 code. TEM (and TMAP4) dynamically analyzes the movement of tritium through structures and between structures and adjoining enclosures. Credible ITER-relevant accident scenarios were developed and analyzed. The analyses considered scenarios with the cleanup system active or inactive, and with and without surface interactions. For the surface-interaction cases, the epoxy characteristics reported in the TMAP4 User Manual were used. Typical applications of TEM are the estimation of time-dependent tritium inventories in enclosures, as well as of emissions to the environment following an accidental spill into any set of enclosures connected to cleanup systems. This paper outlines the various features of TEM and reports on the application of TEM to determine environmental source terms for the ITER Fuel Cycle and Cooling Systems, under chronic and accidental tritium releases. 3 refs., 2 figs., 1 tab

  16. Migration to the ICD-10 coding system: A primer for spine surgeons (Part 1).

    Science.gov (United States)

    Rahmathulla, Gazanfar; Deen, H Gordon; Dokken, Judith A; Pirris, Stephen M; Pichelmann, Mark A; Nottmeier, Eric W; Reimer, Ronald; Wharen, Robert E

    2014-01-01

    On 1 October 2015, a new federally mandated system goes into effect requiring the replacement of the International Classification of Disease-version 9-Clinical Modification (ICD-9-CM) with ICD-10-CM. These codes are required to be used for reimbursement and to substantiate medical necessity. ICD-10 comprises as many as 141,000 codes, an increase of 712% compared to ICD-9. Execution of the ICD-10 system will require significant changes in clinical administrative and hospital-based practices. Through the transition, diminished productivity and practice revenue can be anticipated, the impacts of which the spine surgeon can minimize by appropriate education and planning. The advantages of the new system include increased clarity and more accurate definitions reflecting patient condition, information relevant to ambulatory and managed care encounters, expanded injury codes, laterality, specificity, precise data for safety and compliance reporting, data mining for research, and finally, enabling pay-for-performance programs. The disadvantages include the cost per physician, training administrative staff, revenue loss during the learning curve, confusion, the need to upgrade hardware along with software, and overall expense to the healthcare system. With the deadline rapidly approaching, gaps in implementation result in delayed billing, delayed or diminished reimbursements, and absence of quality and outcomes data. It is therefore essential for spine surgeons to understand their role in transitioning to this new environment. Part I of this article discusses the background, coding changes, and costs, and reviews the salient features of ICD-10 in spine surgery.

  17. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    Science.gov (United States)

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3499 randomly selected patients who were discharged in 1999, 2001 and 2003 from two teaching hospitals and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data, used as the 'reference standard', for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.
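
    Once administrative codes and chart review are paired per patient, the reported measures are direct to compute for each co-morbidity; the binary vectors below are simulated, with deliberate under-coding:

        # Sensitivity, PPV and Cohen's kappa for one coded co-morbidity.
        import numpy as np
        from sklearn.metrics import cohen_kappa_score, confusion_matrix

        rng = np.random.default_rng(8)
        chart = rng.random(1000) < 0.15                  # reference standard
        coded = chart & (rng.random(1000) < 0.45)        # under-coding ...
        coded |= ~chart & (rng.random(1000) < 0.01)      # ... plus rare false codes

        tn, fp, fn, tp = confusion_matrix(chart, coded).ravel()
        print(f"sensitivity = {tp / (tp + fn):.2f}")
        print(f"PPV         = {tp / (tp + fp):.2f}")
        print(f"kappa       = {cohen_kappa_score(chart, coded):.2f}")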

  18. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
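
    A patch-based stand-in (neither convolutional nor multiresolution, unlike the approach described above) shows the code-then-cluster pipeline; dictionary size, sparsity and cluster count are arbitrary:

        # Learn a dictionary, sparse-code patches, then k-means the sparse codes.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import MiniBatchDictionaryLearning

        rng = np.random.default_rng(9)
        patches = rng.normal(size=(1000, 64))            # 8x8 patches, flattened

        dico = MiniBatchDictionaryLearning(n_components=32,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=4,
                                           random_state=0)
        codes = dico.fit(patches).transform(patches)     # sparse approximations

        clusters = KMeans(n_clusters=5, n_init=10,
                          random_state=0).fit_predict(codes)
        print(np.bincount(clusters))                     # cluster sizes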

  19. The ability of current statistical classifications to separate services and manufacturing

    DEFF Research Database (Denmark)

    Christensen, Jesper Lindgaard

    2013-01-01

    This paper explores the performance of current statistical classification systems in classifying firms and, in particular, their ability to distinguish between firms that provide services and firms that provide manufacturing. We find that a large share of firms, almost 20%, are not classified as expected based on a comparison of their statements of activities with the assigned industry codes. This result is robust to analyses on different levels of aggregation and is validated in an additional survey. It is well known from earlier literature that industry classification systems are not perfect. This paper provides a quantification of the flaws in classifications of firms. Moreover, it is explained why the classifications of firms are imprecise. The increasing complexity of production, inertia in changes to statistical systems and the increasing integration of manufacturing products and services ...

  20. Bio-geographic classification of the Caspian Sea

    Science.gov (United States)

    Fendereski, F.; Vogt, M.; Payne, M. R.; Lachkar, Z.; Gruber, N.; Salmanmahiny, A.; Hosseini, S. A.

    2014-03-01

    Like other inland seas, the Caspian Sea (CS) has been influenced by climate change and anthropogenic disturbance during recent decades, yet the scientific understanding of this water body remains poor. In this study, an eco-geographical classification of the CS based on physical information derived from space and in-situ data is developed and tested against a set of biological observations. We used a two-step classification procedure, consisting of (i) a data reduction with self-organizing maps (SOMs) and (ii) a synthesis of the most relevant features into a reduced number of marine ecoregions using the Hierarchical Agglomerative Clustering (HAC) method. From an initial set of 12 potential physical variables, 6 independent variables were selected for the classification algorithm, i.e., sea surface temperature (SST), bathymetry, sea ice, seasonal variation of sea surface salinity (DSSS), total suspended matter (TSM) and its seasonal variation (DTSM). The classification results reveal a robust separation between the northern and the middle/southern basins as well as a separation of the shallow near-shore waters from those off-shore. The observed patterns in ecoregions can be attributed to differences in climate and geochemical factors such as distance from river, water depth and currents. A comparison of the annual and monthly mean Chl a concentrations between the different ecoregions shows significant differences (Kruskal-Wallis rank test, P < 0.05). A first qualitative evaluation of differences in community composition based on recorded presence-absence patterns of 27 different species of plankton, fish and benthic invertebrates also confirms the relevance of the ecoregions as proxies for habitats with common biological characteristics.

  1. Classification of hand eczema

    DEFF Research Database (Denmark)

    Agner, T; Aalto-Korte, K; Andersen, K E

    2015-01-01

    BACKGROUND: Classification of hand eczema (HE) is mandatory in epidemiological and clinical studies, and also important in clinical work. OBJECTIVES: The aim was to test a recently proposed classification system of HE in clinical practice in a prospective multicentre study. METHODS: Patients were recruited from nine different tertiary referral centres. All patients underwent examination by specialists in dermatology and were checked using relevant allergy testing. Patients were classified into one of the six diagnostic subgroups of HE: allergic contact dermatitis, irritant contact dermatitis, atopic hand eczema, ... The classification system investigated in the present study was useful, being able to give an appropriate main diagnosis for 89% of HE patients, and for another 7% when using two main diagnoses. The fact that more than half of the patients had one or more additional diagnoses illustrates that HE is a multifactorial disease.

  2. Review of codes, standards, and regulations for natural gas locomotives.

    Science.gov (United States)

    2014-06-01

    This report identified, collected, and summarized relevant international codes, standards, and regulations with potential applicability to the use of natural gas as a locomotive fuel. Few international or country-specific codes, standards, and regulations ...

  3. Tax reliefs in the Russian Federation, their definition, types and classification

    Directory of Open Access Journals (Sweden)

    Natalia Soloveva

    2012-12-01

    Full Text Available The present article analyzes the definition of tax allowances that is fixed in the Tax Code of the Russian Federation and the classification of tax allowances into tax exceptions, tax abatements and tax discharges. The article also covers the author's classification of tax allowances into direct and indirect ones, according to economic benefits obtained by taxpayers as a result of using tax allowances. In the conclusion, the author determines an exhaustive list of tax allowances in the Russian tax legislation.

  4. IRSN Code of Ethics and Professional Conduct. Annex VII [TSO Mission Statement and Code of Ethics

    International Nuclear Information System (INIS)

    2018-01-01

    IRSN has adopted, in 2013, a Code of Ethics and Professional Conduct, the contents of which are summarized. As a preamble, it is indicated that the Code, which was adopted in 2013 by the Ethics Commission of IRSN and the Board of IRSN, complies with relevant constitutional and legal requirements. The introduction to the Code presents the role and missions of IRSN in the French system, as well as the various conditions and constraints that frame its action, in particular with respect to ethical issues. It states that the Code sets principles and establishes guidance for addressing these constraints and resolving conflicts that may arise, thus constituting references for the Institute and its staff, and helping IRSN’s partners in their interaction with the Institute. The stipulations of the Code are organized in four articles, reproduced and translated.

  5. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
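
    Findings (ii) and (iii) can both be sketched with scikit-learn: a nonlinear Gaussian-process classifier on MEG-like features, fused late with a gaze-based classifier by averaging class probabilities. The features below are random surrogates, not real recordings:

        # Late fusion of an MEG-feature GP classifier and a gaze-feature model.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(10)
        n = 200
        relevant = rng.random(n) < 0.5
        meg = rng.normal(size=(n, 20)) + 0.4 * relevant[:, None]
        gaze = rng.normal(size=(n, 4)) + 0.6 * relevant[:, None]

        gp = GaussianProcessClassifier().fit(meg[:150], relevant[:150])
        lr = LogisticRegression().fit(gaze[:150], relevant[:150])

        fused = 0.5 * (gp.predict_proba(meg[150:])[:, 1]
                       + lr.predict_proba(gaze[150:])[:, 1])
        print("fused accuracy:", np.mean((fused > 0.5) == relevant[150:]))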

  6. Nonlinear programming for classification problems in machine learning

    Science.gov (United States)

    Astorino, Annabella; Fuduli, Antonio; Gaudioso, Manlio

    2016-10-01

    We survey some nonlinear models for classification problems arising in machine learning. In recent years this field has become increasingly relevant owing to many practical applications, such as text and web classification, object recognition in machine vision, gene expression profile analysis, DNA and protein analysis, medical diagnosis, and customer profiling. Classification deals with the separation of sets by means of appropriate separation surfaces, which are generally obtained by solving a numerical optimization model. While linear separability is the basis of the most popular approach to classification, the Support Vector Machine (SVM), in recent years the use of nonlinear separating surfaces has received some attention. The objective of this work is to recall some of these proposals, mainly in terms of the numerical optimization models. In particular we tackle the polyhedral, ellipsoidal, spherical and conical separation approaches and, for some of them, we also consider the semisupervised versions.
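
    As a hedged illustration of why such nonlinear surfaces matter (not code from the survey, which works with optimization models directly), the sketch below compares a linear separator with a kernel-based one on two concentric rings, a set that a spherical surface separates but no hyperplane can; the dataset and parameters are arbitrary choices.

      from sklearn.datasets import make_circles
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Two concentric rings: a circle/sphere separates them, no hyperplane can.
      X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

      for name, clf in [("linear SVM", SVC(kernel="linear")),
                        ("RBF SVM   ", SVC(kernel="rbf", gamma=2.0))]:
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(name, round(acc, 2))    # linear stays near chance, RBF near 1.0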

  7. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local-feature-based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel-based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel-based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel-based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  8. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  9. Computer codes for designing proton linear accelerators

    International Nuclear Information System (INIS)

    Kato, Takao

    1992-01-01

    Computer codes for designing proton linear accelerators are discussed from the viewpoint of not only designing but also construction and operation of the linac. The codes are divided into three categories according to their purposes: 1) design code, 2) generation and simulation code, and 3) electric and magnetic fields calculation code. The role of each category is discussed on the basis of experience at KEK (the design of the 40-MeV proton linac and its construction and operation, and the design of the 1-GeV proton linac). We introduce our recent work relevant to three-dimensional calculation and supercomputer calculation: 1) tuning of MAFIA (three-dimensional electric and magnetic fields calculation code) for supercomputer, 2) examples of three-dimensional calculation of accelerating structures by MAFIA, 3) development of a beam transport code including space charge effects. (author)

  10. Lattice-Like Total Perfect Codes

    Directory of Open Access Journals (Sweden)

    Araujo Carlos

    2014-02-01

    A contribution is made to the classification of lattice-like total perfect codes in integer lattices Λ^n via pairs (G, Φ) formed by abelian groups G and homomorphisms Φ: Z^n → G. A conjecture is posed that the cited contribution covers all possible cases. A related conjecture, concerning open problems on lattice-like perfect dominating sets in Λ^n whose induced components are parallel paths of length > 1, is posed as well.
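
    A one-dimensional toy makes the definitions concrete. In the lattice Z (the paper treats Λ^n in general), the set C = {4k, 4k+1} is lattice-like, being the preimage of {0, 1} under the homomorphism Φ: Z → Z_4, and it is a total perfect code: every integer has exactly one neighbour in C. The check below is a brute-force verification on a finite window, not code from the paper.

      # Total perfect: every vertex x of Z has exactly one neighbour (x-1 or x+1) in C.
      in_code = lambda x: x % 4 in (0, 1)   # C = preimage of {0, 1} under x mod 4

      for x in range(-100, 101):
          assert sum(in_code(v) for v in (x - 1, x + 1)) == 1, x
      print("C = {4k, 4k+1} is a total perfect code on the tested window of Z")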

  11. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  12. Binary Stochastic Representations for Large Multi-class Classification

    KAUST Repository

    Gerald, Thomas

    2017-10-23

    Classification with a large number of classes is a key problem in machine learning and corresponds to many real-world applications like tagging of images or textual documents in social networks. While one-vs-all methods usually reach top performance in this context, these approaches suffer from a high inference complexity, linear w.r.t. the number of categories. Different models based on the notion of binary codes have been proposed to overcome this limitation, achieving a sublinear inference complexity. But they need to decide a priori, using more or less complex heuristics, which binary code to associate with which category before learning. We propose a new end-to-end model which aims at simultaneously learning to associate binary codes with categories and learning to map inputs to binary codes. This approach, called Deep Stochastic Neural Codes (DSNC), keeps the sublinear inference complexity but does not need any a priori tuning. Experimental results on different datasets show the effectiveness of the approach w.r.t. baseline methods.
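
    A classical fixed-code baseline of the kind DSNC improves on is error-correcting output codes, where each category is assigned a random binary code before learning and one binary learner is trained per code bit. The sketch below uses scikit-learn's implementation on a small stand-in dataset; unlike DSNC, it does not learn the codes end-to-end.

      from sklearn.datasets import load_digits
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.multiclass import OutputCodeClassifier

      X, y = load_digits(return_X_y=True)     # 10 classes as a small stand-in

      # Each class gets a fixed random binary code; one binary learner per bit,
      # so inference scales with the code length rather than the class count.
      ecoc = OutputCodeClassifier(LogisticRegression(max_iter=2000),
                                  code_size=0.6, random_state=0)
      print("ECOC accuracy:", cross_val_score(ecoc, X, y, cv=3).mean())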

  13. The Search for Symmetries in the Genetic Code:

    Science.gov (United States)

    Antoneli, Fernando; Forger, Michael; Hornos, José Eduardo M.

    We give a full classification of the possible schemes for obtaining the distribution of multiplets observed in the standard genetic code by symmetry breaking in the context of finite groups, based on an extended notion of partial symmetry breaking that incorporates the intuitive idea of "freezing" first proposed by Francis Crick, which is given a precise mathematical meaning.

  14. Compensatory neurofuzzy model for discrete data classification in biomedical

    Science.gov (United States)

    Ceylan, Rahime

    2015-03-01

    Biomedical data is separated into two main sections: signals and discrete data. Studies in this area therefore concern either biomedical signal classification or biomedical discrete data classification. There are artificial intelligence models for the classification of ECG, EMG or EEG signals. In the same way, many models exist in the literature for the classification of discrete data taken as values of samples, such as the results of blood analysis or biopsy in the medical process. No single algorithm achieves a high accuracy rate on the classification of both signals and discrete data. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in the biomedical pattern recognition area. The compensatory neurofuzzy network is a hybrid, binary classifier. In this system, the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset, while a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  15. Classification of flipped SU(5) heterotic-string vacua

    Energy Technology Data Exchange (ETDEWEB)

    Faraggi, Alon E., E-mail: alon.faraggi@liv.ac.uk [Department of Mathematical Sciences, University of Liverpool, Liverpool L69 7ZL (United Kingdom); Rizos, John, E-mail: irizos@uoi.gr [Department of Physics, University of Ioannina, GR45110 Ioannina (Greece); Sonmez, Hasan, E-mail: Hasan.Sonmez@liv.ac.uk [Department of Mathematical Sciences, University of Liverpool, Liverpool L69 7ZL (United Kingdom)

    2014-09-15

    We extend the classification of free fermionic heterotic-string vacua to models in which the SO(10) GUT symmetry is reduced at the string level to the flipped SU(5) subgroup. In our classification method the set of boundary condition basis vectors is fixed and the enumeration of string vacua is obtained in terms of the Generalised GSO (GGSO) projection coefficients entering the one-loop partition function. We derive algebraic expressions for the GGSO projections for all the physical states appearing in the sectors generated by the set of basis vectors. This enables the programming of the entire spectrum analysis in a computer code. For that purpose we developed two independent codes, based on FORTRAN95 and JAVA, and all results presented are confirmed by the two independent routines. We perform a statistical sampling in the space of 2^44 ≈ 10^13 flipped SU(5) vacua, and scan up to 10^12 GGSO configurations. Contrary to the corresponding Pati–Salam classification results, we do not find exophobic flipped SU(5) vacua with an odd number of generations. We study the structure of exotic states appearing in the three generation models that additionally contain a viable Higgs spectrum, and demonstrate the existence of models in which all the exotic states are confined by a hidden sector non-Abelian gauge symmetry, as well as models that may admit the racetrack mechanism.

  16. Mapping a classification system to architectural education

    DEFF Research Database (Denmark)

    Hermund, Anders; Klint, Lars; Rostrup, Nicolai

    2015-01-01

    This paper examines to what extent a new classification system, the Cuneco Classification System (CCS), proves useful in the education of architects, and to what degree the aim of an architectural education, based on an arts and crafts approach rather than a polytechnic approach, benefits from the distinct terminology of the classification system. The method used to examine the relationship between education, practice and the CCS bifurcates into a quantitative and a qualitative exploration: a quantitative comparison of the curriculum with the students' own descriptions of their studies through a questionnaire survey among 88 students in graduate school, and qualitative interviews with a handful of practicing architects, to cross-check the relevance of the education against the profession. The examination indicates the need for a new definition, in addition to the CCS's scale, covering the earliest...

  17. Classification of Hyperspectral Images Using Kernel Fully Constrained Least Squares

    Directory of Open Access Journals (Sweden)

    Jianjun Liu

    2017-11-01

    As a widely used classifier, sparse representation classification (SRC) has shown good performance for hyperspectral image classification. Recent works have highlighted that it is the collaborative representation mechanism under SRC that makes SRC a highly effective technique for classification purposes. If the dimensionality and the discrimination capacity of a test pixel are high, other norms (e.g., the ℓ2-norm) can be used to regularize the coding coefficients in place of the sparsity-inducing ℓ1-norm. In this paper, we show that in the kernel space the nonnegative constraint can also play the same role, and thus suggest the investigation of kernel fully constrained least squares (KFCLS) for hyperspectral image classification. Furthermore, in order to improve the classification performance of KFCLS by incorporating spatial-spectral information, we investigate two kinds of spatial-spectral methods using two regularization strategies: (1) the coefficient-level regularization strategy, and (2) the class-level regularization strategy. Experimental results conducted on four real hyperspectral images demonstrate the effectiveness of the proposed KFCLS, and show how to incorporate spatial-spectral information efficiently in the regularization framework.
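
    The non-kernel core of the method, fully constrained least squares, can be sketched compactly: nonnegativity is handled by NNLS, and the sum-to-one constraint by the classic trick of appending a heavily weighted row of ones. This is a plain FCLS illustration on assumed synthetic data; the paper's KFCLS additionally operates in a kernel space.

      import numpy as np
      from scipy.optimize import nnls

      def fcls(A, y, delta=1e3):
          """min ||Ax - y|| s.t. x >= 0 and sum(x) = 1 (the sum-to-one constraint
          is enforced approximately through a heavily weighted appended row)."""
          k = A.shape[1]
          A_aug = np.vstack([A, delta * np.ones((1, k))])
          y_aug = np.append(y, delta)
          x, _ = nnls(A_aug, y_aug)
          return x

      rng = np.random.RandomState(0)
      A = np.abs(rng.randn(50, 4))             # 4 class signatures over 50 bands
      x_true = np.array([0.6, 0.3, 0.1, 0.0])
      y = A @ x_true + 0.01 * rng.randn(50)

      x_hat = fcls(A, y)
      print(x_hat.round(3), "sum =", round(x_hat.sum(), 3))
      print("predicted class:", int(x_hat.argmax()))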

  18. Can Automatic Classification Help to Increase Accuracy in Data Collection?

    Directory of Open Access Journals (Sweden)

    Frederique Lang

    2016-09-01

    Purpose: The authors aim at testing the performance of a set of machine learning algorithms that could improve the process of data cleaning when building datasets. Design/methodology/approach: The paper is centered on cleaning datasets gathered from publishers and online resources by the use of specific keywords. In this case, we analyzed data from the Web of Science. The accuracy of various forms of automatic classification was tested here in comparison with manual coding in order to determine their usefulness for data collection and cleaning. We assessed the performance of seven supervised classification algorithms (Support Vector Machine (SVM), Scaled Linear Discriminant Analysis, Lasso and elastic-net regularized generalized linear models, Maximum Entropy, Regression Tree, Boosting, and Random Forest) and analyzed two properties: accuracy and recall. We assessed not only each algorithm individually, but also their combinations through a voting scheme. We also tested the performance of these algorithms with different sizes of training data. When assessing the performance of different combinations, we used an indicator of coverage to account for the agreement and disagreement on classification between algorithms. Findings: We found that the performance of the algorithms varies with the size of the training sample. However, for the classification exercise in this paper the best performing algorithms were SVM and Boosting. The combination of these two algorithms achieved high agreement on coverage and was highly accurate. This combination performs well with a small training dataset (10%), which may reduce the manual work needed for classification tasks. Research limitations: The dataset gathered has significantly more records related to the topic of interest compared to unrelated topics. This may affect the performance of some algorithms, especially in their identification of unrelated papers. Practical implications: Although the...
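
    The SVM-plus-Boosting combination can be approximated with an off-the-shelf soft-voting ensemble, as in the hedged sketch below; the dataset is a synthetic stand-in for the labelled Web of Science records, with an imbalance mimicking the predominance of relevant records.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier, VotingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Stand-in for labelled records (relevant vs. unrelated), ~80/20 imbalance.
      X, y = make_classification(n_samples=1000, n_features=40, weights=[0.8],
                                 random_state=0)

      vote = VotingClassifier(
          estimators=[("svm", SVC(probability=True, random_state=0)),
                      ("boost", GradientBoostingClassifier(random_state=0))],
          voting="soft")                        # combine predicted probabilities
      print("voting accuracy:", cross_val_score(vote, X, y, cv=5).mean())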

  19. Relevance and Effectiveness of the WHO Global Code Practice on the International Recruitment of Health Personnel – Ethical and Systems Perspectives

    Directory of Open Access Journals (Sweden)

    Ruairi Brugha

    2015-06-01

    The relevance and effectiveness of the World Health Organization's (WHO's) Global Code of Practice on the International Recruitment of Health Personnel is being reviewed in 2015. The Code, which is a set of ethical norms and principles adopted by the World Health Assembly (WHA) in 2010, urges member states to train and retain the health personnel they need, thereby limiting demand for international migration, especially from the under-staffed health systems in low- and middle-income countries. Most countries failed to submit a first report in 2012 on implementation of the Code, including those source countries whose health systems are most under threat from the recruitment of their doctors and nurses, often to work in 4 major destination countries: the United States, United Kingdom, Canada and Australia. Political commitment by source country Ministers of Health needs to have been achieved at the May 2015 WHA to ensure better reporting by these countries on Code implementation for it to be effective. This paper uses ethics and health systems perspectives to analyse some of the drivers of international recruitment. The balance of competing ethics principles, which are contained in the Code's articles, reflects a tension that was evident during the drafting of the Code between 2007 and 2010. In 2007-2008, the right of health personnel to migrate was seen as a preeminent principle by US representatives on the Global Council which co-drafted the Code. Consensus on how to balance competing ethical principles – giving due recognition on the one hand to the obligations of health workers to the countries that trained them and the need for distributive justice given the global inequities of health workforce distribution in relation to need, and the right to migrate on the other hand – was only possible after President Obama took office in January 2009. It is in the interests of all countries to implement the Global Code and not just those that...

  20. Developing A Specific Criteria For Categorization Of Radioactive Waste Classification System For Uganda Using The Radar's Computer Code

    International Nuclear Information System (INIS)

    Byamukama, Abdul; Jung, Haiyong

    2014-01-01

    Radioactive materials are utilized in industries, agriculture, research, medical facilities and academic institutions for numerous purposes that are useful in the daily life of mankind. To effectively manage radioactive waste and select appropriate disposal schemes, it is imperative to have specific criteria for allocating radioactive waste to a particular waste class. Uganda has a radioactive waste classification scheme based on activity concentration and half-life, albeit in qualitative terms, as documented in the Uganda Atomic Energy Regulations 2012. There is no clear boundary between the different waste classes, which makes it difficult to suggest disposal options, make decisions, enforce compliance, and communicate effectively with stakeholders, among other things. To overcome these challenges, the RESRAD computer code was used to derive specific criteria for classifying between the different waste categories for Uganda based on the activity concentration of radionuclides. The results were compared with those of Australia and were found to correlate, given the differences in site parameters and consumption habits of the residents in the two countries.

  2. Developing and modifying behavioral coding schemes in pediatric psychology: a practical guide.

    Science.gov (United States)

    Chorney, Jill MacLaren; McMurtry, C Meghan; Chambers, Christine T; Bakeman, Roger

    2015-01-01

    To provide a concise and practical guide to the development, modification, and use of behavioral coding schemes for observational data in pediatric psychology. This article provides a review of relevant literature and experience in developing and refining behavioral coding schemes. A step-by-step guide to developing and/or modifying behavioral coding schemes is provided. Major steps include refining a research question, developing or refining the coding manual, piloting and refining the coding manual, and implementing the coding scheme. Major tasks within each step are discussed, and pediatric psychology examples are provided throughout. Behavioral coding can be a complex and time-intensive process, but the approach is invaluable in allowing researchers to address clinically relevant research questions in ways that would not otherwise be possible. © The Author 2014. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Expressing Youth Voice through Video Games and Coding

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…

  4. Diglossia and Code Switching in Nigeria: Implications for English ...

    African Journals Online (AJOL)

    This paper discusses a sociolinguistic phenomenon, 'diglossia' and how it relates to code-switching in bilingual and multilingual societies like Nigeria. First, it examines the relevance of the social and linguistic contexts in determining the linguistic code that bilinguals and multilinguals use in various communicative ...

  5. Monte Carlo codes use in neutron therapy

    International Nuclear Information System (INIS)

    Paquis, P.; Mokhtari, F.; Karamanoukian, D.; Pignol, J.P.; Cuendet, P.; Iborra, N.

    1998-01-01

    Monte Carlo calculation codes allow all the parameters relevant to radiation effects, such as the dose deposition or the type of microscopic interactions, to be studied accurately through particle-by-particle transport simulation. These features are very useful for neutron irradiations, from device development up to dosimetry. This paper illustrates some applications of these codes in Neutron Capture Therapy and in Neutron Capture Enhancement of fast neutron irradiations. (authors)

  6. Civil classification of the acquirer and operator of a photovoltaic power plant. Consumer or entrepreneur?; Zivilrechtliche Einordnung des Erwerbers und Betreibers einer Photovoltaikanlage. Verbraucher oder Unternehmer?

    Energy Technology Data Exchange (ETDEWEB)

    Schneidewindt, Holger [Verbraucherzentrale Nordrhein-Westfalen e.V., Duesseldorf (Germany)

    2013-03-15

    With the prospect of feed-in revenue and savings on electricity costs, private 'small investors' are being recruited for the energy policy turnaround. The civil-law protection enjoyed in the acquisition and operation of a photovoltaic power plant largely depends on the operator's classification under §§ 13 and 14 of the German Civil Code (BGB) as a consumer or an entrepreneur. §§ 305 ff. BGB are fully applicable only to consumers, and consumer organizations can take action only where consumers are involved. Recent judgments show that identifying the relevant aspects and assessing them correctly in law is a major challenge. Against this background, choosing the feed-in tariff as the demarcation criterion was a wrong decision.

  7. A neural network for noise correlation classification

    Science.gov (United States)

    Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas

    2018-02-01

    We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping the relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other is substantially larger than the classification error of the ANN (>35 per cent). This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.
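
    The overall pipeline shape, multi-scale wavelet-like features feeding a small feedforward network trained on a manually labelled subset, can be sketched as follows. The Haar-style feature extractor and the synthetic "correlations" are assumptions, not the authors' implementation.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def haar_energies(x, levels=5):
          """Crude wavelet-like features: detail-coefficient energy per scale."""
          feats = []
          for _ in range(levels):
              detail = (x[0::2] - x[1::2]) / 2.0
              feats.append(np.log1p(np.sum(detail ** 2)))
              x = (x[0::2] + x[1::2]) / 2.0     # approximation for the next scale
          return feats

      rng = np.random.RandomState(0)
      n, length = 600, 1024
      signals = rng.randn(n, length)            # stand-in noise correlations
      labels = rng.randint(0, 2, n)             # stand-in manual good/bad labels
      signals[labels == 1] *= 2.0               # fake systematic class difference

      X = np.array([haar_energies(s) for s in signals])
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X[:60], labels[:60])  # ~10% train
      print("accuracy on the remaining 90%:", clf.score(X[60:], labels[60:]))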

  8. Radiotoxicity hazard classification - the basis and development of a new list

    International Nuclear Information System (INIS)

    Carter, M.W.; Burns, P.; Munslow-Davies, L.

    1993-01-01

    The new ICRP recommendations contained in ICRP Publications 60 (ICRP 1991a) and 61 (ICRP 1991b) mean that all radiological regulations, standards and codes of practice based on the earlier recommendations need to be reviewed and revised. In Australia, national recommendations on radiation protection are promulgated by the National Health and Medical Research Council (NHMRC), and these are used by the Standards Association of Australia (SAA), the National Occupational Health and Safety Commission (Worksafe Australia), state governments and other bodies in their standards, codes and regulations. As part of the review and revision process, the NHMRC and SAA recognised the need to produce a new radiotoxicity hazard classification, and formed a small working party to carry out this task. This paper is the report of the working party; it summarises the work carried out and presents the recommendations for the revised radiotoxicity hazard classification. Previous classifications have been examined and the basis for such classifications has been considered. The working party proposes that the most appropriate basis is the most restrictive inhalation annual limit on intake (ALI), and that there is a need to consider this ALI in terms of both mass and activity. Using an index based on mass and activity, the radionuclides listed in ICRP 61 have been divided into four classes of radiotoxicity hazard. The list of revised radiotoxicity hazard classes is presented in the paper and a floppy disk of the data is available. 17 refs., 4 figs
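
    Expressing an inhalation ALI in terms of both activity and mass requires only the nuclide's specific activity, as in the hedged arithmetic below; Co-60 data are used for illustration, and the ALI value is a placeholder, not a figure from ICRP 61.

      import math

      N_A = 6.022e23                          # Avogadro constant, 1/mol

      def specific_activity(half_life_s, molar_mass_g):
          """Bq per gram: SA = ln(2) * N_A / (T_half * M)."""
          return math.log(2) * N_A / (half_life_s * molar_mass_g)

      # Co-60 for illustration: T_half ~ 5.27 y, M ~ 60 g/mol -> ~4.2e13 Bq/g.
      sa = specific_activity(5.27 * 3.156e7, 60.0)
      ali_bq = 1e6                            # placeholder inhalation ALI in Bq
      ali_g = ali_bq / sa                     # the same limit expressed as mass
      print(f"specific activity {sa:.2e} Bq/g -> mass ALI {ali_g:.2e} g")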

  9. Use of ICD-10 codes to monitor uterine rupture

    DEFF Research Database (Denmark)

    Thisted, Dorthe L A; Mortensen, Laust Hvas; Hvidman, Lone

    2014-01-01

    OBJECTIVES: Uterine rupture is a rare but severe complication in pregnancies after a previous cesarean section. In Denmark, the monitoring of uterine rupture is based on reporting of relevant diagnostic codes to the Danish Medical Birth Registry (MBR). The aim of our study was to examine the validity of this reporting. ... uterine ruptures, the sensitivity and specificity of the codes for uterine rupture were 83.8% and 99.1%, respectively. CONCLUSION: During the study period the monitoring of uterine rupture in the MBR was inadequate.

  10. Computerized coding system for life narratives to assess students' personality adaption

    NARCIS (Netherlands)

    He, Q.; Veldkamp, B.P.; Westerhof, G.J.; Pechenizkiy, Mykola; Calders, Toon; Conati, Cristina; Ventura, Sebastian; Romero, Cristobal; Stamper, John

    2011-01-01

    The present study is a trial in developing an automatic computerized coding framework with text mining techniques to identify the characteristics of redemption and contamination in life narratives written by undergraduate students. In the initial stage of text classification, the keyword-based

  11. CONTRANS 2 code conversion from Apollo to HP

    International Nuclear Information System (INIS)

    Lee, Hae Cho

    1996-01-01

    The CONTRANS2 computer code is used to calculate the transient thermal hydraulic response of the containment building to loss-of-coolant and main steam line break accidents. Mass and energy releases to the containment following an accident are code inputs. This report first describes the detailed work carried out to install CONTRANS2 on the Apollo DN10000 and the code validation results obtained after installation. Second, a series of tasks is described in relation to the installation of CONTRANS2 on the HP 9000/700 series, together with the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  12. Can the Ni classification of vessels predict neoplasia?

    DEFF Research Database (Denmark)

    Mehlum, Camilla Slot; Rosenberg, Tine; Dyrvig, Anne-Kirstine

    2018-01-01

    OBJECTIVES: The Ni classification of vascular change from 2011 is well documented for evaluating pharyngeal and laryngeal lesions, primarily focusing on cancer. In the planning of surgery it may be more relevant to differentiate neoplasia from non-neoplasia. We aimed to evaluate the ability of the Ni classification to predict laryngeal or hypopharyngeal neoplasia, and to investigate whether a changed cutoff value would support the recent European Laryngological Society (ELS) proposal of perpendicular vascular changes as indicative of neoplasia. DATA SOURCES: PubMed, Embase, Cochrane, and Scopus. ... The pooled sensitivity and specificity of the Ni classification with two different cutoffs were calculated, and bubble and summary receiver operating characteristics plots were created. RESULTS: The combined sensitivity of five studies (n = 687) with Ni type IV-V defined as test-positive was 0.89 (95...

  13. Transporter taxonomy - a comparison of different transport protein classification schemes.

    Science.gov (United States)

    Viereck, Michael; Gaulton, Anna; Digles, Daniela; Ecker, Gerhard F

    2014-06-01

    Currently, there are more than 800 well characterized human membrane transport proteins (including channels and transporters) and there are estimates that about 10% (approx. 2000) of all human genes are related to transport. Membrane transport proteins are of interest as potential drug targets, for drug delivery, and as a cause of side effects and drug–drug interactions. In light of the development of Open PHACTS, which provides an open pharmacological space, we analyzed selected membrane transport protein classification schemes (Transporter Classification Database, ChEMBL, IUPHAR/BPS Guide to Pharmacology, and Gene Ontology) for their ability to serve as a basis for pharmacology-driven protein classification. A comparison of these membrane transport protein classification schemes using a set of clinically relevant transporters as a use-case reveals the strengths and weaknesses of the different taxonomy approaches.

  14. Automatic Modulation Classification of LFM and Polyphase-coded Radar Signals

    Directory of Open Access Journals (Sweden)

    S. B. S. Hanbali

    2017-12-01

    There are several techniques for detecting and classifying low-probability-of-intercept radar signals, such as the Wigner distribution, the Choi-Williams distribution and the time-frequency rate distribution, but these distributions require high SNR. To overcome this problem, we propose a new technique for detecting and classifying linear frequency modulation (LFM) signals and polyphase-coded signals using the optimum fractional Fourier transform at low SNR. The theoretical analysis and simulation experiments demonstrate the validity and efficiency of the proposed method.
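
    The idea behind searching for the optimum fractional Fourier order can be sketched, in spirit, as a scan over candidate chirp rates: dechirping an LFM signal at the true rate collapses it to a tone with a sharply concentrated spectrum. The sketch below is a simplified stand-in, not a discrete FrFT implementation, and all signal parameters are arbitrary.

      import numpy as np

      fs, T = 1e4, 0.1
      t = np.arange(0, T, 1 / fs)
      true_rate = 3e4                                 # chirp rate, Hz/s
      sig = np.exp(1j * np.pi * true_rate * t ** 2)   # unit-amplitude LFM
      sig = sig + 0.5 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

      def concentration(x):
          s = np.abs(np.fft.fft(x)) ** 2
          return s.max() / s.sum()                    # peaky spectrum -> large

      rates = np.linspace(0, 6e4, 121)
      scores = [concentration(sig * np.exp(-1j * np.pi * r * t ** 2))
                for r in rates]
      print("estimated chirp rate:", rates[int(np.argmax(scores))])  # near 3e4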

  15. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach similar or better decoding performance than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
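
    The decoding-as-classification reduction can be shown on a toy far smaller than a surface code: the 3-bit repetition code, where a feedforward network learns the syndrome-to-correction lookup table. This is an illustrative analogue under assumed conventions, not the paper's decoder.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # 3-bit repetition code: two parity checks, single bit-flips plus no error.
      H = np.array([[1, 1, 0],
                    [0, 1, 1]])
      errors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
      syndromes = errors @ H.T % 2            # decoder input: the four syndromes
      labels = np.arange(len(errors))         # which correction to apply

      X = np.repeat(syndromes, 50, axis=0)    # replicate the tiny lookup table
      y = np.repeat(labels, 50)
      dec = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000,
                          random_state=0).fit(X, y)
      print(dec.predict(syndromes))           # expected: [0 1 2 3]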

  16. Improving the accuracy of operation coding in surgical discharge summaries

    Science.gov (United States)

    Martinou, Eirini; Shouls, Genevieve; Betambeau, Nadine

    2014-01-01

    Procedural coding in surgical discharge summaries is extremely important; as well as communicating to healthcare staff which procedures have been performed, it also provides information that is used by the hospital's coding department. The OPCS code (Office of Population, Censuses and Surveys Classification of Surgical Operations and Procedures) is used to generate the tariff that allows the hospital to be reimbursed for the procedure. We felt that the OPCS coding on discharge summaries was often incorrect within our breast and endocrine surgery department. A baseline measurement over two months demonstrated that 32% of operations had been incorrectly coded, resulting in an incorrect tariff being applied and an estimated loss to the Trust of £17,000. We developed a simple but specific OPCS coding table in collaboration with the clinical coding team and breast surgeons that summarised all operations performed within our department. This table was disseminated across the team, specifically to the junior doctors who most frequently complete the discharge summaries. Re-audit showed 100% of operations were accurately coded, demonstrating the effectiveness of the coding table. We suggest that specifically designed coding tables be introduced across each surgical department to ensure accurate OPCS codes are used to produce better quality surgical discharge summaries and to ensure correct reimbursement to the Trust. PMID:26734286

  17. Development of a simple computer code to obtain relevant data on H2 and CO combustion in severe accidents and to aid in PSA-2 assessments

    International Nuclear Information System (INIS)

    Robledo, F.; Martin-Valdepenas, J.M.; Jimenez, M.A.; Martin-Fuertes, F.

    2007-01-01

    By following Consejo de Seguridad Nuclear (CSN) requirements, all of the Spanish NPPs performed plant-specific PSA level 2 studies and implemented Severe Accident Management Guidelines during the first years of this century. CSN and contractors made an independent detailed review of these PSA level 2 studies. This independent review included the performance of plant-specific calculations using the MELCOR code and some other stand-alone codes, and the calculation of the fission product release frequencies for each plant. One of the aspects treated in detail by the CSN evaluations was the calculation of the containment failure probability due to the burning of combustible gases generated during a severe accident. It was shown that it would be useful to have a fast-running code capable of providing the most relevant data concerning H2 and CO combustion. Therefore, the Polytechnic University of Madrid (UPM) developed the CPPC code for the CSN. This stand-alone module makes fast calculations of the maximum static pressures generated in the containment building by H2 and CO combustion in severe accidents, considering well-mixed atmospheres, and includes the most recent advances and developments in the field of H2 and CO combustion. Code input is simple: the masses of H2 and CO, the initial environmental conditions inside the containment before the combustion, and simple geometric data such as the volume of the building enclosing the combustible gases. The code calculates the containment temperature assuming a steam-saturated atmosphere and provides the following output: combustion completeness (CC); adiabatic and isochoric combustion pressure (p_AICC); Chapman-Jouguet pressure (p_CJ); Chapman-Jouguet reflected pressure (p_CJrefl). When the combustion regime results in dynamic pressure loads, the CPPC code calculates the equivalent static pressure (effective pressure p_eff) by modeling the containment structure as a simple harmonic oscillator. Additionally, the code...
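
    The p_AICC computation, in its simplest constant-heat-capacity ideal-gas form, follows from an energy balance in a closed volume: the pressure ratio is (n_f T_f)/(n_0 T_0). The sketch below is a toy estimate for a lean H2-air mixture with placeholder property values, not the CPPC model.

      # Toy AICC estimate for a lean H2-air mixture: ideal gas, constant average
      # molar cv, complete combustion in a closed volume. Values are placeholders.
      n0 = 1.0                       # total initial moles
      x_h2 = 0.10                    # H2 mole fraction
      dU = x_h2 * n0 * 242e3         # heat release, ~242 kJ per mole of H2 burnt
      cv = 25.0                      # crude average molar cv, J/(mol K)

      # H2 + 0.5 O2 -> H2O: each mole of H2 burnt removes 0.5 mol of gas
      n_f = n0 - 0.5 * x_h2 * n0
      T0, p0 = 373.0, 2.0e5          # assumed initial containment state, K and Pa

      T_aicc = T0 + dU / (n_f * cv)
      p_aicc = p0 * (n_f / n0) * (T_aicc / T0)
      print(f"T_AICC ~ {T_aicc:.0f} K, p_AICC ~ {p_aicc / 1e5:.1f} bar")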

  18. Intelligent Computer Vision System for Automated Classification

    International Nuclear Information System (INIS)

    Jordanov, Ivan; Georgieva, Antoniya

    2010-01-01

    In this paper we investigate an Intelligent Computer Vision System applied to the recognition and classification of commercially available cork tiles. The system is capable of acquiring and processing gray-scale images using several feature generation and analysis techniques. Its functionality includes image acquisition, feature extraction and preprocessing, and feature classification with neural networks (NNs). We also discuss system test and validation results from the recognition and classification tasks. The system investigation also includes statistical feature processing (feature number and dimensionality reduction techniques) and classifier design (NN architecture, target coding, learning complexity and performance, and training with our own metaheuristic optimization method). The NNs trained with our genetic low-discrepancy search method (GLPτS) for global optimisation demonstrated very good generalisation abilities. In our view, the reported testing success rate of up to 95% is due to several factors: the combination of feature generation techniques; the application of Analysis of Variance (ANOVA) and Principal Component Analysis (PCA), which appeared to be very efficient for preprocessing the data; and the use of a suitable NN design and learning method.
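
    A minimal sketch of the statistical-preprocessing-plus-NN pipeline is shown below, with PCA standing in for the feature reduction stage and standard backpropagation in place of the authors' GLPτS training method; the dataset is a generic stand-in for the cork-tile features.

      from sklearn.datasets import load_digits
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_digits(return_X_y=True)     # stand-in for cork-tile features

      pipe = make_pipeline(StandardScaler(),
                           PCA(n_components=20),   # feature reduction stage
                           MLPClassifier(hidden_layer_sizes=(32,),
                                         max_iter=2000, random_state=0))
      print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())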

  19. Relevant thermal hydraulic aspects of advanced reactors design: status report

    International Nuclear Information System (INIS)

    1996-11-01

    This status report provides an overview of the relevant thermal-hydraulic aspects of advanced reactor designs (e.g. ABWR, AP600, SBWR, EPR, ABB 80+, PIUS, etc.). Since all of the advanced reactor concepts are at the design stage, the information and data available in the open literature are still very limited. Some characteristics of advanced reactor designs are provided, together with selected phenomena identification and ranking tables. Specific needs for thermal-hydraulic codes, together with the list of relevant and important thermal-hydraulic phenomena for advanced reactor designs, are summarized with the purpose of providing some guidance in the development of research plans for considering further code development and assessment needs and for the planning of experimental programs.

  20. Coding Theory and Applications : 4th International Castle Meeting

    CERN Document Server

    Malonek, Paula; Vettori, Paolo

    2015-01-01

    The topics covered in this book, written by researchers at the forefront of their field, represent some of the most relevant research areas in modern coding theory: codes and combinatorial structures, algebraic geometric codes, group codes, quantum codes, convolutional codes, network coding and cryptography. The book includes a survey paper on the interconnections of coding theory with constrained systems, written by an invited speaker, as well as 37 cutting-edge research communications presented at the 4th International Castle Meeting on Coding Theory and Applications (4ICMCTA), held at the Castle of Palmela in September 2014. The event’s scientific program consisted of four invited talks and 39 regular talks by authors from 24 different countries. This conference provided an ideal opportunity for communicating new results, exchanging ideas, strengthening international cooperation, and introducing young researchers into the coding theory community.

  1. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
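
    The adaptive-model idea can be sketched with a toy floating-point encoder: symbol probabilities are re-estimated from running counts as coding proceeds, so the interval subdivision adapts to the data, and a decoder repeating the same updates stays in sync. Production coders use integer arithmetic with renormalization; this sketch is only suitable for short inputs and is not the paper's scheme.

      def adaptive_arith_encode(symbols, alphabet):
          """Toy floating-point adaptive arithmetic encoder (short inputs only)."""
          counts = {a: 1 for a in alphabet}       # adaptive model: Laplace counts
          low, high = 0.0, 1.0
          for s in symbols:
              total = sum(counts.values())
              span, cum = high - low, 0.0
              for a in alphabet:                  # carve the interval by model prob
                  p = counts[a] / total
                  if a == s:
                      low, high = low + span * cum, low + span * (cum + p)
                      break
                  cum += p
              counts[s] += 1                      # update the model after coding
          return (low + high) / 2                 # any number in [low, high) works

      print(adaptive_arith_encode("aababa", "ab"))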

  2. Atmospheric circulation classification comparison based on wildfires in Portugal

    Science.gov (United States)

    Pereira, M. G.; Trigo, R. M.

    2009-04-01

    Atmospheric circulation classifications are not a simple description of atmospheric states but a tool to understand and interpret atmospheric processes and to model the relation between atmospheric circulation and surface climate and other related variables (Huth et al., 2008). Classifications were initially developed for weather forecasting purposes; however, with the progress in computer processing capability, new and more robust objective methods were developed and applied to large datasets, making atmospheric circulation classification one of the most important fields in synoptic and statistical climatology. Classification studies have been extensively used in climate change studies (e.g. reconstructed past climates, recent observed changes and future climates), in bioclimatological research (e.g. relating human mortality to climatic factors) and in a wide variety of synoptic climatological applications (e.g. comparison between datasets, air pollution, snow avalanches, wine quality, fish captures and forest fires). Likewise, atmospheric circulation classifications are important for the study of the role of weather in wildfire occurrence in Portugal, because the daily synoptic variability is the most important driver of local weather conditions (Pereira et al., 2005). In particular, the objective classification scheme developed by Trigo and DaCamara (2000) to classify the atmospheric circulation affecting Portugal has proved quite useful in discriminating the occurrence and development of wildfires, as well as the distribution over Portugal of surface climatic variables with an impact on wildfire activity, such as maximum and minimum temperature and precipitation. This work aims to present: (i) an overview of the existing circulation classifications for the Iberian Peninsula, and (ii) the results of a comparison study between these atmospheric circulation classifications based on their relation with wildfires and relevant meteorological...

  3. The Improved Relevance Voxel Machine

    DEFF Research Database (Denmark)

    Ganz, Melanie; Sabuncu, Mert; Van Leemput, Koen

    The concept of sparse Bayesian learning has received much attention in the machine learning literature as a means of achieving parsimonious representations of features used in regression and classification. It is an important family of algorithms for sparse signal recovery and compressed sensing and enables basis selection from overcomplete dictionaries. One of the trailblazers of Bayesian learning is MacKay, who already worked on the topic in his PhD thesis in 1992 [1]. Later on, Tipping and Bishop developed the concept of sparse Bayesian learning [2, 3] and Tipping published the Relevance Vector Machine ... Hence in its current form it is reminiscent of a greedy forward feature selection algorithm. In this report, we aim to solve the problems of the original RVoxM algorithm in the spirit of [7] (FastRVM). We call the new algorithm Improved Relevance Voxel Machine (IRVoxM). Our contributions...

  4. Fuel performance analysis code 'FAIR'

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling the behaviour of nuclear reactor fuel rods of water cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code, FAIR, has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel-related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build-up of plutonium near the pellet surface, and pellet-clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of the thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in easily modifying the code for modelling MOX fuels and thorium based fuels. For performing analyses of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  5. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  6. Functional interplay of top-down attention with affective codes during visual short-term memory maintenance.

    Science.gov (United States)

    Kuo, Bo-Cheng; Lin, Szu-Hung; Yeh, Yei-Yu

    2018-06-01

    Visual short-term memory (VSTM) allows individuals to briefly maintain information over time for guiding behaviours. Because the contents of VSTM can be neutral or emotional, top-down influence in VSTM may vary with the affective codes of maintained representations. Here we investigated the neural mechanisms underlying the functional interplay of top-down attention with affective codes in VSTM using functional magnetic resonance imaging. Participants were instructed to remember both threatening and neutral objects in a cued VSTM task. Retrospective cues (retro-cues) were presented to direct attention to the hemifield of a threatening object (i.e., cue-to-threat) or a neutral object (i.e., cue-to-neutral) during VSTM maintenance. We showed stronger activity in the ventral occipitotemporal cortex and amygdala for attending threatening relative to neutral representations. Using multivoxel pattern analysis, we found better classification performance for cue-to-threat versus cue-to-neutral objects in early visual areas and in the amygdala. Importantly, retro-cues modulated the strength of functional connectivity between the frontoparietal and early visual areas. Activity in the frontoparietal areas became strongly correlated with the activity in V3a-V4 coding the threatening representations instructed to be relevant for the task. Together, these findings provide the first demonstration of top-down modulation of activation patterns in early visual areas and functional connectivity between the frontoparietal network and early visual areas for regulating threatening representations during VSTM maintenance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Lymphoma classification update: B-cell non-Hodgkin lymphomas.

    Science.gov (United States)

    Jiang, Manli; Bennani, N Nora; Feldman, Andrew L

    2017-05-01

    Lymphomas are classified based on the normal counterpart, or cell of origin, from which they arise. Because lymphocytes have physiologic immune functions that vary both by lineage and by stage of differentiation, the classification of lymphomas arising from these normal lymphoid populations is complex. Recent genomic data have contributed additional complexity. Areas covered: Lymphoma classification follows the World Health Organization (WHO) system, which reflects international consensus and is based on pathological, genetic, and clinical factors. A 2016 revision to the WHO classification of lymphoid neoplasms was recently reported. The present review focuses on B-cell non-Hodgkin lymphomas, the most common group of lymphomas, and summarizes recent changes most relevant to hematologists and other clinicians who care for lymphoma patients. Expert commentary: Lymphoma classification is a continually evolving field that needs to be responsive to new clinical, pathological, and molecular understanding of lymphoid neoplasia. Among the entities covered in this review, the 2016 revision of the WHO classification particularly impacts the subclassification and genetic stratification of diffuse large B-cell lymphoma and high-grade B-cell lymphomas, and reflects evolving criteria and nomenclature for indolent B-cell lymphomas and lymphoproliferative disorders.

  8. Robust pattern decoding in shape-coded structured light

    Science.gov (United States)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid shape with embedded geometrical shapes. In our decoding method, advances are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, the intersections of pairs of orthogonal grid lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements. Before that, a training dataset is established which contains a mass of pattern elements with various blurrings and distortions. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  9. APPC - A new standardised coding system for trans-organisational PACS retrieval

    International Nuclear Information System (INIS)

    Fruehwald, F.; Lindner, A.; Mostbeck, G.; Hruby, W.; Fruehwald-Pallamar, J.

    2010-01-01

    As part of a general strategy to integrate the healthcare enterprise, Austria plans to connect the Picture Archiving and Communication Systems (PACS) of all radiological institutions into a nationwide network. To facilitate the search for relevant correlative imaging data in the PACS of different organisations, a coding system was compiled for all radiological procedures and the necessary anatomical details. This code, called the Austrian PACS Procedure Code (APPC), was granted the status of a standard under HL7. Examples are provided of effective coding and filtering when searching for relevant imaging material using the APPC, as well as the planned process for future adjustments of the APPC. The implementation, and how the APPC will fit into the future electronic environment, which will include an electronic health record for all citizens in Austria, are discussed. A comparison with other nationwide electronic health record projects and coding systems is given. Limitations and possible use with physical storage media are also considered. (orig.)

  10. Locality-preserving sparse representation-based classification in hyperspectral imagery

    Science.gov (United States)

    Gao, Lianru; Yu, Haoyang; Zhang, Bing; Li, Qingting

    2016-10-01

    This paper proposes to combine locality-preserving projections (LPP) and sparse representation (SR) for hyperspectral image classification. The LPP is first used to reduce the dimensionality of all the training and testing data by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold where the high-dimensional data lie. Then, SR codes the projected testing pixels as sparse linear combinations of all the training samples, and classifies the testing pixels by evaluating which class leads to the minimum approximation error. The integration of LPP and SR represents an innovative contribution to the literature. The proposed approach, called locality-preserving SR-based classification, addresses the imbalance between the high dimensionality of hyperspectral data and the limited number of training samples. Experimental results on three real hyperspectral data sets demonstrate that the proposed approach outperforms the original counterpart, i.e., SR-based classification.
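
    The SR classification step can be sketched as follows: a test sample is coded over the dictionary of training samples with orthogonal matching pursuit, and assigned to the class whose atoms yield the smallest reconstruction residual. PCA stands in for LPP here, and the digits dataset stands in for hyperspectral pixels; neither choice comes from the paper.

      import numpy as np
      from sklearn.datasets import load_digits
      from sklearn.decomposition import PCA
      from sklearn.linear_model import OrthogonalMatchingPursuit

      X, y = load_digits(return_X_y=True)       # stand-in for hyperspectral pixels
      X = PCA(n_components=30).fit_transform(X) # PCA in place of the paper's LPP

      train, test = np.arange(0, 1700), np.arange(1700, 1797)
      D = X[train].T                            # dictionary columns = training data
      D = D / np.linalg.norm(D, axis=0)
      classes = np.unique(y[train])

      correct = 0
      for i in test:
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10,
                                          fit_intercept=False).fit(D, X[i])
          residuals = [np.linalg.norm(X[i] - D[:, y[train] == c]
                                      @ omp.coef_[y[train] == c]) for c in classes]
          correct += int(classes[int(np.argmin(residuals))] == y[i])
      print("SRC accuracy:", correct / len(test))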

  11. A computerized energy systems code and information library at Soreq

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, I; Shapira, M; Caner, D; Sapier, D [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center

    1996-12-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, Soreq-NRC and Ben-Gurion University agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation running the Linux operating system to give users of the library easy on-line access to the resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors).

  12. A computerized energy systems code and information library at Soreq

    International Nuclear Information System (INIS)

    Silverman, I.; Shapira, M.; Caner, D.; Sapier, D.

    1996-01-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, Soreq-NRC and Ben-Gurion University agreed to establish, in 1991, a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to assure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest to nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation running the Linux operating system to give users of the library easy on-line access to the resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information or to solve their problems, using codes from the library, on the computer at Soreq (authors)

  13. Automatic coding and selection of causes of death: an adaptation of Iris software for using in Brazil.

    Science.gov (United States)

    Martins, Renata Cristófani; Buchalla, Cassia Maria

    2015-01-01

    To prepare a dictionary in Portuguese for use in Iris and to evaluate its completeness for coding causes of death. Initially, a dictionary with all illnesses and injuries was created based on the International Classification of Diseases - tenth revision (ICD-10) codes. This dictionary was based on two sources: the electronic file of ICD-10 volume 1 and the data from the Thesaurus of the International Classification of Primary Care (ICPC-2). Then, a death certificate sample from the Program of Improvement of Mortality Information in São Paulo (PRO-AIM) was coded manually and by Iris version V4.0.34, and the causes of death were compared. Whenever Iris was not able to code the causes of death, adjustments were made in the dictionary. Iris was able to code all causes of death in 94.4% of death certificates, but only 50.6% were coded directly, without adjustments. Among the death certificates that the software was unable to fully code, 89.2% had a diagnosis of external causes (chapter XX of ICD-10). This group of causes of death showed less agreement when comparing Iris coding with manual coding. The software performed well, but it needs adjustments and improvements to its dictionary. In upcoming versions of the software, its developers are trying to solve the problem with external causes of death.
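
    For illustration only, the kind of dictionary lookup at the heart of such a coder can be sketched as follows; the terms, codes and normalization are hypothetical stand-ins, not the dictionary the authors built:

        import unicodedata

        # Hypothetical Portuguese term -> ICD-10 code entries.
        DICTIONARY = {
            "INFARTO AGUDO DO MIOCARDIO": "I21.9",
            "PNEUMONIA": "J18.9",
            "DIABETES MELLITUS": "E14.9",
        }

        def normalize(term):
            # Strip accents and case so lookups are robust.
            t = unicodedata.normalize("NFD", term.upper())
            return "".join(c for c in t if unicodedata.category(c) != "Mn")

        def code_line(line):
            # Return an ICD-10 code per term on one certificate line;
            # unmatched terms are flagged for manual review.
            return [(t.strip(), DICTIONARY.get(normalize(t.strip()), "UNCODED"))
                    for t in line.split(",")]

        print(code_line("Infarto agudo do miocárdio, pneumonia"))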

  14. Video coding and decoding devices and methods preserving PPG relevant information

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a video encoding device (10, 10', 10") and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  15. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10, 10', 10'') and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  16. The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification

    Energy Technology Data Exchange (ETDEWEB)

    Jason L. Wright; Milos Manic

    2010-05-01

    This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
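
    The experimental axis the paper varies, classification accuracy versus the number of retained dimensions, can be reproduced in outline with scikit-learn; the data below is synthetic and the simple Gaussian classifier is a stand-in, not the paper's setup:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        X = rng.normal(size=(400, 64))               # stand-in feature vectors
        y = (X[:, :4].sum(axis=1) > 0).astype(int)   # crypto / non-crypto labels

        for n_dims in (2, 4, 8, 16, 32):
            Xr = PCA(n_components=n_dims).fit_transform(X)
            acc = cross_val_score(GaussianNB(), Xr, y, cv=5).mean()
            print(f"{n_dims:2d} dims: accuracy {acc:.3f}")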

  17. ILAE Classification of the Epilepsies Position Paper of the ILAE Commission for Classification and Terminology

    Science.gov (United States)

    Scheffer, Ingrid E; Berkovic, Samuel; Capovilla, Giuseppe; Connolly, Mary B; French, Jacqueline; Guilhoto, Laura; Hirsch, Edouard; Jain, Satish; Mathern, Gary W.; Moshé, Solomon L; Nordli, Douglas R; Perucca, Emilio; Tomson, Torbjörn; Wiebe, Samuel; Zhang, Yue-Hua; Zuberi, Sameer M

    2017-01-01

    Summary The ILAE Classification of the Epilepsies has been updated to reflect our gain in understanding of the epilepsies and their underlying mechanisms following the major scientific advances which have taken place since the last ratified classification in 1989. As a critical tool for the practising clinician, epilepsy classification must be relevant and dynamic to changes in thinking, yet robust and translatable to all areas of the globe. Its primary purpose is for diagnosis of patients, but it is also critical for epilepsy research, development of antiepileptic therapies and communication around the world. The new classification originates from a draft document submitted for public comments in 2013 which was revised to incorporate extensive feedback from the international epilepsy community over several rounds of consultation. It presents three levels, starting with seizure type where it assumes that the patient is having epileptic seizures as defined by the new 2017 ILAE Seizure Classification. After diagnosis of the seizure type, the next step is diagnosis of epilepsy type, including focal epilepsy, generalized epilepsy, combined generalized and focal epilepsy, and also an unknown epilepsy group. The third level is that of epilepsy syndrome where a specific syndromic diagnosis can be made. The new classification incorporates etiology along each stage, emphasizing the need to consider etiology at each step of diagnosis as it often carries significant treatment implications. Etiology is broken into six subgroups, selected because of their potential therapeutic consequences. New terminology is introduced such as developmental and epileptic encephalopathy. The term benign is replaced by the terms self-limited and pharmacoresponsive, to be used where appropriate. It is hoped that this new framework will assist in improving epilepsy care and research in the 21st century. PMID:28276062

  18. User profiling and classification for fraud detection in mobile communications networks

    OpenAIRE

    Hollmén, Jaakko

    2000-01-01

    The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the int...

  19. Automated searching for quantum subsystem codes

    International Nuclear Information System (INIS)

    Crosswhite, Gregory M.; Bacon, Dave

    2011-01-01

    Quantum error correction allows faulty quantum systems to behave in an effectively error-free manner. One important class of techniques for quantum error correction is the class of quantum subsystem codes, which are relevant both to active quantum error-correcting schemes and to the design of self-correcting quantum memories. Previous approaches for investigating these codes have focused on applying theoretical analysis to look for interesting codes and to investigate their properties. In this paper we present an alternative approach that uses computational analysis to accomplish the same goals. Specifically, we present an algorithm that computes the optimal quantum subsystem code that can be implemented given an arbitrary set of measurement operators that are tensor products of Pauli operators. We then demonstrate the utility of this algorithm by performing a systematic investigation of the quantum subsystem codes that exist in the setting where the interactions are limited to two-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane.
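
    The setting the algorithm works in can be made concrete: tensor products of Pauli operators are conventionally encoded as binary symplectic vectors, and two such operators commute exactly when their symplectic inner product vanishes mod 2. A minimal illustration (not the authors' code):

        import numpy as np

        PAULI = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}

        def to_symplectic(pauli_string):
            bits = np.array([PAULI[p] for p in pauli_string])
            return bits[:, 0], bits[:, 1]      # (x part, z part)

        def commute(a, b):
            # Symplectic inner product mod 2: 0 means commuting.
            ax, az = to_symplectic(a)
            bx, bz = to_symplectic(b)
            return (ax @ bz + az @ bx) % 2 == 0

        print(commute("XXI", "ZZI"))   # True: two anticommuting sites cancel
        print(commute("XII", "ZII"))   # False: X and Z anticommute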

  20. The design of PSB-VVER experiments relevant to accident management

    International Nuclear Information System (INIS)

    Del Nevo, Alessandro; D'auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    2008-01-01

    Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes, which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing the measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility, operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed. The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through to the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of the thermal losses to the environment. This issue particularly affects small-scale facilities and has relevance to the scaling approach related to the power and volume of the facility. (author)

  1. The Design of PSB-VVER Experiments Relevant to Accident Management

    Science.gov (United States)

    Nevo, Alessandro Del; D'Auria, Francesco; Mazzini, Marino; Bykov, Michael; Elkin, Ilya V.; Suslov, Alexander

    Experimental programs carried out in integral test facilities are relevant for validating the best-estimate thermal-hydraulic codes(1), which are used for accident analyses, design of accident management procedures, licensing of nuclear power plants, etc. The validation process, in fact, is based on well-designed experiments. It consists of comparing the measured and calculated parameters and determining whether a computer code has an adequate capability in predicting the major phenomena expected to occur in the course of transients and/or accidents. The University of Pisa was responsible for the numerical design of the 12 experiments executed in the PSB-VVER facility (2), operated at the Electrogorsk Research and Engineering Center (Russia), in the framework of the TACIS 2.03/97 Contract 3.03.03 Part A, EC financed (3). The paper describes the methodology adopted at the University of Pisa, starting from the scenarios foreseen in the final test matrix through to the execution of the experiments. This process considers three key topics: a) the scaling issue and the simulation, with unavoidable distortions, of the expected performance of the reference nuclear power plants; b) the code assessment process, involving the identification of phenomena challenging the code models; c) the features of the concerned integral test facility (scaling limitations, control logics, data acquisition system, instrumentation, etc.). The activities performed in this respect are discussed, and emphasis is also given to the relevance of the thermal losses to the environment. This issue particularly affects small-scale facilities and has relevance to the scaling approach related to the power and volume of the facility.

  2. Classifications of Acute Scaphoid Fractures: A Systematic Literature Review.

    Science.gov (United States)

    Ten Berg, Paul W; Drijkoningen, Tessa; Strackee, Simon D; Buijze, Geert A

    2016-05-01

    Background In the absence of consensus, surgeon preference determines how acute scaphoid fractures are classified. There is a great variety of classification systems with considerable controversies. Purposes The purpose of this study was to provide an overview of the different classification systems, clarifying their subgroups and analyzing their popularity by comparing citation indexes. The intention was to improve data comparison between studies using heterogeneous fracture descriptions. Methods We performed a systematic review of the literature based on a search of the medical literature from 1950 to 2015 and a manual search using the reference lists in relevant book chapters. Only original descriptions of classifications of acute scaphoid fractures in adults were included. Popularity was based on the citation index as reported in the databases of Web of Science (WoS) and Google Scholar. Articles that were cited <10 times in WoS were excluded. Results Our literature search resulted in 308 potentially eligible descriptive reports, of which 12 reports met the inclusion criteria. We distinguished 13 different (sub)classification systems based on (1) fracture location, (2) fracture plane orientation, and (3) fracture stability/displacement. Based on citation numbers, the Herbert classification was the most popular, followed by the Russe and Mayo classifications. All classification systems were based on plain radiography. Conclusions Most classification systems were based on fracture location, displacement, or stability. Given the controversy and limited reliability of current classification systems, suggested research areas for an updated classification include three-dimensional fracture pattern etiology and fracture fragment mobility assessed by dynamic imaging.

  3. The National Ecosystem Services Classification System: A Framework for Identifying and Reducing Relevant Uncertainties

    Science.gov (United States)

    Rhodes, C. R.; Sinha, P.; Amanda, N.

    2013-12-01

    In recent years the gap between what scientists know and what policymakers should appreciate in environmental decision making has received more attention, as the costs of the disconnect have become more apparent to both groups. Particularly for water-related policies, the EPA's Office of Water has struggled with benefit estimates held low by the inability to quantify ecological and economic effects that theory, modeling, and anecdotal or isolated case evidence suggest may prove to be larger. Better coordination with ecologists and hydrologists is being explored as a solution. The ecosystem services (ES) concept, now nearly two decades old, links ecosystem functions and processes to the human value system. However, there remains no clear mapping of which ecosystem goods and services affect which individual or economic values. The National Ecosystem Services Classification System (NESCS, 'nexus') project brings together ecologists, hydrologists, and social scientists to do this mapping for aquatic and other ecosystem service-generating systems. The objective is to greatly reduce the uncertainty in water-related policy making by mapping, and ultimately quantifying, the various functions and products of aquatic systems, as well as how changes to aquatic systems impact the human economy and individual levels of non-monetary appreciation for those functions and products. The primary challenges to fostering interaction between scientists, social scientists, and policymakers are the lack of a common vocabulary and the need for a cohesive, comprehensive framework that organizes concepts across disciplines and accommodates scientific data from a range of sources. NESCS builds the vocabulary and the framework so both may inform a scalable transdisciplinary policy-making application. This talk presents for discussion the process of and progress in developing both this vocabulary and a classifying framework capable of bridging the gap between a newer but existing ecosystem services classification

  4. [Student nurses and the code of ethics].

    Science.gov (United States)

    Trolliet, Julie

    2017-09-01

    Student nurses, just like all practising professionals, are expected to be aware of and to respect the code of ethics governing their profession. Since the publication of this code, actions to raise awareness of it and explain it to all the relevant players have been put in place. The French National Federation of Student Nurses decided to survey future professionals regarding this new text. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  5. Case file coding of child maltreatment: Methods, challenges, and innovations in a longitudinal project of youth in foster care☆

    Science.gov (United States)

    Huffhines, Lindsay; Tunno, Angela M.; Cho, Bridget; Hambrick, Erin P.; Campos, Ilse; Lichty, Brittany; Jackson, Yo

    2016-01-01

    State social service agency case files are a common mechanism for obtaining information about a child's maltreatment history, yet these documents are often challenging for researchers to access, and then to process in a manner consistent with the requirements of social science research designs. Specifically, accessing and navigating case files is an extensive undertaking, and a task that many researchers have had to manage with little guidance. Even after the files are in hand and the research questions and relevant variables have been clarified, case file information about a child's maltreatment exposure can be idiosyncratic, vague, inconsistent, and incomplete, making it difficult to code such information into useful variables for statistical analyses. The Modified Maltreatment Classification System (MMCS) is a popular tool used to guide the process, and though comprehensive, this coding system cannot cover all idiosyncrasies found in case files. It is not clear from the literature how researchers implement this system while accounting for issues outside the purview of the MMCS or that arise during MMCS use. Finally, a large yet reliable file coding team is essential to the process; however, the literature lacks training guidelines and methods for establishing reliability between coders. In an effort to move the field toward a common approach, the present discussion details the process used by one large-scale study of child maltreatment, the Studying Pathways to Adjustment and Resilience in Kids (SPARK) project, a longitudinal study of resilience in youth in foster care. The article addresses each phase of case file coding, from accessing case files, to identifying how to measure constructs of interest, to dealing with exceptions to the coding system, to coding variables reliably, to training large teams of coders and monitoring for fidelity. Implications for a comprehensive and efficient approach to case file coding are discussed. PMID

  6. Classification across gene expression microarray studies

    Directory of Open Access Journals (Sweden)

    Kuner Ruprecht

    2009-12-01

    Abstract Background The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies, we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing. Results For each individual study the generalization error was benchmarked via complete cross-validation and was found to be similar for all classification methods. The misclassification rates were substantially higher in classification across studies, when each single study was used as an independent test set while all remaining studies were combined for the training of the classifier. However, with an increasing number of independent microarray studies used in the training, the overall classification performance improved. DV performed better than the average and showed slightly less variance. In

  7. Drug safety: Pregnancy rating classifications and controversies.

    Science.gov (United States)

    Wilmer, Erin; Chai, Sandy; Kroumpouzos, George

    2016-01-01

    This contribution consolidates data on international pregnancy rating classifications, including the former US Food and Drug Administration (FDA), Swedish, and Australian classification systems, as well as the evidence-based medicine system, and discusses discrepancies among them. It reviews the new Pregnancy and Lactation Labeling Rule (PLLR) that replaced the former FDA labeling system with narrative-based labeling requirements. The PLLR emphasizes human data and highlights pregnancy exposure registry information. In this context, the review discusses important data on the safety of most medications used in the management of skin disease in pregnancy. It also discusses controversies relevant to the safety of certain dermatologic medications during gestation. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Quality assurance: The 10-Group Classification System (Robson classification), induction of labor, and cesarean delivery.

    LENUS (Irish Health Repository)

    Robson, Michael

    2015-10-01

    Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardized audit of events and outcomes, any differences in sizes of groups, events, or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) be used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities.
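
    Because the ten groups are defined by a handful of obstetric variables, the classification is straightforward to automate. The following is a simplified sketch of the TGCS rules; the field names are assumptions, and edge cases follow the full WHO manual rather than this fragment:

        def robson_group(parity, previous_cs, onset, presentation,
                         gestation_weeks, n_fetuses):
            # Return the 10-Group Classification System group (1-10).
            if n_fetuses > 1:
                return 8                      # multiple pregnancy
            if presentation in ("transverse", "oblique"):
                return 9
            if presentation == "breech":
                return 6 if parity == 0 else 7
            # Remaining groups: single cephalic pregnancies.
            if gestation_weeks < 37:
                return 10
            if previous_cs:
                return 5
            if parity == 0:
                return 1 if onset == "spontaneous" else 2
            return 3 if onset == "spontaneous" else 4

        print(robson_group(parity=0, previous_cs=False, onset="induced",
                           presentation="cephalic", gestation_weeks=39,
                           n_fetuses=1))      # -> 2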

  9. Code of practice for ionizing radiation

    International Nuclear Information System (INIS)

    Khoo Boo Huat

    1995-01-01

    Prior to 1984, the use of ionizing radiation in Malaysia was governed by the Radioactive Substances Act of 1968. After 1984, its use came under the control of Act 304, called the Atomic Energy Licensing Act 1984. Under powers vested by the Act, the Radiation Protection (Basic Safety Standards) Regulations 1988 were formulated to regulate its use. These Acts do not provide information on proper working procedures. With the publication of the Codes of Practice by the Standards and Industrial Research Institute of Malaysia (SIRIM), users are now able to follow proper guidelines and use ionizing radiation safely and beneficially. This paper discusses the relevant sections in the following codes: 1. Code of Practice for Radiation Protection (Medical X-ray Diagnosis) MS 838:1983. 2. Code of Practice for Safety in Laboratories Part 4: Ionizing radiation MS 1042: Part 4: 1992. (author)

  10. Effectiveness of Multivariate Time Series Classification Using Shapelets

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2015-01-01

    Typically, time series classifiers require signal pre-processing (filtering signals from noise, artifact removal, etc.), enhancement of signal features (amplitude, frequency, spectrum, etc.), and classification of the signal features in space using classical multivariate-data classification techniques and algorithms. We consider a method of classifying time series which does not require enhancement of the signal features. The method uses shapelets of time series, i.e., small fragments of a series that best reflect the properties of one of its classes. Despite the significant number of publications on the theory and applications of shapelets for the classification of time series, the task of evaluating the effectiveness of this technique remains relevant. The objective of this publication is to study the effectiveness of a number of modifications of the original shapelet method as applied to multivariate series classification, a little-studied problem. The paper presents the problem statement of multivariate time series classification using shapelets and describes the shapelet-based basic method of binary classification, as well as various generalizations and a proposed modification of the method. It also presents software that implements the modified method and the results of computational experiments confirming the effectiveness of the algorithmic and software solutions. The paper shows that the modified method and the accompanying software achieve a classification accuracy of about 85% at best. The shapelet search time increases in proportion to the input data dimension.
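
    The primitive underlying all of these variants is the distance from a shapelet to a series: the minimum Euclidean distance over all alignments. A compact single-channel sketch is shown below (multivariate series can sum this per channel); the names and thresholding rule are generic, not the paper's code:

        import numpy as np

        def shapelet_distance(series, shapelet):
            # Minimum distance between the shapelet (length m) and
            # every length-m window of the series.
            m = len(shapelet)
            best = np.inf
            for start in range(len(series) - m + 1):
                window = series[start:start + m]
                best = min(best, float(np.linalg.norm(window - shapelet)))
            return best

        def classify(series, shapelet, threshold):
            # Basic binary rule: class 1 if the shapelet "occurs",
            # i.e. its distance is below the learned threshold.
            return int(shapelet_distance(series, shapelet) < threshold)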

  11. Measuring participation according to the International Classification of Functioning to

    NARCIS (Netherlands)

    Perenboom, R.J.M.; Chorus, A.M.J.

    2003-01-01

    The purpose of this study was to report which existing survey instruments assess participation according to the International Classification of Functioning, Disability and Health (ICF). A literature search for relevant survey instruments was conducted. Subsequently, survey instruments were evaluated of

  12. Acute Radiation Sickness Amelioration Analysis

    Science.gov (United States)

    1994-05-01

    Approximately 2000 documents relevant to the development of the candidate anti-emetic drugs ondansetron (Zofran, Glaxo Pharmaceuticals) and granisetron

  13. Correlation between patients' reasons for encounters/health problems and population density in Japan: a systematic review of observational studies coded by the International Classification of Health Problems in Primary Care (ICHPPC) and the International Classification of Primary care (ICPC).

    Science.gov (United States)

    Kaneko, Makoto; Ohta, Ryuichi; Nago, Naoki; Fukushi, Motoharu; Matsushima, Masato

    2017-09-13

    The Japanese health care system has yet to establish structured training for primary care physicians; therefore, physicians who received an internal medicine-based training program continue to play a principal role in the primary care setting. To promote the development of a more efficient primary health care system, assessing its current status with regard to the spectrum of patients' reasons for encounters (RFEs) and health problems is an important step. Recognizing the proportions of patients' RFEs and health problems that are not generally covered by an internist can provide valuable information to promote the development of a primary care physician-centered system. We conducted a systematic review in which we searched six databases (PubMed, the Cochrane Library, Google Scholar, Ichushi-Web, JDreamIII and CiNii) for observational studies in Japan coded by the International Classification of Health Problems in Primary Care (ICHPPC) and the International Classification of Primary Care (ICPC) up to March 2015. We employed population density as an index of accessibility. We calculated Spearman's rank correlation coefficient to examine the correlation between the proportion of "non-internal medicine-related" RFEs and health problems in each study area and the population density. We found 17 studies with diverse designs and settings. Among these studies, "non-internal medicine-related" RFEs, which were not thought to be covered by internists, ranged from about 4% to 40%. In addition, "non-internal medicine-related" health problems ranged from about 10% to 40%. However, no significant correlation was found between population density and the proportion of "non-internal medicine-related" RFEs and health problems. This is the first systematic review on RFEs and health problems coded by ICHPPC and ICPC undertaken to reveal the diversity of health problems in Japanese primary care. These results suggest that primary care physicians in some rural areas of Japan
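
    The correlation step reported above reduces to a single library call; the figures below are invented placeholders, not study data:

        from scipy.stats import spearmanr

        population_density = [68, 210, 455, 1200, 3400, 9100]   # persons/km^2
        non_im_proportion = [0.31, 0.12, 0.27, 0.08, 0.22, 0.18]

        rho, p_value = spearmanr(population_density, non_im_proportion)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")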

  14. Coding response to a case-mix measurement system based on multiple diagnoses.

    Science.gov (United States)

    Preyra, Colin

    2004-08-01

    To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.

  15. Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

    Science.gov (United States)

    Preyra, Colin

    2004-01-01

    Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940

  16. Code Sharing and Collaboration: Experiences From the Scientist's Expert Assistant Project and Their Relevance to the Virtual Observatory

    Science.gov (United States)

    Korathkar, Anuradha; Grosvenor, Sandy; Jones, Jeremy; Li, Connie; Mackey, Jennifer; Neher, Ken; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory- and system-independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing among groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, the SEA project has since contributed parts of its code to the Space Telescope Science Institute. SEA has also supplied code for the SIRTF (Space Infrared Telescope Facility) planning tools and the JSky open-source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough user groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we will review our experiences with SEA, both successes and failures, and offer some lessons learned that might promote further successes in collaboration and re-use.

  17. Categorization of allergic disorders in the new World Health Organization International Classification of Diseases.

    Science.gov (United States)

    Tanno, Luciana Kase; Calderon, Moises A; Goldberg, Bruce J; Akdis, Cezmi A; Papadopoulos, Nikolaos G; Demoly, Pascal

    2014-01-01

    Although efforts to improve the classification of hypersensitivity/allergic diseases have been made, these diseases have not been considered a top-level category in the International Classification of Diseases (ICD)-10 and still are not in the ICD-11 beta phase linearization. ICD-10 is the classification system most used by the allergy community worldwide, but it is not considered appropriate for clinical practice. The Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), on the other hand, contains a tightly integrated classification of hypersensitivity/allergic disorders based on the EAACI/WAO nomenclature, and the World Health Organization (WHO) may plan to align ICD-11 with SNOMED CT so that they share a common ontological basis. With the aim of actively supporting the ongoing ICD-11 revision and the optimal practice of Allergology, we performed a careful comparison of ICD-10 and ICD-11 beta phase linearization codes to identify gaps and areas of regression in allergy coding and, where possible, to reach solutions, in collaboration with the committees in charge of the ICD-11 revision. We found a significant degree of misclassification of terms in the allergy-related hierarchies. This stems not only from unclear definitions of these conditions but also from the use of common names that falsely imply allergy. The lack of understanding of the immune mechanisms underlying some of the conditions contributes to the difficulty in classification. More than providing data to support specific changes to the ongoing linearization, these results highlight the need for either a new chapter entitled Hypersensitivity/Allergic Disorders, as in SNOMED CT, or a high-level structure in the Immunology chapter in order to make the classification more appropriate and usable.

  18. Catalogue and classification of technical safety standards, rules and regulations for nuclear power reactors and nuclear fuel cycle facilities

    International Nuclear Information System (INIS)

    Fichtner, N.; Becker, K.; Bashir, M.

    1977-01-01

    The present report is an updated version of the report 'Catalogue and Classification of Technical Safety Rules for Light-water Reactors and Reprocessing Plants', edited under code No EUR 5362e, August 1975. Like the first version of the report, it constitutes a catalogue and classification of standards, rules and regulations on land-based nuclear power reactors and fuel cycle facilities. The reasons for the classification system used are given and discussed.

  19. Image Classification Workflow Using Machine Learning Methods

    Science.gov (United States)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions currently available come bundled as small parts of much larger programs that are susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software in the Python programming language with the sole function of land use classification and land use change analysis. We chose Python because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. To test our classification software, we performed a K-means unsupervised classification, a Gaussian maximum likelihood supervised classification, and a Mahalanobis distance-based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas, with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification based on any set of multispectral rasters with little to no modification. Our software makes the ease of land use classification available without an expensive commercial license.
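
    An outline of the unsupervised (K-means) path through such a workflow is sketched below, assuming GDAL and scikit-learn are installed; the filename and class count are placeholders:

        import numpy as np
        from osgeo import gdal
        from sklearn.cluster import KMeans

        ds = gdal.Open("landsat_austin.tif")
        bands = [ds.GetRasterBand(i + 1).ReadAsArray()
                 for i in range(ds.RasterCount)]
        stack = np.dstack(bands).astype(np.float64)   # rows x cols x bands
        pixels = stack.reshape(-1, ds.RasterCount)    # one row per pixel

        labels = KMeans(n_clusters=6, n_init=10).fit_predict(pixels)
        land_use = labels.reshape(stack.shape[0], stack.shape[1])
        # land_use is a per-pixel class map, ready to be written back
        # out as GeoTIFF with the source geotransform.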

  20. World Gas Conference 1997. Sub-committee G 2. International classification for the gas industry

    International Nuclear Information System (INIS)

    1997-01-01

    This International Gas Union classification is a documentary tool for the transfer of information relating to the gas industry. It is written in English and in French, the official languages of the IGU. In a resource centre, the classification can be used to set up a manual index file of documentary references selected for subsequent information retrieval. In the case of a computer file, the classification can be used for automatic classification of document references selected by field. The IGU classification is divided into ten chapters: O - Arts, science and technologies. A - Manufacture and treatment of gases. B - Gas supply. C - Gas utilizations. D - By-products of manufactured and natural gas. E - Personnel of the gas industry. F - Hygiene and safety in the gas industry. G - Administration and organization of the gas industry. H - Commercial questions of the gas industry. L - Geographical classification. Each chapter is subdivided into subjects, and each subject is further divided according to a hierarchical set of classification codes which correspond to a more detailed approach to the subject concerned. (EG)

  1. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate the coding of vocal stimuli in different subfields of macaque auditory cortex. We chronically measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals, using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to the potentials evoked by conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal-to-rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors differed significantly for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain the differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in the neural coding of conspecific vocalizations along the ventral auditory pathway.
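
    Generically, such a decoding analysis amounts to cross-validating a regularized linear classifier on trial-wise feature vectors; the sketch below uses synthetic placeholders for the recorded potentials, not the study's data or pipeline:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        n_trials, n_features, n_stimuli = 600, 2048, 20
        rng = np.random.default_rng(1)
        X = rng.normal(size=(n_trials, n_features))    # channels x time, flattened
        y = rng.integers(0, n_stimuli, size=n_trials)  # which vocalization played

        decoder = LogisticRegression(C=0.1, max_iter=2000)  # L2-regularized
        acc = cross_val_score(decoder, X, y, cv=5).mean()
        print(f"cross-validated decoding accuracy: {acc:.3f}")  # ~chance on noise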

  2. Electronic structure classifications using scanning tunneling microscopy conductance imaging

    International Nuclear Information System (INIS)

    Horn, K.M.; Swartzentruber, B.S.; Osbourn, G.C.; Bouchard, A.; Bartholomew, J.W.

    1998-01-01

    The electronic structure of atomic surfaces is imaged by applying multivariate image classification techniques to multibias conductance data measured using scanning tunneling microscopy. Image pixels are grouped into classes according to shared conductance characteristics. The image pixels, when color-coded by class, produce an image that chemically distinguishes surface electronic features over the entire area of a multibias conductance image. Such "classed" images reveal surface features not always evident in a topograph. This article describes the experimental technique used to record multibias conductance images, how image pixels are grouped in a mathematical classification space, how a computed grouping algorithm can be employed to group pixels with similar conductance characteristics in any number of dimensions, and finally how the quality of the resulting classed images can be evaluated using a computed, combinatorial analysis of the full dimensional space in which the classification is performed. copyright 1998 American Institute of Physics

  3. Automatic classification of blank substrate defects

    Science.gov (United States)

    Boettiger, Tom; Buck, Peter; Paninjath, Sankaranarayanan; Pereira, Mark; Ronald, Rob; Rost, Dan; Samir, Bhamidipati

    2014-10-01

    Mask preparation stages are crucial in mask manufacturing, since the mask will later act as a template for a considerable number of dies on the wafer. Defects on the initial blank substrate, and on subsequently cleaned and coated substrates, can have a profound impact on the usability of the finished mask. This emphasizes the need for early and accurate identification of blank substrate defects and the risk they pose to the patterned reticle. While Automatic Defect Classification (ADC) is a well-developed technology for inspection and analysis of defects on patterned wafers and masks in the semiconductor industry, ADC for mask blanks is still in the early stages of adoption and development. Calibre ADC is a powerful analysis tool for fast, accurate, consistent and automatic classification of defects on mask blanks. Accurate, automated classification of mask blanks leads to better usability of blanks by enabling defect avoidance technologies during mask writing. Detailed information on blank defects can help to select appropriate job-decks to be written on the mask by defect avoidance tools [1][4][5]. Smart algorithms separate critical defects from the potentially large number of non-critical or false defects detected at various stages during mask blank preparation. Mechanisms used by Calibre ADC to identify and characterize defects include defect location and size, signal polarity (dark, bright) in both transmitted and reflected review images, and the separation of defect signals from background noise in defect images. The Calibre ADC engine then uses a decision tree to translate this information into a defect classification code. Using this automated process improves classification accuracy, repeatability and speed, while avoiding the subjectivity of human judgment inherent in the alternative of manual defect classification by trained personnel [2]. This paper focuses on the results from the evaluation of the Automatic Defect Classification (ADC) product at MP Mask
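
    The final step, translating per-defect features into a classification code via a decision tree, can be illustrated generically; the feature layout and class labels here are invented, not Calibre ADC's actual schema:

        from sklearn.tree import DecisionTreeClassifier

        # [size_um, transmitted_polarity, reflected_polarity, contrast]
        X_train = [[0.4, 1, 1, 0.9], [0.1, 0, 1, 0.2],
                   [1.3, 1, 0, 0.8], [0.2, 0, 0, 0.1]]
        y_train = ["critical_pit", "false", "critical_particle", "non_critical"]

        tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
        print(tree.predict([[0.9, 1, 1, 0.7]]))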

  4. Bad-good constraints on a polarity correspondence account for the spatial-numerical association of response codes (SNARC) and markedness association of response codes (MARC) effects.

    Science.gov (United States)

    Leth-Steensen, Craig; Citta, Richie

    2016-01-01

    Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important

  5. Deep Recurrent Neural Networks for Supernovae Classification

    Science.gov (United States)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC data set (around 10^4 supernovae) we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the receiver operating characteristic curve (AUC) of 0.986 and an SPCC figure-of-merit F1 = 0.64. When using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, an AUC of 0.977, and F1 = 0.58, results almost as good as with the whole light curve. By employing bidirectional neural networks, we can acquire impressive classification results between supernovae types I, II and III at an accuracy of 90.4% and an AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
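
    A minimal sketch of this kind of recurrent classifier is given below, written in PyTorch for brevity (the authors' own code, linked above, is the reference implementation); all sizes are placeholders:

        import torch
        import torch.nn as nn

        class LightCurveRNN(nn.Module):
            def __init__(self, n_inputs=5, hidden=16, n_classes=2):
                super().__init__()
                # Bidirectional LSTM over (time, flux-per-filter) vectors.
                self.lstm = nn.LSTM(n_inputs, hidden, num_layers=2,
                                    bidirectional=True, batch_first=True)
                self.head = nn.Linear(2 * hidden, n_classes)

            def forward(self, x):                  # x: (batch, time, n_inputs)
                out, _ = self.lstm(x)
                return self.head(out[:, -1, :])    # score from the last step

        model = LightCurveRNN()
        batch = torch.randn(8, 40, 5)              # 8 curves, 40 epochs each
        logits = model(batch)                      # (8, 2) class scores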

  6. British athletics muscle injury classification: a reliability study for a new grading system

    International Nuclear Information System (INIS)

    Patel, A.; Chakraverty, J.; Pollock, N.; Chakraverty, R.; Suokas, A.K.; James, S.L.

    2015-01-01

    Aim: To implement and validate the newly proposed British athletics muscle injury classification in the assessment of hamstring injuries in track and field athletes and to analyse the nature and frequency of the discrepancies. Materials and methods: This was a retrospective study analysing hamstring injuries in elite British athletes using the proposed classification system. Classification of 65 hamstring injuries in 45 high-level athletes by two radiologists at two time points 4 months apart to determine interrater variability, intrarater variability, and feasibility of the classification system was undertaken. Results: Interrater Kappa values of 0.80 (95% confidence interval [CI]: 0.67–0.92; p<0.0001) for Round 1 and 0.88 (95% CI: 0.76–1.00; p<0.0001) for Round 2 of the review were observed. Percentages of agreement were 85% for Round 1 and 91% for Round 2. The intrarater Kappa value for the two reviewers were 0.76 (95% CI: 0.63–0.88; p<0.0001) and 0.65 (95% CI: 0.53–0.76; p<0.0001) and the average was 0.71 suggesting substantial overall agreement. The percentages of agreement were 82% and 72%, respectively. Conclusions: This classification system is straightforward to use and produces both reproducible and consistent results based on interrater and intrarater Kappa values with at least substantial agreement in all groups. Further work is ongoing to investigate whether individual grades within this classification system provide prognostic information and could guide clinical management. - Highlights: • This classification system is based on MRI parameters shown to have prognostic relevance. • It is simple to use, reproducible and clinically relevant which will enhance clinical practice. • Once clinicians are familiar with the classification inter & intrarater reliability will improve.

  7. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    Science.gov (United States)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of the theory of the methods involved as well as examples of their implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  8. Exploiting monotonicity constraints for active learning in ordinal classification

    NARCIS (Netherlands)

    Soons, Pieter; Feelders, Adrianus

    2014-01-01

    We consider ordinal classification and instance ranking problems where each attribute is known to have an increasing or decreasing relation with the class label or rank. For example, it stands to reason that the number of query terms occurring in a document has a positive influence on its relevance

  9. On-line monitoring and inservice inspection in codes

    International Nuclear Information System (INIS)

    Bartonicek, J.; Zaiss, W.; Bath, H.R.

    1999-01-01

    The relevant regulatory codes determine the ISI tasks and the time intervals for recurrent component testing for evaluation of operation-induced damage or ageing, in order to ensure component integrity on the basis of the last available quality data. In-service quality monitoring is carried out through on-line monitoring and recurrent testing. The requirements defined by the engineering codes elaborated by various institutions are comparable, with the KTA nuclear engineering and safety codes being the most complete provisions for quality evaluation and assurance after different, defined service periods. German conventional codes for assuring component integrity provide exclusively for recurrent inspection regimes (mainly pressure tests and optical testing). The requirements defined in the KTA codes, however, always demanded more specific inspections relying on recurrent testing as well as on-line monitoring. Foreign codes for ensuring component integrity concentrate on NDE tasks at regular time intervals, with the time intervals and scope of testing activities being defined on the basis of the ASME code, section XI. (orig./CB) [de

  10. Towards an international classification for patient safety : the conceptual framework

    NARCIS (Netherlands)

    Sherman, H.; Castro, G.; Fletcher, M.; Hatlie, M.; Hibbert, P.; Jakob, R.; Koss, R.; Lewalle, P.; Loeb, J.; Perneger, Th.; Runciman, W.; Thomson, R.; Schaaf, van der T.W.; Virtanen, M.

    2009-01-01

    Global advances in patient safety have been hampered by the lack of a uniform classification of patient safety concepts. This is a significant barrier to developing strategies to reduce risk, performing evidence-based research and evaluating existing healthcare policies relevant to patient safety.

  11. Territorial pattern and classification of soils of Kryvyi Rih Iron-Ore Basin

    OpenAIRE

    О. О. Dolina; О. М. Smetana

    2014-01-01

    The authors developed the classification of soils and adapted it to the conditions of Krivyi Rih industrial region. It became the basis for determining the degree of soil cover transformation in the iron-ore basin under technogenesis. The classification represents the system of hierarchical objects of different taxonomic levels. It allows determination of relationships between objects and their properties. Researched patterns of soil cover structures’ distribution were the basis for the relev...

  12. A rule-based electronic phenotyping algorithm for detecting clinically relevant cardiovascular disease cases.

    Science.gov (United States)

    Esteban, Santiago; Rodríguez Tablado, Manuel; Ricci, Ricardo Ignacio; Terrasa, Sergio; Kopitowski, Karin

    2017-07-14

    The implementation of electronic medical records (EMR) is becoming increasingly common. Error and data loss reduction, patient-care efficiency increase, decision-making assistance and facilitation of event surveillance, are some of the many processes that EMRs help improve. In addition, they show a lot of promise in terms of data collection to facilitate observational epidemiological studies and their use for this purpose has increased significantly over the recent years. Even though the quantity and availability of the data are clearly improved thanks to EMRs, still, the problem of the quality of the data remains. This is especially important when attempting to determine if an event has actually occurred or not. We sought to assess the sensitivity, specificity, and agreement level of a codes-based algorithm for the detection of clinically relevant cardiovascular (CaVD) and cerebrovascular (CeVD) disease cases, using data from EMRs. Three family physicians from the research group selected clinically relevant CaVD and CeVD terms from the international classification of primary care, Second Edition (ICPC-2), the ICD 10 version 2015 and SNOMED-CT 2015 Edition. These terms included both signs, symptoms, diagnoses and procedures associated with CaVD and CeVD. Terms not related to symptoms, signs, diagnoses or procedures of CaVD or CeVD and also those describing incidental findings without clinical relevance were excluded. The algorithm yielded a positive result if the patient had at least one of the selected terms in their medical records, as long as it was not recorded as an error. Else, if no terms were found, the patient was classified as negative. This algorithm was applied to a randomly selected sample of the active patients within the hospital's HMO by 1/1/2005 that were 40-79 years old, had at least one year of seniority in the HMO and at least one clinical encounter. Thus, patients were classified into four groups: (1) Negative patients (2) Patients with Ca

  13. Identification of aspects of functioning, disability and health relevant to patients experiencing vertigo: a qualitative study using the international classification of functioning, disability and health

    Science.gov (United States)

    2012-01-01

    Purpose Aims of this study were to identify aspects of functioning and health relevant to patients with vertigo expressed by ICF categories and to explore the potential of the ICF to describe the patient perspective in vertigo. Methods We conducted a series of qualitative semi-structured face-to-face interviews using a descriptive approach. Data was analyzed using the meaning condensation procedure and then linked to categories of the International Classification of Functioning, Disability and Health (ICF). Results From May to July 2010, 12 interviews were carried out until saturation was reached. Four hundred and seventy-one single concepts were extracted, which were linked to 142 different ICF categories. 40 of those belonged to the component body functions, 62 to the component activity and participation, and 40 to the component environmental factors. Besides the most prominent aspect "dizziness", most participants reported problems within "Emotional functions" (b152) and problems related to mobility and carrying out the daily routine. Almost all participants reported "Immediate family" (e310) as a relevant modifying environmental factor. Conclusions From the patients' perspective, vertigo has an impact on multifaceted aspects of functioning and disability, mainly body functions and activities and participation. Modifying contextual factors have to be taken into account to cover the complex influence of the health condition of vertigo on the individual's daily life. The results of this study will contribute to developing standards for the measurement of functioning, disability and health relevant for patients suffering from vertigo. PMID:22738067

  14. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

    Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has been recently updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)

  15. "What is relevant in a text document?": An interpretable machine learning approach.

    Directory of Open Access Journals (Sweden)

    Leila Arras

    Full Text Available Text documents can be described by a number of abstract concepts such as semantic category, writing style, or sentiment. Machine learning (ML) models have been trained to automatically map documents to these abstract concepts, making it possible to annotate very large text collections, more than could be processed by a human in a lifetime. Besides predicting the text's category very accurately, it is also highly desirable to understand how and why the categorization process takes place. In this paper, we demonstrate that such understanding can be achieved by tracing the classification decision back to individual words using layer-wise relevance propagation (LRP), a recently developed technique for explaining predictions of complex non-linear classifiers. We train two word-based ML models, a convolutional neural network (CNN) and a bag-of-words SVM classifier, on a topic categorization task and adapt the LRP method to decompose the predictions of these models onto words. The resulting scores indicate how much individual words contribute to the overall classification decision. This enables one to distill relevant information from text documents without an explicit semantic information extraction step. We further use the word-wise relevance scores for generating novel vector-based document representations which capture semantic information. Based on these document vectors, we introduce a measure of model explanatory power and show that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, which makes it more comprehensible for humans and potentially more useful for other applications.
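
    To make the idea of redistributing a prediction onto words concrete, here is a toy implementation of the epsilon-stabilised LRP rule for a single dense layer; the shapes, weights and activations are invented for illustration, and this is not the authors' code.

```python
# Toy epsilon-rule LRP for one dense layer: redistribute output relevance
# R_out onto the layer inputs in proportion to their contributions.
import numpy as np

def lrp_dense(a, W, b, R_out, eps=1e-6):
    z = a @ W + b                        # forward pre-activations
    s = R_out / (z + eps * np.sign(z))   # stabilised per-output relevance
    return a * (W @ s)                   # relevance share of each input

rng = np.random.default_rng(0)
a = np.array([0.2, 1.5, 0.0, 0.7])       # activations for 4 "words"
W = rng.standard_normal((4, 2))          # weights to 2 class scores
R_out = np.array([1.0, 0.0])             # start from the predicted class
print(lrp_dense(a, W, np.zeros(2), R_out))  # per-word relevance scores
```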

  16. Interobserver variation in classification of malleolar fractures

    International Nuclear Information System (INIS)

    Verhage, S.M.; Hoogendoorn, J.M.; Rhemrev, S.J.; Keizer, S.B.; Quarles van Ufford, H.M.E.

    2015-01-01

    Classification of malleolar fractures is a matter of debate. In the ideal situation, a classification system is easy to use, shows good inter- and intraobserver agreement, and has implications for treatment or research. Interobserver study. Four observers allocated 100 X-rays to the Weber, AO and Lauge-Hansen classifications. In case of a trimalleolar fracture, the size of the posterior fragment was measured. Interobserver agreement was calculated with Cohen's kappa. Agreement on the size of the posterior fragment was calculated with the intraclass correlation coefficient. Moderate agreement was found with all classification systems: the Weber (K = 0.49), AO (K = 0.45) and Lauge-Hansen (K = 0.47). Interobserver agreement on the presence of a posterior fracture was substantial (K = 0.63). Estimation of the size of the fragment showed moderate agreement (ICC = 0.57). Classification according to the classical systems showed moderate interobserver agreement, probably due to an unclear trauma mechanism or the difficult relation between the level of the fibular fracture and the syndesmosis. Substantial agreement on posterior malleolar fractures is mostly due to small (<5 %) posterior fragments. A classification system that describes the presence and location of fibular fractures, the presence of medial malleolar fractures or deep deltoid ligament injury, and the presence of relevant and dislocated posterior malleolar fractures is more useful in the daily setting than the traditional systems. In case of a trimalleolar fracture, a CT scan is, in our opinion, very useful for the detection of small posterior fragments and preoperative planning. (orig.)

  17. Interobserver variation in classification of malleolar fractures

    Energy Technology Data Exchange (ETDEWEB)

    Verhage, S.M.; Hoogendoorn, J.M. [MC Haaglanden, Department of Surgery, The Hague (Netherlands); Secretariaat Heelkunde, MC Haaglanden, locatie Westeinde, Postbus 432, CK, The Hague (Netherlands); Rhemrev, S.J. [MC Haaglanden, Department of Surgery, The Hague (Netherlands); Keizer, S.B. [MC Haaglanden, Department of Orthopaedic Surgery, The Hague (Netherlands); Quarles van Ufford, H.M.E. [MC Haaglanden, Department of Radiology, The Hague (Netherlands)

    2015-10-15

    Classification of malleolar fractures is a matter of debate. In the ideal situation, a classification system is easy to use, shows good inter- and intraobserver agreement, and has implications for treatment or research. Interobserver study. Four observers allocated 100 X-rays to the Weber, AO and Lauge-Hansen classifications. In case of a trimalleolar fracture, the size of the posterior fragment was measured. Interobserver agreement was calculated with Cohen's kappa. Agreement on the size of the posterior fragment was calculated with the intraclass correlation coefficient. Moderate agreement was found with all classification systems: the Weber (K = 0.49), AO (K = 0.45) and Lauge-Hansen (K = 0.47). Interobserver agreement on the presence of a posterior fracture was substantial (K = 0.63). Estimation of the size of the fragment showed moderate agreement (ICC = 0.57). Classification according to the classical systems showed moderate interobserver agreement, probably due to an unclear trauma mechanism or the difficult relation between the level of the fibular fracture and the syndesmosis. Substantial agreement on posterior malleolar fractures is mostly due to small (<5 %) posterior fragments. A classification system that describes the presence and location of fibular fractures, the presence of medial malleolar fractures or deep deltoid ligament injury, and the presence of relevant and dislocated posterior malleolar fractures is more useful in the daily setting than the traditional systems. In case of a trimalleolar fracture, a CT scan is, in our opinion, very useful for the detection of small posterior fragments and preoperative planning. (orig.)

  18. INF Code related matters. Joint IAEA/IMO literature survey on potential consequences of severe maritime accidents involving the transport of radioactive material. 2 volumes. Vol. I - Report and publication titles. Vol. II - Relevant abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-10

    This literature survey was undertaken jointly by the International Maritime Organization (IMO) and the International Atomic Energy Agency (IAEA) as a step in addressing the subject of environmental impact of accidents involving materials subject to the IMO's Code for the Safe Carriage of Irradiated Nuclear Fuel, Plutonium and High-Level Radioactive Wastes in Flasks on Board Ships, also known as the INF Code. The results of the survey are provided in two volumes: the first one containing the description of the search and search results with the list of generated publication titles, and the second volume containing the abstracts of those publications deemed relevant for the purposes of the literature survey. Literature published between 1980 and mid-1999 was reviewed by two independent consultants who generated publication titles by performing searches of appropriate databases, and selected the abstracts of relevant publications for inclusion in this survey. The IAEA operates INIS, the world's leading computerised bibliographical information system on the peaceful uses of nuclear energy. The acronym INIS stands for International Nuclear Information System. INIS Members are responsible for determining the relevant nuclear literature produced within their borders or organizational confines, and then preparing the associated input in accordance with INIS rules. INIS records are included in other major databases such as the Energy, Science and Technology database of the DIALOG service. Because it is the INIS Members, rather than the IAEA Secretariat, who are responsible for its contents, it was considered appropriate that INIS be the primary source of information for this literature review. Selected unpublished reports were also reviewed, e.g. Draft Proceedings of the Special Consultative Meeting of Entities involved in the maritime transport of materials covered by the INF Code (SCM 5), March 1996. Many of the formal papers at SCM 5 were included in the literature

  19. INF Code related matters. Joint IAEA/IMO literature survey on potential consequences of severe maritime accidents involving the transport of radioactive material. 2 volumes. Vol. I - Report and publication titles. Vol. II - Relevant abstracts

    International Nuclear Information System (INIS)

    2000-01-01

    This literature survey was undertaken jointly by the International Maritime Organization (IMO) and the International Atomic Energy Agency (IAEA) as a step in addressing the subject of environmental impact of accidents involving materials subject to the IMO's Code for the Safe Carriage of Irradiated Nuclear Fuel, Plutonium and High-Level Radioactive Wastes in Flasks on Board Ships, also known as the INF Code. The results of the survey are provided in two volumes: the first one containing the description of the search and search results with the list of generated publication titles, and the second volume containing the abstracts of those publications deemed relevant for the purposes of the literature survey. Literature published between 1980 and mid-1999 was reviewed by two independent consultants who generated publication titles by performing searches of appropriate databases, and selected the abstracts of relevant publications for inclusion in this survey. The IAEA operates INIS, the world's leading computerised bibliographical information system on the peaceful uses of nuclear energy. The acronym INIS stands for International Nuclear Information System. INIS Members are responsible for determining the relevant nuclear literature produced within their borders or organizational confines, and then preparing the associated input in accordance with INIS rules. INIS records are included in other major databases such as the Energy, Science and Technology database of the DIALOG service. Because it is the INIS Members, rather than the IAEA Secretariat, who are responsible for its contents, it was considered appropriate that INIS be the primary source of information for this literature review. Selected unpublished reports were also reviewed, e.g. Draft Proceedings of the Special Consultative Meeting of Entities involved in the maritime transport of materials covered by the INF Code (SCM 5), March 1996. Many of the formal papers at SCM 5 were included in the literature

  20. Elastic creep-fatigue evaluation for ASME code

    International Nuclear Information System (INIS)

    Severud, L.K.; Winkel, B.V.

    1987-01-01

    Experience with applying the ASME Code Case N-47 rules for evaluation of creep-fatigue with elastic analysis results has been problematic. The new elastic evaluation methods are intended to bound the stress level and strain range values needed for use in employing the code inelastic analysis creep-fatigue damage counting procedures. To account for elastic followup effects, ad hoc rules for stress classification, shakedown, and ratcheting are employed. Because elastic followup, inelastic strain concentration, and stress-time effects are accounted for, the design fatigue curves in Case N-47 for inelastic analysis are used instead of the more conservative elastic analysis curves. Creep damage assessments are made using an envelope stress-time history that treats multiple load events and repeated cycles during elevated temperature service life. (orig./GL)

  1. Genome-wide identification of coding and non-coding conserved sequence tags in human and mouse genomes

    Directory of Open Access Journals (Sweden)

    Maggi Giorgio P

    2008-06-01

    Full Text Available Abstract Background The accurate detection of genes and the identification of functional regions is still an open issue in the annotation of genomic sequences. This problem affects new genomes but also those of very well studied organisms such as human and mouse where, despite the great efforts, the inventory of genes and regulatory regions is far from complete. Comparative genomics is an effective approach to address this problem. Unfortunately it is limited by the computational requirements needed to perform genome-wide comparisons and by the problem of discriminating between conserved coding and non-coding sequences. This discrimination is often based on (and thus dependent on) the availability of annotated proteins. Results In this paper we present the results of a comprehensive comparison of human and mouse genomes performed with a new high throughput grid-based system which allows the rapid detection of conserved sequences and accurate assessment of their coding potential. By detecting clusters of coding conserved sequences the system is also suitable to accurately identify potential gene loci. Following this analysis we created a collection of human-mouse conserved sequence tags and carefully compared our results to reliable annotations in order to benchmark the reliability of our classifications. Strikingly, we were able to detect several potential gene loci supported by EST sequences but not corresponding to as yet annotated genes. Conclusion Here we present a new system which allows comprehensive comparison of genomes to detect conserved coding and non-coding sequences and the identification of potential gene loci. Our system does not require the availability of any annotated sequence and is thus suitable for the analysis of new or poorly annotated genomes.

  2. Coding OSICS sports injury diagnoses in epidemiological studies: does the background of the coder matter?

    Science.gov (United States)

    Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C

    2014-04-01

    To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and non-clinical coder. Assessment of intercoder agreement. Community Australian football. 1082 standardised injury surveillance records. Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (biomechanist). The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-digit OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% were because one coder assigned a non-specific 'X' code. The sports physician's code was deemed correct in 53% of cases and the epidemiologist's in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Sports injury research requires accurate identification and classification of specific injuries, and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise.
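
    The hierarchical comparison described above (first character, first two characters, full code) can be reproduced in a few lines; the OSICS-like codes below are invented examples, not study records.

```python
# Percentage agreement between two coders at increasing OSICS code depth.
codes_a = ["MHXX", "TKJ1", "GXMX", "AMSX"]   # invented coder-A codes
codes_b = ["MHJX", "TKJ1", "GXXX", "AMSX"]   # invented coder-B codes

for depth in (1, 2, 4):
    agree = sum(a[:depth] == b[:depth] for a, b in zip(codes_a, codes_b))
    print(f"{depth}-character agreement: {agree / len(codes_a):.0%}")
```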

  3. Prometheus: the implementation of clinical coding schemes in French routine general practice

    Directory of Open Access Journals (Sweden)

    Laurent Letrilliart

    2006-09-01

    Conclusions Coding health problems on a routine basis proved to be feasible. However, this process can be used on a more widespread basis and linked to other management data only if physicians are specially trained and rewarded, and the software incorporates large terminologies mapped with classifications.

  4. Identifying complications of interventional procedures from UK routine healthcare databases: a systematic search for methods using clinical codes.

    Science.gov (United States)

    Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew

    2014-11-28

    Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. From 3688 papers identified from the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme which is the methodology recommended by NHS Classification Service, and 4) conducting manual clinical
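
    A hedged sketch of methods 1 and 2 above, scanning an index admission and a fixed follow-up window of admissions for specified complication codes, is given below; the table layout, column names and ICD-10 codes are illustrative assumptions.

```python
# Flag patients whose admissions within a follow-up window of the index
# admission carry any of the specified complication codes.
import pandas as pd

admissions = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "admit_date": pd.to_datetime(["2013-01-05", "2013-02-10",
                                  "2013-01-20", "2013-03-02"]),
    "codes": [["Z96.6"], ["T84.0"], ["I10"], ["T81.4"]],
})
complication_codes = {"T84.0", "T81.4"}      # illustrative complication codes

def flag_complications(df, codes, follow_up_days=90):
    df = df.sort_values(["patient_id", "admit_date"])
    index_date = df.groupby("patient_id")["admit_date"].transform("min")
    in_window = df["admit_date"] <= index_date + pd.Timedelta(days=follow_up_days)
    has_code = df["codes"].apply(lambda cs: bool(codes.intersection(cs)))
    return df.loc[in_window & has_code, "patient_id"].unique()

print(flag_complications(admissions, complication_codes))   # [1 3]
```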

  5. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    Science.gov (United States)

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  6. SGN III code conversion from Apollo to HP

    International Nuclear Information System (INIS)

    Lee, Hae Cho

    1996-04-01

    SGN III computer code is used to analyze transient behavior of the reactor coolant system, pressurizer and steam generators in the event of a main steam line break (MSLB), and to calculate mass and energy release for containment design. This report firstly describes the detailed work carried out for installation of SGN III on Apollo DN 10000 and the code validation results after installation. Secondly, a series of tasks is also described in relation to the installation of SGN III on the HP 9000/700 series, as well as the relevant code validation results. Attached is a report on software verification and validation results. 8 refs. (Author)

  7. Diabetes Mellitus Coding Training for Family Practice Residents.

    Science.gov (United States)

    Urse, Geraldine N

    2015-07-01

    Although physicians regularly use numeric coding systems such as the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) to describe patient encounters, coding errors are common. One of the most complicated diagnoses to code is diabetes mellitus. The ICD-9-CM currently has 39 separate codes for diabetes mellitus; this number will be expanded to more than 50 with the introduction of ICD-10-CM in October 2015. To assess the effect on diabetes mellitus coding of a 1-hour focused presentation on ICD-9-CM codes. A 1-hour focused lecture on the correct use of diabetes mellitus codes for patient visits was presented to family practice residents at Doctors Hospital Family Practice in Columbus, Ohio. To assess resident knowledge of the topic, a pretest and posttest were given to residents before and after the lecture, respectively. Medical records of all patients with diabetes mellitus who were cared for at the hospital 6 weeks before and 6 weeks after the lecture were reviewed and compared for the use of diabetes mellitus ICD-9 codes. Eighteen residents attended the lecture and completed the pretest and posttest. The mean (SD) percentage of correct answers was 72.8% (17.1%) for the pretest and 84.4% (14.6%) for the posttest, for an improvement of 11.6 percentage points (P≤.035). The percentage of total available codes used did not substantially change from before to after the lecture, but the use of the generic ICD-9-CM code for diabetes mellitus type II controlled (250.00) declined (58 of 176 [33%] to 102 of 393 [26%]) and the use of other codes increased, indicating a greater variety in codes used after the focused lecture. After a focused lecture on diabetes mellitus coding, resident coding knowledge improved. Review of medical record data did not reveal an overall change in the number of diabetic codes used after the lecture but did reveal a greater variety in the codes used.
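
    The before/after comparison reported above can be checked with a paired test of the kind sketched below; the 18 residents' scores here are simulated around the reported means, not the study data.

```python
# Paired t-test on simulated pre/post coding-test scores for 18 residents.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = np.clip(rng.normal(72.8, 17.1, 18), 0, 100)          # simulated pretest %
post = np.clip(pre + rng.normal(11.6, 10.0, 18), 0, 100)   # simulated posttest %

t, p = stats.ttest_rel(post, pre)   # same residents measured twice
print(f"mean improvement = {np.mean(post - pre):.1f} points, p = {p:.4f}")
```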

  8. Classifying Classifications

    DEFF Research Database (Denmark)

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classification (e.g. Murray 1952), over game studies classifications (e.g. Elverdam & Aarseth 2007) to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. Especially the abstraction of classification criteria can be used in future endeavors into the topic of game classifications.

  9. An Internal Audit Perspective on Differences between European Corporate Governance Codes and OECD Principles

    OpenAIRE

    Raluca Ivan

    2015-01-01

    The main purpose of this research is to realize an analysis, from an internal audit perspective, of European Corporate Governance Codes in regards to the Organization for Economic Cooperation and Development (OECD) Principles of Corporate Governance. The research methodology used a classification of countries by legal regime, trying to obtain a global view of the differences between the European corporate governance codes and the OECD Principles provisions, from internal audit's perspective. T...

  10. Audit of Clinical Coding of Major Head and Neck Operations

    Science.gov (United States)

    Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean

    2009-01-01

    INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period were assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944

  11. The 2002 Revision of the American Psychological Association's Ethics Code: Implications for School Psychologists

    Science.gov (United States)

    Flanagan, Rosemary; Miller, Jeffrey A.; Jacob, Susan

    2005-01-01

    The Ethical Principles for Psychologists and Code of Conduct has been recently revised. The organization of the code changed, and the language was made more specific. A number of points relevant to school psychology are explicitly stated in the code. A clear advantage of including these items in the code is the assistance to school psychologists…

  12. Classification for Safety-Critical Car-Cyclist Scenarios Using Machine Learning

    NARCIS (Netherlands)

    Cara, I.; Gelder, E.D.

    2015-01-01

    The number of fatal car-cyclist accidents is increasing. Advanced Driver Assistance Systems (ADAS) can improve the safety of cyclists, but they need to be tested with realistic safety-critical car-cyclist scenarios. In order to store only relevant scenarios, an online classification algorithm is

  13. Constructing a classification of hypersensitivity/allergic diseases for ICD-11 by crowdsourcing the allergist community.

    Science.gov (United States)

    Tanno, L K; Calderon, M A; Goldberg, B J; Gayraud, J; Bircher, A J; Casale, T; Li, J; Sanchez-Borges, M; Rosenwasser, L J; Pawankar, R; Papadopoulos, N G; Demoly, P

    2015-06-01

    The global allergy community strongly believes that the 11th revision of the International Classification of Diseases (ICD-11) offers a unique opportunity to improve the classification and coding of hypersensitivity/allergic diseases via inclusion of a specific chapter dedicated to this disease area to facilitate epidemiological studies, as well as to evaluate the true size of the allergy epidemic. In this context, an international collaboration has decided to revise the classification of hypersensitivity/allergic diseases and to validate it for ICD-11 by crowdsourcing the allergist community. After careful comparison between ICD-10 and 11 beta phase linearization codes, we identified gaps and trade-offs allowing us to construct a classification proposal, which was sent to the European Academy of Allergy and Clinical Immunology (EAACI) sections, interest groups, executive committee as well as the World Allergy Organization (WAO), and American Academy of Allergy Asthma and Immunology (AAAAI) leaderships. The crowdsourcing process produced comments from 50 of 171 members contacted by e-mail. The classification proposal has also been discussed at face-to-face meetings with experts of EAACI sections and interest groups and presented in a number of business meetings during the 2014 EAACI annual congress in Copenhagen. As a result, a high-level complex structure of classification for hypersensitivity/allergic diseases has been constructed. The model proposed has been presented to the WHO groups in charge of the ICD revision. The international collaboration of allergy experts appreciates bilateral discussion and aims to get endorsement of their proposals for the final ICD-11. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Internationally Comparable Measures of Occupational Status for the 1988 International Standard Classification of Occupations

    NARCIS (Netherlands)

    Ganzeboom, H.B.G.; Treiman, D.J.

    1996-01-01

    This paper provides operational procedures for coding internationally comparable measures of occupational status from the recently published International Standard Classification of Occupation 1988 (ISCO88) of the International Labor Office (ILO, 1990). We first discuss the nature of the ISCO88

  15. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  16. Environmental Monitoring, Water Quality - MO 2009 Water Quality Standards - Table G Lake Classifications and Use Designations (SHP)

    Data.gov (United States)

    NSGIC State | GIS Inventory — This data set contains Missouri Water Quality Standards (WQS) lake classifications and use designations described in the Missouri Code of State Regulations (CSR), 10...

  17. Pelvic Arterial Anatomy Relevant to Prostatic Artery Embolisation and Proposal for Angiographic Classification

    Energy Technology Data Exchange (ETDEWEB)

    Assis, André Moreira de, E-mail: andre.maa@gmail.com; Moreira, Airton Mota, E-mail: motamoreira@gmail.com; Paula Rodrigues, Vanessa Cristina de, E-mail: vanessapaular@yahoo.com.br [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil); Harward, Sardis Honoria, E-mail: sardis.harward@merit.com [The Dartmouth Center for Health Care Delivery Science (United States); Antunes, Alberto Azoubel, E-mail: antunesuro@uol.com.br; Srougi, Miguel, E-mail: srougi@usp.br [University of Sao Paulo Medical School, Urology Department (Brazil); Carnevale, Francisco Cesar, E-mail: fcarnevale@uol.com.br [University of Sao Paulo Medical School, Interventional Radiology and Endovascular Surgery Department, Radiology Institute (Brazil)

    2015-08-15

    Purpose: To describe and categorize the angiographic findings regarding prostatic vascularization, propose an anatomic classification, and discuss its implications for the PAE procedure. Methods: Angiographic findings from 143 PAE procedures were reviewed retrospectively, and the origin of the inferior vesical artery (IVA) was classified into five subtypes as follows: type I: IVA originating from the anterior division of the internal iliac artery (IIA), from a common trunk with the superior vesical artery (SVA); type II: IVA originating from the anterior division of the IIA, inferior to the SVA origin; type III: IVA originating from the obturator artery; type IV: IVA originating from the internal pudendal artery; and type V: less common origins of the IVA. Incidences were calculated by percentage. Results: Two hundred eighty-six pelvic sides (n = 286) were analyzed, and 267 (93.3 %) were classified into types I–IV. Among them, the most common origin was type IV (n = 89, 31.1 %), followed by type I (n = 82, 28.7 %), type III (n = 54, 18.9 %), and type II (n = 42, 14.7 %). Type V anatomy was seen in 16 cases (5.6 %). Double vascularization, defined as two independent prostatic branches in one pelvic side, was seen in 23 cases (8.0 %). Conclusions: Despite the large number of possible anatomical variations of the male pelvis, four main patterns corresponded to almost 95 % of the cases. Evaluation of anatomy in a systematic fashion, following a standard classification, will make PAE a faster, safer, and more effective procedure.

  18. Prediction and classification of respiratory motion

    CERN Document Server

    Lee, Suk Jin

    2014-01-01

    This book describes recent radiotherapy technologies, including tools for measuring target position during radiotherapy and tracking-based delivery systems. It presents a customized prediction of respiratory motion with clustering from multiple patient interactions. The proposed method contributes to the improvement of patient treatments by considering the breathing pattern for accurate dose calculation in radiotherapy systems. Real-time tumor-tracking, where the prediction of irregularities becomes relevant, has yet to be clinically established. The statistical quantitative modeling for irregular breathing classification, in which commercial respiration traces are retrospectively categorized into several classes based on breathing pattern, is discussed as well. The proposed statistical classification may provide clinical advantages for adjusting the dose rate before and during external beam radiotherapy to minimize the safety margin. In the first chapter following the Introduction to this book, we...

  19. Excitation equilibria in plasmas: a classification

    International Nuclear Information System (INIS)

    Mullen, J.-J.A.M. van der.

    1986-01-01

    In this thesis the author presents a classification of plasmas based on the atomic state distribution function. The study is based on the relation between the distribution function and the underlying processes and starts with a proper understanding of thermodynamic equilibrium (TE). Four types of proper balances are relevant: the 'Maxwell balance' of kinetic energy transfer, the 'Boltzmann balance' of excitation/deexcitation, the 'Saha balance' of ionization/recombination and the 'Planck balance' for the interaction of atoms with radiation. Special attention is paid to the distribution function of the ionizing excitation saturation balance. The classification theory of the distribution functions in relation to the underlying balances is supported by experimental evidence in an ionizing argon plasma. The Ar I system provides pertinent support for the theory; experimental facts found in the Ar II system can be interpreted in global terms. (Auth.)
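
    For reference (not part of the original abstract), the Boltzmann and Saha balances correspond in equilibrium to the standard textbook distributions sketched below; the symbols follow the usual conventions and are assumptions of this note, not of the thesis.

```latex
% Textbook forms assumed here for illustration: n_p, n_q are level populations
% with statistical weights g_p, g_q and energies E_p, E_q; n_a, n_i, n_e are
% atom, ion and electron densities; E_ion is the ionization energy.
\frac{n_q}{n_p} = \frac{g_q}{g_p}\,
  \exp\!\left(-\frac{E_q - E_p}{kT}\right)
  \quad\text{(Boltzmann balance)}
\qquad
\frac{n_i n_e}{n_a} = \frac{2 g_i}{g_a}
  \left(\frac{2\pi m_e kT}{h^2}\right)^{3/2}
  \exp\!\left(-\frac{E_{\mathrm{ion}}}{kT}\right)
  \quad\text{(Saha balance)}
```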

  20. A radiological characterization extension for the DORIAN code - Summer Student Report

    CERN Document Server

    van Hoorn, Isabelle

    2016-01-01

    During my stay at CERN as a summer student I was working in the Radiation Protection group. The primary task of my project was to expand the functionality of the DORIAN code, which is used for the prediction and analysis of residual dose rates due to accelerator radiation-induced activation. With the guidance of my supervisor I extended the framework of the DORIAN code to include a radiological classification scheme that is able to compute mass-specific activities for a given irradiation profile and cool-down time and compare these specific activities to given waste characterization limit sets. Additionally, the DORIAN code extension can compute the cool-down time required to stay within a certain limit set threshold for a fixed irradiation profile.
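
    For a single nuclide, the cool-down computation described above reduces to solving an exponential-decay inequality; the sketch below is a generic illustration under that single-nuclide assumption, not code from DORIAN, and the numbers are invented.

```python
# Days of cool-down needed before a mass-specific activity A0 * exp(-ln2*t/T)
# drops below a clearance limit; single-nuclide simplification.
import math

def cooldown_time(a0_bq_per_g, limit_bq_per_g, half_life_days):
    if a0_bq_per_g <= limit_bq_per_g:
        return 0.0
    return half_life_days * math.log(a0_bq_per_g / limit_bq_per_g) / math.log(2)

# e.g. Co-60 (half-life ~1925 d) activated to 50 Bq/g against a 1 Bq/g limit
print(f"{cooldown_time(50.0, 1.0, 1925.3):.0f} days")   # ~10866 days
```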

  1. Web Page Classification Method Using Neural Networks

    Science.gov (United States)

    Selamat, Ali; Omatu, Sigeru; Yanagimoto, Hidekazu; Fujinaka, Toru; Yoshioka, Michifumi

    Automatic categorization is the only viable method to deal with the scaling problem of the World Wide Web (WWW). In this paper, we propose a news web page classification method (WPCM). The WPCM uses a neural network with inputs obtained by both the principal components and class profile-based features (CPBF). Each news web page is represented by the term-weighting scheme. As the number of unique words in the collection set is large, principal component analysis (PCA) has been used to select the most relevant features for the classification. The final output of the PCA is then combined with the feature vectors from the class profile, which contains the most regular words in each class, before feeding them to the neural networks. We have manually selected the most regular words that exist in each class and weighted them using an entropy weighting scheme. A fixed number of regular words from each class is used as a feature vector together with the reduced principal components from the PCA. These feature vectors are then used as the input to the neural networks for classification. The experimental evaluation demonstrates that the WPCM method provides acceptable classification accuracy with the sports news datasets.
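
    A compact sketch of this kind of pipeline (term weighting, PCA-style reduction, neural-network classifier) is shown below using scikit-learn; the toy documents, the use of TruncatedSVD in place of PCA on sparse term vectors, and the omission of the class-profile features are all simplifying assumptions.

```python
# Term weighting -> dimensionality reduction -> neural-network classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["cup final ends in penalties", "striker signs new contract",
        "parliament passes budget", "markets rally on rate cut"]
labels = ["sports", "sports", "politics", "business"]

pipeline = make_pipeline(
    TfidfVectorizer(),             # term-weighting scheme per page
    TruncatedSVD(n_components=3),  # PCA-like reduction of the term space
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
pipeline.fit(docs, labels)
print(pipeline.predict(["team wins the championship match"]))
```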

  2. [Nordic accident classification system used in the Danish National Hospital Registration System to register causes of severe traumatic brain injury].

    Science.gov (United States)

    Engberg, Aase Worsaa; Penninga, Elisabeth Irene; Teasdale, Thomas William

    2007-11-05

    The purpose was to illustrate the use of the accident classification system worked out by the Nordic Medico-Statistical Committee (NOMESCO). In particular, registration of causes of severe traumatic brain injury according to the system, as part of the Danish National Hospital Registration System, was studied. The study comprised 117 patients with very severe traumatic brain injury (TBI) admitted to the Brain Injury Unit of the University Hospital in Hvidovre, Copenhagen, from 1 October 2000 to 30 September 2002. Prospective NOMESCO coding at discharge was compared to independent retrospective coding based on hospital records, and to coding from other wards in the Danish National Hospital Registration System. Furthermore, sets of codes in the Danish National Hospital Registration System for consecutive admissions after a particular accident were compared. Identical results of prospective and independent retrospective coding were found for 65% of 588 single codes, and complete sets of codes for the same accident were identical in only 28% of cases. Sets of codes for the first admission in a hospital course corresponded to retrospective coding at the end of the course in only 17% of cases. Accident code sets from different wards, based on the same injury, were identical in only 7% of cases. Prospective coding by the NOMESCO accident classification system proved problematic, both with regard to correctness and completeness. The system, although logical, seems too complicated compared to the resources invested in the coding. The results of this investigation stress the need for better management and for better instruction of those who carry out the registration.

  3. Thermal-hydraulic code selection for modular high temperature gas-cooled reactors

    Energy Technology Data Exchange (ETDEWEB)

    Komen, E M.J.; Bogaard, J.P.A. van den

    1995-06-01

    In order to study the transient thermal-hydraulic system behaviour of modular high temperature gas-cooled reactors, the thermal-hydraulic computer codes RELAP5, MELCOR, THATCH, MORECA, and VSOP are considered at the Netherlands Energy Research Foundation ECN. This report presents the selection of the most appropriate codes. To cover the range of relevant accidents, a suite of three codes is recommended for analyses of HTR-M and MHTGR reactors. (orig.).

  4. An Internal Audit Perspective on Differences between European Corporate Governance Codes and OECD Principles

    Directory of Open Access Journals (Sweden)

    Raluca Ivan

    2015-12-01

    Full Text Available The main purpose of this research is to realize an analysis, from an internal audit perspective, of European Corporate Governance Codes in regards to the Organization for Economic Cooperation and Development (OECD) Principles of Corporate Governance. The research methodology used a classification of countries by legal regime, trying to obtain a global view of the differences between the European corporate governance codes and the OECD Principles provisions, from internal audit's perspective. The findings suggest that the specific features of the internal audit function, when studying the differences between European Corporate Governance Codes and OECD Principles, lead to different treatment.

  5. Parents' Assessments of Disability in Their Children Using World Health Organization International Classification of Functioning, Disability and Health, Child and Youth Version Joined Body Functions and Activity Codes Related to Everyday Life.

    Science.gov (United States)

    Illum, Niels Ove; Gradel, Kim Oren

    2017-01-01

    To help parents assess disability in their own children using World Health Organization (WHO) International Classification of Functioning, Disability and Health, Child and Youth Version (ICF-CY) code qualifier scoring and to assess the validity and reliability of the data sets obtained. Parents of 162 children with spina bifida, spinal muscular atrophy, muscular disorders, cerebral palsy, visual impairment, hearing impairment, mental disability, or disability following brain tumours performed scoring for 26 body functions qualifiers (b codes) and activities and participation qualifiers (d codes). Scoring was repeated after 6 months. Psychometric and Rasch data analysis was undertaken. The initial and repeated data had Cronbach α of 0.96 and 0.97, respectively. Inter-code correlation was 0.54 (range: 0.23-0.91) and 0.76 (range: 0.20-0.92). The corrected code-total correlations were 0.72 (range: 0.49-0.83) and 0.75 (range: 0.50-0.87). When repeated, the ICF-CY code qualifier scoring showed a correlation R of 0.90. Rasch analysis of the selected ICF-CY code data demonstrated a mean measure of 0.00 and 0.00, respectively. Code qualifier infit mean square (MNSQ) had a mean of 1.01 and 1.00. The mean corresponding outfit MNSQ was 1.05 and 1.01. The ICF-CY code τ thresholds and category measures were continuous when assessed and reassessed by parents. Participating children had a mean of 56 code scores (range: 26-130) before and a mean of 55.9 scores (range: 25-125) after the repeat. Corresponding measures were -1.10 (range: -5.31 to 5.25) and -1.11 (range: -5.42 to 5.36), respectively. Based on measures obtained on the 2 occasions, the correlation coefficient R was 0.84. The child code map showed coherence of ICF-CY codes at each level. There was continuity in covering the range across disabilities. And, first and foremost, the distribution of codes reflected a true continuity in disability, with codes for motor functions activated first, then codes for cognitive functions
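
    As an illustration of the internal-consistency statistic quoted above, the snippet below computes Cronbach's alpha for a score matrix; the ratings are randomly generated stand-ins, not the study's ICF-CY qualifier data.

```python
# Cronbach's alpha for an (n_children x n_codes) matrix of qualifier ratings.
import numpy as np

def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-code variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
severity = rng.integers(0, 5, size=(162, 1))       # child-level severity
ratings = np.clip(severity + rng.integers(-1, 2, (162, 26)), 0, 4)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```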

  6. An atlas of human long non-coding RNAs with accurate 5′ ends

    KAUST Repository

    Hon, Chung-Chau

    2017-02-28

    Long non-coding RNAs (lncRNAs) are largely heterogeneous and functionally uncharacterized. Here, using FANTOM5 cap analysis of gene expression (CAGE) data, we integrate multiple transcript collections to generate a comprehensive atlas of 27,919 human lncRNA genes with high-confidence 5′ ends and expression profiles across 1,829 samples from the major human primary cell types and tissues. Genomic and epigenomic classification of these lncRNAs reveals that most intergenic lncRNAs originate from enhancers rather than from promoters. Incorporating genetic and expression data, we show that lncRNAs overlapping trait-associated single nucleotide polymorphisms are specifically expressed in cell types relevant to the traits, implicating these lncRNAs in multiple diseases. We further demonstrate that lncRNAs overlapping expression quantitative trait loci (eQTL)-associated single nucleotide polymorphisms of messenger RNAs are co-expressed with the corresponding messenger RNAs, suggesting their potential roles in transcriptional regulation. Combining these findings with conservation data, we identify 19,175 potentially functional lncRNAs in the human genome.

  7. COAST code conversion from Cyber to HP

    International Nuclear Information System (INIS)

    Lee, Hae Cho

    1996-04-01

    The transient thermal-hydraulic behavior of the reactor coolant system in a nuclear power plant following loss of coolant flow is analyzed by use of the COAST digital computer code. COAST calculates individual loop flow rates and steam generator pressure drops as a function of time following coast-down of any number of reactor coolant pumps. This report firstly describes the detailed work carried out for installation of COAST on the HP 9000/700 series and the code validation results after installation. Secondly, a series of tasks is also described in relation to the installation of COAST on the Apollo DN10000 series, as well as the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  8. Portable LQCD Monte Carlo code using OpenACC

    Science.gov (United States)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  9. Multinationality and corporate ethics: codes of conduct in the sporting-goods industry

    NARCIS (Netherlands)

    van Tulder, R.J.M.; Kolk, A.

    2001-01-01

    The international operations of firms have a substantial impact on the formulation and implementation of business ethical principles such as codes of conduct. The international sporting goods industry has been a pioneer in setting up codes and thus provides much relevant experience. Different sourcing

  10. Learning scale-variant and scale-invariant features for deep image classification

    NARCIS (Netherlands)

    van Noord, Nanne; Postma, Eric

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales, hampers CNN training and performance, because the task-relevant information varies over spatial

  11. Contribution of non-negative matrix factorization to the classification of remote sensing images

    Science.gov (United States)

    Karoui, M. S.; Deville, Y.; Hosseini, S.; Ouamri, A.; Ducrot, D.

    2008-10-01

    Remote sensing has become an unavoidable tool for better managing our environment, generally by realizing maps of land cover using classification techniques. The classification process requires some pre-processing, especially for data size reduction. The most usual technique is Principal Component Analysis. Another approach consists in regarding each pixel of the multispectral image as a mixture of pure elements contained in the observed area. Using Blind Source Separation (BSS) methods, one can hope to unmix each pixel and to perform the recognition of the classes constituting the observed scene. Our contribution consists in using Non-negative Matrix Factorization (NMF) combined with sparse coding as a solution to BSS, in order to generate new images (which are at least partly separated images), using HRV SPOT images from the Oran area, Algeria. These images are then used as inputs of a supervised classifier integrating textural information. The results of classifications of these "separated" images show a clear improvement (correct pixel classification rate improved by more than 20%) compared to classification of the initial (i.e. non-separated) images. These results show the contribution of NMF as an attractive pre-processing for classification of multispectral remote sensing imagery.
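
    As an illustration of this kind of pre-processing (not the authors' implementation), the sketch below unmixes a synthetic pixels-by-bands matrix with non-negative matrix factorization; the l1 penalty stands in for the sparse-coding constraint, and parameter names follow recent scikit-learn versions.

        import numpy as np
        from sklearn.decomposition import NMF

        # Synthetic 100x100 scene with 4 spectral bands, flattened to (pixels, bands).
        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(size=(100 * 100, 4)))

        # NMF unmixes each pixel into non-negative abundances of a few "pure" sources.
        nmf = NMF(n_components=3, init="nndsvd", alpha_W=0.1, l1_ratio=0.5,
                  max_iter=500, random_state=0)
        abundances = nmf.fit_transform(X)   # (pixels, sources): the "separated" images
        signatures = nmf.components_        # (sources, bands): spectral signatures
        separated_images = abundances.reshape(100, 100, 3)

    The separated images would then feed a supervised classifier, as the abstract describes.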

  12. MAD parsing and conversion code

    International Nuclear Information System (INIS)

    Mokhov, Dmitri N.

    2000-01-01

    The authors describe design and implementation issues encountered while developing an embeddable MAD language parser. Two working applications of the parser are also described, namely a MAD -> C++ converter and a C++ factory. The report contains some relevant details about the parser and examples of converted code. It also describes some of the problems that were encountered and the solutions found for them.

  13. A systematic review and comprehensive classification of pectoralis major tears.

    Science.gov (United States)

    ElMaraghy, Amr W; Devereaux, Moira W

    2012-03-01

    Reported descriptions of pectoralis major (PM) injury are often inconsistent with the actual musculotendinous morphology. The literature lacks an injury classification system that is consistently applied and accurately reflects surgically relevant anatomic injury patterns, making meaningful comparison of treatment techniques and outcomes difficult. Published cases of PM injury between 1822 and 2010 were analyzed to identify incidence and injury patterns and the extent to which these injuries fit into a classification category. Recent work outlining the 3-dimensional anatomy of the PM muscle and tendon, as well as biomechanical studies of PM muscle segments, were reviewed to identify the aspects of musculotendinous anatomy that are clinically and surgically relevant to injury classification. We identified 365 cases of PM injury, with 75% occurring in the last 20 years; of these, 83% were a result of indirect trauma, with 48% occurring during weight-training activities. Injury patterns were not classified in any consistent way in timing, location, or tear extent, particularly with regard to affected muscle segments contributing to the PM's bilaminar tendon. A contemporary injury classification system is proposed that includes (1) injury timing (acute vs chronic), (2) injury location (at the muscle origin or muscle belly, at or between the musculotendinous junction and the tendinous insertion, or bony avulsion), and (3) standardized terminology addressing tear extent (anterior-to-posterior thickness and complete vs incomplete width) to more accurately reflect the musculotendinous morphology of PM injuries and better inform surgical management, rehabilitation, and research. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.

  14. The task-relevant attribute representation can mediate the Simon effect.

    Directory of Open Access Journals (Sweden)

    Dandan Tang

    Researchers have previously suggested a working memory (WM) account of spatial codes; based on this suggestion, the present study carried out three experiments to investigate how the task-relevant attribute representation (verbal or visual) in the typical Simon task affects the Simon effect. Experiment 1 compared the Simon effect between the between- and within-category color conditions, which required subjects to discriminate between red and blue stimuli (presumed to be represented by verbal WM codes because it was easy and fast to name the colors verbally) and to discriminate between two similar green stimuli (presumed to be represented by visual WM codes because it was hard and time-consuming to name the colors verbally), respectively. The results revealed a reliable Simon effect that occurred only in the between-category condition. Experiment 2 assessed the Simon effect by requiring subjects to discriminate between two different isosceles trapezoids (within-category shapes) and to discriminate an isosceles trapezoid from a rectangle (between-category shapes), and the results replicated and expanded the findings of Experiment 1. In Experiment 3, subjects were required to perform both tasks from Experiment 1: in Experiment 3A, the between-category task preceded the within-category task; in Experiment 3B, the task order was reversed. The results showed a reliable Simon effect when subjects represented the task-relevant stimulus attributes by verbal WM encoding. In addition, the response time (RT) distribution analysis for both the between- and within-category conditions of Experiments 3A and 3B showed a decreasing Simon effect as RTs lengthened. Altogether, although the present results are consistent with the temporal coding account, we propose that the Simon effect also depends on the verbal WM representation of the task-relevant stimulus attribute.

  15. Building locally relevant ethics curricula for nursing education in Botswana.

    Science.gov (United States)

    Barchi, F; Kasimatis Singleton, M; Magama, M; Shaibu, S

    2014-12-01

    The goal of this multi-institutional collaboration was to develop an innovative, locally relevant ethics curriculum for nurses in Botswana. Nurses in Botswana face ethical challenges that are compounded by lack of resources, pressures to handle tasks beyond training or professional levels, workplace stress and professional isolation. Capacity to teach nursing ethics in the classroom and in professional practice settings has been limited. A pilot curriculum, including cases set in local contexts, was tested with nursing faculty in Botswana in 2012. Thirty-three per cent of the faculty members indicated they would be more comfortable teaching ethics. A substantial number of faculty members were more likely to introduce the International Council of Nurses Code of Ethics in teaching, practice and mentoring as a result of the training. Based on evaluation data, curricular materials were developed using the Code and the regulatory requirements for nursing practice in Botswana. A web-based repository of sample lectures, discussion cases and evaluation rubrics was created to support the use of the materials. A new master degree course, Nursing Ethics in Practice, has been proposed for fall 2015 at the University of Botswana. The modular nature of the materials and the availability of cases set within the context of clinical nurse practice in Botswana make them readily adaptable to various student academic levels and continuing professional development programmes. The ICN Code of Ethics for Nursing is a valuable teaching tool in developing countries when taught using locally relevant case materials and problem-based teaching methods. The approach used in the development of a locally relevant nursing ethics curriculum in Botswana can serve as a model for nursing education and continuing professional development programmes in other sub-Saharan African countries to enhance use of the ICN Code of Ethics in nursing practice. © 2014 International Council of Nurses.

  16. Genetic coding and gene expression - new Quadruplet genetic coding model

    Science.gov (United States)

    Shankar Singh, Rama

    2012-07-01

    The successful demonstration of the human genome project has opened the door not only to developing personalized medicine and cures for genetic diseases, but may also answer the complex and difficult question of the origin of life; it may make the 21st century a century of the biological sciences as well. Based on the central dogma of biology, genetic codons in conjunction with tRNA play a key role in translating RNA bases into the sequence of amino acids that forms a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons involving three bases of RNA, transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  17. Current and anticipated uses of thermal-hydraulic codes in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Pelayo, F.; Reventos, F. [Consejo de Seguridad Nuclear, Barcelona (Spain)

    1997-07-01

    Spanish activities in the field of Applied Thermal-Hydraulics are steadily increasing as the codes become practicable enough to efficiently sustain engineering decisions in the nuclear power industry. Before reaching this point, a lot of effort was devoted to achieving this goal. This paper briefly describes this process, points at the current applications and draws conclusions on the limitations. Finally, it establishes the applications where the use of T-H codes would be worthwhile in the future; this in turn implies further development of the codes to widen the scope of application and improve the general performance. Owing to the different uses of the codes, the applications mainly come from the authority, industry, universities and research institutions. The main conclusion derived from this paper is that further code development is justified if the following requisites are considered: (1) safety relevance of scenarios not presently covered is established; (2) a substantial gain in margins, or the capability to use realistic assumptions, is obtained; (3) a general consensus on the licensability and methodology for application is reached. The role of the Regulatory Body is stressed, as the most relevant outcome of the project may be related to the evolution of the licensing frame.

  18. Mammogram classification scheme using 2D-discrete wavelet and local binary pattern for detection of breast cancer

    Science.gov (United States)

    Adi Putra, Januar

    2018-04-01

    In this paper, we propose a new mammogram classification scheme to classify breast tissue as normal or abnormal. A feature matrix is generated by applying the Local Binary Pattern to all the detail coefficients from the 2D-DWT of the region of interest (ROI) of a mammogram. Feature selection is used to reduce the dimensionality of the data by discarding features that are not relevant to the classification; in this paper, the F-test and T-test are applied to the extracted features to select the relevant ones. The best features are used in a neural network classifier for classification. In this research we use the MIAS and DDSM databases. In addition to the suggested scheme, competing schemes are also simulated for comparative analysis. It is observed that the proposed scheme performs better with respect to accuracy, specificity and sensitivity. Based on the experiments, the proposed scheme achieves an accuracy of up to 92.71%, while the lowest accuracy obtained is 77.08%.
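
    A minimal sketch of the feature-extraction step described above, using PyWavelets and scikit-image on a synthetic ROI (the databases, feature selection and classifier are omitted):

        import numpy as np
        import pywt
        from skimage.feature import local_binary_pattern

        def lbp_histogram(subband, points=8, radius=1):
            """Histogram of uniform LBP codes over one wavelet detail sub-band."""
            lbp = local_binary_pattern(subband, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2),
                                   density=True)
            return hist

        roi = np.random.default_rng(0).random((128, 128))  # stand-in for a mammogram ROI
        _, (cH, cV, cD) = pywt.dwt2(roi, "haar")           # detail coefficients of the 2D-DWT
        features = np.concatenate([lbp_histogram(c) for c in (cH, cV, cD)])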

  19. International Classification of Primary Care-2 coding of primary care data at the general out-patients' clinic of General Hospital, Lagos, Nigeria.

    Science.gov (United States)

    Olagundoye, Olawunmi Abimbola; van Boven, Kees; van Weel, Chris

    2016-01-01

    Primary care serves as an integral part of the health systems of nations, especially on the African continent. It is the portal of entry for nearly all patients into the health care system. Paucity of accurate data for health statistics remains a challenge in most parts of Africa because of inadequate technical manpower and infrastructure; inadequate quality of data systems contributes to inaccurate data. A simple-to-use classification system such as the International Classification of Primary Care (ICPC) may be a solution to this problem at the primary care level. The aims were to apply ICPC-2 for secondary coding of reasons for encounter (RfE), problems managed and processes of care in a Nigerian primary care setting and, furthermore, to analyze the value of selected presenting symptoms as predictors of the most common diagnoses encountered in the study setting. Content analysis of randomly selected patients' paper records was used for data collection at the end of clinic sessions conducted by family physicians at the general out-patients' clinics. Contents of clinical consultations were secondarily coded with the ICPC-2 and recorded into Excel spreadsheets with fields for sociodemographic data such as age, sex, occupation, religion, and the ICPC elements of an encounter: RfE/complaints, diagnoses/problems, and interventions/processes of care. Four hundred and one encounters considered in this study yielded 915 RfEs, 546 diagnoses, and 1221 processes, implying an average of 2.3 RfEs, 1.4 diagnoses, and 3.0 processes per encounter. The top 10 RfEs, diagnoses/common illnesses, and processes were determined. Through the determination of the probability of the occurrence of certain diseases beginning with an RfE/complaint, the top five diagnoses that resulted from each of the top five RfEs were also obtained. The top five RfEs were: headache, fever, pain general/multiple sites, visual disturbance other, and abdominal pain/cramps general. The top five diagnoses were: Malaria, hypertension

  20. Classification Accuracy Increase Using Multisensor Data Fusion

    Science.gov (United States)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, QuickBird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to the confusion of materials such as different roofs, pavements, roads, etc., and therefore to wrong interpretation and use of classification products. Employment of hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their usage for many applications. Another improvement can be achieved by fusion approaches for multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for very high resolution SAR and multispectral data fusion for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the step of dimensionality reduction. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows for different types of urban objects to be classified into predefined classes of interest with increased accuracy. The comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  1. Evolutionary image simplification for lung nodule classification with convolutional neural networks.

    Science.gov (United States)

    Lückehe, Daniel; von Voigt, Gabriele

    2018-05-29

    Understanding decisions of deep learning techniques is important. Especially in the medical field, the reasons for a decision in a classification task are as crucial as the pure classification results. In this article, we propose a new approach to compute relevant parts of a medical image. Knowing the relevant parts makes it easier to understand decisions. In our approach, a convolutional neural network is employed to learn structures of images of lung nodules. Then, an evolutionary algorithm is applied to compute a simplified version of an unknown image based on the structures learned by the convolutional neural network. In the simplified version, irrelevant parts are removed from the original image. In the results, we show simplified images which allow the observer to focus on the relevant parts. In these images, more than 50% of the pixels are simplified. The simplified pixels do not change the meaning of the images based on the structures learned by the convolutional neural network. An experimental analysis shows the potential of the approach. Besides examples of simplified images, we analyze how the run time develops. Simplified images make it easier to focus on relevant parts and to find reasons for a decision. The combination of an evolutionary algorithm with a learned convolutional neural network is well suited for the simplification task. From a research perspective, it is interesting which areas of the images are simplified and which parts are taken as relevant.
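
    The paper's exact algorithm is not reproduced here, but a toy (1+1)-evolutionary loop conveys the idea: mutate a pixel mask and keep the mutation whenever it simplifies more pixels without changing the classifier's decision. The "classifier" below is a trivial stand-in for a trained CNN, and the data are synthetic.

        import numpy as np

        def evolve_simplification(image, predict, background=0.0, steps=2000, seed=0):
            """Greedy (1+1)-evolutionary search for a mask that blanks out pixels
            while the classifier's label for the simplified image stays unchanged."""
            rng = np.random.default_rng(seed)
            target = predict(image)
            mask = np.zeros(image.shape, dtype=bool)      # True = pixel simplified
            for _ in range(steps):
                cand = mask.copy()
                y, x = rng.integers(image.shape[0]), rng.integers(image.shape[1])
                cand[y, x] = ~cand[y, x]
                simplified = np.where(cand, background, image)
                # Accept only mutations that add simplified pixels and keep the label.
                if cand.sum() > mask.sum() and predict(simplified) == target:
                    mask = cand
            return np.where(mask, background, image), mask

        toy_predict = lambda img: int(img.mean() > 0.5)   # stand-in for a CNN
        img = np.random.default_rng(1).random((16, 16))
        simple_img, mask = evolve_simplification(img, toy_predict)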

  2. Human Rights in Natural Science and Technology Professions’ Codes of Ethics?

    OpenAIRE

    Haugen, Hans Morten

    2013-01-01

    Abstract: No global professional codes for the natural science and technology professions exist. In light of how the application of new technology can affect individuals and communities, this discrepancy warrants greater scrutiny. This article analyzes the most relevant processes and seeks to explain why these processes have not resulted in global codes. Moreover, based on a human rights approach, the article gives recommendations on the future process and content of codes for ...

  3. Interobserver and intraobserver reliability of radiographic classification of acromioclavicular joint dislocations.

    Science.gov (United States)

    Ringenberg, Jonathan D; Foughty, Zachary; Hall, Adam D; Aldridge, J Mack; Wilson, Joseph B; Kuremsky, Marshall A

    2018-03-01

    The classification and treatment of acromioclavicular (AC) joint dislocations remain controversial. The purpose of this study was to determine the interobserver and intraobserver reliability of the Rockwood classification system. We hypothesized poor interobserver and intraobserver reliability, limiting the role of the Rockwood classification system in determining severity of AC joint dislocations and accurately guiding treatment decisions. We identified 200 patients with AC joint injuries using the International Classification of Diseases, Ninth Revision code 831.04. Fifty patients met inclusion criteria. Deidentified radiographs were compiled and presented to 6 fellowship-trained upper extremity orthopedic surgeons. The surgeons classified each patient into 1 of the 6 classification types described by Rockwood. A second review was performed several months later by 2 surgeons. A κ value was calculated to determine the interobserver and intraobserver reliability. The interobserver and intraobserver κ values were fair (κ = 0.278) and moderate (κ = 0.468), respectively. Interobserver results showed that 4 of the 50 radiographic images had a unanimous classification. Intraobserver results for the 2 surgeons showed that 18 of the 50 images were rated the same on second review by the first surgeon and 38 of the 50 images were rated the same on second review by the second surgeon. We found that the Rockwood classification system has limited interobserver and intraobserver reliability. We believe that unreliable classification may account for some of the inconsistent treatment outcomes among patients with similarly classified injuries. We suggest that a better classification system is needed to use radiographic imaging for diagnosis and treatment of AC joint dislocations. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
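
    The agreement statistic used above is readily computed; the sketch below shows pairwise Cohen's κ for two raters on hypothetical Rockwood types (the study's multi-rater design would call for an extension such as Fleiss' κ):

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical Rockwood types (1-6) assigned to ten radiographs by two surgeons.
        surgeon_a = [3, 5, 2, 3, 1, 4, 3, 5, 2, 3]
        surgeon_b = [3, 4, 2, 3, 2, 4, 3, 5, 3, 3]
        print(f"interobserver kappa = {cohen_kappa_score(surgeon_a, surgeon_b):.3f}")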

  4. CESEC III code conversion from Apollo to HP9000

    International Nuclear Information System (INIS)

    Lee, Hae Cho

    1996-01-01

    The CESEC code is a computer program used to analyze the transient behaviour of reactor coolant systems of nuclear power plants. CESEC III is an extension of the original CESEC code intended to cover a wide range of accident analyses, including an ATWS model. Major parameters during the transients are calculated by CESEC. This report firstly describes the detailed work carried out for installation of CESEC III on the Apollo DN10000 and the code validation results after installation. Secondly, a series of work is also described in relation to installation of CESEC III on the HP 9000/700 series, as well as the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  5. CESEC III code conversion from Apollo to HP9000

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hae Cho [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-01-01

    The CESEC code is a computer program used to analyze the transient behaviour of reactor coolant systems of nuclear power plants. CESEC III is an extension of the original CESEC code intended to cover a wide range of accident analyses, including an ATWS model. Major parameters during the transients are calculated by CESEC. This report firstly describes the detailed work carried out for installation of CESEC III on the Apollo DN10000 and the code validation results after installation. Secondly, a series of work is also described in relation to installation of CESEC III on the HP 9000/700 series, as well as the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  6. Coding OSICS sports injury diagnoses in epidemiological studies: does the background of the coder matter?

    Science.gov (United States)

    Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C

    2014-01-01

    Objective: To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and a non-clinical coder. Design: Assessment of intercoder agreement. Setting: Community Australian football. Participants: 1082 standardised injury surveillance records. Main outcome measurements: Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist), with adjudication by a third coder (a biomechanist). Results: The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-digit OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% arose because one coder assigned a non-specific ‘X’ code. The sports physician's code was deemed correct in 53% of cases and the epidemiologist's in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Conclusions: Sports injury research requires accurate identification and classification of specific injuries, and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise. PMID:22919021

  7. Building classification trees to explain the radioactive contamination levels of the plants

    International Nuclear Information System (INIS)

    Briand, B.

    2008-04-01

    The objective of this thesis is the development of a method for identifying the factors that lead to various radioactive contamination levels of plants. The suggested methodology is based on the use of a radioecological model of radionuclide transfer through the environment (the A.S.T.R.A.L. computer code) and a classification-tree method. In particular, to avoid the instability problems of classification trees and to preserve the tree structure, a node-level stabilizing technique is used. Empirical comparisons are carried out between classification trees built by this method (called the R.E.N. method) and those obtained by the C.A.R.T. method. A similarity measure is defined to compare the structure of two classification trees; this measure is used to study the stabilizing performance of the R.E.N. method. The suggested methodology is applied to a simplified contamination scenario. From the results obtained, we can identify the main variables responsible for the various radioactive contamination levels of four leafy vegetables (lettuce, cabbage, spinach and leek). Some rules extracted from these classification trees are usable in a post-accident context. (author)
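
    The R.E.N. stabilizing technique is specific to the thesis, but the baseline it is compared against is an ordinary CART tree. A minimal sketch with synthetic data follows; the feature names and the rule generating the classes are invented for illustration.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        X = rng.random((500, 4))   # hypothetical transfer-model inputs
        # Contamination class (0=low, 1=medium, 2=high) from a made-up rule.
        y = np.digitize(X[:, 0] * 2 + X[:, 1] - X[:, 3], bins=[0.7, 1.6])

        cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(cart, feature_names=["deposition", "rainfall",
                                               "interception", "delay"]))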

  8. Vulnerabilities Classification for Safe Development on Android

    Directory of Open Access Journals (Sweden)

    Ricardo Luis D. M. Ferreira

    2016-06-01

    The global sales market is currently led by devices with the Android operating system. In 2015, more than 1 billion smartphones were sold, of which 81.5% ran the Android platform. In 2017, it is estimated that 267.78 billion applications will be downloaded from Google Play. According to Qian, 90% of applications are vulnerable, despite the recommendations of rules and standards for safe software development. This study presents a classification of vulnerabilities that indicates, for each vulnerability, the security aspect defined by the Brazilian Association of Technical Standards (Associação Brasileira de Normas Técnicas, ABNT) norm NBR ISO/IEC 27002 that is violated, the lines of code that generate the vulnerability and what should be done to avoid them, and the threat agent used by each. This classification allows the identification of possible points of vulnerability, enabling the developer to correct the identified gaps.

  9. Comparison of codes and standards for radiographic inspection

    International Nuclear Information System (INIS)

    Bingoeldag, M. M.; Aksu, M.; Akguen, A. F.

    1995-01-01

    This report compares the procedural requirements and acceptance criteria for radiographic inspections specified in the relevant national and international codes and standards. In particular, a detailed analysis of inspection conditions, such as exposure arrangements and contrast requirements, is given.

  10. Manufactured solutions for the three-dimensional Euler equations with relevance to Inertial Confinement Fusion

    International Nuclear Information System (INIS)

    Waltz, J.; Canfield, T.R.; Morgan, N.R.; Risinger, L.D.; Wohlbier, J.G.

    2014-01-01

    We present a set of manufactured solutions for the three-dimensional (3D) Euler equations. The purpose of these solutions is to allow for code verification against true 3D flows with physical relevance, as opposed to 3D simulations of lower-dimensional problems or manufactured solutions that lack physical relevance. Of particular interest are solutions with relevance to Inertial Confinement Fusion (ICF) capsules. While ICF capsules are designed for spherical symmetry, they are hypothesized to become highly 3D at late time due to phenomena such as Rayleigh–Taylor instability, drive asymmetry, and vortex decay. ICF capsules also involve highly nonlinear coupling between the fluid dynamics and other physics, such as radiation transport and thermonuclear fusion. The manufactured solutions we present are specifically designed to test the terms and couplings in the Euler equations that are relevant to these phenomena. Example numerical results generated with a 3D Finite Element hydrodynamics code are presented, including mesh convergence studies
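
    The construction behind manufactured solutions is mechanical and is easy to show in one dimension (the paper works in 3D): choose smooth fields, substitute them into the governing equations, and take the residual as a source term that the code must then reproduce. A sympy sketch for the 1D Euler equations, with arbitrarily chosen fields:

        import sympy as sp

        x, t = sp.symbols("x t")
        gamma = sp.Rational(7, 5)

        rho = 2 + sp.sin(x - t)                  # manufactured density
        u = sp.Rational(1, 2) + sp.cos(x) / 4    # manufactured velocity
        p = 1 + sp.cos(x + t) / 2                # manufactured pressure
        E = p / (gamma - 1) + rho * u**2 / 2     # total energy

        # Residuals of mass, momentum and energy conservation become source terms.
        S_mass = sp.simplify(sp.diff(rho, t) + sp.diff(rho * u, x))
        S_mom  = sp.simplify(sp.diff(rho * u, t) + sp.diff(rho * u**2 + p, x))
        S_ener = sp.simplify(sp.diff(E, t) + sp.diff(u * (E + p), x))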

  11. Application of United Nations Framework Classification – 2009 (UNFC-2009) to nuclear fuel resources

    International Nuclear Information System (INIS)

    Tulsidas, Harikrishnan; Li Shengxiang; Van Gosen, Bradley

    2014-01-01

    The United Nations Framework Classification for Fossil Fuel and Mineral Reserves and Resources 2009 (UNFC-2009) is a generic, principles-based system applicable to both solid minerals and fluids, with applications in international energy studies, national resource reporting, company project management and financial reporting. It classifies resources along three axes: socio-economic criteria (E), project maturity/technical feasibility (F) and geological knowledge (G). A key goal of UNFC-2009 is to provide a tool to facilitate global communications: it uses a numerical coding system and is thus language-independent in reporting.
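
    A toy sketch of how the three-axis numerical coding might be carried in software; the class name and label format are invented for illustration and are not part of UNFC-2009 itself:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class UNFCCode:
            """Holder for the three UNFC-2009 axes: socio-economic viability (E),
            project feasibility (F) and geological knowledge (G)."""
            e: int
            f: int
            g: int

            def label(self) -> str:
                # Numeric, hence language-independent, as the record notes.
                return f"E{self.e}.F{self.f}.G{self.g}"

        print(UNFCCode(1, 1, 1).label())   # -> "E1.F1.G1"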

  12. Transcriptome classification reveals molecular subtypes in psoriasis

    Directory of Open Access Journals (Sweden)

    Ainali Chrysanthi

    2012-09-01

    Background: Psoriasis is an immune-mediated disease characterised by chronically elevated pro-inflammatory cytokine levels, leading to aberrant keratinocyte proliferation and differentiation. Although certain clinical phenotypes, such as plaque psoriasis, are well defined, it is currently unclear whether there are molecular subtypes that might impact on prognosis or treatment outcomes. Results: We present a pipeline for patient stratification through a comprehensive analysis of gene expression in paired lesional and non-lesional psoriatic tissue samples, compared with controls, to establish differences in RNA expression patterns across all tissue types. Ensembles of decision-tree predictors were employed to cluster psoriatic samples on the basis of gene expression patterns and to reveal the gene expression signatures that best discriminate molecular disease subtypes. This multi-stage procedure was applied to several published psoriasis studies, and a comparison of gene expression patterns across datasets was performed. Conclusion: Overall, classification of psoriasis gene expression patterns revealed distinct molecular subgroups within the clinical phenotype of plaque psoriasis. Enrichment for TGFβ and ErbB signaling pathways, noted in one of the two psoriasis subgroups, suggested that this group may be more amenable to therapies targeting these pathways. Our study highlights the potential biological relevance of using ensemble decision-tree predictors to determine molecular disease subtypes in what may initially appear to be a homogeneous clinical group. The R code used in this paper is available upon request.

  13. Development and test of a classification scheme for human factors in incident reports

    International Nuclear Information System (INIS)

    Miller, R.; Freitag, M.; Wilpert, B.

    1997-01-01

    The Research Center System Safety of the Berlin University of Technology conducted a research project on the analysis of Human Factors (HF) aspects in incidents reported by German nuclear power plants. Based on psychological theories and empirical studies, a classification scheme was developed which permits the identification of human involvement in incidents. The classification scheme was applied in an epidemiological study to a selection of more than 600 HF-relevant incidents. The results allow insights into HF-related problem areas. An additional study proved that application of the classification scheme produces results which are reliable and independent of raters. (author). 13 refs, 1 fig

  14. Site Classification using Multichannel Channel Analysis of Surface Wave (MASW) method on Soft and Hard Ground

    Science.gov (United States)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

    Site classification typically utilizes the average shear wave velocity over the top 30 m of depth, Vs(30). Numerous geophysical methods have been proposed for estimating shear wave velocity, utilizing an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is practised by numerous specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification on soft and hard ground using the MASW method. The subsurface classification was made utilizing the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for shear wave velocity acquisition: one in the state of Pulau Pinang for soft soil and one in Perlis for hard rock. The results suggest that the MASW technique can be used to map the spatial distribution of shear wave velocity, Vs(30), in soil and rock to characterize areas.
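
    The Vs(30) parameter itself is a simple time-averaged velocity, 30 / Σ(h_i/v_i) over the top 30 m. A sketch with an invented soft-soil profile follows; the class boundaries are the commonly cited NEHRP ones and should be checked against the current code text.

        def vs30(thicknesses_m, velocities_ms):
            """Time-averaged shear-wave velocity over the top 30 m."""
            depth, travel_time = 0.0, 0.0
            for h, v in zip(thicknesses_m, velocities_ms):
                h = min(h, 30.0 - depth)        # clip the profile at 30 m
                travel_time += h / v
                depth += h
                if depth >= 30.0:
                    break
            return 30.0 / travel_time

        def nehrp_class(v):
            """Simplified NEHRP site classes by Vs(30) in m/s."""
            for limit, label in [(180, "E"), (360, "D"), (760, "C"), (1500, "B")]:
                if v < limit:
                    return label
            return "A"

        v = vs30([5, 10, 20], [150, 300, 600])   # invented layered profile
        print(round(v), nehrp_class(v))          # -> 327 D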

  15. Quality assurance aspects of the computer code CODAR2

    International Nuclear Information System (INIS)

    Maul, P.R.

    1986-03-01

    The computer code CODAR2 was developed originally for use in connection with the Sizewell Public Inquiry to evaluate the radiological impact of routine discharges to the sea from the proposed PWR. It has subsequently been used to evaluate discharges from Heysham 2. The code was frozen in September 1983, and this note gives details of its verification, validation and evaluation. Areas where either improved modelling methods or more up-to-date information relevant to CODAR2 data bases have subsequently become available are indicated; these will be incorporated in any future versions of the code. (author)

  16. Comparison and classification of all-optical CDMA systems for future telecommunication networks

    Science.gov (United States)

    Iversen, Kay; Hampicke, Dirk

    1995-12-01

    This paper shows the state of the art in fiber-optical code-division multiple access (CDMA). Recent work in this area, on both systems and sequences, is reviewed and analyzed. For that purpose, a classification of systems according to the manner of signal processing and a classification of known (0,1)-sequences are presented. It is shown that, due to the limits of currently available device technology, two techniques in particular are promising for implementation in broadband telecommunication networks: spectral encoding with integrated optical filters, and CDMA in combination with wavelength multiple-access schemes. Further, an overview of some important experiments in this field is given.

  17. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  18. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
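
    Scikit-learn has no relevance vector machine, so the sketch below pairs its SparseCoder with logistic regression as a stand-in for the RVM: each instance is sparse-coded against a dictionary, instance codes are max-pooled into one bag vector, and the bag vectors are classified. Data and dictionary are synthetic.

        import numpy as np
        from sklearn.decomposition import SparseCoder
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        dictionary = rng.normal(size=(32, 10))   # 32 basis vectors, 10-dim features
        dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
        coder = SparseCoder(dictionary=dictionary,
                            transform_algorithm="lasso_lars", transform_alpha=0.1)

        def bag_vector(instances):
            """Sparse-code each instance, then max-pool the codes into one bag feature."""
            return np.abs(coder.transform(instances)).max(axis=0)

        # Synthetic bags of instance features (e.g. concentric-circle descriptors).
        bags = [rng.normal(size=(rng.integers(3, 8), 10)) for _ in range(40)]
        labels = rng.integers(0, 2, size=40)
        X = np.vstack([bag_vector(b) for b in bags])
        clf = LogisticRegression(max_iter=1000).fit(X, labels)  # stand-in for the RVM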

  19. Software Helps Retrieve Information Relevant to the User

    Science.gov (United States)

    Mathe, Natalie; Chen, James

    2003-01-01

    The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.

  20. Identifying and acting on potentially inappropriate care? Inadequacy of current hospital coding for this task.

    Science.gov (United States)

    Cooper, P David; Smart, David R

    2017-06-01

    Recent Australian attempts to facilitate disinvestment in healthcare, by identifying instances of 'inappropriate' care from large Government datasets, are subject to significant methodological flaws. Amongst other criticisms has been the fact that the Government datasets utilized for this purpose correlate poorly with datasets collected by relevant professional bodies. Government data derive from official hospital coding, collected retrospectively by clerical personnel, whilst professional body data derive from unit-specific databases, collected contemporaneously with care by clinical personnel. Assessment of accuracy of official hospital coding data for hyperbaric services in a tertiary referral hospital. All official hyperbaric-relevant coding data submitted to the relevant Australian Government agencies by the Royal Hobart Hospital, Tasmania, Australia for financial year 2010-2011 were reviewed and compared against actual hyperbaric unit activity as determined by reference to original source documents. Hospital coding data contained one or more errors in diagnoses and/or procedures in 70% of patients treated with hyperbaric oxygen that year. Multiple discrete error types were identified, including (but not limited to): missing patients; missing treatments; 'additional' treatments; 'additional' patients; incorrect procedure codes and incorrect diagnostic codes. Incidental observations of errors in surgical, anaesthetic and intensive care coding within this cohort suggest that the problems are not restricted to the specialty of hyperbaric medicine alone. Publications from other centres indicate that these problems are not unique to this institution or State. Current Government datasets are irretrievably compromised and not fit for purpose. Attempting to inform the healthcare policy debate by reference to these datasets is inappropriate. Urgent clinical engagement with hospital coding departments is warranted.

  1. Validity of vascular trauma codes at major trauma centres.

    Science.gov (United States)

    Altoijry, Abdulmajeed; Al-Omran, Mohammed; Lindsay, Thomas F; Johnston, K Wayne; Melo, Magda; Mamdani, Muhammad

    2013-12-01

    The use of administrative databases in vascular injury research has been increasing, but the validity of the diagnosis codes used in this research is uncertain. We assessed the positive predictive value (PPV) of International Classification of Diseases, tenth revision (ICD-10), vascular injury codes in administrative claims data in Ontario. We conducted a retrospective validation study using the Canadian Institute for Health Information Discharge Abstract Database, an administrative database that records all hospital admissions in Canada. We evaluated 380 randomly selected hospital discharge abstracts from the 2 main trauma centres in Toronto, Ont., St. Michael's Hospital and Sunnybrook Health Sciences Centre, between Apr. 1, 2002, and Mar. 31, 2010. We then compared these records with the corresponding patients' hospital charts to assess the level of agreement for procedure coding. We calculated the PPV and sensitivity to estimate the validity of vascular injury diagnosis coding. The overall PPV for vascular injury coding was estimated to be 95% (95% confidence interval [CI] 92.3-96.8). The PPV among code groups for neck, thorax, abdomen, upper extremity and lower extremity injuries ranged from 90.8 (95% CI 82.2-95.5) to 97.4 (95% CI 91.0-99.3), whereas sensitivity ranged from 90% (95% CI 81.5-94.8) to 98.7% (95% CI 92.9-99.8). Administrative claims hospital discharge data based on ICD-10 diagnosis codes have a high level of validity when identifying cases of vascular injury. Observational Study Level III.
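
    The headline statistic is easy to reproduce: counts of 361 confirmed codes out of 380 reviewed give the abstract's PPV of 95% with the same Wilson-score interval (the exact counts are inferred to match, not quoted from the paper).

        from statsmodels.stats.proportion import proportion_confint

        def ppv_with_ci(true_pos, false_pos, alpha=0.05):
            """Positive predictive value with a Wilson-score confidence interval."""
            n = true_pos + false_pos
            lo, hi = proportion_confint(true_pos, n, alpha=alpha, method="wilson")
            return true_pos / n, (lo, hi)

        ppv, (lo, hi) = ppv_with_ci(361, 19)
        print(f"PPV = {ppv:.1%} (95% CI {lo:.1%}-{hi:.1%})")   # 95.0% (92.3%-96.8%)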

  2. A curated database of cyanobacterial strains relevant for modern taxonomy and phylogenetic studies.

    Science.gov (United States)

    Ramos, Vitor; Morais, João; Vasconcelos, Vitor M

    2017-04-25

    The dataset herein described lays the groundwork for an online database of relevant cyanobacterial strains, named CyanoType (http://lege.ciimar.up.pt/cyanotype). It is a database that includes categorized cyanobacterial strains useful for taxonomic, phylogenetic or genomic purposes, with associated information obtained by means of a literature-based curation. The dataset lists 371 strains and represents the first version of the database (CyanoType v.1). Information for each strain includes strain synonymy and/or co-identity, strain categorization, habitat, accession numbers for molecular data, taxonomy and nomenclature notes according to three different classification schemes, hierarchical automatic classification, phylogenetic placement according to a selection of relevant studies (including this), and important bibliographic references. The database will be updated periodically, namely by adding new strains meeting the criteria for inclusion and by revising and adding up-to-date metadata for strains already listed. A global 16S rDNA-based phylogeny is provided in order to assist users when choosing the appropriate strains for their studies.

  3. The effect of hot spots upon swelling of Zircaloy cladding as modelled by the code CANSWEL-2

    International Nuclear Information System (INIS)

    Haste, T.J.; Gittus, J.H.

    1980-12-01

    The code CANSWEL-2 models cladding creep deformation under conditions relevant to a loss-of-coolant accident (LOCA) in a pressurised-water reactor (PWR). It can treat azimuthal non-uniformities in cladding thickness and temperature, and model the mechanical restraint imposed by the nearest neighbouring rods, including situations where cladding is forced into non-circular shapes. The physical and mechanical models used in the code are presented. Applications of the code are described, both as a stand-alone version and as part of the PWR LOCA code MABEL-2. Comparison with a limited number of relevant out-of-reactor creep strain experiments has generally shown encouraging agreement with the data. (author)

  4. An Integrated Approach to Battery Health Monitoring using Bayesian Regression, Classification and State Estimation

    Data.gov (United States)

    National Aeronautics and Space Administration — The application of the Bayesian theory of managing uncertainty and complexity to regression and classification in the form of Relevance Vector Machine (RVM), and to...

  5. A comparison of the International Classification of Functioning, Disability, and Health to the disability tax credit.

    Science.gov (United States)

    Conti-Becker, Angela; Doralp, Samantha; Fayed, Nora; Kean, Crystal; Lencucha, Raphael; Leyshon, Rhysa; Mersich, Jackie; Robbins, Shawn; Doyle, Phillip C

    2007-01-01

    The Disability Tax Credit (DTC) Certification is an assessment tool used to provide Canadians with disability tax relief. The International Classification of Functioning, Disability and Health (ICF) provides a universal framework for defining disability. The purpose of this study was to evaluate the DTC and familiarize occupational therapists with the process of mapping measures to the ICF classification system. Concepts within the DTC were identified and mapped to appropriate ICF codes (Cieza et al., 2005). The DTC was linked to 45 unique ICF codes (16 Body Functions, 19 Activities and Participation, and 8 Environmental Factors). The DTC encompasses various domains of the ICF; however, there is no consideration of Personal Factors, Body Structures, and key aspects of Activities and Participation. Refining the DTC to address these aspects will provide an opportunity for fair and just determinations for those who experience disability.

  6. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics.

    Science.gov (United States)

    Chepelev, Leonid L; Riazanov, Alexandre; Kouznetsov, Alexandre; Low, Hong Sang; Dumontier, Michel; Baker, Christopher J O

    2011-07-26

    The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI) framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of our integrative methodology in the context of

  7. Prototype semantic infrastructure for automated small molecule classification and annotation in lipidomics

    Directory of Open Access Journals (Sweden)

    Dumontier Michel

    2011-07-01

    Background: The development of high-throughput experimentation has led to astronomical growth in biologically relevant lipids and lipid derivatives identified, screened, and deposited in numerous online databases. Unfortunately, efforts to annotate, classify, and analyze these chemical entities have largely remained in the hands of human curators using manual or semi-automated protocols, leaving many novel entities unclassified. Since chemical function is often closely linked to structure, accurate structure-based classification and annotation of chemical entities is imperative to understanding their functionality. Results: As part of an exploratory study, we have investigated the utility of semantic web technologies in automated chemical classification and annotation of lipids. Our prototype framework consists of two components: an ontology and a set of federated web services that operate upon it. The formal lipid ontology we use here extends a part of the LiPrO ontology and draws on the lipid hierarchy in the LIPID MAPS database, as well as literature-derived knowledge. The federated semantic web services that operate upon this ontology are deployed within the Semantic Annotation, Discovery, and Integration (SADI) framework. Structure-based lipid classification is enacted by two core services. Firstly, a structural annotation service detects and enumerates relevant functional groups for a specified chemical structure. A second service reasons over lipid ontology class descriptions using the attributes obtained from the annotation service and identifies the appropriate lipid classification. We extend the utility of these core services by combining them with additional SADI services that retrieve associations between lipids and proteins and identify publications related to specified lipid types. We analyze the performance of SADI-enabled eicosanoid classification relative to the LIPID MAPS classification and reflect on the contribution of

  8. High-Fidelity Coding with Correlated Neurons

    Science.gov (United States)

    da Silveira, Rava Azeredo; Berry, Michael J.

    2014-01-01

    Positive correlations in the activity of neurons are widely observed in the brain. Previous studies have shown these correlations to be detrimental to the fidelity of population codes, or at best marginally favorable compared to independent codes. Here, we show that positive correlations can enhance coding performance by astronomical factors. Specifically, the probability of discrimination error can be suppressed by many orders of magnitude. Likewise, the number of stimuli encoded—the capacity—can be enhanced more than tenfold. These effects do not necessitate unrealistic correlation values, and can occur for populations with a few tens of neurons. We further show that both effects benefit from heterogeneity commonly seen in population activity. Error suppression and capacity enhancement rest upon a pattern of correlation. Tuning of one or several effective parameters can yield a limit of perfect coding: the corresponding pattern of positive correlation leads to a ‘lock-in’ of response probabilities that eliminates variability in the subspace relevant for stimulus discrimination. We discuss the nature of this pattern and we suggest experimental tests to identify it. PMID:25412463

  9. Using DRG to analyze hospital production: a re-classification model based on a linear tree-network topology

    Directory of Open Access Journals (Sweden)

    Achille Lanzarini

    2014-09-01

    Background: Hospital discharge records are widely classified through the Diagnosis Related Group (DRG) system; the version currently used in Italy counts 538 different codes, covering thousands of diagnoses and procedures. These numbers reflect a considerable effort of simplification, yet the current classification system is of little use for evaluating hospital production and performance. Methods: As the case-mix of a given Hospital Unit (HU) is driven by its physicians' specializations, a grouping of DRGs into a specialization-driven classification system was conceived through the analysis of HU discharges and ICD-9-CM codes. We propose a three-level classification, based on the analysis of 1,670,755 Hospital Discharge Cards (HDCs) produced by Lombardy hospitals in 2010; it consists of 32 specializations (e.g. Neurosurgery), 124 sub-specializations (e.g. skull surgery) and 337 sub-sub-specializations (e.g. craniotomy). Results: We give a practical application of the three-level approach, based on the production of a neurosurgical HU. We observe a more synthetic profile of production (1,305 hospital discharges spanning 79 different DRG codes of 16 different MDCs are grouped into a few groups of homogeneous DRG codes), a more informative production comparison (through process-specific comparisons rather than crude or case-mix-standardized comparisons) and a potentially more adequate production planning (the neurosurgical HUs of the same city produce only a limited quota of the whole neurosurgical production, because the same activity can be performed by non-neurosurgical HUs). Conclusion: Our work may help to evaluate hospital production for a rational planning of available resources, blunting information asymmetries between physicians and managers.

  10. A Study on Classification Cases in UAE BNPP Focused on Physical Breakdown Structure for the Establishment of Intelligent Export Control System

    International Nuclear Information System (INIS)

    Yang, Seung Hyo; Tae, Jae Woong; Shin, Dong Hoon

    2013-01-01

    NSG and the international community have not suggested a clear standard for EDP, but have recommended that every member of NSG establish its own standard and control strategic-item exports. As a result, it is hard to sustain objectivity and consistency when examining thousands of items and technologies related to nuclear power plants through the existing methods, which classify strategic items with limited human resources. The long processing time may also impose a financial burden on the related companies. Accordingly, establishing an Intelligent eXport Control System (IXCS) is more necessary than ever, so that a great many strategic items can be processed effectively. To provide basic data for establishing IXCS, this study analyzed classification cases of the Braka Nuclear Power Plant (BNPP) in UAE, focusing on the Physical Breakdown Structure (PBS). Owing to the commercial reactors exported to UAE and the research reactors exported to Jordan, the number of classification requests is rapidly increasing in Korea. Therefore, an IXCS that can support efficient and consistent decisions on a great many classification requests is required. The system will be developed to estimate the likelihood that items are strategic by analyzing their characteristics (e.g. name, function, use, purpose, duplication, relation to nuclear activities, etc.). The result of this study, which analyzed the linkage between PBS codes and items via one of the characteristics cited above (use), will serve as a weighting factor for classifying items when developing the system (e.g. PBS code 431 has weighting factor 0.7). In addition, it can be more efficient to sort the items most likely to be strategic before processing classification requests that are growing exponentially; the linkage between PBS codes and items will accordingly be used as a filtering factor to select such items. Henceforth, the plan is to derive highly reliable statistical results
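
    A hypothetical sketch of how such PBS-code weighting factors could drive pre-sorting of classification requests; the codes, the other weights and the threshold are illustrative, and only PBS code 431 with factor 0.7 is taken from the text:

        # Illustrative PBS-code weights; only code 431 / 0.7 comes from the text.
        PBS_WEIGHTS = {"431": 0.7, "212": 0.3, "118": 0.9}

        def screen(requests):
            """Order classification requests so likely-strategic items come first."""
            scored = sorted(requests,
                            key=lambda r: PBS_WEIGHTS.get(r["pbs"], 0.0),
                            reverse=True)
            return [(r["item"], PBS_WEIGHTS.get(r["pbs"], 0.0)) for r in scored]

        requests = [{"item": "coolant pump", "pbs": "212"},
                    {"item": "control rod drive", "pbs": "118"},
                    {"item": "reactor vessel part", "pbs": "431"}]
        for item, w in screen(requests):
            print(item, w, "-> review first" if w >= 0.5 else "-> fast-track")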

  11. Reframing the Principle of Specialisation in Legitimation Code Theory: A Blended Learning Perspective

    Science.gov (United States)

    Owusu-Agyeman, Yaw; Larbi-Siaw, Otu

    2017-01-01

    This study argues that in developing a robust framework for students in a blended learning environment, Structural Alignment (SA) becomes the third principle of specialisation in addition to Epistemic Relation (ER) and Social Relation (SR). We provide an extended code: (ER+/-, SR+/-, SA+/-) that presents strong classification and framing to the…

  12. Classification of protein-protein interaction full-text documents using text and citation network features.

    Science.gov (United States)

    Kolchinsky, Artemy; Abi-Haidar, Alaa; Kaur, Jasleen; Hamed, Ahmed Abdeen; Rocha, Luis M

    2010-01-01

    We participated (as Team 9) in the Article Classification Task of the Biocreative II.5 Challenge: binary classification of full-text documents relevant for protein-protein interaction. We used two distinct classifiers for the online and offline challenges: 1) the lightweight Variable Trigonometric Threshold (VTT) linear classifier we successfully introduced in BioCreative 2 for binary classification of abstracts and 2) a novel Naive Bayes classifier using features from the citation network of the relevant literature. We supplemented the supplied training data with full-text documents from the MIPS database. The lightweight VTT classifier was very competitive in this new full-text scenario: it was a top-performing submission in this task, taking into account the rank product of the Area Under the interpolated precision and recall Curve, Accuracy, Balanced F-Score, and Matthews Correlation Coefficient performance measures. The novel citation network classifier for the biomedical text mining domain, while not a top performing classifier in the challenge, performed above the central tendency of all submissions, and therefore indicates a promising new avenue to investigate further in bibliome informatics.

  13. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
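
    A minimal sketch of the first scenario (activation vectors from a fully-connected layer as the image features), assuming torchvision's pretrained ResNet-18 as a stand-in CNN; the paper's exact architectures and pipeline may differ:

        import torch
        import torchvision.models as models
        import torchvision.transforms as T
        from PIL import Image

        # Pretrained ResNet-18 as a stand-in CNN (torchvision >= 0.13 weights API).
        model = models.resnet18(weights="IMAGENET1K_V1")
        model.fc = torch.nn.Identity()   # expose the 512-d penultimate activations
        model.eval()

        preprocess = T.Compose([
            T.Resize(256), T.CenterCrop(224), T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
        ])

        def cnn_feature(path):
            """Return one activation vector per image; feed these to a linear SVM."""
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                return model(x).squeeze(0).numpy()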

  14. Nuclear power plants - Instrumentation and control systems important for safety - Classification (International Electrotechnical Commission Standard Publication 1226:1993)

    International Nuclear Information System (INIS)

    Stefanik, J.

    1996-01-01

    This international standard establishes a method of classification of the information and command functions for nuclear power plants, and the I and C systems and equipment that provide those functions, into categories that designate the importance to safety of the functions and of the associated systems and equipment. The resulting classification then determines relevant design criteria. The design criteria are the measures of quality by which the adequacy of each function, and of the associated systems and equipment, is ensured in relation to its importance to plant safety. In this standard, the criteria are those of functionality, reliability, performance, environmental durability and quality assurance. This standard is applicable to all the information and command functions, and the instrumentation and control systems and equipment that provide those functions. The functions, systems and equipment under consideration provide automated protection, closed- or open-loop control, and information to the operating staff. They keep the NPP conditions inside the safe operating envelope and provide automatic actions, or enable manual actions, that mitigate accidents or prevent or minimize radioactive releases to the site or wider environment. The functions, and the associated systems and equipment that fulfill these roles, safeguard the health and safety of the NPP operators and the public. This standard complements, and does not replace or supersede, the Safety Guides and Codes of Practice published by the International Atomic Energy Agency

  15. Land use and land cover classification for rural residential areas in China using soft-probability cascading of multifeatures

    Science.gov (United States)

    Zhang, Bin; Liu, Yueyan; Zhang, Zuyu; Shen, Yonglin

    2017-10-01

    A multifeature soft-probability cascading scheme to solve the problem of land use and land cover (LULC) classification using high-spatial-resolution images to map rural residential areas in China is proposed. The proposed method is used to build midlevel LULC features. Local features are frequently considered as low-level feature descriptors in a midlevel feature learning method. However, spectral and textural features, which are very effective low-level features, are neglected. The acquisition of the dictionary of sparse coding is unsupervised, and this phenomenon reduces the discriminative power of the midlevel feature. Thus, we propose to learn supervised features based on sparse coding, a support vector machine (SVM) classifier, and a conditional random field (CRF) model to utilize the different effective low-level features and improve the discriminability of midlevel feature descriptors. First, three kinds of typical low-level features, namely, dense scale-invariant feature transform, gray-level co-occurrence matrix, and spectral features, are extracted separately. Second, combined with sparse coding and the SVM classifier, the probabilities of the different LULC classes are inferred to build supervised feature descriptors. Finally, the CRF model, which consists of two parts: unary potential and pairwise potential, is employed to construct an LULC classification map. Experimental results show that the proposed classification scheme can achieve impressive performance when the total accuracy reached about 87%.

  16. The Nursing Code of Ethics: Its Value, Its History.

    Science.gov (United States)

    Epstein, Beth; Turner, Martha

    2015-05-31

    To practice competently and with integrity, today's nurses must have in place several key elements that guide the profession, such as an accreditation process for education, a rigorous system for licensure and certification, and a relevant code of ethics. The American Nurses Association has guided and supported nursing practice through creation and implementation of a nationally accepted Code of Ethics for Nurses with Interpretive Statements. This article will discuss ethics in society, professions, and nursing and illustrate how a professional code of ethics can guide nursing practice in a variety of settings. We also offer a brief history of the Code of Ethics, discuss the modern Code of Ethics, and describe the importance of periodic revision, including the inclusive and thorough process used to develop the 2015 Code and a summary of recent changes. Finally, the article provides implications for practicing nurses to assure that this document is a dynamic, useful resource in a variety of healthcare settings.

  17. RELEVANT ISSUES CONCERNING THE RELOCATION OF CIVIL PROCEEDINGS UNDER THE NEW CODE OF CIVIL PROCEDURE (NCPC

    Directory of Open Access Journals (Sweden)

    Andrei Costin GRIMBERG

    2015-07-01

    Full Text Available The adoption of the new Code of Civil Procedure, whose provisions entered into force on 15 February 2013, was conceived in the hope of accelerating judicial proceedings and noticeably simplifying procedures, all with the aim of unifying case law and lowering the costs generated by lawsuits, costs borne both by the State and by the citizens involved in court cases. The implementation of the New Code of Civil Procedure thus sought to guarantee the right to a fair trial within an optimal and predictable time, by conducting trials speedily and avoiding the excessive and unjustified delays that often affected both pending cases and newly introduced petitions. Among the noticeable changes that occurred following the entry into force of the new Code of Civil Procedure, we identify and examine the amended provisions regarding requests for the relocation of proceedings, in terms of the grounds on which such a petition may be formulated and the court competent to hear it.

  18. Code of practice for food handler activities.

    Science.gov (United States)

    Smith, T A; Kanas, R P; McCoubrey, I A; Belton, M E

    2005-08-01

    The food industry regulates various aspects of food handler activities, according to legislation and customer expectations. The purpose of this paper is to provide a code of practice which delineates a set of working standards for food handler hygiene, handwashing, use of protective equipment, wearing of jewellery and body piercing. The code was developed by a working group of occupational physicians with expertise in both food manufacturing and retail, using a risk assessment approach. Views were also obtained from other occupational physicians working within the food industry and the relevant regulatory bodies. The final version of the code (available in full as Supplementary data in Occupational Medicine Online) therefore represents a broad consensus of opinion. The code of practice represents a set of minimum standards for food handler suitability and activities, based on a practical assessment of risk, for application in food businesses. It aims to provide useful working advice to food businesses of all sizes.

  19. The Oxford classification of IgA nephropathy: pathology definitions, correlations, and reproducibility

    NARCIS (Netherlands)

    Roberts, Ian S. D.; Cook, H. Terence; Troyanov, Stéphan; Alpers, Charles E.; Amore, Alessandro; Barratt, Jonathan; Berthoux, Francois; Bonsib, Stephen; Bruijn, Jan A.; Cattran, Daniel C.; Coppo, Rosanna; D'Agati, Vivette; D'Amico, Giuseppe; Emancipator, Steven; Emma, Francesco; Feehally, John; Ferrario, Franco; Fervenza, Fernando C.; Florquin, Sandrine; Fogo, Agnes; Geddes, Colin C.; Groene, Hermann-Josef; Haas, Mark; Herzenberg, Andrew M.; Hill, Prue A.; Hogg, Ronald J.; Hsu, Stephen I.; Jennette, J. Charles; Joh, Kensuke; Julian, Bruce A.; Kawamura, Tetsuya; Lai, Fernand M.; Li, Lei-Shi; Li, Philip K. T.; Liu, Zhi-Hong; Mackinnon, Bruce; Mezzano, Sergio; Schena, F. Paolo; Tomino, Yasuhiko; Walker, Patrick D.; Wang, Haiyan; Weening, Jan J.; Yoshikawa, Nori; Zhang, Hong

    2009-01-01

    Pathological classifications in current use for the assessment of glomerular disease have been typically opinion-based and built on the expert assumptions of renal pathologists about lesions historically thought to be relevant to prognosis. Here we develop a unique approach for the pathological

  20. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

    Science.gov (United States)

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-04-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an Adaboost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
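
    A rough sketch of the pipeline described above (dictionary learning and sparse coding to reduce surface-feature dimensions, then an AdaBoost classifier), using scikit-learn stand-ins and synthetic data rather than the authors' mTBM features:

        import numpy as np
        from sklearn.decomposition import DictionaryLearning
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))       # toy surface-feature matrix
        y = rng.integers(0, 2, size=60)      # toy binary group labels

        # Sparse codes over a learned dictionary: 500 dims -> 20 dims.
        dico = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                                  transform_alpha=0.1, random_state=0)
        codes = dico.fit_transform(X)

        clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(codes, y)
        print("training accuracy:", clf.score(codes, y))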

  1. Self-organizing ontology of biochemically relevant small molecules.

    Science.gov (United States)

    Chepelev, Leonid L; Hastings, Janna; Ennis, Marcus; Steinbeck, Christoph; Dumontier, Michel

    2012-01-06

    The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while providing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. To address this, we present a formal framework based on Semantic Web technologies for the automatic design of chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development. We conclude that the proposed methodology can ease the burden of chemical data annotators and

  2. Self-organizing ontology of biochemically relevant small molecules

    Science.gov (United States)

    2012-01-01

    Background The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while providing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. Results To address this, we present a formal framework based on Semantic Web technologies for the automatic design of chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development. Conclusions We conclude that the proposed methodology can ease the burden of

  3. Self-organizing ontology of biochemically relevant small molecules

    Directory of Open Access Journals (Sweden)

    Chepelev Leonid L

    2012-01-01

    Full Text Available Abstract Background The advent of high-throughput experimentation in biochemistry has led to the generation of vast amounts of chemical data, necessitating the development of novel analysis, characterization, and cataloguing techniques and tools. Recently, a movement to publicly release such data has advanced biochemical structure-activity relationship research, while providing new challenges, the biggest being the curation, annotation, and classification of this information to facilitate useful biochemical pattern analysis. Unfortunately, the human resources currently employed by the organizations supporting these efforts (e.g. ChEBI) are expanding linearly, while new useful scientific information is being released in a seemingly exponential fashion. Compounding this, currently existing chemical classification and annotation systems are not amenable to automated classification, formal and transparent chemical class definition axiomatization, facile class redefinition, or novel class integration, thus further limiting chemical ontology growth by necessitating human involvement in curation. Clearly, there is a need for the automation of this process, especially for novel chemical entities of biological interest. Results To address this, we present a formal framework based on Semantic Web technologies for the automatic design of chemical ontology which can be used for automated classification of novel entities. We demonstrate the automatic self-assembly of a structure-based chemical ontology based on 60 MeSH and 40 ChEBI chemical classes. This ontology is then used to classify 200 compounds with an accuracy of 92.7%. We extend these structure-based classes with molecular feature information and demonstrate the utility of our framework for classification of functionally relevant chemicals. Finally, we discuss an iterative approach that we envision for future biochemical ontology development. Conclusions We conclude that the proposed methodology

  4. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  5. COOLII code conversion from Cyber to HP

    International Nuclear Information System (INIS)

    Lee, Hae Cho; Kim, Hee Kyung

    1996-06-01

    The COOLII computer code determines the time required to cool down the plant from shutdown cooling system initiation conditions to cold shutdown or refueling conditions. The required cooldown time is calculated under various assumptions on shutdown cooling heat exchanger (SDCHX) availability, reactor coolant system (RCS) low-pressure safety injection (LPSI) flowrate, RCS cooldown rates and component cooling system flow rates. This report first describes the detailed work carried out to install COOLII on the HP 9000/700 series, as well as the relevant code validation results. Attached is a report on software verification and validation results. 7 refs. (Author)

  6. Performance of automated and manual coding systems for occupational data: a case study of historical records.

    Science.gov (United States)

    Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S

    2012-03-01

    Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and by the Standardized Occupation and Industry Coding software program. We calculated agreement between the two coding methods in classification into major Census occupational groups. The automated coding software assigned codes to 71% of occupations and 76% of industries. Of the subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding; however, some manual coding is still required for incomplete information. We found substantial variability between coders in the assignment of occupations, although not as large for major groups.
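
    The agreement measures reported above can be reproduced on toy data; the sketch below computes percent agreement and Cohen's kappa for made-up coder assignments, not the ARIC data:

        from collections import Counter

        auto =   ["clerical", "farming", "clerical", "service", "farming", "clerical"]
        manual = ["clerical", "farming", "service",  "service", "farming", "farming"]

        n = len(auto)
        p_obs = sum(a == m for a, m in zip(auto, manual)) / n   # raw agreement
        ca, cm = Counter(auto), Counter(manual)
        p_exp = sum(ca[k] * cm[k] for k in set(ca) | set(cm)) / n**2  # chance level
        kappa = (p_obs - p_exp) / (1 - p_exp)
        print(f"agreement={p_obs:.2f}  kappa={kappa:.2f}")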

  7. Application range affected by software failures in safety relevant instrumentation and control systems of nuclear power plants

    International Nuclear Information System (INIS)

    Jopen, Manuela; Mbonjo, Herve; Sommer, Dagmar; Ulrich, Birte

    2017-03-01

    This report presents results developed within a BMUB-funded research project (Promotion Code 3614R01304). The overall objective of this project was to broaden the knowledge base of GRS regarding software failures and their impact in software-based instrumentation and control (I and C) systems. To this end, relevant definitions and terms in standards and publications (DIN, IEEE standards, IAEA standards, NUREG publications) as well as in the German safety requirements for nuclear power plants were analyzed first. In particular, it was found that the term "software fault" is defined differently, and partly contradictorily, across the literature sources considered. For this reason, a definition of software fault was developed within this project on the basis of the software life cycle of software-based I and C systems, taking into account the various aspects relevant to software faults and their effects. It turns out that software failures result from latent faults in a software-based I and C system, which can lead to non-compliant behavior of the system; a distinction should be made between programming faults and specification faults. In a further step, operational experience with software failures in software-based I and C systems in nuclear facilities and in the non-nuclear sector was investigated. The identified events were analyzed with regard to their causes and impacts, and the analysis results were summarized. Based on the developed definition of software failure and on the COMPSIS classification scheme for events related to software-based I and C systems, the COCS classification scheme was developed to classify events from operating experience with software failures, in which events are classified according to the criteria "cause", "affected system", "impact" and "CCF potential". This classification scheme was applied to evaluate the events identified in the framework of this project

  8. Clinical coding of prospectively identified paediatric adverse drug reactions--a retrospective review of patient records.

    Science.gov (United States)

    Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir

    2014-12-17

    National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Diseases (ICD-10). There are ICD-10 codes which may be associated with adverse drug reactions (ADRs) and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed. A record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%; p < 0.001). ADRs therefore cannot be reliably identified using ICD-10 codes as a single means of detection. Data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.

  9. [QR-Code based patient tracking: a cost-effective option to improve patient safety].

    Science.gov (United States)

    Fischer, M; Rybitskiy, D; Strauß, G; Dietz, A; Dressler, C R

    2013-03-01

    Hospitals are implementing risk management systems to avoid patient or surgery mix-ups, and the trend is to use preoperative checklists. This work deals specifically with a type of patient identification realized by storing patient data on a patient-fixed medium. In 127 ENT surgeries, the data relevant for patient identification were encrypted in a 2D QR code. The code, as a separate document accompanying the patient chart or as a patient wristband, was decrypted in the OR and the patient data were presented visibly to all persons. The decoding time, the correctness of the patient data and the duration of patient identification were compared with traditional patient identification by inspection of the patient chart. A total of 125 QR codes were read. The time for decrypting the QR code was 5.6 s and the time for the screen view for patient identification was 7.9 s, whereas for a comparison group of 75 operations traditional patient identification took 27.3 s. Overall, there were 6 relevant information errors in the two parts of the experiment, a ratio of 0.6% over the 8 relevant data classes per encrypted QR code. This work demonstrates a cost-effective way to technically support patient identification based on electronic patient data and shows that use in the clinical routine is possible. The disadvantage is potential misinformation from incorrect or missing information in the HIS, or from changes to the data after the code was created. QR-code-based patient tracking is seen as a useful complement to the already widely used identification wristband.
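
    A minimal sketch of the encoding step, assuming the third-party Python "qrcode" package; the field names are illustrative, and the encryption used in the study is omitted here:

        import json
        import qrcode

        # Illustrative patient-identification payload (not the study's schema).
        patient = {"id": "P-000123", "name": "Doe, J.", "dob": "1970-01-01",
                   "procedure": "tympanoplasty", "side": "left"}

        img = qrcode.make(json.dumps(patient))   # version/error correction auto-chosen
        img.save("patient_wristband.png")        # print with the chart or wristband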

  10. From Novice to Expert: Problem Solving in ICD-10-PCS Procedural Coding

    Science.gov (United States)

    Rousse, Justin Thomas

    2013-01-01

    The benefits of converting to ICD-10-CM/PCS have been well documented in recent years. One of the greatest challenges in the conversion, however, is how to train the workforce in the code sets. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) has been described as a language requiring higher-level reasoning skills because of the system's increased granularity. Training and problem-solving strategies required for correct procedural coding are unclear. The objective of this article is to propose that the acquisition of rule-based logic will need to be augmented with self-evaluative and critical thinking. Awareness of how this process works is helpful for established coders as well as for a new generation of coders who will master the complexities of the system. PMID:23861674

  11. Mobile communications – on standards, classifications and generations

    DEFF Research Database (Denmark)

    Tadayoni, Reza; Henten, Anders; Sørensen, Jannick Kirk

    The research question addressed in this paper is concerned with the manners in which the general technological progress in mobile communications is presented and the reasons for the differences in these manners of presentation. The relevance of this research question is that the different....... In common parlance, progress in mobile technologies is mostly referred to as generations. In the International Telecommunication Union (ITU), the classification terminology is that of International Mobile Telecommunication (IMT) standards. In the specialized standards body with a central position...... in the standardization of core mobile technologies, namely 3GPP (3rd Generation Partnership Project), the terminology of ‘releases’ is used. In order to address the research question, the paper uses an analytical framework based on the differences and relationships between the concepts of standards, classifications...

  12. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high-accuracy classifier. Hence, classification techniques are very useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  13. Fuzzy Mutual Information Based min-Redundancy and Max-Relevance Heterogeneous Feature Selection

    Directory of Open Access Journals (Sweden)

    Daren Yu

    2011-08-01

    Full Text Available Feature selection is an important preprocessing step in pattern classification and machine learning, and mutual information is widely used to measure relevance between features and decision. However, it is difficult to directly calculate relevance between continuous or fuzzy features using mutual information. In this paper we introduce the fuzzy information entropy and fuzzy mutual information for computing relevance between numerical or fuzzy features and decision. The relationship between fuzzy information entropy and differential entropy is also discussed. Moreover, we combine fuzzy mutual information with the "min-Redundancy-Max-Relevance", "Max-Dependency" and "min-Redundancy-Max-Dependency" algorithms. The performance and stability of the proposed algorithms are tested on benchmark data sets. Experimental results show the proposed algorithms are effective and stable.
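
    For orientation, the crisp min-Redundancy-Max-Relevance scheme that the fuzzy variant generalizes can be sketched with ordinary mutual-information estimators; scikit-learn estimators stand in below, and the paper's fuzzy information measures are not implemented here:

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

        def mrmr(X, y, k):
            """Greedy mRMR: maximize relevance to y, minimize redundancy to picks."""
            relevance = mutual_info_classif(X, y, random_state=0)
            selected = [int(np.argmax(relevance))]
            remaining = set(range(X.shape[1])) - set(selected)
            while len(selected) < k:
                def score(j):
                    red = np.mean([mutual_info_regression(X[:, [s]], X[:, j],
                                                          random_state=0)[0]
                                   for s in selected])
                    return relevance[j] - red
                best = max(remaining, key=score)
                selected.append(best)
                remaining.discard(best)
            return selected

        # usage sketch: X is (n_samples, n_features) numeric, y holds class labels
        # top_features = mrmr(X, y, k=10)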

  14. Simplified diagnostic coding sheet for computerized data storage and analysis in ophthalmology.

    Science.gov (United States)

    Tauber, J; Lahav, M

    1987-11-01

    A review of currently-available diagnostic coding systems revealed that most are either too abbreviated or too detailed. We have compiled a simplified diagnostic coding sheet based on the International Classification of Diseases (ICD-9), which is both complete and easy to use in a general practice. The information is transferred to a computer, which uses the relevant ICD-9 diagnoses as a database; it can be retrieved later for display of patients' problems or analysis of clinical data.

  15. 78 FR 68983 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-11-18

    ...-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing... regulations to allow for the addition of an optional cotton futures classification procedure--identified and... response to requests from the U.S. cotton industry and ICE, AMS will offer a futures classification option...

  16. Improved Management of Part Safety Classification System for Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jin Young; Park, Youn Won; Park, Heung Gyu; Park, Hyo Chan [BEES Inc., Daejeon (Korea, Republic of)

    2016-10-15

    As, in recent years, many quality assurance (QA) related incidents, such as falsely-certified parts and forged documentation, were reported in association with the supply of structures, systems, components and parts to nuclear power plants, the need for better management of the safety classification system was raised, so that classification would be based more on the level of individual parts. At present, Korean nuclear power plants do not develop and apply dedicated procedures for safety classification; rather, the safety classes of parts are determined solely based on the experience of equipment designers. This paper therefore proposes an improved management plan for the safety equipment classification system, with the aim of strengthening quality management for parts. The plan was developed through analysis of newly introduced technical criteria to be applied to parts of nuclear power plants.

  17. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.

  18. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p < 0.001) and the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p < 0.001). The performance of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations.
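
    A compact sketch of the comparison idea (factor the same scan under the three constraints and keep the time-series weights as encodings), using scikit-learn stand-ins and a toy non-negative matrix rather than the authors' fMRI pipeline:

        import numpy as np
        from sklearn.decomposition import NMF, FastICA, DictionaryLearning

        rng = np.random.default_rng(1)
        X = rng.random((120, 300))           # toy scan: 120 TRs x 300 voxels

        # Same data, three constraints; NMF additionally requires X >= 0.
        weights = {
            "nmf":    NMF(n_components=10, max_iter=500).fit_transform(X),
            "ica":    FastICA(n_components=10, random_state=1).fit_transform(X),
            "sparse": DictionaryLearning(n_components=10, transform_alpha=1.0,
                                         random_state=1).fit_transform(X),
        }
        # Each entry is (120, 10): per-TR network weights, which a downstream
        # classifier would use to predict the task condition of each time point.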

  19. Application of the International Classification of Functioning, Disability and Health (ICF) to people with dysphagia following non-surgical head and neck cancer management.

    Science.gov (United States)

    Nund, Rebecca L; Scarinci, Nerina A; Cartmill, Bena; Ward, Elizabeth C; Kuipers, Pim; Porceddu, Sandro V

    2014-12-01

    The International Classification of Functioning, Disability, and Health (ICF) is an internationally recognized framework which allows its user to describe the consequences of a health condition on an individual in the context of their environment. With growing recognition that dysphagia can have broad ranging physical and psychosocial impacts, the aim of this paper was to identify the ICF domains and categories that describe the full functional impact of dysphagia following non-surgical head and neck cancer (HNC) management, from the perspective of the person with dysphagia. A secondary analysis was conducted on previously published qualitative study data which explored the lived experiences of dysphagia of 24 individuals with self-reported swallowing difficulties following HNC management. Categories and sub-categories identified by the qualitative analysis were subsequently mapped to the ICF using the established linking rules to develop a set of ICF codes relevant to the impact of dysphagia following HNC management. The 69 categories and sub-categories that had emerged from the qualitative analysis were successfully linked to 52 ICF codes. The distribution of these codes across the ICF framework revealed that the components of Body Functions, Activities and Participation, and Environmental Factors were almost equally represented. The findings confirm that the ICF is a valuable framework for representing the complexity and multifaceted impact of dysphagia following HNC. This list of ICF codes, which reflect the diverse impact of dysphagia associated with HNC on the individual, can be used to guide more holistic assessment and management for this population.

  20. Systematic review of validated case definitions for diabetes in ICD-9-coded and ICD-10-coded data in adult populations.

    Science.gov (United States)

    Khokhar, Bushra; Jette, Nathalie; Metcalfe, Amy; Cunningham, Ceara Tess; Quan, Hude; Kaplan, Gilaad G; Butalia, Sonia; Rabi, Doreen

    2016-08-05

    With steady increases in 'big data' and data analytics over the past two decades, administrative health databases have become more accessible and are now used regularly for diabetes surveillance. The objective of this study is to systematically review validated International Classification of Diseases (ICD)-based case definitions for diabetes in the adult population. Electronic databases, MEDLINE and Embase, were searched for validation studies where an administrative case definition (using ICD codes) for diabetes in adults was validated against a reference and statistical measures of the performance reported. The search yielded 2895 abstracts, and of the 193 potentially relevant studies, 16 met criteria. Diabetes definition for adults varied by data source, including physician claims (sensitivity ranged from 26.9% to 97%, specificity ranged from 94.3% to 99.4%, positive predictive value (PPV) ranged from 71.4% to 96.2%, negative predictive value (NPV) ranged from 95% to 99.6% and κ ranged from 0.8 to 0.9), hospital discharge data (sensitivity ranged from 59.1% to 92.6%, specificity ranged from 95.5% to 99%, PPV ranged from 62.5% to 96%, NPV ranged from 90.8% to 99% and κ ranged from 0.6 to 0.9) and a combination of both (sensitivity ranged from 57% to 95.6%, specificity ranged from 88% to 98.5%, PPV ranged from 54% to 80%, NPV ranged from 98% to 99.6% and κ ranged from 0.7 to 0.8). Overall, administrative health databases are useful for undertaking diabetes surveillance, but an awareness of the variation in performance being affected by case definition is essential. The performance characteristics of these case definitions depend on the variations in the definition of primary diagnosis in ICD-coded discharge data and/or the methodology adopted by the healthcare facility to extract information from patient records.
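
    The four validation measures summarized above follow directly from a 2x2 table of the administrative case definition against the reference standard; a worked example with invented counts:

        # Hypothetical 2x2 table: case definition vs. reference standard.
        tp, fp, fn, tn = 85, 10, 15, 890

        sensitivity = tp / (tp + fn)   # 85/100  = 0.850
        specificity = tn / (tn + fp)   # 890/900 = 0.989
        ppv = tp / (tp + fp)           # 85/95   = 0.895
        npv = tn / (tn + fn)           # 890/905 = 0.983
        print(f"Sens={sensitivity:.3f} Spec={specificity:.3f} "
              f"PPV={ppv:.3f} NPV={npv:.3f}")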

  1. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Full Text Available Abstract Background Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Results Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated. Conclusion We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre

  2. [Bioethical analysis of the Brazilian Dentistry Code of Ethics].

    Science.gov (United States)

    Pyrrho, Monique; do Prado, Mauro Machado; Cordón, Jorge; Garrafa, Volnei

    2009-01-01

    The Brazilian Dentistry Code of Ethics (DCE), Resolution CFO-71 of May 2006, is an instrument created to guide dentists' behavior in relation to the ethical aspects of professional practice. The purpose of this study is to analyze the above-mentioned code, comparing deontological and bioethical focuses. To this end, an interpretative analysis of the code and of twelve selected texts was made: six of the texts were about bioethics and six about deontology, and the analysis proceeded through the methodological classification of the context units, textual paragraphs and items from the code into the following categories: the referentials of bioethical principlism (autonomy, beneficence, nonmaleficence and justice), technical aspects, and moral virtues related to the profession. Together the four principles represented 22.9%, 39.8% and 54.2% of the content of the DCE, of the deontological texts and of the bioethical texts, respectively. In the DCE, 42% of the items referred to virtues, 40.2% were associated with technical aspects and just 22.9% referred to principles. The virtues related to the professionals and the technical aspects together amounted to 70.1% of the code. Instead of focusing on the patient as the subject of the process of oral health care, the DCE focuses on the professional and is predominantly oriented toward legalistic and corporate aspects.

  3. Learning about the internal structure of categories through classification and feature inference.

    Science.gov (United States)

    Jee, Benjamin D; Wiley, Jennifer

    2014-01-01

    Previous research on category learning has found that classification tasks produce representations that are skewed toward diagnostic feature dimensions, whereas feature inference tasks lead to richer representations of within-category structure. Yet, prior studies often measure category knowledge through tasks that involve identifying only the typical features of a category. This neglects an important aspect of a category's internal structure: how typical and atypical features are distributed within a category. The present experiments tested the hypothesis that inference learning results in richer knowledge of internal category structure than classification learning. We introduced several new measures to probe learners' representations of within-category structure. Experiment 1 found that participants in the inference condition learned and used a wider range of feature dimensions than classification learners. Classification learners, however, were more sensitive to the presence of atypical features within categories. Experiment 2 provided converging evidence that classification learners were more likely to incorporate atypical features into their representations. Inference learners were less likely to encode atypical category features, even in a "partial inference" condition that focused learners' attention on the feature dimensions relevant to classification. Overall, these results are contrary to the hypothesis that inference learning produces superior knowledge of within-category structure. Although inference learning promoted representations that included a broad range of category-typical features, classification learning promoted greater sensitivity to the distribution of typical and atypical features within categories.

  4. A complete electrical hazard classification system and its application

    Energy Technology Data Exchange (ETDEWEB)

    Gordon, Lloyd B [Los Alamos National Laboratory; Cartelli, Laura [Los Alamos National Laboratory

    2009-01-01

    The Standard for Electrical Safety in the Workplace, NFPA 70E, and relevant OSHA electrical safety standards evolved to address the hazards of 60-Hz power that are faced primarily by electricians, linemen, and others performing facility and utility work. This leaves a substantial gap in the management of electrical hazards in Research and Development (R&D) and specialized high voltage and high power equipment. Examples include lasers, accelerators, capacitor banks, electroplating systems, induction and dielectric heating systems, etc. Although all such systems are fed by 50/60 Hz alternating current (ac) power, we find substantial use of direct current (dc) electrical energy, and the use of capacitors, inductors, batteries, and radiofrequency (RF) power. The electrical hazards of these forms of electricity and their systems are different than for 50/60 Hz power. Over the past 10 years there has been an effort to develop a method of classifying all of the electrical hazards found in all types of R&D and utilization equipment. Examples of the variation of these hazards from NFPA 70E include (a) high voltage can be harmless, if the available current is sufficiently low, (b) low voltage can be harmful if the available current/power is high, (c) high voltage capacitor hazards are unique and include severe reflex action, effects on the heart, and tissue damage, and (d) arc flash hazard analysis for dc and capacitor systems are not provided in existing standards. This work has led to a comprehensive electrical hazard classification system that is based on various research conducted over the past 100 years, on analysis of such systems in R&D, and on decades of experience. Initially, national electrical safety codes required the qualified worker only to know the source voltage to determine the shock hazard. Later, as arc flash hazards were understood, the fault current and clearing time were needed. These items are still insufficient to fully characterize all types of

  5. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
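
    A schematic sketch of steps ii)-iv) of that workflow; the directory layout, input format, submit command and output file name are all assumptions for illustration, not the authors' scripts:

        import csv
        import subprocess
        from pathlib import Path

        radii = [0.3, 0.5, 0.7]                  # normalized radii to compare
        for code in ("gyro", "gs2"):
            for r in radii:
                d = Path(f"runs/{code}/r{r}")
                d.mkdir(parents=True, exist_ok=True)
                # placeholder input; real runs translate TRANSP/ONETWO analyses
                (d / "input").write_text(f"# {code} input\nrho = {r}\n")
                subprocess.run(["sbatch", "submit.sh"], cwd=d, check=False)

        # after the jobs finish: one row per (code, radius) for plotting
        rows = [["code", "rho", "gamma"]]
        for code in ("gyro", "gs2"):
            for r in radii:
                out = Path(f"runs/{code}/r{r}/growth_rate.txt")
                if out.exists():
                    rows.append([code, r, out.read_text().strip()])
        with open("linear_comparison.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)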

  6. Discriminative Bayesian Dictionary Learning for Classification.

    Science.gov (United States)

    Akhtar, Naveed; Shafait, Faisal; Mian, Ajmal

    2016-12-01

    We propose a Bayesian approach to learn discriminative dictionaries for sparse representation of data. The proposed approach infers probability distributions over the atoms of a discriminative dictionary using a finite approximation of Beta Process. It also computes sets of Bernoulli distributions that associate class labels to the learned dictionary atoms. This association signifies the selection probabilities of the dictionary atoms in the expansion of class-specific data. Furthermore, the non-parametric character of the proposed approach allows it to infer the correct size of the dictionary. We exploit the aforementioned Bernoulli distributions in separately learning a linear classifier. The classifier uses the same hierarchical Bayesian model as the dictionary, which we present along the analytical inference solution for Gibbs sampling. For classification, a test instance is first sparsely encoded over the learned dictionary and the codes are fed to the classifier. We performed experiments for face and action recognition; and object and scene-category classification using five public datasets and compared the results with state-of-the-art discriminative sparse representation approaches. Experiments show that the proposed Bayesian approach consistently outperforms the existing approaches.

  7. Ebolavirus Classification Based on Natural Vectors

    Science.gov (United States)

    Zheng, Hui; Yin, Changchuan; Hoang, Tung; He, Rong Lucy; Yang, Jie

    2015-01-01

    According to the WHO, ebolaviruses have resulted in 8818 human deaths in West Africa as of January 2015. To better understand the evolutionary relationship of the ebolaviruses and infer virulence from the relationship, we applied the alignment-free natural vector method to classify the newest ebolaviruses. The dataset includes three new Guinea viruses as well as 99 viruses from Sierra Leone. For the viruses of the family of Filoviridae, both genus label classification and species label classification achieve an accuracy rate of 100%. We represented the relationships among Filoviridae viruses by Unweighted Pair Group Method with Arithmetic Mean (UPGMA) phylogenetic trees and found that the filoviruses can be separated well by three genera. We performed the phylogenetic analysis on the relationship among different species of Ebolavirus by their coding-complete genomes and seven viral protein genes (glycoprotein [GP], nucleoprotein [NP], VP24, VP30, VP35, VP40, and RNA polymerase [L]). The topology of the phylogenetic tree by the viral protein VP24 shows consistency with the variations of virulence of ebolaviruses. The result suggests that VP24 be a pharmaceutical target for treating or preventing ebolaviruses. PMID:25803489
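
    The natural vector itself is simple to compute; below is a minimal sketch of the standard 12-dimensional form for DNA (per nucleotide: count, mean position, scaled second central moment), which may differ in detail from the paper's exact formulation:

        def natural_vector(seq):
            """Alignment-free 12-d descriptor of a DNA sequence."""
            seq = seq.upper()
            L, vec = len(seq), []
            for nt in "ACGT":
                pos = [i + 1 for i, c in enumerate(seq) if c == nt]
                n = len(pos)
                if n == 0:
                    vec += [0, 0.0, 0.0]
                    continue
                mu = sum(pos) / n                          # mean position
                d2 = sum((p - mu) ** 2 for p in pos) / (n * L)  # scaled spread
                vec += [n, mu, d2]
            return vec  # classify genomes by distances between these vectors

        print(natural_vector("ATGCGATACG"))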

  8. The Impact of Diagnostic Code Misclassification on Optimizing the Experimental Design of Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Steven J. Schrodi

    2017-01-01

    Diagnostic codes within electronic health record systems can vary widely in accuracy. It has been noted that the accuracy of disease phenotype classification increases monotonically with the number of instances of a particular diagnostic code. As a growing number of health system databases become linked with genomic data, it is critically important to understand the effect of this misclassification on the power of genetic association studies. Here, I investigate the impact of diagnostic code misclassification on the power of genetic association studies with the aim of better informing experimental designs that use health informatics data. The trade-off between (i) the reduced misclassification rates obtained by requiring additional instances of a diagnostic code per individual and (ii) the resulting smaller sample size is explored, and general rules are presented to improve experimental designs.
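
    A toy illustration of the trade-off explored above, under made-up numbers: requiring more instances of a diagnostic code lowers the case misclassification rate (less attenuation of the genetic effect) but shrinks the usable sample. The misclassification rates, sample sizes and allele frequencies below are all placeholders, and the power formula is a simple normal approximation:

        from scipy.stats import norm

        def power_two_prop(p_case, p_ctrl, n_per_group, alpha=5e-8):
            """Approximate power of a two-proportion (allelic) test."""
            diff = abs(p_case - p_ctrl)
            se = (p_case * (1 - p_case) / n_per_group
                  + p_ctrl * (1 - p_ctrl) / n_per_group) ** 0.5
            return norm.sf(norm.isf(alpha / 2) - diff / se)

        p_ctrl, p_true_case = 0.30, 0.36      # true risk-allele frequencies
        settings = [(0.40, 4000), (0.10, 3000), (0.02, 1000)]  # (miscls, n)
        for k, (miscls, n) in enumerate(settings, start=1):
            # misclassified "cases" are really controls, diluting the signal
            p_obs = (1 - miscls) * p_true_case + miscls * p_ctrl
            print(f'>= {k} code(s): n={n}, '
                  f'power={power_two_prop(p_obs, p_ctrl, n):.3f}')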

  9. The Linear Predictive Coding (LPC) Method for Hidden Markov Model (HMM) Classification of Arabic Words Spoken by Indonesian Speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification stage, using a Hidden Markov Model, the voice signal is analyzed and the maximum-likelihood match that can be recognized is searched for. The parameters obtained from the modeling are used for comparison with the speech of the Arabic speakers. From the classification test results, Hidden Markov Models with Linear Predictive Coding feature extraction achieved an average accuracy of 78.6% for test data at a sampling frequency of 8,000 Hz, 80.2% for test data at a sampling frequency of 22,050 Hz, and 79% for test data at a sampling frequency of 44,100 Hz.
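
    A sketch of the classical LPC front end implied above, computing predictor coefficients from the frame autocorrelation via the Levinson-Durbin recursion; the frame length, window and LPC order are assumptions, not taken from the paper:

        import numpy as np

        def lpc(frame, order):
            """LPC coefficients a[0..order] (a[0] = 1) of one frame."""
            r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
                k = -acc / err                      # reflection coefficient
                update = k * a[i - 1::-1]           # k * [a[i-1], ..., a[0]]
                a[1:i + 1] += update
                err *= 1.0 - k * k                  # residual prediction error
            return a

        fs = 8000                                   # assumed sampling rate
        t = np.arange(256) / fs
        frame = np.sin(2 * np.pi * 440 * t) * np.hamming(256)
        print(lpc(frame, order=10))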

  10. Defining active sacroiliitis on magnetic resonance imaging (MRI) for classification of axial spondyloarthritis: a consensual approach by the ASAS/OMERACT MRI group

    DEFF Research Database (Denmark)

    Rudwaleit, M; Jurik, A G; Hermann, K-G A

    2009-01-01

    BACKGROUND: Magnetic resonance imaging (MRI) of sacroiliac joints has evolved as the most relevant imaging modality for diagnosis and classification of early axial spondyloarthritis (SpA) including early ankylosing spondylitis. OBJECTIVES: To identify and describe MRI findings in sacroiliitis and conditions which may mimic SpA. Descriptions of the pathological findings and technical requirements for the appropriate acquisition were formulated. In a consensual approach, MRI findings considered to be essential for sacroiliitis were defined. RESULTS: Active inflammatory lesions such as bone marrow oedema relevant for sacroiliitis have been defined by consensus by a group of rheumatologists and radiologists. These definitions should help in applying correctly the imaging feature "active sacroiliitis by MRI" in the new ASAS classification criteria for axial SpA.

  11. Fuel management and core design code systems for pressurized water reactor neutronic calculations

    International Nuclear Information System (INIS)

    Ahnert, C.; Arayones, J.M.

    1985-01-01

    A package of connected code systems for the neutronic calculations relevant to fuel management and core design has been developed and applied, for validation, to the startup tests and first operating cycle of a 900 MW (electric) PWR. The package includes the MARIA code system for the modeling of the different types of PWR fuel assemblies, the CARMEN code system for detailed few-group diffusion calculations for PWR cores at operating and burnup conditions, and the LOLA code system for core simulation using one-group nodal theory parameters explicitly calculated from the detailed solutions.

  12. Is overall similarity classification less effortful than single-dimension classification?

    Science.gov (United States)

    Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo

    2013-01-01

    It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition--overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.

  13. Code of practice in industrial radiography

    International Nuclear Information System (INIS)

    Karma, S. E. M.

    2010-12-01

    The aim of this research is to develop a draft for a new radiation protection code of practice in industrial radiography, without ignoring the one issued in 1998, that meets current international recommendations. Another aim of this study was to assess the current situation of radiation protection in some of the industrial radiography departments in Sudan. To achieve these aims, a draft code of practice was developed based on relevant international and local recommendations. The developed code covers the following main issues: regulatory responsibilities, the radiation protection program and the design of radiation installations. The practical part of this study included scientific visits to two of the industrial radiography departments in Sudan to assess the degree of compliance of those departments with what is stated in the developed code. The results of these visits revealed that most of the departments do not have an effective radiation protection program, which could expose workers and the public to unnecessary doses. Some recommendations were stated that, if implemented, could improve the status of radiation protection in industrial radiography departments. (Author)

  14. Operational experiences with automated acoustic burst classification by neural networks

    International Nuclear Information System (INIS)

    Olma, B.; Ding, Y.; Enders, R.

    1996-01-01

    Monitoring of Loose Parts Monitoring System sensors for signal bursts associated with metallic impacts of loose parts has proved to be a useful methodology for on-line assessment of the mechanical integrity of components in the primary circuit of nuclear power plants. With the availability of neural networks, new powerful possibilities for classification and diagnosis of burst signals can be realized for acoustic monitoring with the on-line system RAMSES. In order to look for relevant burst signals, an automated classification is needed; that means acoustic signature analysis and assessment has to be performed automatically on-line. A back-propagation neural network based on five pre-calculated signal parameter values has been set up for identification of different signal types. During a three-month monitoring program of medium-operated check valves, burst signals were measured and classified separately according to their cause. The successful results of the three measurement campaigns with automated burst type classification are presented. (author)

  15. Refinement of diagnosis and disease classification in psychiatry.

    Science.gov (United States)

    Lecrubier, Yves

    2008-03-01

    Knowledge concerning the classification of mental disorders progressed substantially with the use of DSM-III/DSM-IV and ICD-10, because these classifications were based on observed data, with precise definitions. These classifications a priori avoided generating definitions related to etiology or treatment response. They are based on a categorical approach in which diagnostic entities share common phenomenological features. Modifications proposed or discussed relate to the weak validity of the classification strategy described above. (a) Disorders are supposed to be independent, but the co-occurrence of two or more disorders is the rule; (b) they are also supposed to be stable, yet anxiety disorders most often precede major depression, and for GAD the age at onset, family history, biology and symptomatology are close to those of depression. As a consequence, broader entities such as a depression-GAD spectrum, a panic-phobias spectrum and an OCD spectrum including eating disorders and pathological gambling are taken into consideration; (c) diagnostic categories use thresholds to delimit a border with normality, which creates "subthreshold" conditions; the relevance of such conditions is well documented. Measuring the presence and severity of different dimensions, independent of a threshold, will improve the relevance of the description of patients' pathology. In addition, this dimensional approach will ease the problems posed by mutually exclusive diagnoses (depression and GAD, schizophrenia and depression); (d) some disorders are based on the coexistence of different dimensions: patients may present only one set of symptoms and have different characteristics, evolution and response to treatment, an example being negative symptoms in schizophrenia; (e) because no etiological model is available and most measures are subjective, progress in objective measures (cognitive, biological) and genetics has created important hopes. None of these measures is pathognomonic and most appear

  16. Ombuds’ corner: Code of Conduct and change of behaviour

    CERN Multimedia

    Vincent Vuillemin

    2012-01-01

    In this series, the Bulletin aims to explain the role of the Ombuds at CERN by presenting practical examples of misunderstandings that could have been resolved by the Ombuds if he had been contacted earlier. Please note that, in all the situations we present, the names are fictitious and used only to improve clarity.   Is our Code of Conduct actually effective in influencing behaviour? Research studies suggest that codes, while necessary, are insufficient as a means of encouraging respectful behaviour among employees. Codes are only a potential means of influencing employee behaviour. For a Code of Conduct to be effective, several elements must be in place. Firstly, there needs to be communication and effective training using relevant examples to make the code real. It should be embraced by the leaders and accepted by the personnel. Finally, it should be embedded in the CERN culture and not seen as a separate entity, which requires serious discussions to raise awareness. In addition, every c...

  17. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    Science.gov (United States)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and, indeed, of remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus, further investigation of aspects such as dimension reduction, data mining, and the rational use of spatial information is needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of hyperspectral images for dimension reduction. Considering that Euclidean distance is ill-suited to spectral measurement, we applied the spectral angle (SA) as a substitute when constructing the neighbourhood graph. Then, the relevance vector machine (RVM) was introduced to implement classification instead of the support vector machine (SVM), for simplicity, generalization and sparsity; a probability result can thereby be obtained rather than a less convincing binary result. Moreover, to take the spatial information of the hyperspectral image into account, we employ a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combined the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images and compared against other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
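
    A sketch of the dimension-reduction step described above: an ISOMAP embedding whose neighbourhood graph uses the spectral angle instead of Euclidean distance. scikit-learn ships no RVM, so a logistic-regression classifier stands in purely to show the probabilistic output; the toy data and all parameter values are assumptions:

        import numpy as np
        from sklearn.manifold import Isomap
        from sklearn.linear_model import LogisticRegression

        def spectral_angle(x, y):
            """Angle between two spectra (insensitive to brightness scaling)."""
            cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
            return np.arccos(np.clip(cos, -1.0, 1.0))

        rng = np.random.default_rng(0)
        spectra = np.abs(rng.standard_normal((300, 100)))   # toy pixels x bands
        labels = (spectra[:, :50].sum(1) > spectra[:, 50:].sum(1)).astype(int)

        # ISOMAP with the spectral angle as the neighbourhood-graph metric.
        embed = Isomap(n_neighbors=10, n_components=8, metric=spectral_angle)
        low_dim = embed.fit_transform(spectra)

        clf = LogisticRegression(max_iter=1000).fit(low_dim, labels)
        print('class probabilities of first pixel:',
              clf.predict_proba(low_dim[:1]))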

  18. Variable Dimension Trellis-Coded Quantization of Sinusoidal Parameters

    DEFF Research Database (Denmark)

    Larsen, Morten Holm; Christensen, Mads G.; Jensen, Søren Holdt

    2008-01-01

    In this letter, we propose joint quantization of the parameters of a set of sinusoids based on the theory of trellis-coded quantization. A particular advantage of this approach is that it allows for joint quantization of a variable number of sinusoids, which is particularly relevant in variable...

  19. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and classification tool in the computer vision and machine learning communities. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multi-dimensional data is very important, as it examines correlations among all the data jointly. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary-order tensor of data. Backed by experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with much fewer parameters as compared to naive extensions of standard CSC to multiple features/channels.

  20. Performance Measures of Diagnostic Codes for Detecting Opioid Overdose in the Emergency Department.

    Science.gov (United States)

    Rowe, Christopher; Vittinghoff, Eric; Santos, Glenn-Milo; Behar, Emily; Turner, Caitlin; Coffin, Phillip O

    2017-04-01

    Opioid overdose mortality has tripled in the United States since 2000 and opioids are responsible for more than half of all drug overdose deaths, which reached an all-time high in 2014. Opioid overdoses resulting in death, however, represent only a small fraction of all opioid overdose events and efforts to improve surveillance of this public health problem should include tracking nonfatal overdose events. International Classification of Diseases (ICD) diagnosis codes, increasingly used for the surveillance of nonfatal drug overdose events, have not been rigorously assessed for validity in capturing overdose events. The present study aimed to validate the use of ICD, 9th revision, Clinical Modification (ICD-9-CM) codes in identifying opioid overdose events in the emergency department (ED) by examining multiple performance measures, including sensitivity and specificity. Data on ED visits from January 1, 2012, to December 31, 2014, including clinical determination of whether the visit constituted an opioid overdose event, were abstracted from electronic medical records for patients prescribed long-term opioids for pain from any of six safety net primary care clinics in San Francisco, California. Combinations of ICD-9-CM codes were validated in the detection of overdose events as determined by medical chart review. Both sensitivity and specificity of different combinations of ICD-9-CM codes were calculated. Unadjusted logistic regression models with robust standard errors and accounting for clustering by patient were used to explore whether overdose ED visits with certain characteristics were more or less likely to be assigned an opioid poisoning ICD-9-CM code by the documenting physician. Forty-four (1.4%) of 3,203 ED visits among 804 patients were determined to be opioid overdose events. Opioid-poisoning ICD-9-CM codes (E850.2-E850.2, 965.00-965.09) identified overdose ED visits with a sensitivity of 25.0% (95% confidence interval [CI] = 13.6% to 37.8%) and
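
    A minimal sketch of the validation arithmetic behind such performance measures: sensitivity and specificity of a code-based flag against chart review, with normal-approximation confidence intervals (the study may well use exact intervals instead). All counts below are placeholders, not the study's data:

        from math import sqrt

        def prop_ci(k, n, z=1.96):
            """Point estimate and Wald 95% CI for a proportion k/n."""
            p = k / n
            half = z * sqrt(p * (1 - p) / n)
            return p, max(0.0, p - half), min(1.0, p + half)

        tp, fn = 18, 42     # overdose visits flagged / missed by codes (toy)
        tn, fp = 2900, 40   # non-overdose visits correctly / wrongly flagged

        sens = prop_ci(tp, tp + fn)
        spec = prop_ci(tn, tn + fp)
        print('sensitivity %.1f%% (%.1f-%.1f%%)' % tuple(100 * v for v in sens))
        print('specificity %.1f%% (%.1f-%.1f%%)' % tuple(100 * v for v in spec))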

  1. The history of female genital tract malformation classifications and proposal of an updated system.

    Science.gov (United States)

    Acién, Pedro; Acién, Maribel I

    2011-01-01

    A correct classification of malformations of the female genital tract is essential to prevent unnecessary and inadequate surgical operations and to compare reproductive results. An ideal classification system should be based on aetiopathogenesis and should suggest the appropriate therapeutic strategy. We conducted a systematic review of relevant articles found in PubMed, Scopus, Scirus and ISI Web of Knowledge, and an analysis of historical collections of 'female genital malformations' and 'classifications'. Of 124 full-text articles assessed for eligibility, 64 were included because they contained original general, partial or modified classifications. All the existing classifications were analysed and grouped. The unification of terms and concepts was also analysed. Traditionally, malformations of the female genital tract have been catalogued and classified as Müllerian malformations due to agenesis, lack of fusion, absence of resorption and lack of posterior development of the Müllerian ducts. The American Fertility Society classification of the late 1980s included seven basic groups of malformations, also considering Müllerian development and the relationship of the malformations to fertility. Other classifications are based on different aspects: functional, defects in vertical fusion, embryological or anatomical (Vagina, Cervix, Uterus, Adnex and Associated Malformation: the VCUAM classification). However, an embryological-clinical classification system seems to be the most appropriate. Accepting the need for a new classification system of genitourinary malformations that considers the experience gained from the application of the current classification systems and the aetiopathogenesis, and that also suggests the appropriate treatment, we proposed an update of our embryological-clinical classification as a new system with six groups of female genitourinary anomalies.

  2. Automatic medical image annotation and keyword-based image retrieval using relevance feedback.

    Science.gov (United States)

    Ko, Byoung Chul; Lee, JiHyeon; Nam, Jae-Yeal

    2012-08-01

    This paper presents a novel multiple-keyword annotation method for medical images, keyword-based medical image retrieval, and a relevance feedback method for enhancing image retrieval performance. For semantic keyword annotation, this study proposes a novel medical image classification method combining local wavelet-based center-symmetric local binary patterns with random forests. For keyword-based image retrieval, our retrieval system uses a confidence score that is assigned to each annotated keyword by combining the probabilities of the random forests with a predefined body-relation graph. To overcome the limitations of keyword-based image retrieval, we combine our image retrieval system with a relevance feedback mechanism based on visual features and a pattern classifier. Compared with other annotation and relevance feedback algorithms, the proposed method shows both improved annotation performance and accurate retrieval results.

  3. Automatic classification of journalistic documents on the Internet

    Directory of Open Access Journals (Sweden)

    Elias OLIVEIRA

    Online journalism is increasing every day. There are many news agencies, newspapers, and magazines using digital publication in the global network. Documents published online are available to users, who use search engines to find them. In order to deliver documents that are relevant to the search, they must be indexed and classified. Due to the vast number of documents published online every day, a lot of research has been carried out to find ways to facilitate automatic document classification. The objective of the present study is to describe an experimental approach for the automatic classification of journalistic documents published on the Internet using the Vector Space Model for document representation. The model was tested on a real journalism database, using algorithms that have been widely reported in the literature. This article also describes the metrics used to assess the performance of these algorithms and their required configurations. The results obtained show the efficiency of the method used and justify further research to find ways to facilitate the automatic classification of documents.

  4. Document representations for classification of short web-page descriptions

    Directory of Open Access Journals (Sweden)

    Radovanović Miloš

    2008-01-01

    Motivated by applying Text Categorization to the classification of Web search results, this paper describes an extensive experimental study of the impact of bag-of-words document representations on the performance of five major classifiers - Naïve Bayes, SVM, Voted Perceptron, kNN and C4.5. The texts, representing short Web-page descriptions sorted into a large hierarchy of topics, are taken from the dmoz Open Directory Web-page ontology, and classifiers are trained to automatically determine the topics which may be relevant to a previously unseen Web-page. Different transformations of input data: stemming, normalization, logtf and idf, together with dimensionality reduction, are found to have a statistically significant improving or degrading effect on classification performance measured by classical metrics - accuracy, precision, recall, F1 and F2. The emphasis of the study is not on determining the best document representation which corresponds to each classifier, but rather on describing the effects of every individual transformation on classification, together with their mutual relationships.

  5. Fatal anaphylaxis registry data support changes in the WHO anaphylaxis mortality coding rules.

    Science.gov (United States)

    Tanno, Luciana Kase; Simons, F Estelle R; Annesi-Maesano, Isabella; Calderon, Moises A; Aymé, Ségolène; Demoly, Pascal

    2017-01-13

    Anaphylaxis is defined as a severe life-threatening generalized or systemic hypersensitivity reaction. The difficulty of coding anaphylaxis fatalities under the World Health Organization (WHO) International Classification of Diseases (ICD) system is recognized as an important reason for under-notification of anaphylaxis deaths. On current death certificates, a limited number of ICD codes are valid as underlying causes of death, and death certificates do not include the word anaphylaxis per se. In this review, we provide evidence supporting the need for changes in the WHO mortality coding rules and call for the addition of anaphylaxis as an underlying cause of death on international death certificates. This publication will be included in support of a formal request to the WHO for this change as part of the 11th ICD revision.

  6. DABIE: a data banking system of integral experiments for reactor core characteristics computer codes

    International Nuclear Information System (INIS)

    Matsumoto, Kiyoshi; Naito, Yoshitaka; Ohkubo, Shuji; Aoyanagi, Hideo.

    1987-05-01

    A data banking system of integral experiments for reactor core characteristics computer codes, DABIE, has been developed to lighten the burden of searching many documents to obtain the experimental data required for verification of reactor core characteristics computer codes. This data banking system, DABIE, has capabilities for the systematic classification, registration and easy retrieval of experimental data. DABIE consists of a data bank and supporting programs. The supporting programs are a data registration program, a data reference program and a maintenance program. The system is designed so that users can easily register information on experimental systems, including figures as well as geometry data and measured data, or retrieve those data interactively through a TSS terminal. This manual describes the system structure, usage and sample applications of this code system. (author)

  7. Progressive Classification Using Support Vector Machines

    Science.gov (United States)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user
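
    A minimal sketch of the two-SVM progressive scheme described above, under stated assumptions: a fast linear SVM supplies the baseline labels and a confidence index (here, the distance to its decision boundary), and a slower RBF SVM then re-labels points in order of increasing confidence. The models and data are illustrative stand-ins, not the article's implementation:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import LinearSVC, SVC

        X_train, y_train = make_classification(n_samples=500, random_state=0)
        X_new, _ = make_classification(n_samples=200, random_state=1)

        fast = LinearSVC(dual=False).fit(X_train, y_train)  # coarse, cheap
        slow = SVC(kernel='rbf').fit(X_train, y_train)      # accurate, costly

        labels = fast.predict(X_new)            # baseline approximate labels
        confidence = np.abs(fast.decision_function(X_new))  # margin distance

        for i in np.argsort(confidence):        # least confident points first
            labels[i] = slow.predict(X_new[i:i + 1])[0]
            # At any point this loop can be interrupted and `labels` used as
            # the current best (partially refined) classification.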

  8. Changing Histopathological Diagnostics by Genome-Based Tumor Classification

    Directory of Open Access Journals (Sweden)

    Michael Kloth

    2014-05-01

    Traditionally, tumors are classified by histopathological criteria, i.e., based on their specific morphological appearances. Consequently, current therapeutic decisions in oncology are strongly influenced by histology rather than by underlying molecular or genomic aberrations. The increase of information on molecular changes, however, enabled by the Human Genome Project and the International Cancer Genome Consortium as well as by manifold advances in molecular biology and high-throughput sequencing techniques, inaugurated the integration of genomic information into disease classification. Furthermore, in some cases it became evident that former classifications needed major revision and adaptation. Such adaptations are often required by understanding the pathogenesis of a disease from a specific molecular alteration, and by using this molecular driver for targeted and highly effective therapies. Altogether, reclassifications should lead to a higher information content of the underlying diagnoses, reflecting their molecular pathogenesis and resulting in optimized and individual therapeutic decisions. The objective of this article is to summarize some particularly important examples of genome-based classification approaches and associated therapeutic concepts. In addition to reviewing disease-specific markers, we focus on potentially therapeutic or predictive markers and the relevance of molecular diagnostics in disease monitoring.

  9. Development of the standard classification system of technical information in the field of RI-biomics and its application to the web system

    International Nuclear Information System (INIS)

    Jang, Sol Ah; Kim, Joo Yeon; Park, Tai Jin

    2014-01-01

    RI-Biomics is a new concept that combines radioisotopes (RI) and Biomics. For efficient collection of information, establishment of a database for the technical information system, and application to that system, there is an increasing need to construct a standard classification system of technical information. In this paper, we summarize the development of the standard classification system of technical information in the field of RI-Biomics and its application to the web system. A draft of the standard classification system was constructed based on the standard classification system of national science and technology in Korea. The final classification system was then derived through reconstruction and a feedback process based on consultation with seven experts. These results were applied to the database of the technical information system after transformation into standard codes. The resulting standard classification system is composed of 5 large classifications and 20 small classifications, which are expected to establish the foundation of the information system by achieving a circular structure of collection, analysis and application of information

  10. Structured Literature Review of Electricity Consumption Classification Using Smart Meter Data

    Directory of Open Access Journals (Sweden)

    Alexander Martin Tureczek

    2017-04-01

    Smart meters for measuring electricity consumption are fast becoming prevalent in households. The meters measure consumption on a very fine scale, usually on a 15 min basis, and the data give unprecedented granularity of consumption patterns at household level. A multitude of papers have emerged utilizing smart meter data for deepening our knowledge of consumption patterns. This paper applies a modification of Okoli's method for conducting structured literature reviews to generate an overview of research in electricity customer classification using smart meter data. The process assessed 2099 papers before identifying 34 significant papers, and highlights three key points: prominent methods, datasets and applications. Three important findings are outlined. First, only a few papers contemplate future applications of the classification, rendering the papers relevant only in a classification setting. Second, the encountered classification methods do not consider correlation or time series analysis when classifying. The identified papers fail to thoroughly analyze the statistical properties of the data, investigations that could potentially improve classification performance. Third, the description of the data utilized is of varying quality, with only 50% acknowledging the impact of missing values on the final sample size. A data description score for assessing the quality of data description has been developed and applied to all papers reviewed.

  11. Ship-Iceberg Discrimination in Sentinel-2 Multispectral Imagery by Supervised Classification

    Directory of Open Access Journals (Sweden)

    Peder Heiselberg

    2017-11-01

    The European Space Agency Sentinel-2 satellites provide multispectral images with pixel sizes down to 10 m. This high resolution allows for fast and frequent detection, classification and discrimination of various objects in the sea, which is relevant in general and specifically for the vast Arctic environment. We analyze several sets of multispectral image data from Denmark and Greenland in fall and winter, and describe a supervised search and classification algorithm based on physical parameters that successfully finds and classifies all objects in the sea with reflectance above a threshold. It discriminates between objects such as ships, islands, wakes, icebergs, ice floes, and clouds with an accuracy better than 90%. Pan-sharpening the infrared bands leads to classification and discrimination of ice floes and clouds better than 95%. For complex images with abundant ice floes or clouds, however, the false alarm rate dominates for small non-sailing boats.

  12. About the application of MCNP4 code in nuclear reactor core design calculations

    International Nuclear Information System (INIS)

    Svarny, J.

    2000-01-01

    This paper provides a short review of the application of the MCNP code to reactor physics calculations performed at SKODA JS. Problems in the criticality safety analysis of systems for the storage and transport of spent fuel are discussed and relevant applications are presented. The application of this standard Monte Carlo code to an accelerator-driven system for LWR waste destruction is shown and conclusions are reviewed. Specific heterogeneous effects in the neutron balance of WWER cores are solved in order to adjust standard design codes. (Authors)

  13. Classification of chemical substances, reactions, and interactions: The effect of expertise

    Science.gov (United States)

    Stains, Marilyne Nicole Olivia

    2007-12-01

    This project explored the strategies that undergraduate and graduate chemistry students engage in when solving classification tasks involving microscopic (particulate) representations of chemical substances and microscopic and symbolic representations of different chemical reactions. We were specifically interested in characterizing the basic features to which students pay attention while classifying, identifying the patterns of reasoning that they follow, and comparing the performance of students with different levels of preparation in the discipline. In general, our results suggest that advanced levels of expertise in chemical classification do not necessarily evolve in a linear and continuous way with academic training. Novice students had a tendency to reduce the cognitive demand of the task and rely on common-sense reasoning; they had difficulties differentiating concepts (conceptual undifferentiation) and based their classification decisions on only one variable (reduction). These ways of thinking led them to consider extraneous features, to pay more attention to explicit or surface features than to implicit features, and to overlook important and relevant features. However, an unfamiliar level of representation (the microscopic level) seemed to trigger deeper and more meaningful thinking processes. On the other hand, expert students classified entities using a specific set of rules that they applied throughout the classification tasks. They considered a larger variety of implicit features, and unfamiliarity with the microscopic level of representation did not affect their reasoning processes. Consequently, novices created numerous small groups, few of them chemically meaningful, while experts created few but large chemically meaningful groups. Novices also had difficulties correctly classifying entities into chemically meaningful groups. Finally, expert chemists in our study used classification schemes that are not necessarily those traditionally taught in the classroom

  14. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  15. EOG feature relevance determination for microsleep detection

    Directory of Open Access Journals (Sweden)

    Golz Martin

    2017-09-01

    Automatic relevance determination (ARD) was applied to two-channel EOG recordings for microsleep event (MSE) recognition. The 10 s immediately before MSE and also before counterexamples of fatigued, but attentive driving were analysed. Two types of signal features were extracted: the maximum cross-correlation (MaxCC) and logarithmic power spectral densities (PSD) averaged in spectral bands of 0.5 Hz width ranging between 0 and 8 Hz. Generalised relevance learning vector quantisation (GRLVQ) was used as the ARD method to show the potential of feature reduction. This is compared to support-vector machines (SVM), in which feature reduction plays a much smaller role. Cross validation yielded mean normalised relevancies of PSD features in the range of 1.6 – 4.9 % and 1.9 – 10.4 % for horizontal and vertical EOG, respectively. MaxCC relevancies were 0.002 – 0.006 % and 0.002 – 0.06 %, respectively. This shows that PSD features of the vertical EOG are indispensable, whereas MaxCC can be neglected. Mean classification accuracies were estimated at 86.6 ± 1.3 % and 92.3 ± 0.2 % for GRLVQ and SVM, respectively. GRLVQ permits objective feature reduction by inclusion of all processing stages, but is not as accurate as SVM.
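
    A sketch of the feature extraction described above: log-PSD averaged in 0.5 Hz bands between 0 and 8 Hz for each EOG channel, plus a normalized maximum cross-correlation between the two channels. The sampling rate, segment length and normalization details are assumptions:

        import numpy as np
        from scipy.signal import welch

        def eog_features(horiz, vert, fs=128):
            """33 features: 16 log-PSD bands per channel + MaxCC."""
            feats = []
            for ch in (horiz, vert):
                f, pxx = welch(ch, fs=fs, nperseg=4 * fs)  # 0.25 Hz resolution
                for lo in np.arange(0.0, 8.0, 0.5):        # 16 bands of 0.5 Hz
                    band = (f >= lo) & (f < lo + 0.5)
                    feats.append(np.log(pxx[band].mean() + 1e-12))
            xc = np.correlate(horiz - horiz.mean(), vert - vert.mean(), 'full')
            feats.append(xc.max() / (np.std(horiz) * np.std(vert) * len(horiz)))
            return np.array(feats)

        rng = np.random.default_rng(0)
        ten_sec = 10 * 128                                 # 10 s at 128 Hz
        print(eog_features(rng.standard_normal(ten_sec),
                           rng.standard_normal(ten_sec)).shape)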

  17. Biodiversity among Lactobacillus helveticus Strains Isolated from Different Natural Whey Starter Cultures as Revealed by Classification Trees

    Science.gov (United States)

    Gatti, Monica; Trivisano, Carlo; Fabrizi, Enrico; Neviani, Erasmo; Gardini, Fausto

    2004-01-01

    Lactobacillus helveticus is a homofermentative thermophilic lactic acid bacterium used extensively for manufacturing Swiss type and aged Italian cheese. In this study, the phenotypic and genotypic diversity of strains isolated from different natural dairy starter cultures used for Grana Padano, Parmigiano Reggiano, and Provolone cheeses was investigated by a classification tree technique. A data set was used that consists of 119 L. helveticus strains, each of which was studied for its physiological characters, as well as surface protein profiles and hybridization with a species-specific DNA probe. The methodology employed in this work allowed the strains to be grouped into terminal nodes without difficult and subjective interpretation. In particular, good discrimination was obtained between L. helveticus strains isolated, respectively, from Grana Padano and from Provolone natural whey starter cultures. The method used in this work allowed identification of the main characteristics that permit discrimination of biotypes. In order to understand what kind of genes could code for phenotypes of technological relevance, evidence that specific DNA sequences are present only in particular biotypes may be of great interest. PMID:14711641

  18. [The importance of classifications in psychiatry].

    Science.gov (United States)

    Lempérière, T

    1995-12-01

    ... variously dubbed "a reductive academic exercise of no relevance to patients", "a dehumanizing labelling system, and a potential source of social and political violence", "a destructive prognostic guide", and so on. Other critics point to various aspects of certain classifications: the abandonment of theoretical concepts, the arbitrary nature of certain categories, the selection of definitions and criteria, the privileged position systematically accorded to the notion of category over that of general dimension. Multiaxial systems such as those proposed in successive versions of DSM, or the classifications used in child psychiatry, go some way towards meeting these criticisms. They go beyond simple labelling and place the patient in an overall medico-psycho-social setting. Nosographical indicators do not constitute an obstacle to psychopathological understanding. No classification is capable of satisfactorily fulfilling all needs, namely those of daily practice, research and health statistics. This has led to the development of specialized diagnostic criteria and instruments, for example in research. It should also be noted in this context that different versions of ICD-10 exist for psychiatrists, general practitioners, researchers and healthcare managers. The greatest danger posed by classifications is the potential reification of hypothetical approaches, arbitrary categorization and the dulling of reflection, all of which have created a need for regular revisions underpinned by field trials.

  19. Sandia National Laboratories analysis code data base

    Science.gov (United States)

    Peterson, C. W.

    1994-11-01

    Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems, and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code 'ownership' and release status, and references describing the physical models and numerical implementation.

  1. Machine Learning Based Localization and Classification with Atomic Magnetometers

    Science.gov (United States)

    Deans, Cameron; Griffin, Lewis D.; Marmugi, Luca; Renzoni, Ferruccio

    2018-01-01

    We demonstrate identification of the position, material, orientation, and shape of objects imaged by an 85Rb atomic magnetometer performing electromagnetic induction imaging supported by machine learning. Machine learning maximizes the information extracted from the images created by the magnetometer, demonstrating the use of hidden data. Localization 2.6 times better than the spatial resolution of the imaging system and successful classification up to 97% are obtained. This circumvents the need to solve the inverse problem and demonstrates the extension of machine learning to diffusive systems, such as low-frequency electrodynamics in media. Automated collection of task-relevant information from quantum-based electromagnetic imaging will have a relevant impact in fields from biomedicine to security.

  2. SAW Classification Algorithm for Chinese Text Classification

    OpenAIRE

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data places ever higher demands on the performance of text categorization, demands that existing classification methods cannot satisfy. Based on a study of existing text classification technology and semantics, this paper puts forward a SAW (Structural Auxiliary Word) algorithm oriented to Chinese text classification. The algorithm uses the special space effect of Chinese text where words...

  3. Terminology and classification aspects of the Bethesda System for Reporting Thyroid Cytopathology

    Directory of Open Access Journals (Sweden)

    G V Semkina

    2012-12-01

    The article is devoted to the relevance of the Bethesda System for Reporting Thyroid Cytopathology. It summarizes recent data on the main differences and advantages of the new classification system. Application of the Bethesda System for Reporting Thyroid Cytopathology leads to increased sensitivity and specificity of FNA.

  4. Some debatable problems of stratigraphic classification

    Science.gov (United States)

    Gladenkov, Yury

    2014-05-01

    Russian geologists perform large-scale geological mapping in Russia and abroad. Therefore we urge unification of the legends of geological maps compiled in different countries. It seems important to continuously organize discussions on problems of stratigraphic classification. 1. The stratigraphic schools (conventionally called "European" and "American") define "stratigraphy" in different ways. The former prefers a "single" stratigraphy that uses data proved by many methods. The latter divides stratigraphy into several independent stratigraphies (litho-, bio-, magneto- and others). Russian geologists classify stratigraphic units into general (chronostratigraphic) and special (in accordance with the method applied). 2. There exist different interpretations of chronostratigraphy. Some stratigraphers suppose that a chronostratigraphic unit corresponds to the rock strata formed during a certain time interval (this is somewhat formalistic because the length of the interval is frequently unspecified). Russian specialists emphasize the historical-geological background of chronostratigraphic units: every stratigraphic unit (global and regional) reflects a stage in the geological evolution of the biosphere and stratisphere. 3. In the view of Russian stratigraphers, the main stratigraphic units may have different extent: a) global (stage), b) regional (regional stage, local zone), and c) local (suite). There is no such hierarchy in the ISG. 4. Russian specialists think that local "lithostratigraphic" units (formations), which may have diachronous boundaries, are not chronostratigraphic units in the strict sense (actually they are lithological bodies). In this case "lithostratigraphy" can be considered as "prostratigraphy" and employed in initial studies of sequences. Therefore, a suite is the main local unit of the Russian Code and differs from a formation, although it is somewhat similar. This does not mean that lithostratigraphy is unnecessary. Usage of marker horizons, members and other bodies is of great help

  5. Classification in context

    DEFF Research Database (Denmark)

    Mai, Jens Erik

    2004-01-01

    This paper surveys the classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary classification research focus on contextual information as the guide for the design and construction of classification schemes.

  6. The assessment of containment codes by experiments simulating severe accident scenarios

    International Nuclear Information System (INIS)

    Karwat, H.

    1992-01-01

    Hitherto, a generally applicable validation matrix for codes simulating containment behaviour under severe accident conditions did not exist. Past code applications have shown that most problems may be traced back to inaccurate thermal-hydraulic parameters governing gas- or aerosol-distribution events. A provisional code-validation matrix is proposed, based on a careful selection of containment experiments performed during recent years in relevant test facilities under various operating conditions. The matrix focuses on the thermal-hydraulic aspects of containment behaviour after severe accidents as a first important step. It may be supplemented in the future by additional suitable tests

  7. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

    KAUST Repository

    Magana-Mora, Arturo

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.

  9. A proposal for a code of ethics for nurse practitioners.

    Science.gov (United States)

    Peterson, Moya; Potter, Robert Lyman

    2004-03-01

    To review established codes for health care professionals and standards of practice for the nurse practitioner (NP) and to utilize these codes and standards, general ethical themes, and a new ethical triangle to propose an ethical code for NPs. Reviews of three generally accepted ethical themes (deontological, teleological, and areteological), the ethical triangle by Potter, the American Academy of Nurse Practitioners (AANP) standards of practice for NPs, and codes of ethics from the American Nurses Association (ANA) and the American Medical Association (AMA). A proposal for a code of ethics for NPs is presented. This code was determined by basic ethical themes and established codes for nursing, formulated by the ANA, and for physicians, formulated by the AMA. The proposal was also developed in consideration of the AANP standards of practice for NPs. The role of the NP is unique in its ethical demands. The authors believe that the expanded practice of NPs presents ethical concerns that are not addressed by the ANA code and yet are relevant to nursing and therefore different than the ethical concerns of physicians. This proposal attempts to broaden NPs' perspective of the role that ethics should hold in their professional lives.

  10. Classification

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “Concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification, including the influences of Aristotle and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, and hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly discussed.

  11. Code-specific learning rules improve action selection by populations of spiking neurons.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2014-08-01

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up with increasing population size as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation as opposed to the classical weight- or node-perturbation as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.

  12. [The psychosomatics of chronic back pain. Classification, aetiology and therapy].

    Science.gov (United States)

    Henningsen, P

    2004-05-01

    An overview is given of the current classification, description and treatment of chronic pain with causally relevant psychological factors. It is based on the "practice guidelines on somatoform disorders" and on a thematically related meta-analysis. The classificatory problems, especially the demarcation between somatoform and other chronic pain, are presented. Additional descriptive dimensions of the relevant psychosocial factors are: pain description, other organically unexplained pain and non-pain symptoms, anxiety and depression, disease conviction and illness behaviour, personality and childhood abuse. A modified psychotherapy for (somatoform) chronic pain is outlined. Finally, this aetiologically oriented psychosomatic-psychiatric approach is compared to psychological coping models for chronic pain.

  13. Silence–breathing–snore classification from snore-related sounds

    International Nuclear Information System (INIS)

    Karunajeewa, Asela S; Abeyratne, Udantha R; Hukins, Craig

    2008-01-01

    Obstructive sleep apnea (OSA) is a highly prevalent disease in which upper airways are collapsed during sleep, leading to serious consequences. Snoring is the earliest symptom of OSA, but its potential in clinical diagnosis is not fully recognized yet. The first task in the automatic analysis of snore-related sounds (SRS) is to segment the SRS data as accurately as possible into three main classes: snoring (voiced non-silence), breathing (unvoiced non-silence) and silence. SRS data are generally contaminated with background noise. In this paper, we present classification performance of a new segmentation algorithm based on pattern recognition. We considered four features derived from SRS to classify samples of SRS into three classes. The features—number of zero crossings, energy of the signal, normalized autocorrelation coefficient at 1 ms delay and the first predictor coefficient of linear predictive coding (LPC) analysis—in combination were able to achieve a classification accuracy of 90.74% in classifying a set of test data. We also investigated the performance of the algorithm when three commonly used noise reduction (NR) techniques in speech processing—amplitude spectral subtraction (ASS), power spectral subtraction (PSS) and short time spectral amplitude (STSA) estimation—are used for noise reduction. We found that noise reduction together with a proper choice of features could improve the classification accuracy to 96.78%, making the automated analysis a possibility
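
    The four features are simple to compute; a hedged sketch follows (frame length, sampling rate and LPC order are assumptions, not taken from the paper):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def srs_features(frame, fs, lpc_order=8):
        """Compute the four segmentation features for one analysis frame."""
        # 1. Number of zero crossings.
        zc = int(np.sum(np.signbit(frame[:-1]) != np.signbit(frame[1:])))
        # 2. Energy of the signal.
        energy = float(np.sum(frame ** 2))
        # 3. Normalized autocorrelation coefficient at 1 ms delay.
        lag = max(1, int(round(1e-3 * fs)))
        r0 = float(np.dot(frame, frame)) + 1e-12
        rho = float(np.dot(frame[:-lag], frame[lag:])) / r0
        # 4. First predictor coefficient from autocorrelation-method LPC
        #    (Yule-Walker equations solved via a Toeplitz system).
        r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                      for k in range(lpc_order + 1)])
        r[0] += 1e-12
        a = solve_toeplitz(r[:-1], r[1:])
        return zc, energy, rho, float(a[0])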

  14. Possible Relevance of Receptor-Receptor Interactions between Viral- and Host-Coded Receptors for Viral-Induced Disease

    Directory of Open Access Journals (Sweden)

    Luigi F. Agnati

    2007-01-01

    It has been demonstrated that some viruses, such as the cytomegalovirus, code for G-protein coupled receptors not only to elude the immune system, but also to redirect cellular signaling in the receptor networks of the host cells. In view of the existence of receptor-receptor interactions, the hypothesis is introduced that these viral-coded receptors not only operate as constitutively active monomers, but can also affect the function of other receptors by interacting with receptors of the host cell. Furthermore, it is suggested that viruses could also insert not single receptors (monomers), but clusters of receptors (receptor mosaics), altering the cell metabolism in a profound way. The prevention of viral receptor-induced changes in host receptor networks may give rise to novel antiviral drugs that counteract viral-induced disease.

  15. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    Science.gov (United States)

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  16. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow the construction of a model for a binary classification problem and establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability …
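
    A minimal sketch of the general idea (not the authors' toolbox): fit a sparse elastic-net model on discretized features, then round the surviving coefficients into integer points to obtain an interpretable interval-based score. All data, parameters and names below are illustrative assumptions.

    import numpy as np
    from sklearn.preprocessing import KBinsDiscretizer
    from sklearn.linear_model import LogisticRegression

    def interval_coded_score(X, y, n_bins=4, scale=5.0):
        """Illustrative only: per-interval integer points for a binary task."""
        # Score intervals rather than raw feature values, as ICS does.
        binner = KBinsDiscretizer(n_bins=n_bins, encode="onehot-dense",
                                  strategy="quantile")
        Xb = binner.fit_transform(X)
        # The elastic-net penalty drives irrelevant intervals to zero points.
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=0.5, max_iter=5000)
        clf.fit(Xb, y)
        points = np.rint(scale * clf.coef_.ravel()).astype(int)
        return binner, points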

  17. Job coding (PCS 2003): feedback from a study conducted in an Occupational Health Service

    Science.gov (United States)

    Henrotin, Jean-Bernard; Vaissière, Monique; Etaix, Maryline; Malard, Stéphane; Dziurla, Mathieu; Lafon, Dominique

    2016-10-19

    Aim: To examine the quality of manual job coding carried out by occupational health teams with access to a software application that provides assistance in job and business sector coding (CAPS). Methods: Data from a study conducted in an Occupational Health Service were used to examine the first-level coding of 1,495 jobs by occupational health teams according to the French job classification entitled "PCS - Professions and socio-professional categories" (INSEE, 2003 version). A second level of coding was also performed by an experienced coder and the first- and second-level codes were compared. Agreement between the two coding systems was studied using the kappa coefficient (κ) and frequencies were compared by chi-square tests. Results: Missing data or incorrect codes were observed for 14.5% of social groups (1 digit) and 25.7% of job codes (4 digits). While agreement between the first two levels of PCS 2003 appeared to be satisfactory (κ=0.73 and κ=0.75), imbalances in reassignment flows were indeed noted. The divergent job code rate was 48.2%. Variation in the frequency of socio-occupational variables was as high as 8.6% after correcting for missing data and divergent codes. Conclusions: Compared with other studies, the use of the CAPS tool appeared to provide effective coding assistance. However, our results indicate that job coding based on PCS 2003 should be conducted using ancillary data by personnel trained in the use of this tool.
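
    For illustration, the agreement statistic used here (Cohen's kappa between the teams' first-level codes and the expert's second-level codes) can be computed with scikit-learn; the codes below are invented placeholders, not study data.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical first-level (team) and second-level (expert) PCS codes.
    team_codes   = ["23", "35", "54", "62", "54"]
    expert_codes = ["23", "38", "54", "62", "56"]

    # Chance-corrected agreement; the study reported kappa = 0.73-0.75
    # for the first two levels of PCS 2003.
    print(cohen_kappa_score(team_codes, expert_codes))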

  18. Modern Methods of Multidimensional Data Visualization: Analysis, Classification, Implementation, and Applications in Technical Systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2016-01-01

    The article deals with theoretical and practical aspects of visualizing multidimensional data as an effective means of multivariate analysis of systems. Several classifications of visualization techniques are proposed, according to data types, visualization objects, and the method of transformation of coordinates and data; the classifications are presented as charts with links to the relevant work. The article also proposes two classifications of modern trends in display technology, including the integration of visualization techniques as one modern development trend, along with the introduction of interactive technologies and the dynamics of development processes. It describes approaches to the visualization problem driven by the needs of relevant tasks such as information retrieval in global networks, the development of bioinformatics, the study and control of business processes, and the development of regions. The article highlights modern visualization tools that could improve the efficiency of multivariate analysis and the search for solutions in multi-objective optimization of technical systems, but are not yet actively used for such studies, such as horizontal graphs and quantile-quantile plots. The paper proposes using choropleth maps, traditionally employed in cartography, for the simultaneous presentation of the distribution of several criteria in space. It notes that graph visualizations in network applications could be used more actively to describe control systems. The article suggests using heat maps to provide a graphical representation of the sensitivity of system quality criteria under variation of parameters (multivariate analysis of technical systems). It also notes that it is useful to extend such heat maps to the task of estimating identification quality when constructing system models. …
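
    As a simple illustration of the last suggestion, a sensitivity heat map can be produced with matplotlib; the sensitivity matrix below is randomly generated, not taken from the article.

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    sensitivity = rng.random((4, 6))   # |d criterion / d parameter|, synthetic

    fig, ax = plt.subplots()
    im = ax.imshow(sensitivity, aspect="auto", cmap="viridis")
    ax.set_xlabel("design parameter")
    ax.set_ylabel("quality criterion")
    fig.colorbar(im, label="sensitivity")
    plt.show()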

  19. 78 FR 54970 - Cotton Futures Classification: Optional Classification Procedure

    Science.gov (United States)

    2013-09-09

    ... Service 7 CFR Part 27 [AMS-CN-13-0043] RIN 0581-AD33 Cotton Futures Classification: Optional Classification Procedure AGENCY: Agricultural Marketing Service, USDA. ACTION: Proposed rule. SUMMARY: The ... optional cotton futures classification procedure, identified and known as "registration" by the U.S. ...

  20. [The G-DRG System 2009 - relevant changes for rheumatology].

    Science.gov (United States)

    Fiori, W; Liedtke-Dyong, A; Lakomek, H-J; Buscham, K; Lehmann, H; Liman, W; Fuchs, A-K; Bessler, F; Roeder, N

    2010-05-01

    The following article presents the major general and specific changes in the G-DRG system, in the classification systems for diagnoses and procedures as well as for the billing process for 2010. Since the G-DRG system is primarily a tool for the redistribution of resources, every hospital needs to analyze the economic effects of the changes by applying the G-DRG transition-grouper to its own cases. Depending on their clinical focus, rheumatological departments may experience positive or negative consequences from the adjustments. In addition, relevant current case law is considered.

  1. Improving the quality of clinical coding: a comprehensive audit model

    Directory of Open Access Journals (Sweden)

    Hamid Moghaddasi

    2014-04-01

    Introduction: The review of medical records with the aim of assessing the quality of codes has long been conducted in different countries. Auditing medical coding, as an instructive approach, can help to review the quality of codes objectively using defined attributes, and this in turn leads to improvement of the quality of codes. Method: The current study aimed to present a model for auditing the quality of clinical codes. The audit model was formed after reviewing other audit models, considering their strengths and weaknesses. A clear definition was presented for each quality attribute and more detailed criteria were then set for assessing the quality of codes. Results: The audit tool, based on the quality attributes of legibility, relevancy, completeness, accuracy, definition and timeliness, led to the development of an audit model for assessing the quality of medical coding. The Delphi technique was then used to confirm the validity of the model. Conclusion: The comprehensive audit model designed here could provide a reliable and valid basis for assessing the quality of codes, considering more quality attributes and their clear definitions. The inter-observer check suggested in the auditing method is of particular importance for ensuring the reliability of coding.

  2. Audio Mining with emphasis on Music Genre Classification

    DEFF Research Database (Denmark)

    Meng, Anders

    2004-01-01

    Audio is an important part of our daily life; basically it enriches our impression of the world around us, whether through communication, music, danger detection, etc. Currently the field of Audio Mining, which here includes the areas of music genre, music recognition/retrieval and playlist generation … Around the world, the problem of detecting environments from the input audio is researched so as to increase the quality of life of the hearing-impaired. Basically there is a lot of work within the field of audio mining. The presentation will mainly focus on music genre classification, where we have a fixed set of genres … to choose from. Basically every audio mining system consists more or less of the same stages as in the music genre setting. My research so far has mainly focussed on finding relevant features for music genre classification living at different timescales, using early and late information fusion. It has …

  3. Radon Protection in the Technical Building Code

    International Nuclear Information System (INIS)

    Frutos, B.; Garcia, J. P.; Martin, J. L.; Olaya, M.; Serrano, J I.; Suarez, E.; Fernandez, J. A.; Rodrigo, F.

    2003-01-01

    Building construction in areas where the ground has high levels of radon contamination requires the incorporation of certain measures to prevent this gas from accumulating inside buildings. These measures should be considered primarily in the design and construction phases, and should take into account the area of the country where construction will take place, depending on the potential risk of radon entry. Within the Technical Building Code, radon protection has been addressed through a general classification of the country, and of the specific areas where building is to take place, into different risk categories, and through the introduction of building techniques appropriate for each area. (Author) 17 refs

  4. Multimodal Task-Driven Dictionary Learning for Image Classification

    Science.gov (United States)

    2015-12-18

    … recognition, multi-view face recognition, multi-view action recognition, and multimodal biometric recognition. It is also shown that, compared to the … improvement in several multi-task learning applications such as target classification, biometric recognition, and multi-view face recognition [12], [14], [17] … relevant samples from other modalities for a given unimodal query. (Figure: per-modality dictionaries D1 … DS and sparse codes α1 … αS for index-finger, thumb and iris inputs x1 … xS.)

  5. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  6. Improved RMR Rock Mass Classification Using Artificial Intelligence Algorithms

    Science.gov (United States)

    Gholami, Raoof; Rasouli, Vamegh; Alimoradi, Andisheh

    2013-09-01

    Rock mass classification systems such as rock mass rating (RMR) are very reliable means to provide information about the quality of rocks surrounding a structure as well as to propose suitable support systems for unstable regions. Many correlations have been proposed to relate measured quantities such as wave velocity to rock mass classification systems to limit the associated time and cost of conducting the sampling and mechanical tests conventionally used to calculate RMR values. However, these empirical correlations have been found to be unreliable, as they usually overestimate or underestimate the RMR value. The aim of this paper is to compare the results of RMR classification obtained from the use of empirical correlations versus machine-learning methodologies based on artificial intelligence algorithms. The proposed methods were verified based on two case studies located in northern Iran. Relevance vector regression (RVR) and support vector regression (SVR), as two robust machine-learning methodologies, were used to predict the RMR for tunnel host rocks. RMR values already obtained by sampling and site investigation at one tunnel were taken into account as the output of the artificial networks during training and testing phases. The results reveal that use of empirical correlations overestimates the predicted RMR values. RVR and SVR, however, showed more reliable results, and are therefore suggested for use in RMR classification for design purposes of rock structures.
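
    As an illustration of the machine-learning alternative to empirical correlations, the sketch below trains a support vector regressor to map a measured quantity (e.g., wave velocity) to RMR; all numbers are invented placeholders, not the case-study data.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Hypothetical site-investigation pairs: P-wave velocity (m/s) -> RMR.
    X = np.array([[3200.0], [3900.0], [4500.0], [5100.0]])
    y = np.array([38.0, 52.0, 61.0, 74.0])

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=1.0))
    model.fit(X, y)
    print(model.predict([[4200.0]]))   # predicted RMR for an unsampled section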

  7. Promoter Analysis Reveals Globally Differential Regulation of Human Long Non-Coding RNA and Protein-Coding Genes

    KAUST Repository

    Alam, Tanvir

    2014-10-02

    Transcriptional regulation of protein-coding genes is increasingly well-understood on a global scale, yet no comparable information exists for long non-coding RNA (lncRNA) genes, which were recently recognized to be as numerous as protein-coding genes in mammalian genomes. We performed a genome-wide comparative analysis of the promoters of human lncRNA and protein-coding genes, finding global differences in specific genetic and epigenetic features relevant to transcriptional regulation. These two groups of genes are hence subject to separate transcriptional regulatory programs, including distinct transcription factor (TF) proteins that significantly favor lncRNA, rather than coding-gene, promoters. We report a specific signature of promoter-proximal transcriptional regulation of lncRNA genes, including several distinct transcription factor binding sites (TFBS). Experimental DNase I hypersensitive site profiles are consistent with active configurations of these lncRNA TFBS sets in diverse human cell types. TFBS ChIP-seq datasets confirm the binding events that we predicted using computational approaches for a subset of factors. For several TFs known to be directly regulated by lncRNAs, we find that their putative TFBSs are enriched at lncRNA promoters, suggesting that the TFs and the lncRNAs may participate in a bidirectional feedback loop regulatory network. Accordingly, cells may be able to modulate lncRNA expression levels independently of mRNA levels via distinct regulatory pathways. Our results also raise the possibility that, given the historical reliance on protein-coding gene catalogs to define the chromatin states of active promoters, a revision of these chromatin signature profiles to incorporate expressed lncRNA genes is warranted in the future.

  8. Classification of refrigerants; Classification des fluides frigorigenes

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This document is based on the US standard ANSI/ASHRAE 34, published in 2001 and entitled 'Designation and safety classification of refrigerants'. This classification makes it possible to organize clearly, at the international level, all the refrigerants used in the world, thanks to a codification of refrigerants that corresponds to their chemical composition. This note explains this codification: prefix, suffixes (hydrocarbons and derived fluids, azeotropic and non-azeotropic mixtures, various organic compounds, non-organic compounds), and safety classification (toxicity, flammability, case of mixtures). (J.S.)
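
    For pure compounds of the methane and ethane series, the well-known ASHRAE numbering rule (stated here as an assumption, since this note does not spell it out) is: for R-XYZ, X = carbon atoms - 1, Y = hydrogen atoms + 1, Z = fluorine atoms, with chlorine filling the remaining bonds. A hedged sketch of a decoder:

    def decode_refrigerant(code: str) -> dict:
        """Decode a pure methane/ethane-series halocarbon designation.

        Assumes a saturated compound; mixtures (R-4xx/R-5xx) and
        inorganics (R-7xx) follow other rules not handled here.
        """
        num = "".join(ch for ch in code if ch.isdigit()).zfill(3)
        c = int(num[0]) + 1          # carbon atoms
        h = int(num[1]) - 1          # hydrogen atoms
        f = int(num[2])              # fluorine atoms
        cl = (2 * c + 2) - h - f     # chlorine fills the remaining bonds
        return {"C": c, "H": h, "F": f, "Cl": cl}

    print(decode_refrigerant("R-134a"))  # {'C': 2, 'H': 2, 'F': 4, 'Cl': 0} = C2H2F4
    print(decode_refrigerant("R-22"))    # {'C': 1, 'H': 1, 'F': 2, 'Cl': 1} = CHClF2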

  9. Artificial Intelligence Learning Semantics via External Resources for Classifying Diagnosis Codes in Discharge Notes.

    Science.gov (United States)

    Lin, Chin; Hsu, Chia-Jung; Lou, Yu-Sheng; Yeh, Shih-Jen; Lee, Chia-Cheng; Su, Sui-Lung; Chen, Hsiang-Cheng

    2017-11-06

    Automated disease code classification using free-text medical information is important for public health surveillance. However, traditional natural language processing (NLP) pipelines are limited, so we propose a method combining word embedding with a convolutional neural network (CNN). Our objective was to compare the performance of traditional pipelines (NLP plus supervised machine learning models) with that of word embedding combined with a CNN in conducting a classification task identifying International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes in discharge notes. We used 2 classification methods: (1) extracting from discharge notes some features (terms, n-gram phrases, and SNOMED CT categories) that we used to train a set of supervised machine learning models (support vector machine, random forests, and gradient boosting machine), and (2) building a feature matrix, by a pretrained word embedding model, that we used to train a CNN. We used these methods to identify the chapter-level ICD-10-CM diagnosis codes in a set of discharge notes. We conducted the evaluation using 103,390 discharge notes covering patients hospitalized from June 1, 2015 to January 31, 2017 in the Tri-Service General Hospital in Taipei, Taiwan. We used the receiver operating characteristic curve as an evaluation measure, and calculated the area under the curve (AUC) and F-measure as the global measure of effectiveness. In 5-fold cross-validation tests, our method had a higher testing accuracy (mean AUC 0.9696; mean F-measure 0.9086) than traditional NLP-based approaches (mean AUC range 0.8183-0.9571; mean F-measure range 0.5050-0.8739). A real-world simulation that split the training sample and the testing sample by date verified this result (mean AUC 0.9645; mean F-measure 0.9003 using the proposed method). Further analysis showed that the convolutional layers of the CNN effectively identified a large number of keywords and automatically
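
    A minimal sketch of the word-embedding-plus-CNN idea (not the authors' architecture; vocabulary size, dimensions and the chapter count are assumptions):

    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB, N_CHAPTERS = 30000, 22   # assumed vocabulary and label count

    model = tf.keras.Sequential([
        # Pretrained word-embedding weights could be loaded into this layer.
        layers.Embedding(VOCAB, 128),
        # Convolutional filters act as keyword/n-gram detectors over the note.
        layers.Conv1D(256, kernel_size=5, activation="relu"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(128, activation="relu"),
        # Multi-label output: one sigmoid per chapter-level ICD-10-CM code.
        layers.Dense(N_CHAPTERS, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])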

  10. Use of International Classification of Functioning, Disability and Health (ICF) to describe patient-reported disability in multiple sclerosis and identification of relevant environmental factors.

    Science.gov (United States)

    Khan, Fary; Pallant, Julie F

    2007-01-01

    To use the International Classification of Functioning, Disability and Health (ICF) to describe patient-reported disability in multiple sclerosis and identify relevant environmental factors. Cross-sectional survey of 101 participants in the community. Their multiple sclerosis-related problems were linked with ICF categories (second level) using a checklist, consensus between health professionals and the "linking rules". The impact of multiple sclerosis on health areas corresponding to 48 ICF categories was also assessed. A total of 170 ICF categories were identified (mean age 49 years, 72 were female). Average number of problems reported was 18. The categories include 48 (42%) for body function, 16 (34%) body structure, 68 (58%) activities and participation and 38 (51%) for environmental factors. Extreme impact in health areas corresponding to ICF categories for activities and participation were reported for mobility, work, everyday home activities, community and social activities. While those for the environmental factors (barriers) included products for mobility, attitudes of extended family, restriction accessing social security and health resources. This study is a first step in the use of the ICF in persons with multiple sclerosis and towards development of the ICF Core set for multiple sclerosis from a broader international perspective.

  11. Safety in nuclear power plant siting. A code of practice

    International Nuclear Information System (INIS)

    1978-01-01

    This publication is brought out within the framework of the NUSS programme of establishing Codes of Practice and Safety Guides for nuclear power plants. The scope of the document encompasses site and site-plant interaction factors related to operational states and accident conditions. The purpose of the Code is to give criteria and procedures to be applied, as appropriate, to operational states and accident conditions, including those which could lead to emergency situations. This Code is mainly concerned with severe events of low probability which relate to the siting of nuclear power plants and have to be considered in designing a particular nuclear power plant. Annex: Examples of natural and man-made events relevant for design basis evaluation.

  12. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    Science.gov (United States)

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse codes can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
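
    The Euclidean special case the paper generalizes can be sketched with scikit-learn (illustrative only; the paper's contribution, the Riemannian extension, is not shown):

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Synthetic data: 200 samples in R^64 standing in for Euclidean features.
    X = np.random.default_rng(0).normal(size=(200, 64))

    dl = DictionaryLearning(n_components=32, alpha=1.0,
                            transform_algorithm="lasso_lars")
    codes = dl.fit_transform(X)   # sparse codes, shape (200, 32)
    D = dl.components_            # dictionary atoms, shape (32, 64)
    print(codes.shape, D.shape)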

  13. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding.

  14. Rehabilitation Counselor Education and the New Code of Ethics

    Science.gov (United States)

    Glosoff, Harriet L.; Cottone, R. Rocco

    2010-01-01

    The purpose of this article is to discuss recent changes in the Commission on Rehabilitation Counselor Certification "Code of Professional Ethics for Rehabilitation Counselors", effective January 1, 2010, that are most relevant to rehabilitation counselor educators. The authors provide a brief overview of these key changes along with implications…

  15. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  16. Ex-vessel break in ITER divertor cooling loop analysis with the ECART code

    CERN Document Server

    Cambi, G; Parozzi, F; Porfiri, MT

    2003-01-01

    A hypothetical double-ended pipe rupture in the ex-vessel section of the International Thermonuclear Experimental Reactor (ITER) divertor primary heat transfer system during pulse operation has been assessed using the nuclear source term ECART code. That code was originally designed and validated for traditional nuclear power plant safety analyses, and has been internationally recognized as a relevant nuclear source term code for nuclear fission plants. It permits the simulation of chemical reactions and transport of radioactive gases and aerosols under two-phase flow transients in generic flow systems, using a built-in thermal-hydraulic model. A comparison with the results given in the ITER Generic Site Safety Report, obtained using a thermal-hydraulic system code (ATHENA), a containment code (INTRA) and an aerosol transportation code (NAUA) in a sequential way, is also presented and discussed.

  17. Diabetes classification using a redundancy reduction preprocessor

    Directory of Open Access Journals (Sweden)

    Áurea Celeste Ribeiro

    Introduction: Diabetes patients can benefit significantly from early diagnosis. Thus, accurate automated screening is becoming increasingly important due to the wide spread of that disease. Previous studies in automated screening have found a maximum accuracy of 92.6%. Methods: This work proposes a classification methodology based on efficient coding of the input data, which is carried out by decreasing input data redundancy using well-known ICA algorithms, such as FastICA, JADE and INFOMAX. The classifier used in the task of discriminating diabetics from non-diabetics is the one-class support vector machine. Classification tests were performed using noninvasive and invasive indicators. Results: The results suggest that redundancy reduction increases one-class support vector machine performance when discriminating between diabetics and non-diabetics, up to an accuracy of 98.47% when using all indicators. By using only noninvasive indicators, an accuracy of 98.28% was obtained. Conclusion: The ICA feature extraction improves the performance of the classifier on the data set because it reduces the statistical dependence of the collected data, which increases the ability of the classifier to find accurate class boundaries.
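
    A hedged sketch of the described pipeline (synthetic data and parameters, not the study's): reduce input redundancy with FastICA, then train a one-class SVM on subjects of a single class.

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(150, 8))   # indicators of one class, synthetic
    X_test = rng.normal(size=(50, 8))

    ica = FastICA(n_components=6, random_state=0)
    S_train = ica.fit_transform(X_train)  # statistically independent components
    S_test = ica.transform(X_test)

    clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(S_train)
    print(clf.predict(S_test))            # +1 = same class, -1 = outlier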

  18. Derivation and validation of the Systemic Lupus International Collaborating Clinics classification criteria for systemic lupus erythematosus

    DEFF Research Database (Denmark)

    Petri, Michelle; Orbai, Ana-Maria; Alarcón, Graciela S

    2012-01-01

    The Systemic Lupus International Collaborating Clinics (SLICC) group revised and validated the American College of Rheumatology (ACR) systemic lupus erythematosus (SLE) classification criteria in order to improve clinical relevance, meet stringent methodology requirements, and incorporate new...

  19. A review of supervised object-based land-cover image classification

    Science.gov (United States)

    Ma, Lei; Li, Manchun; Ma, Xiaoxue; Cheng, Liang; Du, Peijun; Liu, Yongxue

    2017-08-01

    Object-based image classification for land-cover mapping purposes using remote-sensing imagery has attracted significant attention in recent years. Numerous studies conducted over the past decade have investigated a broad array of sensors, feature selection, classifiers, and other factors of interest. However, these research results have not yet been synthesized to provide coherent guidance on the effect of different supervised object-based land-cover classification processes. In this study, we first construct a database with 28 fields using qualitative and quantitative information extracted from 254 experimental cases described in 173 scientific papers. Second, the results of the meta-analysis are reported, including general characteristics of the studies (e.g., the geographic range of relevant institutes, preferred journals) and the relationships between factors of interest (e.g., spatial resolution and study area or optimal segmentation scale, accuracy and number of targeted classes), especially with respect to the classification accuracy of different sensors, segmentation scale, training set size, supervised classifiers, and land-cover types. Third, useful data on supervised object-based image classification are determined from the meta-analysis. For example, we find that supervised object-based classification is currently experiencing rapid advances, while development of the fuzzy technique is limited in the object-based framework. Furthermore, spatial resolution correlates with the optimal segmentation scale and study area, and Random Forest (RF) shows the best performance in object-based classification. The area-based accuracy assessment method can obtain stable classification performance, and indicates a strong correlation between accuracy and training set size, while the accuracy of the point-based method is likely to be unstable due to mixed objects. In addition, the overall accuracy benefits from higher spatial resolution images (e.g., unmanned aerial
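
    To make the finding concrete, a minimal object-based pipeline with the recommended Random Forest classifier might look like the sketch below (synthetic segment features; the preferred area-based accuracy assessment is not shown):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Per-segment features (spectral means, texture, shape metrics), synthetic.
    X = rng.normal(size=(300, 10))
    y = rng.integers(0, 5, size=300)   # five land-cover classes, synthetic

    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    print(cross_val_score(rf, X, y, cv=5).mean())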

  20. Wind power within European grid codes: Evolution, status and outlook

    DEFF Research Database (Denmark)

    Vrana, Til Kristian; Flynn, Damian; Gomez-Lazaro, Emilio

    2018-01-01

    Grid codes are technical specifications that define the requirements for any facility connected to electricity grids. Wind power plants are increasingly facing system stability support requirements similar to conventional power stations, which is to some extent unavoidable, as the share of wind power in the generation mix is growing. The adaptation process of grid codes for wind power plants is not yet complete, and grid codes are expected to evolve further in the future. ENTSO-E is the umbrella organization for European TSOs, seen by many as a leader in terms of requirements sophistication … is largely based on the definitions and provisions set out by ENTSO-E. The main European grid code requirements are outlined here, including also HVDC connections and DC-connected power park modules. The focus is on requirements that are considered particularly relevant for large wind power plants …

  1. Implementation of the chemical PbLi/water reaction in the SIMMER code

    Energy Technology Data Exchange (ETDEWEB)

    Eboli, Marica, E-mail: marica.eboli@for.unipi.it [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Forgione, Nicola [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Del Nevo, Alessandro [ENEA FSN-ING-PAN, CR Brasimone, 40032 Camugnano, BO (Italy)

    2016-11-01

    Highlights: • Updated predictive capabilities of the SIMMER-III code. • Verification of the implemented PbLi/water chemical reactions. • Identification of code capabilities in modelling phenomena relevant to safety. • Validation against BLAST Test No. 5 experimental data successfully completed. • Need for a new experimental campaign in support of code validation on LIFUS5/Mod3. - Abstract: The availability of a qualified system code for the deterministic safety analysis of the in-box LOCA postulated accident is of primary importance. Considering the renewed interest in the WCLL breeding blanket, such a code shall be multi-phase, shall manage the thermodynamic interaction among the fluids, and shall include the exothermic chemical reaction between lithium-lead and water, generating oxides and hydrogen. The paper presents the implementation of the chemical correlations in the SIMMER-III code, the verification of the code model in simple geometries and the first validation activity based on BLAST Test No. 5 experimental data.

  2. Classification, thésaurus, ontologies, folksonomies : comparaisons du point de vue de la recherche ouverte d'information (ROI)

    OpenAIRE

    Zacklad , Manuel

    2007-01-01

    This paper compares several Knowledge Organisation Systems (classifications, thesauri, formal ontologies, semiotic ontologies, folksonomies) according to different criteria in order to assess their relevance for Open Information Research (ROI).

  3. On the International Agency for Research on Cancer classification of glyphosate as a probable human carcinogen.

    Science.gov (United States)

    Tarone, Robert E

    2018-01-01

    The recent classification by the International Agency for Research on Cancer (IARC) of the herbicide glyphosate as a probable human carcinogen has generated considerable discussion. The classification is at variance with evaluations of the carcinogenic potential of glyphosate by several national and international regulatory bodies. The basis for the IARC classification is examined under the assumptions that the IARC criteria are reasonable and that the body of scientific studies determined by IARC staff to be relevant to the evaluation of glyphosate by the Monograph Working Group is sufficiently complete. It is shown that the classification of glyphosate as a probable human carcinogen was the result of a flawed and incomplete summary of the experimental evidence evaluated by the Working Group. Rational and effective cancer prevention activities depend on scientifically sound and unbiased assessments of the carcinogenic potential of suspected agents. Implications of the erroneous classification of glyphosate with respect to the IARC Monograph Working Group deliberative process are discussed.

  4. Accuracy of clinical coding for procedures in oral and maxillofacial surgery.

    Science.gov (United States)

    Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I

    2016-10-01

    Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Aesthetics-based classification of geological structures in outcrops for geotourism purposes: a tentative proposal

    Science.gov (United States)

    Mikhailenko, Anna V.; Nazarenko, Olesya V.; Ruban, Dmitry A.; Zayats, Pavel P.

    2017-03-01

    The current growth in geotourism requires the urgent development of classifications of geological features on the basis of criteria that are relevant to tourist perceptions. It appears that structure-related patterns are especially attractive to geotourists. Consideration of the main criteria by which tourists judge beauty, together with observations made in the geodiversity hotspot of the Western Caucasus, allows us to propose a tentative aesthetics-based classification of geological structures in outcrops, with two classes and four subclasses. It is possible to distinguish between regular and quasi-regular patterns (i.e., striped-and-lined and contorted patterns) and irregular and complex patterns (paysage and sculptured patterns). Typical examples of each case are found both in the study area and on a global scale. The application of the proposed classification makes it possible to emphasise features of interest to a broad range of tourists. Aesthetics-based (i.e., non-geological) classifications are necessary to take into account the visions and attitudes of visitors.

  6. Implementation of the International Code of Marketing of Breastmilk Substitutes in the Eastern Mediterranean Region.

    Science.gov (United States)

    Al Jawaldeh, Ayoub; Sayed, Ghada

    2018-04-05

    Optimal breastfeeding practices and appropriate complementary feeding improve child health, survival and development. The countries of the Eastern Mediterranean Region have made significant strides in formulation and implementation of legislation to protect and promote breastfeeding based on The International Code of Marketing of Breast-milk Substitutes (the Code) and subsequent relevant World Health Assembly resolutions. To assess the implementation of the Code in the Region. Assessment was conducted by the World Health Organization (WHO) Regional Office for the Eastern Mediterranean using a WHO standard questionnaire. Seventeen countries in the Region have enacted legislation to protect breastfeeding. Only 6 countries have comprehensive legislation or other legal measures reflecting all or most provisions of the Code; 4 countries have legal measures incorporating many provisions of the Code; 7 countries have legal measures that contain a few provisions of the Code; 4 countries are currently studying the issue; and only 1 country has no measures in place. Further analysis of the legislation found that the text of articles in the laws fully reflected the Code articles in only 6 countries. Most countries need to revisit and amend existing national legislation to implement fully the Code and relevant World Health Assembly resolutions, supported by systematic monitoring and reporting. Copyright © World Health Organization (WHO) 2018. Some rights reserved. This work is available under the CC BY-NC-SA 3.0 IGO license (https://creativecommons.org/licenses/by-nc-sa/3.0/igo).

  7. Design Procedure of Graphite Components by ASME HTR Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ji-Ho; Jo, Chang Keun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, the ASME B&PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials are noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high-temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. In this study, the ASME B&PV Code, Subsection HH, Subpart A, Graphite Materials, was reviewed and its special features are noted. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. A unique additional classification, SRC, is allotted to the graphite components, and a full 3-D FEM or equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. The fatigue damage rule has not been established yet.

  8. Design Procedure of Graphite Components by ASME HTR Codes

    International Nuclear Information System (INIS)

    Kang, Ji-Ho; Jo, Chang Keun

    2016-01-01

    In this study, the ASME B&PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials are noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high-temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. In this study, the ASME B&PV Code, Subsection HH, Subpart A, Graphite Materials, was reviewed and its special features are noted. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. A unique additional classification, SRC, is allotted to the graphite components, and a full 3-D FEM or equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. The fatigue damage rule has not been established yet.

  9. Quantum-capacity-approaching codes for the detected-jump channel

    International Nuclear Information System (INIS)

    Grassl, Markus; Wei Zhaohui; Ji Zhengfeng; Zeng Bei

    2010-01-01

    The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.

  10. Issues in Developing a Surveillance Case Definition for Nonfatal Suicide Attempt and Intentional Self-harm Using International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) Coded Data.

    Science.gov (United States)

    Hedegaard, Holly; Schoenbaum, Michael; Claassen, Cynthia; Crosby, Alex; Holland, Kristin; Proescholdbell, Scott

    2018-02-01

    Suicide and intentional self-harm are among the leading causes of death in the United States. To study this public health issue, epidemiologists and researchers often analyze data coded using the International Classification of Diseases (ICD). Prior to October 1, 2015, health care organizations and providers used the clinical modification of the Ninth Revision of ICD (ICD-9-CM) to report medical information in electronic claims data. The transition in October 2015 to use of the clinical modification of the Tenth Revision of ICD (ICD-10-CM) resulted in the need to update methods and selection criteria previously developed for ICD-9-CM coded data. This report provides guidance on the use of ICD-10-CM codes to identify cases of nonfatal suicide attempts and intentional self-harm in ICD-10-CM coded data sets. ICD-10-CM codes for nonfatal suicide attempts and intentional self-harm include: X71-X83, intentional self-harm due to drowning and submersion, firearms, explosive or thermal material, sharp or blunt objects, jumping from a high place, jumping or lying in front of a moving object, crashing of motor vehicle, and other specified means; T36-T50 with a 6th character of 2 (except for T36.9, T37.9, T39.9, T41.4, T42.7, T43.9, T45.9, T47.9, and T49.9, which are included if the 5th character is 2), intentional self-harm due to drug poisoning (overdose); T51-T65 with a 6th character of 2 (except for T51.9, T52.9, T53.9, T54.9, T56.9, T57.9, T58.0, T58.1, T58.9, T59.9, T60.9, T61.0, T61.1, T61.9, T62.9, T63.9, T64.0, T64.8, and T65.9, which are included if the 5th character is 2), intentional self-harm due to toxic effects of nonmedicinal substances; T71 with a 6th character of 2, intentional self-harm due to asphyxiation, suffocation, strangulation; and T14.91, Suicide attempt. Issues to consider when selecting records for nonfatal suicide attempts and intentional self-harm from ICD-10-CM coded administrative data sets are also discussed. All material appearing in this
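
    The case definition above can be applied mechanically to administrative data. A hedged Python sketch follows (codes are matched without dots; the helper name is invented; the exception lists are transcribed from the text):

    import re

    # Categories where inclusion requires "2" as the 5th character of the
    # undotted code; transcribed from the lists in the report.
    FIFTH_CHAR_CATEGORIES = {
        "T369", "T379", "T399", "T414", "T427", "T439", "T459", "T479", "T499",
        "T519", "T529", "T539", "T549", "T569", "T579", "T580", "T581", "T589",
        "T599", "T609", "T610", "T611", "T619", "T629", "T639", "T640", "T648",
        "T659",
    }

    def is_intentional_self_harm(code: str) -> bool:
        """Return True if an ICD-10-CM code falls within the case definition."""
        c = code.replace(".", "").upper()
        if re.match(r"X(7[1-9]|8[0-3])", c):          # X71-X83
            return True
        if c.startswith("T1491"):                     # T14.91 suicide attempt
            return True
        if c.startswith("T71"):                       # asphyxiation/strangulation
            return len(c) > 5 and c[5] == "2"
        if re.match(r"T(3[6-9]|4\d|5\d|6[0-5])", c):  # T36-T50 and T51-T65
            if c[:4] in FIFTH_CHAR_CATEGORIES:
                return len(c) > 4 and c[4] == "2"
            return len(c) > 5 and c[5] == "2"
        return False

    print(is_intentional_self_harm("T39.012A"))  # True: drug poisoning, self-harm
    print(is_intentional_self_harm("X78.0XXA"))  # True: self-harm by sharp object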

  11. Feature generation and representations for protein-protein interaction classification.

    Science.gov (United States)

    Lan, Man; Tan, Chew Lim; Su, Jian

    2009-10-01

    Automatically detecting protein-protein interaction (PPI) relevant articles is a crucial step for large-scale biological database curation. Previous work adopted POS tagging, shallow parsing and sentence splitting techniques, but these achieved worse performance than the simple bag-of-words representation. In this paper, we generated and investigated multiple types of feature representations in order to further improve the performance of the PPI text classification task. Besides the traditional domain-independent bag-of-words approach and term weighting methods, we also explored other domain-dependent features, i.e. protein-protein interaction trigger keywords, protein named entities and advanced ways of incorporating Natural Language Processing (NLP) output. The integration of these multiple features has been evaluated on the BioCreAtIvE II corpus. The experimental results showed that both the advanced way of using NLP output and the integration of bag-of-words and NLP output improved the performance of text classification. Specifically, in comparison with the best performance achieved in the BioCreAtIvE II IAS, the feature-level and classifier-level integration of multiple features improved the classification performance by 2.71% and 3.95%, respectively.
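
    As an illustration of the domain-independent baseline (bag-of-words with term weighting), a minimal scikit-learn pipeline is sketched below; the two toy documents are invented, not BioCreAtIvE II data:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = [
        "The bait protein binds the prey protein in yeast two-hybrid assays.",
        "We describe the crystal structure of a monomeric enzyme.",
    ]
    labels = [1, 0]   # 1 = PPI-relevant article, 0 = not relevant

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(docs, labels)
    print(clf.predict(["The two proteins interact directly."]))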

  12. Efficient DS-UWB MUD Algorithm Using Code Mapping and RVM

    Directory of Open Access Journals (Sweden)

    Pingyan Shi

    2016-01-01

    A hybrid multiuser detection (MUD) scheme using code mapping and wrong-code recognition based on a relevance vector machine (RVM) for direct-sequence ultra-wideband (DS-UWB) systems is developed to cope with multiple access interference (MAI) and computational efficiency. A new MAI suppression mechanism is studied in the following steps: firstly, code mapping, an optimal decision function, is constructed, and the output candidate code of the matched filter is mapped to a feature space by the function. In the feature space, simulation results show that the error codes caused by MAI and the single-user mapped codes can be separated by a threshold which is related to the SNR of the receiver. Then, on the basis of code mapping, RVM is used to distinguish the wrong codes from the right ones and finally correct them. Compared with traditional MUD approaches, the proposed method can considerably improve the bit error ratio (BER) performance due to its special MAI suppression mechanism. Simulation results also show that the proposed method can approximately achieve the BER performance of optimal multiuser detection (OMD), while its computational complexity approximately equals that of the matched filter. Moreover, the proposed method is less sensitive to the number of users.

  13. Increasing asthma mortality in Denmark 1969-88 not a result of a changed coding practice

    DEFF Research Database (Denmark)

    Juel, K; Pedersen, P A

    1992-01-01

    We have studied asthma mortality in Denmark from 1969 to 1988. Age-standardized mortality rates calculated in three age groups, 10-34, 35-59, and greater than or equal to 60 years, disclosed similar trends. Increasing mortality from asthma from the mid-1970s to 1988 was seen in all three age groups … with mortality in 1979-88 higher than in 1969-78 by 95%, 55%, and 69%, respectively. Since the eighth revision of the International Classification of Diseases (ICD8) was used in Denmark over the entire 20-year period, changes in coding practice due to a change of classification system cannot explain...

  14. Comparing Features for Classification of MEG Responses to Motor Imagery.

    Directory of Open Access Journals (Sweden)

    Hanna-Leena Halme

    Full Text Available Motor imagery (MI with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest.MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD, Morlet wavelets, short-time Fourier transform (STFT, common spatial patterns (CSP, filter-bank common spatial patterns (FBCSP, spatio-spectral decomposition (SSD, and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject.The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7% and MI-vs-rest (mean 81.3% classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%. There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results.We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction

  15. Updated United Nations Framework Classification for reserves and resources of extractive industries

    Science.gov (United States)

    Ahlbrandt, T.S.; Blaise, J.R.; Blystad, P.; Kelter, D.; Gabrielyants, G.; Heiberg, S.; Martinez, A.; Ross, J.G.; Slavov, S.; Subelj, A.; Young, E.D.

    2004-01-01

    The United Nations has studied how the oil and gas resource classification developed jointly by the SPE, the World Petroleum Congress (WPC) and the American Association of Petroleum Geologists (AAPG) could be harmonized with the United Nations Framework Classification (UNFC) for Solid Fuel and Mineral Resources (1). The United Nations has continued to build on this and other work, with support from many relevant international organizations, with the objective of updating the UNFC to apply to the extractive industries. The result is the United Nations Framework Classification for Energy and Mineral Resources (2), which this paper presents. Reserves and resources are categorized with respect to three sets of criteria: • economic and commercial viability, • field project status and feasibility, and • the level of geologic knowledge. The field project status criteria are readily recognized as the ones highlighted in the SPE/WPC/AAPG classification system of 2000. The geologic criteria absorb the rich traditions that form the primary basis for the Russian classification system, and the ones used to delimit, in part, proved reserves. Economic and commercial criteria facilitate the use of the classification in general, and reflect the commercial considerations used to delimit proved reserves in particular. The classification system will help to develop a common understanding of reserves and resources for all the extractive industries and will assist: • international and national resources management to secure supplies; • industries' management of business processes to achieve efficiency in exploration and production; and • an appropriate basis for documenting the value of reserves and resources in financial statements.
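
    The three-criteria structure lends itself to a compact coded representation. The sketch below models a UNFC-style categorization as one category number per axis; the axis ranges, labels and the example entry are our illustrative assumptions, not an official coding of any deposit.

        # Sketch of a three-axis UNFC-style code: one category per axis for
        # economic viability (E), project feasibility (F) and geologic
        # knowledge (G). Ranges and example are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class UNFCCode:
            e: int  # economic and commercial viability (1 = most favorable)
            f: int  # field project status and feasibility
            g: int  # level of geologic knowledge

            def __post_init__(self):
                for axis, value in (("E", self.e), ("F", self.f), ("G", self.g)):
                    if not 1 <= value <= 4:
                        raise ValueError(f"{axis} category out of range: {value}")

            def label(self) -> str:
                return f"E{self.e}.F{self.f}.G{self.g}"

        # A hypothetical commercially producible, well-delineated accumulation:
        print(UNFCCode(e=1, f=1, g=1).label())  # -> E1.F1.G1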

  16. Nuclear model codes and related software distributed by the OECD/NEA Data Bank

    International Nuclear Information System (INIS)

    Sartori, E.

    1993-01-01

    Software and data for nuclear energy applications are acquired, tested and distributed by several information centres; in particular, relevant computer codes are distributed internationally by the OECD/NEA Data Bank (France) and by ESTSC and EPIC/RSIC (United States). This activity is coordinated among the centres and is extended outside the OECD area through an arrangement with the IAEA. More specifically, this article covers the availability of nuclear model codes and of the codes that further process their results into the data sets needed for specific nuclear application projects. (author). 2 figs

  17. It takes two—coincidence coding within the dual olfactory pathway of the honeybee

    OpenAIRE

    Brill, Martin F.; Meyer, Anneke; Rössler, Wolfgang

    2015-01-01

    To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms, such as parallel processing and coincidence detection. Parallel processing (e.g., in the visual system) increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and...
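
    To make coincidence detection concrete, the short sketch below counts near-simultaneous spikes across two input spike trains within a fixed window; the spike times and the 2 ms window are invented for illustration and are unrelated to the honeybee recordings.

        # Toy coincidence detector: report spikes in train A that have a
        # partner in train B within `window` ms. Times are invented.
        import numpy as np

        def coincidences(train_a, train_b, window=2.0):
            train_b = np.sort(np.asarray(train_b, dtype=float))
            hits = []
            for t in train_a:
                i = np.searchsorted(train_b, t)
                nearby = train_b[max(i - 1, 0):i + 1]
                if nearby.size and np.min(np.abs(nearby - t)) <= window:
                    hits.append(t)
            return np.array(hits)

        a = [1.0, 10.5, 20.2, 33.0]  # spike times in ms
        b = [1.4, 15.0, 20.0, 40.0]
        print(coincidences(a, b))     # -> [ 1.  20.2]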

  18. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes....
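
    For orientation, the parameter count behind that construction is a standard fact, stated here as background rather than as a result of the work above:

        If $C \subseteq \mathbb{F}_q^n$ is an $[n, k, d]_q$ linear code with
        $C^{\perp} \subseteq C$, the Calderbank-Shor-Steane construction
        yields a quantum stabilizer code with parameters
        $$[[\, n,\ 2k - n,\ \ge d \,]]_q,$$
        encoding $2k - n$ logical qudits into $n$ physical qudits with
        minimum distance at least $d$.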

  19. Classification of hydrocephalus: critical analysis of classification categories and advantages of "Multi-categorical Hydrocephalus Classification" (Mc HC).

    Science.gov (United States)

    Oi, Shizuo

    2011-10-01

    Hydrocephalus is a complex pathophysiology with disturbed cerebrospinal fluid (CSF) circulation. Numerous classification schemes have been published, focusing on various criteria such as associated anomalies/underlying lesions, CSF circulation/intracranial pressure patterns, clinical features, and other categories. However, no classification comprehensively covers all of these aspects. The new classification of hydrocephalus, "Multi-categorical Hydrocephalus Classification" (Mc HC), was devised to cover all aspects of hydrocephalus with all relevant classification items and categories. The ten "Mc HC" categories are I: onset (age, phase), II: cause, III: underlying lesion, IV: symptomatology, V: pathophysiology 1-CSF circulation, VI: pathophysiology 2-ICP dynamics, VII: chronology, VIII: post-shunt, IX: post-endoscopic third ventriculostomy, and X: others. From a 100-year search of publications related to the classification of hydrocephalus, 14 representative publications were reviewed and divided into the 10 categories. The Baumkuchen classification graph made from the round o'clock classification demonstrated the historical tendency of deviation toward the categories in pathophysiology, either CSF circulation or ICP dynamics. In a preliminary clinical application, it was concluded that "Mc HC" is extremely effective in expressing an individual patient's past and present state across the various categories, in comparing compatible cases of hydrocephalus, and in following possible chronological change in the future.
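
    Because "Mc HC" is a fixed ten-category record, it maps directly onto a simple data structure. The sketch below uses the category names from the abstract; the sample values and the representation itself are our hypothetical illustration, not part of the published classification.

        # Illustrative record over the ten "Mc HC" categories (names from
        # the abstract; sample values invented).
        MC_HC_CATEGORIES = (
            "I: onset (age, phase)", "II: cause", "III: underlying lesion",
            "IV: symptomatology", "V: pathophysiology 1 - CSF circulation",
            "VI: pathophysiology 2 - ICP dynamics", "VII: chronology",
            "VIII: post-shunt", "IX: post-endoscopic third ventriculostomy",
            "X: others",
        )

        def mc_hc_record(**findings):
            """Build a record with None for categories not yet assessed.

            Keys c1..c10 address categories I..X (our naming convention).
            """
            record = {name: None for name in MC_HC_CATEGORIES}
            for key, value in findings.items():
                record[MC_HC_CATEGORIES[int(key[1:]) - 1]] = value
            return record

        example = mc_hc_record(c1="infantile", c2="post-hemorrhagic", c7="chronic")
        print({k: v for k, v in example.items() if v is not None})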

  20. Classification

    Science.gov (United States)

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…