WorldWideScience

Sample records for surveillance text classifier

  1. A Customizable Text Classifier for Text Mining

    Directory of Open Access Journals (Sweden)

    Yun-liang Zhang

    2007-12-01

    Full Text Available Text mining deals with complex and unstructured texts. Usually a particular collection of texts specific to one or more domains is necessary. We have developed a customizable text classifier with which users can mine such a collection automatically. It derives from the sentence category of the HNC theory and corresponding techniques. It can start with a few texts, and it can adjust automatically or be adjusted by the user. The user can also control the number of domains chosen and decide the criteria for selecting texts, based on demand and the abundance of materials. The performance of the classifier varies with the user's choice.

  2. Classifying Written Texts Through Rhythmic Features

    NARCIS (Netherlands)

    Balint, Mihaela; Dascalu, Mihai; Trausan-Matu, Stefan

    2016-01-01

    Rhythm analysis of written texts focuses on literary analysis and it mainly considers poetry. In this paper we investigate the relevance of rhythmic features for categorizing texts in prosaic form pertaining to different genres. Our contribution is threefold. First, we define a set of rhythmic

  3. Recognition of pornographic web pages by classifying texts and images.

    Science.gov (United States)

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages.
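
The fusion step, in which Bayes theory combines the text and image classifiers' outputs, can be sketched generically. The snippet below assumes conditional independence of the two classifiers and a uniform class prior; the function name and exact formulation are illustrative, not taken from the paper:

```python
def bayes_fuse(p_text: float, p_image: float) -> float:
    """Combine two classifiers' posterior probabilities for the same page.

    Under conditional independence and a uniform prior, the posterior
    odds multiply, giving this closed form.
    """
    num = p_text * p_image
    den = num + (1.0 - p_text) * (1.0 - p_image)
    return num / den

# Two moderately confident classifiers reinforce each other:
print(bayes_fuse(0.8, 0.7))  # ~0.903, higher than either input alone
```

Note that an uninformative input (probability 0.5) leaves the other classifier's estimate unchanged, which is the behaviour one wants from a fusion rule.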

  4. Statistical text classifier to detect specific type of medical incidents.

    Science.gov (United States)

    Wong, Zoie Shui-Yee; Akiyama, Masanori

    2013-01-01

    WHO Patient Safety has focused on increasing the coherence and expressiveness of patient safety classification, with the International Classification for Patient Safety (ICPS) as its foundation. Text classification and statistical approaches have been shown to be successful in identifying safety problems in the aviation industry using incident text information, but it remains challenging to comprehend the taxonomy of medical incidents in a structured manner. Independent reporting mechanisms for patient safety incidents have been established in the UK, Canada, Australia, Japan, Hong Kong and elsewhere. This research demonstrates the potential to construct statistical text classifiers that detect specific types of medical incidents from incident text data. An illustrative example for classifying look-alike sound-alike (LASA) medication incidents using structured text from 227 advisories related to medication errors from Global Patient Safety Alerts (GPSA) is shown in this poster presentation. The classifier was built using a logistic regression model; the ROC curve and AUC value indicated that the model is satisfactory.
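
The ROC/AUC evaluation the poster reports can be illustrated with a small self-contained sketch. This is the generic rank-based AUC computation (the probability that a random positive case scores above a random negative case), not the authors' code, and the example scores are invented:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Scores from a hypothetical LASA-incident classifier:
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```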

  5. Deep Learning to Classify Radiology Free-Text Reports.

    Science.gov (United States)

    Chen, Matthew C; Ball, Robyn L; Yang, Lingyao; Moradzadeh, Nathaniel; Chapman, Brian E; Larson, David B; Langlotz, Curtis P; Amrhein, Timothy J; Lungren, Matthew P

    2018-03-01

    Purpose To evaluate the performance of a deep learning convolutional neural network (CNN) model compared with a traditional natural language processing (NLP) model in extracting pulmonary embolism (PE) findings from thoracic computed tomography (CT) reports from two institutions. Materials and Methods Contrast material-enhanced CT examinations of the chest performed between January 1, 1998, and January 1, 2016, were selected. Annotations by two human radiologists were made for three categories: the presence, chronicity, and location of PE. The classification performance of a CNN model, which used an unsupervised learning algorithm to obtain vector representations of words, was compared with that of the open-source application PeFinder. Sensitivity, specificity, accuracy, and F1 scores for both the CNN model and PeFinder in the internal and external validation sets were determined. Results The CNN model demonstrated an accuracy of 99% and an area under the curve value of 0.97. For internal validation report data, the CNN model had a significantly larger F1 score (0.938) than did PeFinder (0.867) when classifying findings as either PE positive or PE negative, but no significant difference in sensitivity, specificity, or accuracy was found. For external validation report data, no statistical difference between the performance of the CNN model and PeFinder was found. Conclusion A deep learning CNN model can classify radiology free-text reports with accuracy equivalent to or beyond that of an existing traditional NLP model. © RSNA, 2017 Online supplemental material is available for this article.

  6. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to dynamically change the mutation probability of chromosomes. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in true positives and 1.6% in true negatives over conventional dimensionality selection methods.
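
The per-dimension information gain that drives the mutation probability can be computed as follows. This is a generic sketch over a toy corpus; the paper's GA integration is not reproduced, and the mutation-scaling rule at the end is one plausible scheme rather than the authors' exact one:

```python
from math import log2

def entropy(labels):
    # Shannon entropy of a list of class labels, in bits.
    if not labels:
        return 0.0
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(docs, term):
    """docs: list of (set_of_terms, class_label) pairs."""
    labels = [c for _, c in docs]
    present = [c for words, c in docs if term in words]
    absent = [c for words, c in docs if term not in words]
    remainder = (len(present) * entropy(present)
                 + len(absent) * entropy(absent)) / len(docs)
    return entropy(labels) - remainder

docs = [({"nuclear", "plant"}, "energy"),
        ({"nuclear", "score"}, "energy"),
        ({"ballgame", "score"}, "sport"),
        ({"score", "team"}, "sport")]

# A perfectly class-separating term gets the maximum gain (1 bit here);
# a term spread across classes gets less, so a GA could mutate it more:
base_mutation = 0.2
p_mutate = {t: base_mutation * (1.0 - info_gain(docs, t))
            for t in ("nuclear", "score")}
print(info_gain(docs, "nuclear"), info_gain(docs, "score"))
```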

  7. AN IMPLEMENTATION OF EIS-SVM CLASSIFIER USING RESEARCH ARTICLES FOR TEXT CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    B Ramesh

    2016-04-01

    Full Text Available Automatic text classification is a prominent research topic in text mining. Pre-processing plays a major role in a text classifier: efficient pre-processing techniques increase the classifier's performance. In this paper, we implement the ECAS stemmer, Efficient Instance Selection (EIS), and a Pre-computed Kernel Support Vector Machine for text classification of recent research articles. We use the ECAS stemmer to find root words, Efficient Instance Selection for dimensionality reduction of the text data, and the Pre-computed Kernel Support Vector Machine to classify the selected instances. Experiments were performed on 750 research articles in three classes: engineering, medical, and educational articles. The EIS-SVM classifier provides better performance in real-time classification of research articles.

  8. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion.

    Science.gov (United States)

    Agarwal, Shashank; Yu, Hong

    2009-12-01

    Biomedical texts can be typically represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in full-text biomedical articles could be reliably annotated into the IMRAD format and then explored different approaches for automatically classifying these sentences into the IMRAD categories. Our results show an overall annotation agreement of 82.14% with a Kappa score of 0.756. The best classification system is a multinomial naïve Bayes classifier trained on manually annotated data that achieved 91.95% accuracy and an average F-score of 91.55%, which is significantly higher than baseline systems. A web version of this system is available online at-http://wood.ims.uwm.edu/full_text_classifier/.
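
A multinomial naïve Bayes sentence classifier of the kind used by the best-performing system can be sketched from scratch. The tiny training sentences below are invented for illustration; the real system was trained on a much larger manually annotated corpus of full-text articles:

```python
from collections import Counter
from math import log

class MultinomialNB:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for text, c in zip(texts, labels):
            self.counts[c].update(text.lower().split())
        self.vocab = {w for ctr in self.counts.values() for w in ctr}
        return self

    def predict(self, text):
        def log_posterior(c):
            total = sum(self.counts[c].values())
            score = log(self.prior[c])
            for w in text.lower().split():
                score += log((self.counts[c][w] + 1) / (total + len(self.vocab)))
            return score
        return max(self.classes, key=log_posterior)

clf = MultinomialNB().fit(
    ["we aimed to investigate the role",
     "mice were injected with saline",
     "samples were incubated and measured",
     "the mean accuracy was significantly higher",
     "these findings suggest a mechanism"],
    ["Introduction", "Methods", "Methods", "Results", "Discussion"])
print(clf.predict("cells were incubated with saline"))  # Methods
```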

  9. Mining free-text medical records for companion animal enteric syndrome surveillance.

    Science.gov (United States)

    Anholt, R M; Berezowski, J; Jamal, I; Ribble, C; Stephen, C

    2014-03-01

    Large amounts of animal health care data are present in veterinary electronic medical records (EMRs) and present an opportunity for companion animal disease surveillance. Veterinary patient records are largely free-text without clinical coding or a fixed vocabulary. Text-mining, a computer and information technology application, is needed to identify cases of interest and to add structure to the otherwise unstructured data. In this study, EMRs were extracted from the veterinary management programs of 12 participating veterinary practices and stored in a data warehouse. Using commercially available text-mining software (WordStat™), we developed a categorization dictionary that could be used to automatically classify and extract enteric syndrome cases from the warehoused electronic medical records. The diagnostic accuracy of the text-miner for retrieving cases of enteric syndrome was measured against human reviewers who independently categorized a random sample of 2500 cases as enteric syndrome positive or negative. Compared to the reviewers, the text-miner retrieved cases with enteric signs with a sensitivity of 87.6% (95%CI, 80.4-92.9%) and a specificity of 99.3% (95%CI, 98.9-99.6%). Automatic and accurate detection of enteric syndrome cases provides an opportunity for community surveillance of enteric pathogens in companion animals. Copyright © 2014 Elsevier B.V. All rights reserved.
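
The dictionary-based classification and its evaluation against human reviewers can be sketched as below. The keyword patterns and example records are invented stand-ins for the WordStat™ categorization dictionary and warehoused EMRs the authors actually used:

```python
import re

# Illustrative terms only, not the study's dictionary:
ENTERIC_TERMS = re.compile(r"\b(diarrh\w*|vomit\w*|gastroenteritis|loose stool)\b",
                           re.IGNORECASE)

def is_enteric(record: str) -> bool:
    return bool(ENTERIC_TERMS.search(record))

def sensitivity_specificity(predicted, truth):
    tp = sum(p and t for p, t in zip(predicted, truth))
    tn = sum(not p and not t for p, t in zip(predicted, truth))
    fn = sum(not p and t for p, t in zip(predicted, truth))
    fp = sum(p and not t for p, t in zip(predicted, truth))
    return tp / (tp + fn), tn / (tn + fp)

records = ["3 day hx of vomiting and diarrhea",
           "annual vaccination, clinically healthy",
           "loose stool since monday, eating well",
           "left forelimb lameness after a jump"]
reviewer = [True, False, True, False]      # gold standard from human review
predicted = [is_enteric(r) for r in records]
print(sensitivity_specificity(predicted, reviewer))  # (1.0, 1.0) on this toy set
```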

  10. Comparisons and Selections of Features and Classifiers for Short Text Classification

    Science.gov (United States)

    Wang, Ye; Zhou, Zhi; Jin, Shan; Liu, Debin; Lu, Mi

    2017-10-01

    Short text is considerably different from traditional long text documents due to its shortness and conciseness, which somehow hinders the applications of conventional machine learning and data mining algorithms in short text classification. According to traditional artificial intelligence methods, we divide short text classification into three steps, namely preprocessing, feature selection and classifier comparison. In this paper, we have illustrated step-by-step how we approach our goals. Specifically, in feature selection, we compared the performance and robustness of the four methods of one-hot encoding, tf-idf weighting, word2vec and paragraph2vec, and in the classification part, we deliberately chose and compared Naive Bayes, Logistic Regression, Support Vector Machine, K-nearest Neighbor and Decision Tree as our classifiers. Then, we compared and analysed the classifiers horizontally with each other and vertically with feature selections. Regarding the datasets, we crawled more than 400,000 short text files from Shanghai and Shenzhen Stock Exchanges and manually labeled them into two classes, the big and the small. There are eight labels in the big class, and 59 labels in the small class.
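
Of the four feature schemes compared, tf-idf weighting is simple enough to sketch from scratch. The toy documents below are invented stand-ins for the stock-exchange announcements used in the paper:

```python
from collections import Counter
from math import log

def tfidf(docs):
    """docs: list of token lists -> list of {term: weight} dicts."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append({term: (count / len(doc)) * log(n / df[term])
                         for term, count in tf.items()})
    return weighted

docs = [["profit", "rises", "profit"],
        ["profit", "falls"],
        ["merger", "announced"]]
vectors = tfidf(docs)
# "rises" is rarer across the corpus than "profit", so despite a lower
# term frequency it receives the higher weight in the first document:
print(vectors[0]["rises"] > vectors[0]["profit"])  # True
```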

  11. A linear-RBF multikernel SVM to classify big text corpora.

    Science.gov (United States)

    Romero, R; Iglesias, E L; Borrajo, L

    2015-01-01

    Support vector machines (SVMs) are a powerful technique for classification. However, SVMs are not suitable for classifying large datasets or text corpora, because the training complexity of SVMs is highly dependent on the input size. Recent developments in the literature on SVMs and other kernel methods emphasize the need to consider multiple kernels or parameterizations of kernels, because they provide greater flexibility. This paper presents a multikernel SVM for managing high-dimensional data, providing automatic parameterization with low computational cost and improving results over SVMs parameterized by brute-force search. The model consists of spreading the dataset into cohesive term slices (clusters) to construct a defined structure (multikernel). The new approach is tested on different text corpora. Experimental results show that the new classifier has good accuracy compared with the classic SVM, while training is significantly faster than for several other SVM classifiers.
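
A generic linear-RBF multikernel can be precomputed as a Gram matrix as sketched below. The paper's slice-based construction is more elaborate; this shows only the basic convex combination of the two kernels, which remains positive semi-definite and can therefore be fed to an SVM that accepts a precomputed kernel:

```python
from math import exp

def linear_k(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf_k(x, y, gamma=0.5):
    return exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def multikernel_gram(xs, mu=0.5, gamma=0.5):
    """Gram matrix of K = mu*K_linear + (1-mu)*K_rbf, with 0 <= mu <= 1."""
    return [[mu * linear_k(x, y) + (1 - mu) * rbf_k(x, y, gamma)
             for y in xs] for x in xs]

gram = multikernel_gram([[1.0, 0.0], [0.0, 1.0]])
print(gram[0][0], gram[0][1])  # unit diagonal, symmetric off-diagonal
```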

  12. Word2Vec inversion and traditional text classifiers for phenotyping lupus.

    Science.gov (United States)

    Turner, Clayton A; Jacobs, Alexander D; Marques, Cassios K; Oates, James C; Kamen, Diane L; Anderson, Paul E; Obeid, Jihad S

    2017-08-22

    Identifying patients with certain clinical criteria based on manual chart review of doctors' notes is a daunting task given the massive amounts of text notes in the electronic health records (EHR). This task can be automated using text classifiers based on Natural Language Processing (NLP) techniques along with pattern recognition machine learning (ML) algorithms. The aim of this research is to evaluate the performance of traditional classifiers for identifying patients with Systemic Lupus Erythematosus (SLE) in comparison with a newer Bayesian word vector method. We obtained clinical notes for patients with SLE diagnosis along with controls from the Rheumatology Clinic (662 total patients). Sparse bag-of-words (BOWs) and Unified Medical Language System (UMLS) Concept Unique Identifiers (CUIs) matrices were produced using NLP pipelines. These matrices were subjected to several different NLP classifiers: neural networks, random forests, naïve Bayes, support vector machines, and Word2Vec inversion, a Bayesian inversion method. Performance was measured by calculating accuracy and area under the Receiver Operating Characteristic (ROC) curve (AUC) of a cross-validated (CV) set and a separate testing set. We calculated the accuracy of the ICD-9 billing codes as a baseline to be 90.00% with an AUC of 0.900, the shallow neural network with CUIs to be 92.10% with an AUC of 0.970, the random forest with BOWs to be 95.25% with an AUC of 0.994, the random forest with CUIs to be 95.00% with an AUC of 0.979, and the Word2Vec inversion to be 90.03% with an AUC of 0.905. Our results suggest that a shallow neural network with CUIs and random forests with both CUIs and BOWs are the best classifiers for this lupus phenotyping task. The Word2Vec inversion method failed to significantly beat the ICD-9 code classification, but yielded promising results. This method does not require explicit features and is more adaptable to non-binary classification tasks. The Word2Vec inversion is

  13. Short text sentiment classification based on feature extension and ensemble classifier

    Science.gov (United States)

    Liu, Yang; Zhu, Xie

    2018-05-01

    With the rapid development of Internet social media, mining the emotional tendencies of short texts on the Internet to acquire useful information has attracted the attention of researchers. The commonly used approaches can be divided into rule-based classification and statistical machine learning methods. Although micro-blog sentiment analysis has made good progress, shortcomings remain, such as limited accuracy and a strong dependence of the classification effect on the available features. Addressing the characteristics of Chinese short texts, such as limited information, sparse features, and diverse expressions, this paper expands the original text by mining related semantic information from comments, forwarding, and other related information. First, Word2vec is used to compute word similarity to extend the feature words. An ensemble classifier composed of SVM, KNN and HMM then analyzes the sentiment of micro-blog short texts. The experimental results show that the proposed method makes good use of comment and forwarding information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are all improved.
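
The two steps of the method, extending sparse short-text features with similar words and combining classifiers by voting, can be sketched generically. The similarity table and classifier outputs below are invented; in the paper the similar words come from Word2vec and the votes from trained SVM, KNN and HMM models:

```python
from collections import Counter

# Stand-in for Word2vec nearest neighbours:
SIMILAR = {"great": ["awesome", "nice"], "bad": ["awful"]}

def extend_features(tokens):
    """Append similar words so very short texts become less sparse."""
    return tokens + [s for t in tokens for s in SIMILAR.get(t, [])]

def majority_vote(votes):
    """votes: {classifier_name: predicted_label} for one text."""
    label, _ = Counter(votes.values()).most_common(1)[0]
    return label

print(extend_features(["great", "movie"]))
print(majority_vote({"svm": "positive", "knn": "positive", "hmm": "negative"}))
```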

  14. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    Science.gov (United States)

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury in large batches of narratives, such as workers' compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes with single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers' compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) for the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-BIGRAM = SVM (accepting a machine code only when the single-word Naïve Bayes, bi-gram Naïve Bayes and SVM all agreed) had very high performance (0.93 overall sensitivity/positive predictive value) and high accuracy across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
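
Both filtering strategies compared in the study, prediction-strength filtering and between-algorithm agreement, can be sketched as below. The narrative identifiers, event codes and confidence values are invented for illustration:

```python
def route_by_confidence(items, review_fraction=0.3):
    """items: (narrative_id, machine_code, confidence) triples.
    Sends the weakest predictions to human review, keeps the rest."""
    ranked = sorted(items, key=lambda item: item[2])
    cut = int(len(ranked) * review_fraction)
    review = [nid for nid, _, _ in ranked[:cut]]
    accept = {nid: code for nid, code, _ in ranked[cut:]}
    return review, accept

def route_by_agreement(pred_a, pred_b, pred_c):
    """Accept a machine code only when all three classifiers agree."""
    accept, review = {}, []
    for nid in pred_a:
        if pred_a[nid] == pred_b[nid] == pred_c[nid]:
            accept[nid] = pred_a[nid]
        else:
            review.append(nid)
    return review, accept

items = [(1, "fall", 0.91), (2, "struck by", 0.42), (3, "overexertion", 0.95),
         (4, "fall", 0.18), (5, "caught in", 0.73)]
review, accept = route_by_confidence(items, review_fraction=0.4)
print(review)  # the two least confident narratives: [4, 2]
```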

  15. General Text-Chunk Localization in Scene Images using a Codebook-based Classifier

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; Pruksasri, Potchara

    Text localization is a main portal to character recognition in scene images. The detection of text regions in an image is a great challenge. However, many locating methods use a bottom-up scheme that consumes relatively high computation to identify the text regions. Therefore, this paper presents a

  16. Classifying unstructed textual data using the Product Score Model: an alternative text mining algorithm

    NARCIS (Netherlands)

    He, Qiwei; Veldkamp, Bernard P.; Eggen, T.J.H.M.; Veldkamp, B.P.

    2012-01-01

    Unstructured textual data such as students’ essays and life narratives can provide helpful information in educational and psychological measurement, but often contain irregularities and ambiguities, which creates difficulties in analysis. Text mining techniques that seek to extract useful

  17. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    Science.gov (United States)

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctors' activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for the codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87), and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Conclusions Machine learning algorithms can classify open-text feedback

  18. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    Science.gov (United States)

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctors' activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for the codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87), and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high

  19. Text mining electronic hospital records to automatically classify admissions against disease: Measuring the impact of linking data sources.

    Science.gov (United States)

    Kocbek, Simon; Cavedon, Lawrence; Martinez, David; Bain, Christopher; Manus, Chris Mac; Haffari, Gholamreza; Zukerman, Ingrid; Verspoor, Karin

    2016-12-01

    Text and data mining play an important role in obtaining insights from Health and Hospital Information Systems. This paper presents a text mining system for detecting admissions marked as positive for several diseases: Lung Cancer, Breast Cancer, Colon Cancer, Secondary Malignant Neoplasm of Respiratory and Digestive Organs, Multiple Myeloma and Malignant Plasma Cell Neoplasms, Pneumonia, and Pulmonary Embolism. We specifically examine the effect of linking multiple data sources on text classification performance. Support Vector Machine classifiers are built for eight data source combinations, and evaluated using the metrics of Precision, Recall and F-Score. Sub-sampling techniques are used to address unbalanced datasets of medical records. We use radiology reports as an initial data source and add other sources, such as pathology reports and patient and hospital admission data, in order to assess the research question regarding the impact of the value of multiple data sources. Statistical significance is measured using the Wilcoxon signed-rank test. A second set of experiments explores aspects of the system in greater depth, focusing on Lung Cancer. We explore the impact of feature selection; analyse the learning curve; examine the effect of restricting admissions to only those containing reports from all data sources; and examine the impact of reducing the sub-sampling. These experiments provide better understanding of how to best apply text classification in the context of imbalanced data of variable completeness. Radiology questions plus patient and hospital admission data contribute valuable information for detecting most of the diseases, significantly improving performance when added to radiology reports alone or to the combination of radiology and pathology reports. Overall, linking data sources significantly improved classification performance for all the diseases examined. However, there is no single approach that suits all scenarios; the choice of the

  20. Surveillance

    DEFF Research Database (Denmark)

    Albrechtslund, Anders; Coeckelbergh, Mark; Matzner, Tobias

    Studying surveillance involves raising questions about the very nature of concepts such as information, technology, identity, space and power. Besides the maybe all too obvious ethical issues often discussed with regard to surveillance, there are several other angles and approaches that we should like to encourage. Therefore, our panel will focus on the philosophical, yet non-ethical issues of surveillance in order to stimulate an intense debate with the audience on the ethical implications of our enquiries. We also hope to provide a broader and deeper understanding of surveillance.

  1. Performance of svm, k-nn and nbc classifiers for text-independent speaker identification with and without modelling through merging models

    Directory of Open Access Journals (Sweden)

    Yussouf Nahayo

    2016-04-01

    Full Text Available This paper proposes methods of robust text-independent speaker identification based on the Gaussian Mixture Model (GMM). We implemented a combination of the GMM model with a set of classifiers: Support Vector Machine (SVM), K-Nearest Neighbour (K-NN), and Naive Bayes Classifier (NBC). To improve the identification rate, we developed a combination of hybrid systems using a validation technique. The experiments were performed on the dialect DR1 of the TIMIT corpus. The results showed better performance for the developed technique compared to the individual techniques.

  2. Identifying influenza-like illness presentation from unstructured general practice clinical narrative using a text classifier rule-based expert system versus a clinical expert.

    Science.gov (United States)

    MacRae, Jayden; Love, Tom; Baker, Michael G; Dowell, Anthony; Carnachan, Matthew; Stubbe, Maria; McBain, Lynn

    2015-10-06

    We designed and validated a rule-based expert system to identify influenza-like illness (ILI) from routinely recorded general practice clinical narrative to aid a larger retrospective research study into the impact of the 2009 influenza pandemic in New Zealand. Rules were assessed using pattern-matching heuristics on routine clinical narrative. The system was trained using data from 623 clinical encounters and validated using a clinical expert as a gold standard against a mutually exclusive set of 901 records. We calculated a 98.2 % specificity and 90.2 % sensitivity across an ILI incidence of 12.4 % measured against clinical expert classification. Peak problem list identification of ILI by clinical coding in any month was 9.2 % of all detected ILI presentations. Our system addressed an unusual problem domain for clinical narrative classification: notational, unstructured, clinician-entered information in a community care setting. It performed well compared with other approaches and domains. It has potential applications in real-time surveillance of disease, and in assisted problem list coding for clinicians. Our system identified ILI presentation with sufficient accuracy for use at a population level in the wider research study. The peak coding of 9.2 % illustrated the need for automated coding of unstructured narrative in our study.
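
A pattern-matching heuristic of the general kind described can be sketched with regular expressions. The patterns and the crude negation check below are invented for illustration; the study's actual rule set was trained on 623 encounters and validated against a clinical expert:

```python
import re

# Illustrative patterns, not the validated rule set:
ILI_PATTERNS = [
    re.compile(r"\bfever\b.*\b(cough|sore throat)\b", re.IGNORECASE | re.DOTALL),
    re.compile(r"\b(influenza|flu)[\s-]*like\b", re.IGNORECASE),
]
NEGATION = re.compile(r"\b(no|denies|without)\s+(fever|cough)\b", re.IGNORECASE)

def classify_ili(note: str) -> bool:
    """True if the free-text note matches an ILI pattern and is not
    trivially negated. (Letting a single negated symptom veto the whole
    note is a simplification that real systems refine.)"""
    if NEGATION.search(note):
        return False
    return any(pattern.search(note) for pattern in ILI_PATTERNS)

print(classify_ili("fever 38.5C, dry cough, myalgia"))   # True
print(classify_ili("denies fever, routine follow-up"))   # False
```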

  3. Classifying Microorganisms

    DEFF Research Database (Denmark)

    Sommerlund, Julie

    2006-01-01

    This paper describes the coexistence of two systems for classifying organisms and species: a dominant genetic system and an older naturalist system. The former classifies species and traces their evolution on the basis of genetic characteristics, while the latter employs physiological characteristics. The coexistence of the classification systems does not lead to a conflict between them. Rather, the systems seem to co-exist in different configurations, through which they are complementary, contradictory and inclusive in different situations, sometimes simultaneously. The systems come...

  4. Surveillance and Critical Theory

    Directory of Open Access Journals (Sweden)

    Christian Fuchs

    2015-09-01

    Full Text Available In this comment, the author reflects on surveillance from a critical theory approach, his involvement in surveillance research and projects, and the status of the study of surveillance. The comment ascertains a lack of critical thinking about surveillance, questions the existence of something called “surveillance studies” as opposed to a critical theory of society, and reflects on issues such as Edward Snowden’s revelations, and Foucault and Marx in the context of surveillance.

  5. Carbon classified?

    DEFF Research Database (Denmark)

    Lippert, Ingmar

    2012-01-01

    Using an actor-network theory (ANT) framework, the aim is to investigate the actors who bring together the elements needed to classify their carbon emission sources and unpack the heterogeneous relations drawn on. Based on an ethnographic study of corporate agents of ecological modernisation over a period of 13 months, this paper provides an exploration of three cases of enacting classification. Drawing on ANT, we problematise the silencing of a range of possible modalities of consumption facts and point to the ontological ethics involved in such performances. In a context of global warming...

  6. Ideology, Critique and Surveillance

    Directory of Open Access Journals (Sweden)

    Heidi Herzogenrath-Amelung

    2013-11-01

    Full Text Available The 2013 revelations concerning global surveillance programmes demonstrate in unprecedented clarity the need for Critical Theory of information and communication technologies (ICTs to address the mechanisms and implications of increasingly global, ubiquitous surveillance. This is all the more urgent because of the dominance of the “surveillance ideology” (the promise of security through surveillance that supports the political economy of surveillance. This paper asks which theoretical arguments and concepts can be useful for philosophically grounding a critique of this surveillance ideology. It begins by examining how the surveillance ideology works through language and introduces the concept of the ‘ideological packaging’ of ICTs to show how rhetoric surrounding the implementation of surveillance technologies reinforces the surveillance ideology. It then raises the problem of how ideology-critique can work if it relies on language itself and argues that Martin Heidegger’s philosophy can make a useful contribution to existing critical approaches to language.

  7. Intelligent Garbage Classifier

    Directory of Open Access Journals (Sweden)

    Ignacio Rodríguez Novelle

    2008-12-01

    Full Text Available IGC (Intelligent Garbage Classifier is a system for visual classification and separation of solid waste products. Currently, an important part of the separation effort is based on manual work, from household separation to industrial waste management. Taking advantage of the technologies currently available, a system has been built that can analyze images from a camera and control a robot arm and conveyor belt to automatically separate different kinds of waste.

  8. Influenza surveillance

    Directory of Open Access Journals (Sweden)

    Karolina Bednarska

    2016-04-01

    Full Text Available Influenza surveillance was established in 1947. From this moment the WHO (World Health Organization) has been coordinating international cooperation, with the goal of monitoring influenza virus activity, effective diagnostics of the circulating viruses, and informing society about epidemics or pandemics, as well as about the emergence of new subtypes of influenza virus type A. Influenza surveillance is an important task, because it enables people to prepare themselves for battle with a virus that is constantly mutating, which leads to the circulation of new and often more virulent strains of influenza in the human population. As vaccination is the most effective method of fighting the virus, one of the major tasks of GISRS (the WHO Global Influenza Surveillance and Response System) is developing an optimal antigenic composition of the vaccine for the current epidemic season. The European Influenza Surveillance Network (EISN) has also developed over the years. EISN runs integrated epidemiological and virological influenza surveillance to provide appropriate data to public health experts in member countries, enabling them to undertake relevant activities based on current information about influenza activity. Working in close cooperation with GISRS and EISN are the National Influenza Centres, national institutions designated by the Ministry of Health in each country.

  9. Fingerprint prediction using classifier ensembles

    CSIR Research Space (South Africa)

    Molale, P

    2011-11-01

    Full Text Available ... logistic discrimination (LgD), k-nearest neighbour (k-NN), artificial neural network (ANN), association rules (AR), decision tree (DT), naive Bayes classifier (NBC) and the support vector machine (SVM). The performance of several multiple classifier systems...

  10. Nutritional surveillance.

    Science.gov (United States)

    Mason, J B; Mitchell, J T

    1983-01-01

    The concept of nutritional surveillance is derived from disease surveillance, and means "to watch over nutrition, in order to make decisions that lead to improvements in nutrition in populations". Three distinct objectives have been defined for surveillance systems, primarily in relation to problems of malnutrition in developing countries: to aid long-term planning in health and development; to provide input for programme management and evaluation; and to give timely warning of the need for intervention to prevent critical deteriorations in food consumption. Decisions affecting nutrition are made at various administrative levels, and the uses of different types of nutritional surveillance information can be related to national policies, development programmes, public health and nutrition programmes, and timely warning and intervention programmes. The information should answer specific questions, for example concerning the nutritional status and trends of particular population groups. Defining the uses and users of the information is the first essential step in designing a system; this is illustrated with reference to agricultural and rural development planning, the health sector, and nutrition and social welfare programmes. The most usual data outputs are nutritional outcome indicators (e.g., prevalence of malnutrition among preschool children), disaggregated by descriptive or classifying variables, of which the commonest is simply administrative area. Often, additional "status" indicators, such as quality of housing or water supply, are presented at the same time. On the other hand, timely warning requires earlier indicators of the possibility of nutritional deterioration, and agricultural indicators are often the most appropriate. Data come from two main types of source: administrative (e.g., clinics and schools) and household sample surveys. Each source has its own advantages and disadvantages: for example, administrative data often already exist, and can be...

  11. Redefining syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Rebecca Katz

    2011-12-01

    Full Text Available With growing concerns about international spread of disease and expanding use of early disease detection surveillance methods, the field of syndromic surveillance has received increased attention over the last decade. The purpose of this article is to clarify the various meanings that have been assigned to the term syndromic surveillance and to propose a refined categorization of the characteristics of these systems. Existing literature and conference proceedings were examined on syndromic surveillance from 1998 to 2010, focusing on low- and middle-income settings. Based on the 36 unique definitions of syndromic surveillance found in the literature, five commonly accepted principles of syndromic surveillance systems were identified, as well as two fundamental categories: specific and non-specific disease detection. Ultimately, the proposed categorization of syndromic surveillance distinguishes between systems that focus on detecting defined syndromes or outcomes of interest and those that aim to uncover non-specific trends that suggest an outbreak may be occurring. By providing an accurate and comprehensive picture of this field’s capabilities, and differentiating among system types, a unified understanding of the syndromic surveillance field can be developed, encouraging the adoption, investment in, and implementation of these systems in settings that need bolstered surveillance capacity, particularly low- and middle-income countries.

  12. Classifying medical relations in clinical text via convolutional neural networks.

    Science.gov (United States)

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records, and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and that our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.

  13. Introduction to surveillance studies

    CERN Document Server

    Petersen, JK

    2012-01-01

    Contents: Introduction & Overview; Introduction; Brief History of Surveillance Technologies & Techniques; Optical Surveillance; Aerial Surveillance; Audio Surveillance; Radio-Wave Surveillance; Global Positioning Systems; Sensors; Computers & the Internet; Data Cards; Biochemical Surveillance; Animal Surveillance; Biometrics; Genetics; Practical Considerations; Prevalence of Surveillance; Effectiveness of Surveillance; Freedom & Privacy Issues; Constitutional Freedoms; Privacy Safeguards & Intrusions; Resources; References; Glossary; Index.

  14. Surveillance Culture

    DEFF Research Database (Denmark)

    2017-01-01

    What does it mean to live in a world full of surveillance? In this documentary film, we take a look at everyday life in Denmark and how surveillance technologies and practices influence our norms and social behaviour. Researched and directed by Btihaj Ajana and Anders Albrechtslund....

  15. The Copyright Surveillance Industry

    Directory of Open Access Journals (Sweden)

    Mike Zajko

    2015-09-01

    Full Text Available Creative works are now increasingly distributed as digital “content” through the internet, and copyright law has created powerful incentives to monitor and control these flows. This paper analyzes the surveillance industry that has emerged as a result. Copyright surveillance systems identify copyright infringement online and identify persons to hold responsible for infringing acts. These practices have raised fundamental questions about the nature of identification and attribution on the internet, as well as the increasing use of algorithms to make legal distinctions. New technologies have threatened the profits of some media industries through copyright infringement, but also enabled profitable forms of mass copyright surveillance and enforcement. Rather than a system of perfect control, copyright enforcement continues to be selective and uneven, but its broad reach results in systemic harm and provides opportunities for exploitation. It is only by scrutinizing copyright surveillance practices and copyright enforcement measures that we can evaluate these consequences.

  16. Sanitary surveillance and bioethics

    Directory of Open Access Journals (Sweden)

    Volnei Garrafa

    2017-08-01

    Full Text Available Regulatory practices in the field of health surveillance are indispensable. The aim of this study is to show, taking the Brazilian National Surveillance Agency, the governing body of sanitary surveillance in Brazil, as a reference, that bioethics provides public bodies with a series of theoretical tools from the field of applied ethics for the proper exercise and control of these practices. To that end, the work uses two references from bioethics to develop a comparative and supportive analysis of regulatory activities in the field of health surveillance: the Universal Declaration on Bioethics and Human Rights of Unesco and the theory of intervention bioethics. We conclude that organizations and staff working with regulatory activities can take advantage of the principles and frameworks proposed by bioethics, especially those related to the Declaration and the theory of intervention bioethics, the latter defined by the observation and use of the principles of prudence, precaution, protection and prevention.

  17. Surveillance Pleasures

    DEFF Research Database (Denmark)

    Albrechtslund, Anders

    The notorious intensification and digitalization of surveillance technologies and practices in today’s society has brought about numerous changes. These changes have been widely noticed, described and discussed across many academic disciplines. However, the contexts of entertainment, play...

  18. Classifying Returns as Extreme

    DEFF Research Database (Denmark)

    Christiansen, Charlotte

    2014-01-01

    I consider extreme returns for the stock and bond markets of 14 EU countries using two classification schemes: One, the univariate classification scheme from the previous literature that classifies extreme returns for each market separately, and two, a novel multivariate classification scheme tha...

  19. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering to visualize the natural separation of the sample. The package can also be used for automatic tuning of the parameters of the methods used (for example, the number of hidden neurons or the binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information about queried stars. It can natively connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and the command line UI, the program can be used through a web interface. Users can create jobs for "training" methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifiers and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  20. Classified facilities for environmental protection

    International Nuclear Information System (INIS)

    Anon.

    1993-02-01

    The legislation on classified facilities governs most dangerous or polluting industries and fixed activities. It rests on the law of 9 July 1976 concerning facilities classified for environmental protection and its application decree of 21 September 1977. This legislation, the general texts of which appear in this volume 1, aims to prevent all risks and harmful effects coming from an installation (air, water or soil pollution, wastes, even aesthetic damage). The polluting or dangerous activities are defined in a list called the nomenclature, which subjects facilities to a declaration or an authorization procedure. The authorization is delivered by the prefect at the end of an open and contradictory procedure following a public inquiry. In addition, facilities can be subjected to technical regulations fixed by the Environment Minister (volume 2) or by the prefect for facilities subject to declaration (volume 3). (A.B.)

  1. Development of a Machine Learning Algorithm for the Surveillance of Autism Spectrum Disorder.

    Directory of Open Access Journals (Sweden)

    Matthew J Maenner

    Full Text Available The Autism and Developmental Disabilities Monitoring (ADDM) Network conducts population-based surveillance of autism spectrum disorder (ASD) among 8-year-old children in multiple US sites. To classify ASD, trained clinicians review developmental evaluations collected from multiple health and education sources to determine whether the child meets the ASD surveillance case criteria. The number of evaluations collected has dramatically increased since the year 2000, challenging the resources and timeliness of the surveillance system. We developed and evaluated a machine learning approach to classify case status in ADDM using words and phrases contained in children's developmental evaluations. We trained a random forest classifier using data from the 2008 Georgia ADDM site, which included 1,162 children with 5,396 evaluations (601 children met ADDM ASD criteria using standard ADDM methods). The classifier used the words and phrases from the evaluations to predict ASD case status. We evaluated its performance on the 2010 Georgia ADDM surveillance data (1,450 children with 9,811 evaluations; 754 children met ADDM ASD criteria). We also estimated ASD prevalence using predictions from the classification algorithm. Overall, the machine learning approach predicted ASD case statuses that were 86.5% concordant with the clinician-determined case statuses (84.0% sensitivity, 89.4% predictive value positive). The area under the resulting receiver-operating characteristic curve was 0.932. Algorithm-derived ASD "prevalence" was 1.46% compared to the published (clinician-determined) estimate of 1.55%. Using only the text contained in developmental evaluations, a machine learning algorithm was able to discriminate between children that do and do not meet ASD surveillance criteria at one surveillance site.
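The evaluation metrics reported above follow directly from confusion-matrix counts. As a sketch, the cell counts below are a reconstruction chosen to be consistent with the abstract's figures (754 true cases, 86.5% concordance, 84.0% sensitivity, 89.4% PPV); they are not published values:

```python
def classification_metrics(tp, fp, fn, tn):
    """Confusion-matrix summary of the kind used in surveillance evaluations."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),      # recall on clinician-confirmed cases
        "ppv": tp / (tp + fp),              # predictive value positive
        "concordance": (tp + tn) / total,   # overall agreement with clinicians
    }

# Reconstructed counts (illustrative): 633 + 121 = 754 true cases.
m = classification_metrics(tp=633, fp=75, fn=121, tn=621)
print(round(m["sensitivity"], 3), round(m["ppv"], 3))  # → 0.84 0.894
```

Note that the 1.46% "prevalence" figure is computed over the surveillance population, not over the 1,450 reviewed children, so it cannot be derived from these counts alone.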

  2. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  3. Critical Surveillance Studies in the Information Society

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2011-11-01

    Full Text Available The overall aim of this paper is to clarify how we can theorize and systemize economic surveillance. Surveillance studies scholars like David Lyon stress that forms of economic surveillance, such as monitoring consumers or the workplace, are central aspects of surveillance societies. The approach that is advanced in this work recognizes the importance of the role of the economy in contemporary surveillance societies. The paper at hand constructs theoretically founded typologies in order to systemize the existing literature of surveillance studies and to analyze examples of surveillance. Therefore, it is mainly a theoretical approach combined with illustrative examples. This contribution contains a systematic discussion of the state of the art of surveillance and clarifies how different notions treat economic aspects of surveillance. In this work it is argued that the existing literature is insufficient for studying economic surveillance. In contrast, a typology of surveillance in the modern economy, based on the foundations of a political economy approach, allows a systematic analysis of economic surveillance on the basis of current developments on the Internet. Finally, some political recommendations are drawn in order to overcome economic surveillance. This contribution can be fruitful for scholars who want to undertake a systematic analysis of surveillance in the modern economy and who want to study the field of surveillance critically.

  4. Combining multiple classifiers for age classification

    CSIR Research Space (South Africa)

    Van Heerden, C

    2009-11-01

    Full Text Available The authors compare several different classifier combination methods on a single task, namely speaker age classification. This task is well suited to combination strategies, since significantly different feature classes are employed. Support vector...

  5. Surveillance Angels

    NARCIS (Netherlands)

    Rothkrantz, L.J.M.

    2014-01-01

    The use of sensor networks has been proposed for military surveillance and environmental monitoring applications. Those systems are composed of a heterogeneous set of sensors to observe the environment. In centralised systems the observed data will be conveyed to the control room to process the

  6. Text Mining.

    Science.gov (United States)

    Trybula, Walter J.

    1999-01-01

    Reviews the state of research in text mining, focusing on newer developments. The intent is to describe the disparate investigations currently included under the term text mining and provide a cohesive structure for these efforts. A summary of research identifies key organizations responsible for pushing the development of text mining. A section…

  7. Stack filter classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers, and it shows that the approach is interesting from both a theoretical and a practical perspective.

  8. Hybrid Neuro-Fuzzy Classifier Based On Nefclass Model

    Directory of Open Access Journals (Sweden)

    Bogdan Gliwa

    2011-01-01

    Full Text Available The paper presents a hybrid neuro-fuzzy classifier, based on the NEFCLASS model, which was modified. The presented classifier was compared to popular classifiers: neural networks and k-nearest neighbours. The efficiency of the modifications in the classifier was compared with the methods used in the original NEFCLASS model (learning methods). The accuracy of the classifier was tested using 3 datasets from the UCI Machine Learning Repository: iris, wine and breast cancer Wisconsin. Moreover, the influence of ensemble classification methods on classification accuracy is presented.

  9. Air surveillance

    International Nuclear Information System (INIS)

    Patton, G.W.

    1995-01-01

    This section of the 1994 Hanford Site Environmental Report summarizes the air surveillance and monitoring programs currently in operation at that Hanford Site. Atmospheric releases of pollutants from Hanford to the surrounding region are a potential source of human exposure. For that reason, both radioactive and nonradioactive materials in air are monitored at a number of locations. The influence of Hanford emissions on local radionuclide concentrations was evaluated by comparing concentrations measured at distant locations within the region to concentrations measured at the Site perimeter. This section discusses sample collection, analytical methods, and the results of the Hanford air surveillance program. A complete listing of all analytical results summarized in this section is reported separately by Bisping (1995)

  10. Air surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Patton, G.W.

    1995-06-01

    This section of the 1994 Hanford Site Environmental Report summarizes the air surveillance and monitoring programs currently in operation at that Hanford Site. Atmospheric releases of pollutants from Hanford to the surrounding region are a potential source of human exposure. For that reason, both radioactive and nonradioactive materials in air are monitored at a number of locations. The influence of Hanford emissions on local radionuclide concentrations was evaluated by comparing concentrations measured at distant locations within the region to concentrations measured at the Site perimeter. This section discusses sample collection, analytical methods, and the results of the Hanford air surveillance program. A complete listing of all analytical results summarized in this section is reported separately by Bisping (1995).

  11. Rinderpest surveillance

    International Nuclear Information System (INIS)

    2003-01-01

    Rinderpest is probably the most lethal virus disease of cattle and buffalo and can destroy whole populations, damaging economies, undermining food security and ruining the livelihoods of farmers and pastoralists. The disease can be eradicated by vaccination and control of livestock movement. The Department of Technical Co-operation is sponsoring a programme, with technical support from the Joint FAO/IAEA Division, to provide advice, training and materials to thirteen states through the 'Support for Rinderpest Surveillance in West Asia' project. (IAEA)

  12. Health surveillance

    International Nuclear Information System (INIS)

    1981-01-01

    The Code includes a number of requirements for the health surveillance of employees associated with the mining and milling of radioactive ores. This guideline is particularly directed at determining the level of fitness of employees and prospective employees, detecting any symptom which might contraindicate exposure to the environment encountered in mine/mill situations, examination of any employee who may have been exposed to radiation in excess of defined limits and the accumulation and provision of data on the health of employees

  13. A CLASSIFIER SYSTEM USING SMOOTH GRAPH COLORING

    Directory of Open Access Journals (Sweden)

    JORGE FLORES CRUZ

    2017-01-01

    Full Text Available Unsupervised classifiers enable clustering with little or no human intervention, so it is desirable to group the set of items with as little data processing as possible. This paper proposes an unsupervised classifier system based on a soft graph coloring model. The method was tested on classic instances from the literature and the results obtained were compared with classifications made with human intervention, yielding results as good as or better than supervised classifiers, and sometimes providing alternative classifications that consider additional information that humans did not consider.

  14. Naive Bayesian classifiers for multinomial features: a theoretical analysis

    CSIR Research Space (South Africa)

    Van Dyk, E

    2007-11-01

    Full Text Available The authors investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density...
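For context, a multinomial naive Bayes classifier of the kind analyzed above can be sketched in a few lines. This is a generic textbook implementation with Laplace smoothing, not the authors' analytical error model:

```python
import math
from collections import Counter, defaultdict

class MultinomialNB:
    """Minimal multinomial naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        # docs: lists of tokens; labels: class label per document
        self.vocab = set(w for d in docs for w in d)
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        for d, y in zip(docs, labels):
            self.word_counts[y].update(d)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        v = len(self.vocab)
        n = sum(self.class_counts.values())
        for y, cc in self.class_counts.items():
            lp = math.log(cc / n)  # log prior P(class)
            total = sum(self.word_counts[y].values())
            for w in doc:
                # smoothed log likelihood P(word | class)
                lp += math.log((self.word_counts[y][w] + 1) / (total + v))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

docs = [["free", "prize", "win"], ["meeting", "agenda"], ["win", "cash"]]
labels = ["spam", "ham", "spam"]
nb = MultinomialNB().fit(docs, labels)
print(nb.predict(["win", "prize"]))  # → spam
```

The error estimates in the cited report concern exactly this kind of model: how often the argmax over smoothed class log-probabilities disagrees with the true class.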

  15. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  16. Health surveillance of radiological work

    International Nuclear Information System (INIS)

    Pauw, H.; Vliet, J.V.D.; Zuidema, H.

    1988-01-01

    Shielding x-ray devices and issuing film badges to radiological workers in 1936 can be considered the start of radiological protection in the Philips enterprises in the Netherlands. Shielding and equipment were constantly improved based upon the dosimetry results of the film badges. The problem of radioactive waste led to the foundation of a central Philips committee for radiological protection in 1956, which in 1960 also issued an internal license system in order to regulate the proper precautions to be taken: workplace design and layout, technological provisions and working procedures. An evaluation of all radiological work in 1971 showed that a stricter health surveillance program was needed to follow up the precautions issued by the license. On the one hand a health surveillance program was established, and on the other hand all types of radiological work were classified. In this way an obligatory and optimal health surveillance program was issued for each type of radiological work.

  17. Attaching Hollywood to a Surveillant Assemblage: Normalizing Discourses of Video Surveillance

    Directory of Open Access Journals (Sweden)

    Randy K Lippert

    2015-10-01

    Full Text Available This article examines video surveillance images in Hollywood film. It moves beyond previous accounts of video surveillance in relation to film by theoretically situating the use of these surveillance images in a broader “surveillant assemblage”. To this end, scenes from a sample of thirty-five (35) films of several genres are examined to discern dominant discourses and how they lend themselves to the normalization of video surveillance. Four discourses are discovered and elaborated by providing examples from Hollywood films. While the films provide video surveillance with a positive association, this is not without nuance and limitations. Thus, it is found that some forms of resistance to video surveillance are shown, while its deterrent effect is not. It is ultimately argued that Hollywood film is becoming attached to a video surveillant assemblage discursively, through these normalizing discourses, as well as structurally, to the extent that actual video surveillance technology is used to produce the images.

  18. Strategies to Increase Accuracy in Text Classification

    NARCIS (Netherlands)

    D. Blommesteijn (Dennis)

    2014-01-01

    Text classification via supervised learning involves various steps, from processing raw data and feature extraction to training and validating classifiers. Within these steps, implementation decisions are critical to the resulting classifier accuracy. This paper contains a report of the...

  19. Video Sensor Architecture for Surveillance Applications

    Directory of Open Access Journals (Sweden)

    José E. Simó

    2012-02-01

    Full Text Available This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  20. 75 FR 37253 - Classified National Security Information

    Science.gov (United States)

    2010-06-28

    ... ``Secret.'' (3) Each interior page of a classified document shall be marked at the top and bottom either... ``(TS)'' for Top Secret, ``(S)'' for Secret, and ``(C)'' for Confidential will be used. (2) Portions... from the informational text. (1) Conspicuously place the overall classification at the top and bottom...

  1. Data characteristics that determine classifier performance

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2006-11-01

    Full Text Available available at [11]. The kNN uses a LinearNN nearest neighbour search algorithm with an Euclidean distance metric [8]. The optimal k value is determined by performing 10-fold cross-validation. An optimal k value between 1 and 10 is used for Experiments 1... classifiers. 10-fold cross-validation is used to evaluate and compare the performance of the classifiers on the different data sets. 3.1. Artificial data generation Multivariate Gaussian distributions are used to generate artificial data sets. We use d...

  2. History of trichinellosis surveillance

    Directory of Open Access Journals (Sweden)

    Blancou J.

    2001-06-01

Full Text Available The origin of trichinellosis, which existed in ancient times as testified by the discovery of parasite larvae on an Egyptian mummy, unfolded in several stages: discovery of encapsulated larvae (in the 1820s), identification and scientific description of these larvae (Paget, Owen, 1835), followed by experimental infestations of animals (dogs, pigs, rabbits, mice) or of humans from 1850. The main occurrences of trichinellosis were followed with particular attention in Europe (Germany, Denmark, France, etc.) and in the United States of America at the end of the XIXth century. They affected numerous domestic animal species (pigs, horses, etc.) or wildlife and humans. Germany paid the heaviest toll with regard to the disease in humans, between 1860 and 1880, with several thousands of patients and more than 500 deaths. Different trichinellosis surveillance systems were set up in the relevant countries in the 1860s. In humans, this surveillance was carried out on affected living patients by a biopsy of the biceps muscles and subsequently by an analysis of eosinophilia (1895). In animals, surveillance was for a long time solely based on post-mortem examination of the muscles of the affected animals. This method was used for the first time in 1863 in Germany, and from the 1890s, on several hundreds of thousands of pigs in Europe or in the United States of America.

  3. Classified

    CERN Multimedia

    Computer Security Team

    2011-01-01

In the last issue of the Bulletin, we discussed recent implications for privacy on the Internet. But privacy of personal data is just one facet of data protection. Confidentiality is another one. However, confidentiality and data protection are often perceived as not relevant in the academic environment of CERN.   But think twice! At CERN, your personal data, e-mails, medical records, financial and contractual documents, MARS forms, group meeting minutes (and of course your password!) are all considered to be sensitive, restricted or even confidential. And this is not all. Physics results, in particular when preliminary and pending scrutiny, are sensitive, too. Just recently, an ATLAS collaborator copy/pasted the abstract of an ATLAS note onto an external public blog, despite the fact that this document was clearly marked as an "Internal Note". Such an act was not only embarrassing to the ATLAS collaboration, but also had a negative impact on CERN’s reputation --- i...

  4. Who is Surveilling Whom?

    DEFF Research Database (Denmark)

    Mortensen, Mette

    2014-01-01

This article concerns the particular form of counter-surveillance termed “sousveillance”, which aims to turn surveillance back on the institutions responsible for surveillance. Drawing on the theoretical perspectives of “mediatization” and “aerial surveillance,” the article studies WikiLeaks’ publication

  5. Reinforcement Learning Based Artificial Immune Classifier

    Directory of Open Access Journals (Sweden)

    Mehmet Karakose

    2013-01-01

Full Text Available One of the widely used methods for classification, which is a decision-making process, is artificial immune systems. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with supervised/unsupervised artificial immune systems, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method.

  6. Classifying Sluice Occurrences in Dialogue

    DEFF Research Database (Denmark)

    Baird, Austin; Hamza, Anissa; Hardt, Daniel

    2018-01-01

perform manual annotation with acceptable inter-coder agreement. We build classifier models with Decision Trees and Naive Bayes, with an accuracy of 67%. We deploy a classifier to automatically classify sluice occurrences in OpenSubtitles, resulting in a corpus with 1.7 million occurrences. This will support.... Despite this, the corpus can be of great use in research on sluicing and development of systems, and we are making the corpus freely available on request. Furthermore, we are in the process of improving the accuracy of sluice identification and annotation for the purpose of creating a subsequent version

  7. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighed according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  8. IAEA safeguards and classified materials

    International Nuclear Information System (INIS)

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-01-01

The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raise the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials

  9. Using Unlabeled Data to Improve Text Classification

    National Research Council Canada - National Science Library

    Nigam, Kamal P

    2001-01-01

    .... This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers...

  10. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    Directory of Open Access Journals (Sweden)

    Shehzad Khalid

    2014-01-01

Full Text Available We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes.
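The per-class weight learning idea in this abstract can be illustrated with a small sketch. Here a plain random search stands in for the paper's genetic algorithm, and all data, names, and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def combine(prob_list, W):
    """Weighted per-class vote. prob_list holds one (n_samples, n_classes)
    score array per classifier; W is an (n_classifiers, n_classes) weight
    matrix. Returns the ensemble's predicted labels."""
    stacked = np.stack(prob_list)                    # (m, n, c)
    weighted = (stacked * W[:, None, :]).sum(axis=0)
    return weighted.argmax(axis=1)

def search_weights(prob_list, y, iters=200):
    """Random-search stand-in for the genetic algorithm: keep the per-class
    weight matrix that maximises ensemble accuracy on the training data."""
    m, c = len(prob_list), prob_list[0].shape[1]
    best_W, best_acc = np.ones((m, c)), 0.0
    for _ in range(iters):
        W = rng.random((m, c))
        acc = float(np.mean(combine(prob_list, W) == y))
        if acc > best_acc:
            best_W, best_acc = W, acc
    return best_W, best_acc

# Two toy classifiers, each reliable on only one class: per-class weighting
# lets the ensemble defer to whichever classifier is trustworthy per class.
y = np.array([0, 0, 1, 1])
p1 = np.array([[0.9, 0.1], [0.9, 0.1], [0.6, 0.4], [0.6, 0.4]])
p2 = np.array([[0.4, 0.6], [0.4, 0.6], [0.1, 0.9], [0.1, 0.9]])
best_W, best_acc = search_weights([p1, p2], y)
```

A real genetic algorithm would add crossover and mutation over a population of weight matrices, but the fitness function (ensemble accuracy under per-class weights) is the same.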

  11. Hybrid classifiers methods of data, knowledge, and classifier combination

    CERN Document Server

    Wozniak, Michal

    2014-01-01

This book delivers definite and compact knowledge on how hybridization can help improve the quality of computer classification systems. In order to make readers clearly realize the knowledge of hybridization, this book primarily focuses on introducing the different levels of hybridization and illuminating what problems we will face when dealing with such projects. In the first instance the data and knowledge incorporated in hybridization were the action points, and then a still-growing area of classifier systems known as combined classifiers was considered. This book comprises the aforementioned state-of-the-art topics and the latest research results of the author and his team from the Department of Systems and Computer Networks, Wroclaw University of Technology, including classifiers based on feature space splitting, one-class classification, imbalanced data, and data stream classification.

  12. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  13. A Supervised Multiclass Classifier for an Autocoding System

    Directory of Open Access Journals (Sweden)

    Yukako Toko

    2017-11-01

Full Text Available Classification is often required in various contexts, including in the field of official statistics. In a previous study, we developed a multiclass classifier that can classify short text descriptions with high accuracy. The algorithm borrows the concept of the naïve Bayes classifier and is so simple that its structure is easily understandable. The proposed classifier has the following two advantages. First, the processing times for both learning and classifying are extremely practical. Second, the proposed classifier yields high-accuracy results for a large portion of a dataset. Building on this classifier, we have developed an autocoding system for the Family Income and Expenditure Survey in Japan. While the original system was developed in Perl in order to improve the efficiency of the coding process for short Japanese texts, the proposed system is implemented in the R programming language in order to explore versatility, and is modified to make the system easily applicable to English text descriptions, in consideration of the increasing number of R users in the field of official statistics. We are planning to publish the proposed classifier as an R package. The proposed classifier would be generally applicable to other classification tasks, including coding activities in the field of official statistics, and it would contribute greatly to improving their efficiency.
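The kind of naïve-Bayes-style short-text classifier this record describes can be sketched in a few lines. This is an illustrative multinomial naïve Bayes with Laplace smoothing, not the authors' implementation (which is in R and Perl), and the example categories are invented:

```python
import math
from collections import Counter, defaultdict

class TinyNB:
    """Minimal multinomial naive Bayes for short text descriptions
    (an illustrative sketch, not the paper's classifier)."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter(labels)      # class priors (as counts)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for label in self.class_counts:
            lp = math.log(self.class_counts[label] / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # Laplace (add-one) smoothing handles unseen words.
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical expenditure-coding style example.
texts = ["fresh bread and milk", "monthly rent payment",
         "milk eggs bread", "rent for apartment"]
labels = ["food", "housing", "food", "housing"]
clf = TinyNB().fit(texts, labels)
```

Both training (counting) and prediction (a few log-additions per word) are linear in the input size, which reflects the "extremely practical" processing times the abstract claims.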

  14. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.

  15. Informatics enables public health surveillance

    Directory of Open Access Journals (Sweden)

    Scott J. N McNabb

    2017-01-01

Full Text Available Over the past decade, the world has radically changed. New advances in information and communication technologies (ICT) connect the world in ways never imagined. Public health informatics (PHI), leveraged for public health surveillance (PHS), can enable, enhance, and empower essential PHS functions (i.e., detection, reporting, confirmation, analyses, feedback, response). However, the tail doesn't wag the dog; as such, ICT cannot (and should not) drive public health surveillance strengthening. Rather, ICT can serve PHS to more effectively empower core functions. In this review, we explore promising ICT trends for prevention, detection, and response, laboratory reporting, push notification, analytics, predictive surveillance, and the use of new data sources, while recognizing that it is the people, politics, and policies that most challenge progress in implementing solutions.

  16. Knowledge Uncertainty and Composed Classifier

    Czech Academy of Sciences Publication Activity Database

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

Roč. 1, č. 2 (2007), s. 101-105 ISSN 1998-0140 Institutional research plan: CEZ:AV0Z10750506 Keywords : Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty Subject RIV: IN - Informatics, Computer Science

  17. Correlation Dimension-Based Classifier

    Czech Academy of Sciences Publication Activity Database

    Jiřina, Marcel; Jiřina jr., M.

    2014-01-01

    Roč. 44, č. 12 (2014), s. 2253-2263 ISSN 2168-2267 R&D Projects: GA MŠk(CZ) LG12020 Institutional support: RVO:67985807 Keywords : classifier * multidimensional data * correlation dimension * scaling exponent * polynomial expansion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  18. SOA-surveillance Nederland

    NARCIS (Netherlands)

    Rijlaarsdam J; Bosman A; Laar MJW van de; CIE

    2000-01-01

    In May 1999 a working group was started to evaluate the current surveillance systems for sexually transmitted diseases (STD) and to make suggestions for a renewed effective and efficient STD surveillance system in the Netherlands. The surveillance system has to provide insight into the prevalence

  19. Containment and surveillance devices

    International Nuclear Information System (INIS)

    Campbell, J.W.; Johnson, C.S.; Stieff, L.R.

    The growing acceptance of containment and surveillance as a means to increase safeguards effectiveness has provided impetus to the development of improved surveillance and containment devices. Five recently developed devices are described. The devices include one photographic and two television surveillance systems and two high security seals that can be verified while installed

  20. Mobile phones used for public health surveillance

    Directory of Open Access Journals (Sweden)

    Kebede Deribe

    2011-08-01

    Full Text Available In Darfur, the Ministry of Health, WHO and partners have developed a mobile phone-based infectious disease surveillance system for use where resources and facilities may be limited.

  1. Arabic text classification using Polynomial Networks

    Directory of Open Access Journals (Sweden)

    Mayy M. Al-Tahrawi

    2015-10-01

Full Text Available In this paper, an Arabic statistical learning-based text classification system has been developed using Polynomial Neural Networks. Polynomial Networks have recently been applied to English text classification, but they have never been used for Arabic text classification. In this research, we investigate the performance of Polynomial Networks in classifying Arabic texts. Experiments are conducted on a dataset widely used in Arabic text classification: the Al-Jazeera News dataset. We chose this dataset to enable direct comparisons of the performance of the Polynomial Networks classifier versus other well-known classifiers on this dataset in the literature of Arabic text classification. Results of the experiments show that the Polynomial Networks classifier is competitive with state-of-the-art algorithms in the field of Arabic text classification.

  2. Use of information barriers to protect classified information

    International Nuclear Information System (INIS)

    MacArthur, D.; Johnson, M.W.; Nicholas, N.J.; Whiteson, R.

    1998-01-01

    This paper discusses the detailed requirements for an information barrier (IB) for use with verification systems that employ intrusive measurement technologies. The IB would protect classified information in a bilateral or multilateral inspection of classified fissile material. Such a barrier must strike a balance between providing the inspecting party the confidence necessary to accept the measurement while protecting the inspected party's classified information. The authors discuss the structure required of an IB as well as the implications of the IB on detector system maintenance. A defense-in-depth approach is proposed which would provide assurance to the inspected party that all sensitive information is protected and to the inspecting party that the measurements are being performed as expected. The barrier could include elements of physical protection (such as locks, surveillance systems, and tamper indicators), hardening of key hardware components, assurance of capabilities and limitations of hardware and software systems, administrative controls, validation and verification of the systems, and error detection and resolution. Finally, an unclassified interface could be used to display and, possibly, record measurement results. The introduction of an IB into an analysis system may result in many otherwise innocuous components (detectors, analyzers, etc.) becoming classified and unavailable for routine maintenance by uncleared personnel. System maintenance and updating will be significantly simplified if the classification status of as many components as possible can be made reversible (i.e. the component can become unclassified following the removal of classified objects)

  3. Energy-Efficient Neuromorphic Classifiers.

    Science.gov (United States)

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  4. download full text

    African Journals Online (AJOL)

    Adopting a surveillance system for antibacterial use has therefore become a more realistic ..... Financial support was obtained from the African Poverty Related Infection ... classification and Defined Daily Dose system methodology in Canada.

  5. Microbiological Food Safety Surveillance in China

    Directory of Open Access Journals (Sweden)

    Xiaoyan Pei

    2015-08-01

    Full Text Available Microbiological food safety surveillance is a system that collects data regarding food contamination by foodborne pathogens, parasites, viruses, and other harmful microbiological factors. It helps to understand the spectrum of food safety, timely detect food safety hazards, and provide relevant data for food safety supervision, risk assessment, and standards-setting. The study discusses the microbiological surveillance of food safety in China, and introduces the policies and history of the national microbiological surveillance system. In addition, the function and duties of different organizations and institutions are provided in this work, as well as the generation and content of the surveillance plan, quality control, database, and achievement of the microbiological surveillance of food safety in China.

  6. Intelligent agents for adaptive security market surveillance

    Science.gov (United States)

    Chen, Kun; Li, Xin; Xu, Baoxun; Yan, Jiaqi; Wang, Huaiqing

    2017-05-01

    Market surveillance systems have increasingly gained in usage for monitoring trading activities in stock markets to maintain market integrity. Existing systems primarily focus on the numerical analysis of market activity data and generally ignore textual information. To fulfil the requirements of information-based surveillance, a multi-agent-based architecture that uses agent intercommunication and incremental learning mechanisms is proposed to provide a flexible and adaptive inspection process. A prototype system is implemented using the techniques of text mining and rule-based reasoning, among others. Based on experiments in the scalping surveillance scenario, the system can identify target information evidence up to 87.50% of the time and automatically identify 70.59% of cases depending on the constraints on the available information sources. The results of this study indicate that the proposed information surveillance system is effective. This study thus contributes to the market surveillance literature and has significant practical implications.

  7. 76 FR 34761 - Classified National Security Information

    Science.gov (United States)

    2011-06-14

    ... MARINE MAMMAL COMMISSION Classified National Security Information [Directive 11-01] AGENCY: Marine... Commission's (MMC) policy on classified information, as directed by Information Security Oversight Office... of Executive Order 13526, ``Classified National Security Information,'' and 32 CFR part 2001...

  8. Time series modeling for syndromic surveillance

    Directory of Open Access Journals (Sweden)

    Mandl Kenneth D

    2003-01-01

Full Text Available Abstract Background Emergency department (ED) based syndromic surveillance systems identify abnormally high visit rates that may be an early signal of a bioterrorist attack. For example, an anthrax outbreak might first be detectable as an unusual increase in the number of patients reporting to the ED with respiratory symptoms. Reliably identifying these abnormal visit patterns requires a good understanding of the normal patterns of healthcare usage. Unfortunately, systematic methods for determining the expected number of ED visits on a particular day have not yet been well established. We present here a generalized methodology for developing models of expected ED visit rates. Methods Using time-series methods, we developed robust models of ED utilization for the purpose of defining expected visit rates. The models were based on nearly a decade of historical data at a major metropolitan academic, tertiary care pediatric emergency department. The historical data were fit using trimmed-mean seasonal models, and additional models were fit with autoregressive integrated moving average (ARIMA) residuals to account for recent trends in the data. The detection capabilities of the model were tested with simulated outbreaks. Results Models were built both for overall visits and for respiratory-related visits, classified according to the chief complaint recorded at the beginning of each visit. The mean absolute percentage error of the ARIMA models was 9.37% for overall visits and 27.54% for respiratory visits. A simple detection system based on the ARIMA model of overall visits was able to detect 7-day-long simulated outbreaks of 30 visits per day with 100% sensitivity and 97% specificity. Sensitivity decreased with outbreak size, dropping to 94% for outbreaks of 20 visits per day, and 57% for 10 visits per day, all while maintaining a 97% benchmark specificity.
Conclusions Time series methods applied to historical ED utilization data are an important tool
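A heavily simplified stand-in for this detection approach can be sketched as follows: a day-of-week trimmed-mean baseline (echoing the paper's trimmed-mean seasonal models) with a residual alarm threshold, rather than a full ARIMA model. All counts, seeds, and thresholds are simulated assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two years of simulated daily ED visit counts with a weekly pattern.
days = 730
weekly_means = np.array([110, 100, 95, 95, 100, 120, 130])  # hypothetical day-of-week means
visits = rng.poisson(weekly_means[np.arange(days) % 7]).astype(float)

# Inject a 7-day outbreak of +30 visits/day, as in the simulated evaluation.
outbreak = set(range(400, 407))
visits[list(outbreak)] += 30

def trimmed_mean_baseline(series, trim=0.1):
    """Expected visits per day of week from a trimmed mean over history
    (a crude analogue of a trimmed-mean seasonal model)."""
    expected = np.empty(7)
    for dow in range(7):
        vals = np.sort(series[dow::7])
        k = int(len(vals) * trim)
        expected[dow] = vals[k:len(vals) - k].mean()
    return expected

expected = trimmed_mean_baseline(visits)
residuals = visits - expected[np.arange(days) % 7]
threshold = 2.0 * residuals.std()  # simple alarm rule on baseline residuals
alarm_days = set(np.where(residuals > threshold)[0].tolist())
```

The trimming step keeps the outbreak days themselves from inflating the baseline; an ARIMA layer on the residuals, as in the paper, would additionally absorb recent trends.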

  9. Gearbox Condition Monitoring Using Advanced Classifiers

    Directory of Open Access Journals (Sweden)

    P. Večeř

    2010-01-01

Full Text Available New efficient and reliable methods for gearbox diagnostics are needed in the automotive industry because of the growing demand for production quality. This paper presents the application of two different classifiers for gearbox diagnostics – Kohonen Neural Networks and the Adaptive-Network-based Fuzzy Inference System (ANFIS). Two different practical applications are presented. In the first application, the tested gearboxes are separated into two classes according to their condition indicators. In the second example, ANFIS is applied to label the tested gearboxes with a Quality Index according to the condition indicators. In both applications, the condition indicators were computed from the vibration of the gearbox housing.

  10. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...
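The binary LQAS classification rule mentioned in this abstract can be illustrated with a small operating-characteristic calculation. The sample size n = 19 and decision rule d = 13 below are common illustrative LQAS values, not parameters taken from this paper:

```python
from math import comb

def pass_prob(n, d, p):
    """Probability that an area with true coverage p yields at least d
    successes in a sample of n, i.e. is classified as 'acceptable'."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(d, n + 1))

# Operating characteristics of the rule (n=19, d=13) for conventional
# 80% (acceptable) vs 50% (poor) coverage thresholds.
sensitivity = pass_prob(19, 13, 0.8)   # P(classified acceptable | truly acceptable)
false_accept = pass_prob(19, 13, 0.5)  # P(classified acceptable | truly poor)
```

In the two-stage cluster designs the paper studies, the binomial assumption no longer holds directly and the decision rule must account for within-cluster correlation, which is precisely the extension being proposed.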

  11. Airborne Video Surveillance

    National Research Council Canada - National Science Library

    Blask, Steven

    2002-01-01

    The DARPA Airborne Video Surveillance (AVS) program was established to develop and promote technologies to make airborne video more useful, providing capabilities that achieve a UAV force multiplier...

  12. Handbook of surveillance technologies

    CERN Document Server

    Petersen, JK

    2012-01-01

    From officially sanctioned, high-tech operations to budget spy cameras and cell phone video, this updated and expanded edition of a bestselling handbook reflects the rapid and significant growth of the surveillance industry. The Handbook of Surveillance Technologies, Third Edition is the only comprehensive work to chronicle the background and current applications of the full-range of surveillance technologies--offering the latest in surveillance and privacy issues.Cutting-Edge--updates its bestselling predecessor with discussions on social media, GPS circuits in cell phones and PDAs, new GIS s

  13. Waste classifying and separation device

    International Nuclear Information System (INIS)

    Kakiuchi, Hiroki.

    1997-01-01

    A flexible plastic bag containing solid wastes of indefinite shape is broken and the wastes are classified. The bag-cutting portion of the device has an ultrasonic-type or a heater-type cutting means, and the cutting means moves in parallel with the transferring direction of the plastic bags. A classification portion separates and discriminates the plastic bag from the contents and conducts classification while rotating a classification table. Accordingly, the plastic bag containing solids of indefinite shape can be broken and classification can be conducted efficiently and reliably. The device of the present invention has a simple structure which requires small installation space and enables easy maintenance. (T.M.)

  14. Defining and Classifying Interest Groups

    DEFF Research Database (Denmark)

    Baroni, Laura; Carroll, Brendan; Chalmers, Adam

    2014-01-01

    The interest group concept is defined in many different ways in the existing literature and a range of different classification schemes are employed. This complicates comparisons between different studies and their findings. One of the important tasks faced by interest group scholars engaged...... in large-N studies is therefore to define the concept of an interest group and to determine which classification scheme to use for different group types. After reviewing the existing literature, this article sets out to compare different approaches to defining and classifying interest groups with a sample...... in the organizational attributes of specific interest group types. As expected, our comparison of coding schemes reveals a closer link between group attributes and group type in narrower classification schemes based on group organizational characteristics than those based on a behavioral definition of lobbying....

  15. Enhancing disease surveillance reporting using public transport in ...

    African Journals Online (AJOL)

    Enhancing disease surveillance reporting using public transport in Dodoma District, Central Tanzania. ... LEG Mboera, SF Rumisha, EJ Mwanemile, E Mziwanda, PK Mmbuji ... Full Text: EMAIL FREE FULL TEXT

  16. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using simpler and smaller model. The training examples in a particular cluster share the common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster having reduced dimensionality and less number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...

  17. Directed Activities Related to Text: Text Analysis and Text Reconstruction.

    Science.gov (United States)

    Davies, Florence; Greene, Terry

    This paper describes Directed Activities Related to Text (DART), procedures that were developed and are used in the Reading for Learning Project at the University of Nottingham (England) to enhance learning from texts and that fall into two broad categories: (1) text analysis procedures, which require students to engage in some form of analysis of…

  18. Soil and vegetation surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, E.J.

    1995-06-01

    Soil sampling and analysis evaluates long-term contamination trends and monitors environmental radionuclide inventories. This section of the 1994 Hanford Site Environmental Report summarizes the soil and vegetation surveillance programs which were conducted during 1994. Vegetation surveillance is conducted offsite to monitor atmospheric deposition of radioactive materials in areas not under cultivation and onsite at locations adjacent to potential sources of radioactivity.

  19. Between visibility and surveillance

    DEFF Research Database (Denmark)

    Uldam, Julie

    As activists move from alternative media platforms to commercial social media platforms they face increasing challenges in protecting their online security and privacy. While government surveillance of activists is well-documented in both scholarly research and the media, corporate surveillance...

  20. Reassembling Surveillance Creep

    DEFF Research Database (Denmark)

    Bøge, Ask Risom; Lauritsen, Peter

    2017-01-01

    We live in societies in which surveillance technologies are constantly introduced, are transformed, and spread to new practices for new purposes. How and why does this happen? In other words, why does surveillance “creep”? This question has received little attention either in theoretical developm......We live in societies in which surveillance technologies are constantly introduced, are transformed, and spread to new practices for new purposes. How and why does this happen? In other words, why does surveillance “creep”? This question has received little attention either in theoretical...... development or in empirical analyses. Accordingly, this article contributes to this special issue on the usefulness of Actor-Network Theory (ANT) by suggesting that ANT can advance our understanding of ‘surveillance creep’. Based on ANT’s model of translation and a historical study of the Danish DNA database......, we argue that surveillance creep involves reassembling the relations in surveillance networks between heterogeneous actors such as the watchers, the watched, laws, and technologies. Second, surveillance creeps only when these heterogeneous actors are adequately interested and aligned. However...

  1. Outdoor Air Quality Level Inference via Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2016-01-01

    Full Text Available Air pollution is a universal problem confronted by many developing countries. Because there are very few air quality monitoring stations in cities, it is difficult for people to know the exact air quality level anytime and anywhere. Fortunately, large numbers of surveillance cameras have been deployed in cities and can capture images densely and conveniently. This provides the possibility of utilizing surveillance cameras as sensors to obtain data and predict the air quality level. To this end, we present a novel air quality level inference approach based on outdoor images. Firstly, we explore several features extracted from images as a robust representation for air quality prediction. Then, to effectively fuse these heterogeneous and complementary features, we adopt multikernel learning to learn an adaptive classifier for air quality level inference. In addition, to facilitate the research, we construct an Outdoor Air Quality Image Set (OAQIS) dataset, which contains high-quality registered and calibrated images with rich labels, that is, concentration of particle mass (PM), weather, temperature, humidity, and wind. Extensive experiments on the OAQIS dataset demonstrate the effectiveness of the proposed approach.

  2. TEXT CLASSIFICATION FOR AUTOMATIC DETECTION OF E-CIGARETTE USE AND USE FOR SMOKING CESSATION FROM TWITTER: A FEASIBILITY PILOT.

    Science.gov (United States)

    Aphinyanaphongs, Yin; Lulejian, Armine; Brown, Duncan Penfold; Bonneau, Richard; Krebs, Paul

    2016-01-01

    Rapid increases in e-cigarette use and potential exposure to harmful byproducts have shifted public health focus to e-cigarettes as a possible drug of abuse. Effective surveillance of use and prevalence would allow appropriate regulatory responses. An ideal surveillance system would collect usage data in real time, focus on populations of interest, include populations unable to take the survey, allow a breadth of questions to answer, and enable geo-location analysis. Social media streams may provide this ideal system. To realize this use case, a foundational question is whether we can detect e-cigarette use at all. This work reports two pilot tasks using text classification to automatically identify Tweets that indicate e-cigarette use and/or e-cigarette use for smoking cessation. We build and define both datasets and compare the performance of four state-of-the-art classifiers and a keyword search for each task. Our results demonstrate excellent classifier performance of up to 0.90 and 0.94 area under the curve in each category. These promising initial results form the foundation for further studies to realize the ideal surveillance solution.
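    The study's four classifiers are not specified in this record, so as a hedged stand-in, a minimal multinomial Naive Bayes over bag-of-words tweets illustrates the kind of text classification the task involves. The toy tweets and labels below are invented for the example.

    ```python
    # Minimal multinomial Naive Bayes for flagging e-cigarette-use tweets.
    # Toy data only; this does not reproduce the study's classifiers.
    import math
    from collections import Counter, defaultdict

    def train_nb(examples):
        """examples: list of (tokens, label). Returns class priors' counts,
        per-class word counts, and the vocabulary."""
        class_counts = Counter(label for _, label in examples)
        word_counts = defaultdict(Counter)
        vocab = set()
        for toks, label in examples:
            word_counts[label].update(toks)
            vocab.update(toks)
        return class_counts, word_counts, vocab

    def predict(tokens, class_counts, word_counts, vocab):
        total = sum(class_counts.values())
        best, best_lp = None, -math.inf
        for label, n in class_counts.items():
            lp = math.log(n / total)
            denom = sum(word_counts[label].values()) + len(vocab)
            for t in tokens:
                lp += math.log((word_counts[label][t] + 1) / denom)  # Laplace smoothing
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    train = [
        ("just vaped my new ecig tastes great".split(), "use"),
        ("vaping this juice all day ecig".split(), "use"),
        ("study says ecig ads target teens".split(), "no-use"),
        ("new regulation on ecig sales announced".split(), "no-use"),
    ]
    model = train_nb(train)
    print(predict("vaped my ecig all day".split(), *model))  # use
    ```

    Note how the shared keyword "ecig" appears in both classes, so a plain keyword search cannot separate them; the surrounding word distribution is what carries the signal, which is the paper's motivation for classifiers over keyword matching.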

  3. Window of Opportunity for New Disease Surveillance: Developing Keyword Lists for Monitoring Mental Health and Injury Through Syndromic Surveillance.

    Science.gov (United States)

    Lauper, Ursula; Chen, Jian-Hua; Lin, Shao

    2017-04-01

    Studies have documented the impact that hurricanes have on mental health and injury rates before, during, and after the event. Since timely tracking of these disease patterns is crucial to disaster planning, response, and recovery, syndromic surveillance keyword filters were developed by the New York State Department of Health to study the short- and long-term impacts of Hurricane Sandy. Emergency department syndromic surveillance is recognized as a valuable tool for informing public health activities during and immediately following a disaster. Data typically consist of daily visit reports from hospital emergency departments (EDs) of basic patient data and free-text chief complaints. To develop keyword lists, comparisons were made with existing CDC categories and then integrated with lists from the New York City and New Jersey health departments in a collaborative effort. Two comprehensive lists were developed, each containing multiple subcategories and over 100 keywords for both mental health and injury. The data classifiers using these keywords were used to assess impacts of Sandy on mental health and injuries in New York State. The lists will be validated by comparing the ED chief complaint keyword with the final ICD diagnosis code. (Disaster Med Public Health Preparedness. 2017;11:173-178).
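    The keyword classifiers described above operate on free-text chief complaints. A minimal sketch of that mechanism follows; the keyword lists here are short invented stand-ins, since the actual NYSDOH lists contain over 100 keywords per category.

    ```python
    # Sketch of a syndromic-surveillance keyword filter over free-text
    # emergency department chief complaints. Keyword lists are illustrative
    # placeholders, not the New York State Department of Health lists.

    MENTAL_HEALTH = {"anxiety", "panic", "depression", "suicidal", "insomnia"}
    INJURY = {"laceration", "fracture", "fall", "burn", "puncture"}

    def classify_complaint(chief_complaint):
        """Return the set of syndrome categories whose keywords appear
        in the free-text chief complaint."""
        words = set(chief_complaint.lower().replace(",", " ").split())
        categories = set()
        if words & MENTAL_HEALTH:
            categories.add("mental_health")
        if words & INJURY:
            categories.add("injury")
        return categories

    print(classify_complaint("Panic attack, insomnia since storm"))  # {'mental_health'}
    print(classify_complaint("Fall from ladder, ankle fracture"))    # {'injury'}
    ```

    Validation against final ICD diagnosis codes, as the abstract proposes, would then measure how often these keyword hits agree with the coded diagnosis.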

  4. Binary naive Bayesian classifiers for correlated Gaussian features: a theoretical analysis

    CSIR Research Space (South Africa)

    Van Dyk, E

    2008-11-01

    Full Text Available classifier with Gaussian features while using any quadratic decision boundary. Therefore, the analysis is not restricted to Naive Bayesian classifiers alone and can, for instance, be used to calculate the Bayes error performance. We compare the analytical...

  5. Classifying Coding DNA with Nucleotide Statistics

    Directory of Open Access Journals (Sweden)

    Nicolas Carels

    2009-10-01

    Full Text Available In this report, we compared the success rate of classification of coding sequences (CDS) vs. introns by Codon Structure Factor (CSF) and by a method that we called Universal Feature Method (UFM). UFM is based on the scoring of purine bias (Rrr) and stop codon frequency. We show that the success rate of CDS/intron classification by UFM is higher than by CSF. UFM classifies ORFs as coding or non-coding through a score based on (i) the stop codon distribution, (ii) the product of purine probabilities in the three positions of nucleotide triplets, (iii) the product of Cytosine (C), Guanine (G), and Adenine (A) probabilities in the 1st, 2nd, and 3rd positions of triplets, respectively, (iv) the probabilities of G in the 1st and 2nd positions of triplets and (v) the distance of their GC3 vs. GC2 levels to the regression line of the universal correlation. More than 80% of CDSs (true positives) of Homo sapiens (>250 bp), Drosophila melanogaster (>250 bp) and Arabidopsis thaliana (>200 bp) are successfully classified with a false positive rate lower than or equal to 5%. The method releases coding sequences in their coding strand and coding frame, which allows their automatic translation into protein sequences with 95% confidence. The method is a natural consequence of the compositional bias of nucleotides in coding sequences.
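    Two of the UFM-style features listed above, stop-codon frequency and per-position purine content, can be computed directly from a DNA string read in frame. This sketch shows only those two raw features on a toy sequence; the thresholds and full scoring scheme of UFM are not reproduced.

    ```python
    # Compute two UFM-style features from an in-frame DNA string:
    # stop-codon frequency and the purine (A/G) fraction at each of the
    # three codon positions. Toy sequence; not the full UFM score.

    STOPS = {"TAA", "TAG", "TGA"}
    PURINES = {"A", "G"}

    def codon_features(seq):
        # Split into codons, dropping any trailing partial codon.
        codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
        stop_freq = sum(c in STOPS for c in codons) / len(codons)
        purine_frac = [
            sum(c[pos] in PURINES for c in codons) / len(codons)
            for pos in range(3)
        ]
        return stop_freq, purine_frac

    # Toy "coding-like" stretch: no internal stop codons, purine-rich
    # first codon positions, as the purine-bias feature expects.
    stop_freq, purine_frac = codon_features("ATGGCTGAAGGTAAACTT")
    print(stop_freq)      # 0.0
    print(purine_frac)    # first position is the most purine-rich
    ```

    In a real ORF scan, a low stop-codon frequency combined with a first-position purine excess is the kind of signal UFM aggregates into its coding/non-coding score.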

  6. A systematic comparison of supervised classifiers.

    Directory of Open Access Journals (Sweden)

    Diego Raphael Amancio

    Full Text Available Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to their default parameter configuration.

  7. Composite Classifiers for Automatic Target Recognition

    National Research Council Canada - National Science Library

    Wang, Lin-Cheng

    1998-01-01

    ...) using forward-looking infrared (FLIR) imagery. Two existing classifiers, one based on learning vector quantization and the other on modular neural networks, are used as the building blocks for our composite classifiers...

  8. Aggregation Operator Based Fuzzy Pattern Classifier Design

    DEFF Research Database (Denmark)

    Mönks, Uwe; Larsen, Henrik Legind; Lohweg, Volker

    2009-01-01

    This paper presents a novel modular fuzzy pattern classifier design framework for intelligent automation systems, developed on the base of the established Modified Fuzzy Pattern Classifier (MFPC) and allows designing novel classifier models which are hardware-efficiently implementable....... The performances of novel classifiers using substitutes of MFPC's geometric mean aggregator are benchmarked in the scope of an image processing application against the MFPC to reveal classification improvement potentials for obtaining higher classification rates....

  9. 15 CFR 4.8 - Classified Information.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily the...

  10. A new entropy function for feature extraction with the refined scores as a classifier for the unconstrained ear verification

    Directory of Open Access Journals (Sweden)

    Mamta Bansal

    2017-05-01

    Full Text Available For high-end security applications such as surveillance, there is a need for a robust system capable of verifying a person under unconstrained conditions. This paper presents an ear-based verification system using a new entropy function that changes not only the information gain function but also the information source values. This entropy function displays peculiar characteristics such as splitting into two modes. Two types of entropy features are derived using the entropy function: the Effective Gaussian Information source value and the Effective Exponential Information source value. To classify the entropy features we have devised a refined scores (RS) method that refines the scores generated using the Euclidean distance. The experimental results demonstrate the superiority of the proposed method over those in the literature.

  11. A quick survey of text categorization algorithms

    Directory of Open Access Journals (Sweden)

    Dan MUNTEANU

    2007-12-01

    Full Text Available This paper contains an overview of basic formulations and approaches to text classification. This paper surveys the algorithms used in text categorization: handcrafted rules, decision trees, decision rules, on-line learning, linear classifiers, Rocchio’s algorithm, k Nearest Neighbor (kNN), and Support Vector Machines (SVM).
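    Of the algorithms surveyed above, k Nearest Neighbor is the simplest to show end to end: documents become term-frequency vectors, and a test document takes the majority label of its k most cosine-similar training documents. The corpus below is a toy example, not from the survey.

    ```python
    # Sketch of kNN text categorization with cosine similarity over
    # term-frequency vectors. Toy two-category corpus for illustration.
    import math
    from collections import Counter

    def vec(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def knn_classify(text, train, k=3):
        """train: list of (text, label) pairs."""
        q = vec(text)
        ranked = sorted(train, key=lambda ex: cosine(q, vec(ex[0])), reverse=True)
        votes = [label for _, label in ranked[:k]]
        return Counter(votes).most_common(1)[0][0]

    train = [
        ("stocks fell sharply on wall street", "finance"),
        ("the market rallied as stocks rose", "finance"),
        ("investors sold stocks amid market fears", "finance"),
        ("the team won the championship game", "sports"),
        ("a late goal won the game for the team", "sports"),
    ]
    print(knn_classify("stocks and the market", train))  # finance
    print(knn_classify("the team won the game", train))  # sports
    ```

    kNN needs no training phase beyond storing the corpus, which is why surveys often present it as the lazy-learning baseline against eager learners such as SVM and Rocchio.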

  12. Deployment Health Surveillance

    National Research Council Canada - National Science Library

    DeNicola, Anthony D

    2004-01-01

    ... of stress in causing chronic illness. The lack of comprehensive deployment health surveillance has made it difficult to determine possible causes of adverse health effects reported by Gulf War veterans...

  13. 522 Postmarket Surveillance Studies

    Data.gov (United States)

    U.S. Department of Health & Human Services — The 522 Postmarket Surveillance Studies Program encompasses design, tracking, oversight, and review responsibilities for studies mandated under section 522 of the...

  14. Strengthening foodborne disease surveillance in the WHO African

    African Journals Online (AJOL)

    OMS

    2012-06-04

    Jun 4, 2012 ... region including acute aflatoxicosis in Kenya in 2004 and bromide poisoning in ... Global Food Infections Network (GFN), has been supporting countries to strengthen ... The surveillance system uses standard case definitions for classifying .... Figure 4: Participating countries and training sites for foodborne.

  15. Text mining in the classification of digital documents

    Directory of Open Access Journals (Sweden)

    Marcial Contreras Barrera

    2016-11-01

    Full Text Available Objective: Develop an automated classifier for the classification of bibliographic material by means of text mining. Methodology: Text mining is used for the development of the classifier, based on a supervised method comprising two phases: learning and recognition. In the learning phase, the classifier learns patterns through the analysis of bibliographic records of classification Z, belonging to library science, information sciences and information resources, recovered from the LIBRUNAM database; this phase yields a classifier capable of recognizing different subclasses (LC). In the recognition phase the classifier is validated and evaluated through classification tests; to this end, bibliographic records of classification Z are taken randomly, classified by a cataloguer and processed by the automated classifier, in order to obtain the precision of the automated classifier. Results: The application of text mining achieved the development of the automated classifier through the supervised document classification method. The precision of the classifier was calculated by comparing the topics assigned manually and automatically, obtaining 75.70% precision. Conclusions: The application of text mining facilitated the creation of the automated classifier, yielding useful technology for the classification of bibliographic material with the aim of improving and speeding up the process of organizing digital documents.

  16. Liberal luxury: Decentering Snowden, surveillance and privilege

    Directory of Open Access Journals (Sweden)

    Piro Rexhepi

    2016-11-01

    Full Text Available This paper reflects on the continued potency of veillance theories to traverse beyond the taxonomies of surveillance inside liberal democracies. It provides a commentary on the ability of sousveillance to destabilise and disrupt sur/violence by shifting its focus from the centre to the periphery, where Big Data surveillance is tantamount to sur/violence. In these peripheral political spaces, surveillance is not framed by concerns over privacy, democracy and civil society; rather, it is a matter of life and death, a technique of both biopolitical and thanatopolitical power. I argue that the universalist, and universalizing, debates over surveillance cannot be mapped through the anxieties of privileged middle classes as they would neither transcend nor make possible alternative ways of tackling the intersection of surveillance and violence so long as they are couched in the liberal concerns for democracy. I call this phenomenon “liberal luxury,” whereby debates over surveillance have over-emphasised liberal proclivities at the expense of disengaging those peripheral populations most severely affected by sur/violence.

  17. Reviewing surveillance activities in nuclear power plants

    International Nuclear Information System (INIS)

    1989-03-01

    This document provides guidance to Operational Safety Review Teams (OSARTs) for reviewing surveillance activities at a nuclear power plant. In addition, the document contains reference material to support the review of surveillance activities, to assist within the Technical Support area and to ensure consistency between individual reviews. Drafts of the document have already been used on several OSART missions and found to be useful. The document first considers the objectives of an excellent surveillance programme. Investigations to determine the quality of the surveillance programme are then discussed. The attributes of an excellent surveillance programme are listed. Advice follows on how to phrase questions so as to obtain an informative response on surveillance features. Finally, specific equipment is mentioned that should be considered when reviewing functional tests. Four annexes provide examples drawn from operating nuclear power plants. They were selected to supplement the main text of the document with the best international practices as found in OSART reviews. They should in no way limit the acceptance and development of alternative approaches that lead to equivalent or better results. Refs, figs and tabs

  18. SparkText: Biomedical Text Mining on Big Data Framework.

    Directory of Open Access Journals (Sweden)

    Zhan Ye

    Full Text Available Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.

  19. Twitter Influenza Surveillance: Quantifying Seasonal Misdiagnosis Patterns and their Impact on Surveillance Estimates.

    Science.gov (United States)

    Mowery, Jared

    2016-01-01

    Influenza (flu) surveillance using Twitter data can potentially save lives and increase efficiency by providing governments and healthcare organizations with greater situational awareness. However, research is needed to determine the impact of Twitter users' misdiagnoses on surveillance estimates. This study establishes the importance of Twitter users' misdiagnoses by showing that Twitter flu surveillance in the United States failed during the 2011-2012 flu season, estimates the extent of misdiagnoses, and tests several methods for reducing the adverse effects of misdiagnoses. Metrics representing flu prevalence, seasonal misdiagnosis patterns, diagnosis uncertainty, flu symptoms, and noise were produced using Twitter data in conjunction with OpenSextant for geo-inferencing, and a maximum entropy classifier for identifying tweets related to illness. These metrics were tested for correlations with World Health Organization (WHO) positive specimen counts of flu from 2011 to 2014. Twitter flu surveillance erroneously indicated a typical flu season during 2011-2012, even though the flu season peaked three months late, and erroneously indicated plateaus of flu tweets before the 2012-2013 and 2013-2014 flu seasons. Enhancements based on estimates of misdiagnoses removed the erroneous plateaus and increased the Pearson correlation coefficients by 0.04 and 0.23, but failed to correct the 2011-2012 flu season estimate. A rough estimate indicates that approximately 40% of flu tweets reflected misdiagnoses. Further research into factors affecting Twitter users' misdiagnoses, in conjunction with data from additional atypical flu seasons, is needed to enable Twitter flu surveillance systems to produce reliable estimates during atypical flu seasons.
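    The study's evaluation hinges on Pearson correlation between Twitter-derived metrics and WHO positive-specimen counts. A small self-contained Pearson implementation makes the metric concrete; the weekly counts below are invented, not the study's data.

    ```python
    # Pearson correlation between a Twitter flu metric and WHO
    # positive-specimen counts. The series here are illustrative only.
    import math

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    weekly_flu_tweets = [120, 340, 560, 300, 150]
    who_specimens = [10, 30, 52, 28, 12]
    r = pearson(weekly_flu_tweets, who_specimens)
    print(round(r, 3))  # close to 1.0 for these strongly correlated toy series
    ```

    An improvement of 0.04 or 0.23 in this coefficient, as reported above, means the misdiagnosis-adjusted tweet series tracks the WHO laboratory counts correspondingly more closely.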

  20. Colorectal Cancer Surveillance after Index Colonoscopy: Guidance from the Canadian Association of Gastroenterology

    Directory of Open Access Journals (Sweden)

    Desmond Leddin

    2013-01-01

    Full Text Available BACKGROUND: Differences between American (United States [US] and European guidelines for colonoscopy surveillance may create confusion for the practicing clinician. Under- or overutilization of surveillance colonoscopy can impact patient care.

  1. Text Maps: Helping Students Navigate Informational Texts.

    Science.gov (United States)

    Spencer, Brenda H.

    2003-01-01

    Notes that a text map is an instructional approach designed to help students gain fluency in reading content area materials. Discusses how the goal is to teach students about the important features of the material and how the maps can be used to build new understandings. Presents the procedures for preparing and using a text map. (SG)

  2. Classifying Transition Behaviour in Postural Activity Monitoring

    Directory of Open Access Journals (Sweden)

    James BRUSEY

    2009-10-01

    Full Text Available A few accelerometers positioned on different parts of the body can be used to accurately classify steady state behaviour, such as walking, running, or sitting. Such systems are usually built using supervised learning approaches. Transitions between postures are, however, difficult to deal with using posture classification systems proposed to date, since there is no label set for intermediary postures and also the exact point at which the transition occurs can sometimes be hard to pinpoint. The usual bypass when using supervised learning to train such systems is to discard a section of the dataset around each transition. This leads to poorer classification performance when the systems are deployed out of the laboratory and used on-line, particularly if the regimes monitored involve fast-paced activity changes. Time-based filtering that takes advantage of sequential patterns is a potential mechanism to improve posture classification accuracy in such real-life applications. Also, such filtering should reduce the number of event messages needed to be sent across a wireless network to track posture remotely, hence extending the system’s life. To support time-based filtering, understanding transitions, which are the major event generators in a classification system, is key. This work examines three approaches to post-process the output of a posture classifier using time-based filtering: a naïve voting scheme, an exponentially weighted voting scheme, and a Bayes filter. Best performance is obtained from the exponentially weighted voting scheme, although it is suspected that a more sophisticated treatment of the Bayes filter might yield better results.
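    The best-performing filter above, exponentially weighted voting, can be sketched as a replay over raw classifier outputs: each new prediction casts a full-weight vote while older votes fade geometrically, so an isolated misclassification around a transition is outvoted but a sustained change takes over after a short lag. The decay factor and the toy label stream are illustrative choices, not the paper's settings.

    ```python
    # Exponentially weighted voting over a stream of posture labels.
    # Decay factor 0.8 and the toy label sequence are illustrative only.
    from collections import defaultdict

    def exp_weighted_filter(predictions, decay=0.8):
        """Emit, at each step, the label with the highest exponentially
        decayed vote mass accumulated from the raw classifier outputs."""
        scores = defaultdict(float)
        smoothed = []
        for label in predictions:
            for k in scores:
                scores[k] *= decay      # older votes fade geometrically
            scores[label] += 1.0        # newest vote gets full weight
            smoothed.append(max(scores, key=scores.get))
        return smoothed

    raw = ["sit", "sit", "stand", "sit", "stand", "stand", "stand"]
    print(exp_weighted_filter(raw))
    # ['sit', 'sit', 'sit', 'sit', 'sit', 'stand', 'stand']
    ```

    The single spurious "stand" at step three is suppressed, while the sustained change starting at step five is adopted one step later; that lag is the price of smoothing, and tuning the decay trades responsiveness against noise rejection.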

  3. A GIS-driven integrated real-time surveillance pilot system for national West Nile virus dead bird surveillance in Canada

    Directory of Open Access Journals (Sweden)

    Aramini Jeff

    2006-04-01

    Full Text Available Abstract Background An extensive West Nile virus surveillance program of dead birds, mosquitoes, horses, and human infection has been launched as a result of West Nile virus first being reported in Canada in 2001. Some desktop and web GIS have been applied to West Nile virus dead bird surveillance. There have been urgent needs for a comprehensive GIS services and real-time surveillance. Results A pilot system was developed to integrate real-time surveillance, real-time GIS, and Open GIS technology in order to enhance West Nile virus dead bird surveillance in Canada. Driven and linked by the newly developed real-time web GIS technology, this integrated real-time surveillance system includes conventional real-time web-based surveillance components, integrated real-time GIS components, and integrated Open GIS components. The pilot system identified the major GIS functions and capacities that may be important to public health surveillance. The six web GIS clients provide a wide range of GIS tools for public health surveillance. The pilot system has been serving Canadian national West Nile virus dead bird surveillance since 2005 and is adaptable to serve other disease surveillance. Conclusion This pilot system has streamlined, enriched and enhanced national West Nile virus dead bird surveillance in Canada, improved productivity, and reduced operation cost. Its real-time GIS technology, static map technology, WMS integration, and its integration with non-GIS real-time surveillance system made this pilot system unique in surveillance and public health GIS.

  4. Legionnaires’ disease Surveillance in Italy

    Directory of Open Access Journals (Sweden)

    Maria Luisa Ricci

    2004-12-01

    Full Text Available

    In the report presented, data on legionellosis diagnosed in the year 2003 in Italy and notified to the National Surveillance System are analysed. Overall, 617 cases were notified, of which 517 were confirmed and 46 were presumptive.

    The characteristics of the patients are very similar to those reported in the previous years in terms of male/female ratio, age–specific distribution, occupation, etc. Legionella pneumophila serogroup 1 was responsible for approximately 90% of the cases.

  5. SparkText: Biomedical Text Mining on Big Data Framework

    Science.gov (United States)

    He, Karen Y.; Wang, Kai

    2016-01-01

    Background Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. Results In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. Conclusions This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research. PMID:27685652

  6. SparkText: Biomedical Text Mining on Big Data Framework.

    Science.gov (United States)

    Ye, Zhan; Tafti, Ahmad P; He, Karen Y; Wang, Kai; He, Max M

    Many new biomedical research articles are published every day, accumulating rich information, such as genetic variants, genes, diseases, and treatments. Rapid yet accurate text mining on large-scale scientific literature can discover novel knowledge to better understand human diseases and to improve the quality of disease diagnosis, prevention, and treatment. In this study, we designed and developed an efficient text mining framework called SparkText on a Big Data infrastructure, which is composed of Apache Spark data streaming and machine learning methods, combined with a Cassandra NoSQL database. To demonstrate its performance for classifying cancer types, we extracted information (e.g., breast, prostate, and lung cancers) from tens of thousands of articles downloaded from PubMed, and then employed Naïve Bayes, Support Vector Machine (SVM), and Logistic Regression to build prediction models to mine the articles. The accuracy of predicting a cancer type by SVM using the 29,437 full-text articles was 93.81%. While competing text-mining tools took more than 11 hours, SparkText mined the dataset in approximately 6 minutes. This study demonstrates the potential for mining large-scale scientific articles on a Big Data infrastructure, with real-time update from new articles published daily. SparkText can be extended to other areas of biomedical research.
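    Both SparkText records above describe the same classification pipeline: word-based models (Naïve Bayes, SVM, Logistic Regression) built from labeled article text. The core Naïve Bayes step can be sketched in miniature without Spark or Cassandra; the toy documents and class labels below are invented for illustration and are not from the SparkText corpus.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace smoothing over whitespace tokens."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict_nb(model, text):
    """Return the class with the highest posterior log-probability."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    def log_posterior(label):
        lp = math.log(class_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        return lp
    return max(class_counts, key=log_posterior)

# Invented two-class toy corpus standing in for labeled PubMed abstracts
model = train_nb([
    ("breast cancer tumor biopsy screening", "breast"),
    ("lung cancer smoking nodule ct", "lung"),
])
```

    At SparkText's scale the same per-class token counting is simply distributed across Spark workers; the trained model is nothing more than these counts.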

  7. Error minimizing algorithms for nearest neighbor classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory]; Hush, Don [Los Alamos National Laboratory]; Zimmer, G. Beate [Texas A&M]

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.
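    The cost-sensitive idea in this abstract (trading missed detections for a lower false alarm rate in a nearest-neighbor rule) can be roughly illustrated, though not with the actual OHM construction, by requiring more than a simple majority of positive neighbors before declaring a detection. Everything below, including the 1-D data, is a hypothetical sketch:

```python
def nn_vote(train, x, k=3, min_pos=2):
    """k-NN on 1-D points; declare class 1 (a detection) only if at least
    min_pos of the k nearest neighbors are positive. Raising min_pos lowers
    the false alarm rate at the cost of missed detections.
    (Illustrative only -- not the OHM construction from the paper.)"""
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return 1 if sum(label for _, label in neighbors) >= min_pos else 0
```

    With `min_pos=k` the rule only fires when every nearby training point agrees, the extreme low-false-alarm setting.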

  8. Pictorial binding: endeavor to classify

    Directory of Open Access Journals (Sweden)

    Zinchenko S.

    2015-01-01

    Full Text Available The article is devoted to the classification of bindings of the 1-19th centuries with a unique and untypical book binding decoration technique (encaustic, tempera and oil paintings). Analysis of design features, materials and techniques of art decoration made it possible to identify them as a separate type - pictorial bindings - and divide them into four groups. The first group consists of Coptic bindings, decorated with icon-painting images in encaustic technique. The second group is made up of leather Western bindings of the 13-14th centuries, which have the decoration and technique of ornamentation close to iconography. The third group involves parchment bindings, the ornamentation technique of which is closer to the miniature. The last group comprises bindings of East Slavic origin of the 15-19th centuries, decorated with icon-painting pictures made in the technique of tempera or oil painting. The proposed classification requires further basic research as several specific kinds of bindings have not yet been investigated.

  9. Genomic Analysis and Surveillance of the Coronavirus Dominant in Ducks in China.

    Directory of Open Access Journals (Sweden)

    Qing-Ye Zhuang

    Full Text Available The genetic diversity, evolution, distribution, and taxonomy of some coronaviruses dominant in birds other than chickens remain enigmatic. In this study we sequenced the genome of a newly identified coronavirus dominant in ducks (DdCoV), and performed a large-scale surveillance of coronaviruses in chickens and ducks using a conserved RT-PCR assay. The viral genome harbors a tandem repeat which is rare in vertebrate RNA viruses. The repeat is homologous to some proteins of various cellular organisms, but its origin remains unknown. Many substitutions, insertions, deletions, and some frameshifts and recombination events have occurred in the genome of the DdCoV, as compared with the coronavirus dominant in chickens (CdCoV). The distances between DdCoV and CdCoV are large enough to separate them into different species within the genus Gammacoronavirus. Our surveillance demonstrated that DdCoVs and CdCoVs belong to different lineages and occupy different ecological niches, further supporting that they should be classified into different species. Our surveillance also demonstrated that DdCoVs and CdCoVs are prevalent in live poultry markets in some regions of China. In conclusion, this study sheds novel insight into the genetic diversity, evolution, distribution, and taxonomy of the coronaviruses circulating in chickens and ducks.

  10. State surveillance as a threat to personal security of individuals

    Directory of Open Access Journals (Sweden)

    Sławomir Czapnik

    2015-12-01

    Full Text Available Changes in modern society are crucial to individuals. The article starts with an analysis of control in today's societies. The author then tries to understand useful categories such as "Panopticon", "ban-opticon" and "synopticon". The last part is focused on state surveillance, i.e. surveillance by the American National Security Agency.

  11. A cardiorespiratory classifier of voluntary and involuntary electrodermal activity

    Directory of Open Access Journals (Sweden)

    Sejdic Ervin

    2010-02-01

    Full Text Available Abstract Background: Electrodermal reactions (EDRs) can be attributed to many origins, including spontaneous fluctuations of electrodermal activity (EDA) and stimuli such as deep inspirations, voluntary mental activity and startling events. In fields that use EDA as a measure of psychophysiological state, the fact that EDRs may be elicited from many different stimuli is often ignored. This study attempts to classify observed EDRs as voluntary (i.e., generated from intentional respiratory or mental activity) or involuntary (i.e., generated from startling events or spontaneous electrodermal fluctuations). Methods: Eight able-bodied participants were subjected to conditions that would cause a change in EDA: music imagery, startling noises, and deep inspirations. A user-centered cardiorespiratory classifier consisting of (1) an EDR detector, (2) a respiratory filter and (3) a cardiorespiratory filter was developed to automatically detect a participant's EDRs and to classify the origin of their stimulation as voluntary or involuntary. Results: Detected EDRs were classified with a positive predictive value of 78%, a negative predictive value of 81% and an overall accuracy of 78%. Without the classifier, EDRs could only be correctly attributed as voluntary or involuntary with an accuracy of 50%. Conclusions: The proposed classifier may enable investigators to form more accurate interpretations of electrodermal activity as a measure of an individual's psychophysiological state.

  12. Surveillance of antibiotic resistance

    Science.gov (United States)

    Johnson, Alan P.

    2015-01-01

    Surveillance involves the collection and analysis of data for the detection and monitoring of threats to public health. Surveillance should also inform as to the epidemiology of the threat and its burden in the population. A further key component of surveillance is the timely feedback of data to stakeholders with a view to generating action aimed at reducing or preventing the public health threat being monitored. Surveillance of antibiotic resistance involves the collection of antibiotic susceptibility test results undertaken by microbiology laboratories on bacteria isolated from clinical samples sent for investigation. Correlation of these data with demographic and clinical data for the patient populations from whom the pathogens were isolated gives insight into the underlying epidemiology and facilitates the formulation of rational interventions aimed at reducing the burden of resistance. This article describes a range of surveillance activities that have been undertaken in the UK over a number of years, together with current interventions being implemented. These activities are not only of national importance but form part of the international response to the global threat posed by antibiotic resistance. PMID:25918439

  13. An Intelligent System For Arabic Text Categorization

    NARCIS (Netherlands)

    Syiam, M.M.; Tolba, Mohamed F.; Fayed, Z.T.; Abdel-Wahab, Mohamed S.; Ghoniemy, Said A.; Habib, Mena Badieh

    Text Categorization (classification) is the process of classifying documents into a predefined set of categories based on their content. In this paper, an intelligent Arabic text categorization system is presented. Machine learning algorithms are used in this system. Many algorithms for stemming and

  14. English Metafunction Analysis in Chemistry Text: Characterization of Scientific Text

    Directory of Open Access Journals (Sweden)

    Ahmad Amin Dalimunte, M.Hum

    2013-09-01

    Full Text Available The objectives of this research are to identify what Metafunctions are applied in chemistry text and how they characterize a scientific text. It was conducted by applying content analysis. The data for this research was a twelve-paragraph chemistry text. The data were collected by applying a documentary technique. The document was read and analyzed to find out the Metafunctions. The data were analyzed by some procedures: identifying the types of process, counting up the number of the processes, categorizing and counting up the cohesion devices, classifying the types of modulation and determining modality value, and finally counting up the number of sentences and clauses, then scoring the grammatical intricacy index. The findings of the research show that Material process (71 of 100) is mostly used, and circumstance of spatial location (26 of 56) is more dominant than the others. Modality (5) is used sparingly in order to avoid subjectivity. Impersonality is implied through limited use of reference, whether pronouns (7) or demonstratives (7); conjunctions (60) are applied to develop ideas; and the total number of clauses (109) is much more dominant than the total number of sentences (40), which results in a high grammatical intricacy index. The Metafunctions found indicate that the chemistry text has fulfilled the characteristics of a scientific or academic text which truly reflects it as a natural science.
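    The grammatical intricacy index mentioned at the end is, in the systemic-functional tradition this abstract draws on, the average number of clauses per sentence; assuming that definition, it follows directly from the abstract's own counts:

```python
def grammatical_intricacy(n_clauses, n_sentences):
    """Grammatical intricacy: mean number of clauses per sentence."""
    return n_clauses / n_sentences

# Counts reported in the abstract: 109 clauses across 40 sentences
index = grammatical_intricacy(109, 40)
```

    The resulting value, 2.725 clauses per sentence, is what the abstract summarizes as a "high grammatical intricacy index".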

  15. Hierarchical mixtures of naive Bayes classifiers

    NARCIS (Netherlands)

    Wiering, M.A.

    2002-01-01

    Naive Bayes classifiers tend to perform very well on a large number of problem domains, although their representation power is quite limited compared to more sophisticated machine learning algorithms. In this paper we study combining multiple naive Bayes classifiers by using the hierarchical

  16. Comparing classifiers for pronunciation error detection

    NARCIS (Netherlands)

    Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.

    2007-01-01

    Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs

  17. Feature extraction for dynamic integration of classifiers

    NARCIS (Netherlands)

    Pechenizkiy, M.; Tsymbal, A.; Puuronen, S.; Patterson, D.W.

    2007-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. In this paper, we present an algorithm for the dynamic integration of classifiers in the space of extracted features (FEDIC). It is based on the technique

  18. Identifying aggressive prostate cancer foci using a DNA methylation classifier.

    Science.gov (United States)

    Mundbjerg, Kamilla; Chopra, Sameer; Alemozaffar, Mehrdad; Duymich, Christopher; Lakshminarasimhan, Ranjani; Nichols, Peter W; Aron, Manju; Siegmund, Kimberly D; Ukimura, Osamu; Aron, Monish; Stern, Mariana; Gill, Parkash; Carpten, John D; Ørntoft, Torben F; Sørensen, Karina D; Weisenberger, Daniel J; Jones, Peter A; Duddalwar, Vinay; Gill, Inderbir; Liang, Gangning

    2017-01-12

    Slow-growing prostate cancer (PC) can be aggressive in a subset of cases. Therefore, prognostic tools to guide clinical decision-making and avoid overtreatment of indolent PC and undertreatment of aggressive disease are urgently needed. PC has a propensity to be multifocal with several different cancerous foci per gland. Here, we have taken advantage of the multifocal propensity of PC and categorized aggressiveness of individual PC foci based on DNA methylation patterns in primary PC foci and matched lymph node metastases. In a set of 14 patients, we demonstrate that over half of the cases have multiple epigenetically distinct subclones and determine the primary subclone from which the metastatic lesion(s) originated. Furthermore, we develop an aggressiveness classifier consisting of 25 DNA methylation probes to determine aggressive and non-aggressive subclones. Upon validation of the classifier in an independent cohort, the predicted aggressive tumors are significantly associated with the presence of lymph node metastases and invasive tumor stages. Overall, this study provides molecular-based support for determining PC aggressiveness with the potential to impact clinical decision-making, such as targeted biopsy approaches for early diagnosis and active surveillance, in addition to focal therapy.

  19. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  20. Deconvolution When Classifying Noisy Data Involving Transformations.

    Science.gov (United States)

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-09-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.

  1. Deconvolution When Classifying Noisy Data Involving Transformations

    KAUST Repository

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2012-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online.
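    The three records above are repository copies of the same article. Their invert-then-classify idea can be illustrated with a toy numpy experiment: blur two class templates, then apply a regularised (Tikhonov-style) inverse filter before nearest-template classification. The templates, blur kernel and regularisation constant below are invented for illustration; the paper's actual procedure selects the inversion by cross-validation rather than fixing it as done here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
templates = {                        # two clean class signals (invented)
    0: np.sin(2 * np.pi * t / n),
    1: np.sign(np.sin(2 * np.pi * t / n)),
}

kernel = np.zeros(n)
kernel[:5] = 1 / 5                   # circular 5-tap moving-average blur
K = np.fft.fft(kernel)

def observe(x, sigma=0.05):
    """Blurred observation of a clean signal x plus additive noise."""
    return np.real(np.fft.ifft(np.fft.fft(x) * K)) + sigma * rng.standard_normal(n)

def deconvolve(y, lam=0.05):
    """Tikhonov-regularised inverse filter; deliberately not full recovery,
    echoing the paper's point that exact signal recovery is not the goal."""
    Y = np.fft.fft(y)
    return np.real(np.fft.ifft(Y * np.conj(K) / (np.abs(K) ** 2 + lam)))

def classify(y, invert=True):
    """Nearest-template classifier, optionally applied after inversion."""
    x = deconvolve(y) if invert else y
    return min(templates, key=lambda c: np.linalg.norm(x - templates[c]))
```

    The damping constant `lam` plays the role of the tuning parameter the paper chooses by cross-validation.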

  2. Distributed data processing for public health surveillance

    Directory of Open Access Journals (Sweden)

    Yih Katherine

    2006-09-01

    Full Text Available Abstract Background: Many systems for routine public health surveillance rely on centralized collection of individual, potentially identifiable personal health information (PHI) records. Although individual, identifiable patient records are essential for conditions for which there is mandated reporting, such as tuberculosis or sexually transmitted diseases, they are not routinely required for effective syndromic surveillance. Public concern about the routine collection of large quantities of PHI to support non-traditional public health functions may make alternative surveillance methods that do not rely on centralized identifiable PHI databases increasingly desirable. Methods: The National Bioterrorism Syndromic Surveillance Demonstration Program (NDP) is an example of one alternative model. All PHI in this system is initially processed within the secured infrastructure of the health care provider that collects and holds the data, using uniform software distributed and supported by the NDP. Only highly aggregated count data is transferred to the datacenter for statistical processing and display. Results: Detailed, patient-level information is readily available to the health care provider to elucidate signals observed in the aggregated data, or for ad hoc queries. We briefly describe the benefits and disadvantages associated with this distributed processing model for routine automated syndromic surveillance. Conclusion: For well-defined surveillance requirements, the model can be successfully deployed with very low risk of inadvertent disclosure of PHI – a feature that may make participation in surveillance systems more feasible for organizations and more appealing to the individuals whose PHI they hold. It is possible to design and implement distributed systems to support non-routine public health needs if required.
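    The privacy-preserving core of this distributed model is a single provider-side step: identifiable visit records are reduced to aggregated counts before anything is transmitted. A minimal sketch; the field names and granularity (date, 3-digit zip, syndrome) are invented, not the NDP's actual schema:

```python
from collections import Counter

def aggregate_for_transfer(visits):
    """Provider-side step: collapse identifiable visit records into count
    data. Only these (date, 3-digit zip, syndrome) counts are sent to the
    datacenter; the full records never leave the provider's infrastructure."""
    counts = Counter((v["date"], v["zip"][:3], v["syndrome"]) for v in visits)
    return [{"date": d, "zip3": z, "syndrome": s, "count": c}
            for (d, z, s), c in sorted(counts.items())]
```

    The provider keeps the raw records, so a detected signal in the aggregate can still be investigated locally, as the Results section describes.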

  3. The plays and arts of surveillance: studying surveillance as entertainment

    NARCIS (Netherlands)

    Albrechtslund, Anders; Dubbeld, L.

    2006-01-01

    This paper suggests a direction in the development of Surveillance Studies that goes beyond current attention for the caring, productive and enabling aspects of surveillance practices. That is, surveillance could be considered not just as positively protective, but even as a comical, playful,

  4. A comprehensive review on intelligent surveillance systems

    Directory of Open Access Journals (Sweden)

    Sutrisno Warsono Ibrahim

    2016-05-01

    Full Text Available Intelligent surveillance system (ISS) has received growing attention due to the increasing demand on security and safety. ISS is able to automatically analyze image, video, audio or other types of surveillance data without or with limited human intervention. The recent developments in sensor devices, computer vision, and machine learning have an important role in enabling such intelligent systems. This paper aims to provide a general overview of intelligent surveillance systems and discuss some possible sensor modalities and their fusion scenarios, such as visible camera (CCTV), infrared camera, thermal camera and radar. This paper also discusses the main processing steps in ISS: background-foreground segmentation, object detection and classification, tracking, and behavioral analysis.
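    Among the processing steps listed, background-foreground segmentation is usually the first stage of an ISS pipeline. A minimal running-average background-subtraction sketch in numpy; the threshold and adaptation rate are invented values for illustration, not taken from the review:

```python
import numpy as np

def foreground_masks(frames, alpha=0.1, thresh=30.0):
    """Running-average background subtraction over grayscale frames.
    Returns one boolean foreground mask per frame; alpha controls how
    quickly the background model adapts. (Illustrative values only.)"""
    background, masks = None, []
    for frame in frames:
        frame = frame.astype(np.float32)
        if background is None:
            background = frame.copy()          # bootstrap from first frame
        masks.append(np.abs(frame - background) > thresh)
        background = (1 - alpha) * background + alpha * frame
    return masks
```

    Real systems add morphological cleanup and handle illumination change, but the detect-then-track stages described above all start from a mask like this.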

  5. Epidemiological Concepts Regarding Disease Monitoring and Surveillance

    Directory of Open Access Journals (Sweden)

    Christensen Jette

    2001-03-01

    Full Text Available Definitions of epidemiological concepts regarding disease monitoring and surveillance can be found in textbooks on veterinary epidemiology. This paper gives a review of how the concepts monitoring, surveillance, and disease control strategies are defined. Monitoring and surveillance systems (MO&SS) involve measurements of disease occurrence, and the design of the monitoring determines which types of disease occurrence measures can be applied. However, knowledge of the performance of diagnostic tests (sensitivity and specificity) is essential to estimate the true occurrence of the disease. The terms disease control programme (DCP) and disease eradication programme (DEP) are defined, and the steps of DCP/DEP are described to illustrate that they are a process rather than a static MO&SS.

  6. Surveillance and Resilience in Theory and Practice

    Directory of Open Access Journals (Sweden)

    Charles D. Raab

    2015-09-01

    Full Text Available Surveillance is often used as a tool in resilience strategies towards the threat posed by terrorist attacks and other serious crime. “Resilience” is a contested term with varying and ambiguous meaning in governmental, business and social discourses, and it is not clear how it relates to other terms that characterise processes or states of being. Resilience is often assumed to have positive connotations, but critics view it with great suspicion, regarding it as a neo-liberal governmental strategy. However, we argue that surveillance, introduced in the name of greater security, may itself erode social freedoms and public goods such as privacy, paradoxically requiring societal resilience, whether precautionary or in mitigation of the harms it causes to the public goods of free societies. This article develops new models and extends existing ones to describe resilience processes unfolding over time and in anticipation of, or in reaction to, adversities of different kinds and severity, and explores resilience both on the plane of abstract analysis and in the context of societal responses to mass surveillance. The article thus focuses upon surveillance as a special field for conceptual analysis and modelling of situations, and for evaluating contemporary developments in “surveillance societies”.

  7. Conic surveillance evasion

    NARCIS (Netherlands)

    Lewin, J.; Olsder, G.J.

    1979-01-01

    A surveillance-evasion differential game of degree with a detection zone in the shape of a two-dimensional cone is posed. The nature of the optimal strategies and the singular phenomena of the value function are described and correlated to subsets of the space of all possible parameter combinations,

  8. Laser surveillance system (LASSY)

    International Nuclear Information System (INIS)

    Boeck, H.; Hammer, J.

    1988-01-01

    The development progress during the reporting period 1988 of the laser surveillance system of spent fuel pools is summarized. The present engineered system comes close to a final version for field application as all technical questions have been solved in 1988. 14 figs., 1 tab. (Author)

  9. Laser surveillance system (LASSY)

    International Nuclear Information System (INIS)

    Boeck, H.

    1991-09-01

    Laser Surveillance System (LASSY) is a beam of laser light which scans a plane above the water or under water in a spent-fuel pond. The system can detect different objects and estimate their coordinates and distances as well. LASSY can operate in a stand-alone configuration or in combination with video surveillance to trigger a signal to a video recorder. The recorded information on the LASSY computer's disk comprises date, time, start and stop angle of the detected alarm, the size of the disturbance indicated in the number of deviated points and some other information. The information given by the laser system cannot be fully substituted by TV camera pictures since the scanning beam creates a horizontal surveillance plane. The long-term field test of the engineered prototype laser system has been carried out in Soluggia (Italy) and has shown its feasibility and reliability under the conditions of a real spent fuel storage pond. The verification of the alarm table on the LASSY computer against the recorded video pictures of the TV surveillance system confirmed that all alarm situations had been detected. 5 refs

  10. Infectieziekten Surveillance Informatie Systeem

    NARCIS (Netherlands)

    Sprenger MJW; van Pelt W; CIE

    1994-01-01

    In the Netherlands an electronic network has been proposed for structured data transfer and communication concerning the control of infectious diseases. This project has been baptized ISIS (Infectious diseases Surveillance Information System). It is an initiative of the Dutch Government. ISIS

  11. Surveillance and Communication

    DEFF Research Database (Denmark)

    Bøge, Ask Risom; Albrechtslund, Anders; Lauritsen, Peter

    2017-01-01

    , and acquaintances are up to on social media. In turn, they also leave trails of digital footprints that may be collected and analyzed by governments, businesses, or hackers. The imperceptible nature of this new surveillance raises some pressing concerns about our digital lives as our data doubles increasingly...

  12. Logarithmic learning for generalized classifier neural network.

    Science.gov (United States)

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    Generalized classifier neural network is introduced as an efficient classifier among the others. Unless the initial smoothing parameter value is close to the optimal one, generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of squared error. Minimization of this cost function reduces the number of iterations needed for reaching the minima. The proposed method is tested on 15 different data sets and the performance of logarithmic learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function included in the generalized classifier neural network, the proposed logarithmic approach and its derivative have continuous values. This makes it possible to adopt the advantage of logarithmic fast convergence by the proposed learning method. Due to the fast convergence ability of the logarithmic cost function, training time is decreased by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement problem of generalized classifier neural network. Copyright © 2014 Elsevier Ltd. All rights reserved.
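    The GCNN details aside, the generic reason a logarithmic cost can converge faster is visible on a single sigmoid unit: with squared error the gradient carries a y(1-y) factor that collapses when the unit saturates on the wrong side, while a logarithmic (cross-entropy) loss keeps a full-strength gradient. The sketch below illustrates this general effect, not the paper's exact cost function:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad_squared(z, target):
    """d/dz of (sigmoid(z) - target)**2: carries a y*(1-y) factor that
    collapses when the unit saturates, stalling gradient descent."""
    y = sigmoid(z)
    return 2.0 * (y - target) * y * (1.0 - y)

def grad_log(z, target):
    """d/dz of the logarithmic (cross-entropy) loss: stays proportional
    to the prediction error even at saturation."""
    return sigmoid(z) - target

# A saturated, badly wrong unit: target is 1 but z = -6
ratio = abs(grad_log(-6.0, 1.0)) / abs(grad_squared(-6.0, 1.0))
```

    Here the logarithmic gradient is over a hundred times larger, which is the generic mechanism behind reduced iteration counts like those reported above.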

  13. COMPARISON OF SVM AND FUZZY CLASSIFIER FOR AN INDIAN SCRIPT

    Directory of Open Access Journals (Sweden)

    M. J. Baheti

    2012-01-01

    Full Text Available With the advent of the technological era, conversion of scanned documents (handwritten or printed) into machine editable format has attracted many researchers. This paper deals with the problem of recognition of Gujarati handwritten numerals. Gujarati numeral recognition requires performing some specific steps as a part of preprocessing. For preprocessing, digitization, segmentation, normalization and thinning are done, considering that the image has almost no noise. Further, an affine invariant moments based model is used for feature extraction, and finally Support Vector Machine (SVM) and Fuzzy classifiers are used for numeral classification. The comparison of the SVM and Fuzzy classifiers shows that SVM procured better results than the Fuzzy classifier.

  14. Defending Malicious Script Attacks Using Machine Learning Classifiers

    Directory of Open Access Journals (Sweden)

    Nayeem Khan

    2017-01-01

    Full Text Available The web application has become a primary target for cyber criminals who inject malware, especially JavaScript, to perform malicious activities for impersonation. Thus, it becomes imperative to detect such malicious code in real time before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor at the client side by classifying the key features of the malicious code. The feature subset was obtained by using a wrapper method for dimensionality reduction. Supervised machine learning classifiers were used on the dataset to achieve high accuracy. Experimental results show that our method can efficiently classify malicious code from benign code with promising results.
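    A wrapper method, as used here for dimensionality reduction, scores candidate feature subsets by actually training and evaluating a classifier on them. Below is a greedy forward-selection sketch with a nearest-centroid stand-in classifier; the study's real features and classifiers differ, and the arrays in the example are invented:

```python
import numpy as np

def centroid_accuracy(Xtr, ytr, Xte, yte, feats):
    """Accuracy of a nearest-centroid classifier restricted to `feats`."""
    f = list(feats)
    c0 = Xtr[ytr == 0][:, f].mean(axis=0)
    c1 = Xtr[ytr == 1][:, f].mean(axis=0)
    pred = (np.linalg.norm(Xte[:, f] - c1, axis=1)
            < np.linalg.norm(Xte[:, f] - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

def wrapper_forward_select(Xtr, ytr, Xte, yte, max_feats=2):
    """Greedy wrapper: repeatedly add the single feature that most improves
    held-out accuracy; stop as soon as no candidate improves the score."""
    selected, best_acc = [], 0.0
    for _ in range(max_feats):
        scores = {j: centroid_accuracy(Xtr, ytr, Xte, yte, selected + [j])
                  for j in range(Xtr.shape[1]) if j not in selected}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_acc:
            break
        selected.append(j_best)
        best_acc = scores[j_best]
    return selected, best_acc
```

    Because the wrapper re-trains the classifier for every candidate subset, it is costlier than filter methods but tailored to the classifier actually deployed.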

  15. High dimensional classifiers in the imbalanced case

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Jensen, Jens Ledet

    We consider the binary classification problem in the imbalanced case where the number of samples from the two groups differ. The classification problem is considered in the high dimensional case where the number of variables is much larger than the number of samples, and where the imbalance leads to a bias in the classification. A theoretical analysis of the independence classifier reveals the origin of the bias and based on this we suggest two new classifiers that can handle any imbalance ratio. The analytical results are supplemented by a simulation study, where the suggested classifiers in some...

  16. Congenital rubella syndrome surveillance as a platform for surveillance of other congenital infections, Peru, 2004-2007.

    Science.gov (United States)

    Whittembury, Alvaro; Galdos, Jorge; Lugo, María; Suárez-Ognio, Luis; Ortiz, Ana; Cabezudo, Edwin; Martínez, Mario; Castillo-Solórzano, Carlos; Andrus, Jon Kim

    2011-09-01

    Rubella during pregnancy can cause serious fetal abnormalities and death. Peru has had integrated measles/rubella surveillance since 2000 but did not implement congenital rubella syndrome (CRS) surveillance until 2004, in accordance with the Pan American Health Organization recommendations for rubella elimination. The article describes the experience from the CRS sentinel surveillance system in Peru. Peru has maintained a national sentinel surveillance system for reporting confirmed and suspected CRS cases since 2004. A surveillance protocol was implemented with standardized case definitions and instruments in the selected sentinel sites. Each sentinel site completes their case investigations and report forms and sends the reports to the Health Region Epidemiology Department, which forwards the data to the national Epidemiology Department. CRS surveillance data were analyzed for the period 2004-2007. During the period 2004-2007, 16 health facilities, which are located in 9 of the 33 health regions, representing the 3 main geographical areas (coast, mountain, and jungle), were included as sentinel sites for the CRS surveillance. A total of 2061 suspected CRS cases were reported to the system. Of these, 11 were classified as CRS and 23 as congenital rubella infection. Factors significantly associated with rubella vertical transmission were: (1) in the mother, maternal history of rash during pregnancy (odds ratio [OR], 12.0; 95% confidence interval [CI], 3.8-37.8); (2) and in the infant, pigmentary retinopathy (OR, 18.4; 95% CI, 3.2-104.6), purpura (OR, 14.7; 95% CI, 2.8-78.3), and developmental delay (OR, 4.4; 95% CI, 1.75-11.1). The surveillance system has been able to identify rubella vertical transmission, reinforcing the evidence that rubella was a public health problem in Peru. This system may serve as a platform to implement surveillance for other congenital infections in Peru.
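    The reported associations (e.g., maternal rash: OR 12.0; 95% CI, 3.8-37.8) are standard 2x2-table odds ratios. A sketch using the usual Woolf log-interval; the cell counts in the example are invented, not the Peruvian surveillance data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and confidence interval from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

    A lower bound above 1 is what makes an association such as pigmentary retinopathy (OR 18.4; CI 3.2-104.6) statistically significant despite the wide interval.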

  17. Text-Fabric

    NARCIS (Netherlands)

    Roorda, Dirk

    2016-01-01

    Text-Fabric is a Python3 package for Text plus Annotations. It provides a data model, a text file format, and a binary format for (ancient) text plus (linguistic) annotations. The emphasis of this all is on: data processing; sharing data; and contributing modules. A defining characteristic is that

  18. Contextual Text Mining

    Science.gov (United States)

    Mei, Qiaozhu

    2009-01-01

    With the dramatic growth of text information, there is an increasing need for powerful text mining systems that can automatically discover useful knowledge from text. Text is generally associated with all kinds of contextual information. Those contexts can be explicit, such as the time and the location where a blog article is written, and the…

  19. XML and Free Text.

    Science.gov (United States)

    Riggs, Ken Roger

    2002-01-01

    Discusses problems with marking free text, text that is either natural language or semigrammatical but unstructured, that prevent well-formed XML from marking text for readily available meaning. Proposes a solution to mark meaning in free text that is consistent with the intended simplicity of XML versus SGML. (Author/LRW)

  20. The role of supplementary environmental surveillance to complement acute flaccid paralysis surveillance for wild poliovirus in Pakistan - 2011-2013.

    Directory of Open Access Journals (Sweden)

    Tori L Cowger

Full Text Available More than 99% of poliovirus infections are non-paralytic and therefore not detected by acute flaccid paralysis (AFP) surveillance. Environmental surveillance (ES) can detect circulating polioviruses from sewage without relying on clinical presentation. With extensive ES and continued circulation of polioviruses, Pakistan presents a unique opportunity to quantify the impact of ES as a supplement to AFP surveillance on the overall completeness and timeliness of poliovirus detection. Genetic, geographic and temporal data were obtained for all wild poliovirus (WPV) isolates detected in Pakistan from January 2011 through December 2013. We used viral genetics to assess gaps in AFP surveillance and ES as measured by detection of 'orphan viruses' (≥1.5% different in VP1 capsid nucleotide sequence). We compared preceding detection of closely related circulating isolates (≥99% identity) detected by AFP surveillance or ES to determine which surveillance system first detected circulation before the presentation of each polio case. A total of 1,127 WPV isolates were detected by AFP surveillance and ES in Pakistan from 2011-2013. AFP surveillance and ES combined exhibited fewer gaps (i.e., % orphan viruses) in detection than AFP surveillance alone (3.3% vs. 7.7%, respectively). ES detected circulation before AFP surveillance in nearly 60% of polio cases (200 of 346). For polio cases reported from provinces conducting ES, ES detected circulation nearly four months sooner on average (117.6 days) than did AFP surveillance. Our findings suggest ES in Pakistan is providing earlier, more sensitive detection of wild polioviruses than AFP surveillance alone. Overall, targeted ES through strategic selection of sites has important implications for the eradication endgame strategy.
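The 'orphan virus' criterion described above reduces to a pairwise divergence threshold on aligned VP1 sequences. A sketch of that computation; the sequences below are toy strings, not real VP1 data:

```python
def percent_difference(seq_a, seq_b):
    """Percent nucleotide difference between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    mismatches = sum(1 for x, y in zip(seq_a, seq_b) if x != y)
    return 100.0 * mismatches / len(seq_a)

def is_orphan(isolate, closest_known, threshold=1.5):
    """An isolate is an 'orphan' if it differs by at least `threshold` percent
    from its closest previously detected relative."""
    return percent_difference(isolate, closest_known) >= threshold

# Toy example: 200-nt sequences differing at 4 positions -> 2.0% divergence
ref = "ACGT" * 50
var = list(ref)
for i in (0, 50, 100, 150):          # introduce 4 substitutions
    var[i] = "T" if ref[i] != "T" else "A"
var = "".join(var)

print(percent_difference(ref, var))  # 2.0
print(is_orphan(var, ref))           # True: above the 1.5% threshold
```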

  1. Current Directional Protection of Series Compensated Line Using Intelligent Classifier

    Directory of Open Access Journals (Sweden)

    M. Mollanezhad Heydarabadi

    2016-12-01

Full Text Available A current inversion condition leads to incorrect operation of current-based directional relays in power systems with series compensation. The application of an intelligent system for fault direction classification is suggested in this paper. A new current directional protection scheme based on an intelligent classifier is proposed for the series compensated line. The proposed scheme uses only half a cycle of pre-fault and post-fault current samples at the relay location to feed the classifier. Numerous forward and backward fault simulations under different system conditions, on a transmission line with a fixed series capacitor, are carried out using PSCAD/EMTDC software. The applicability of decision tree (DT), probabilistic neural network (PNN) and support vector machine (SVM) classifiers is investigated using data simulated under different system conditions. The performance comparison indicates that the SVM is the most suitable classifier for fault direction discrimination. Backward faults can be accurately distinguished from forward faults even under current inversion, without requiring detection of the current inversion condition.
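The data flow described above (a window of pre-fault plus post-fault current samples fed to a binary direction classifier) can be sketched as follows. This is an illustration only: the waveforms are synthetic, the forward/backward convention is invented for the example, and a simple perceptron stands in for the SVM/DT/PNN classifiers the paper actually evaluates:

```python
import math, random

random.seed(0)

def fault_window(direction, n=16):
    """Synthetic feature vector: half a cycle of pre-fault plus half a cycle
    of post-fault current samples at the relay (toy waveforms, not PSCAD data)."""
    pre = [math.sin(2 * math.pi * k / n) for k in range(n // 2)]
    # Toy convention: forward faults raise magnitude, backward faults invert phase
    gain, phase = (3.0, 0.0) if direction == "forward" else (3.0, math.pi)
    post = [gain * math.sin(2 * math.pi * k / n + phase) + random.gauss(0, 0.1)
            for k in range(n // 2)]
    return pre + post

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Minimal linear classifier standing in for the SVM; labels in {-1, +1}."""
    w = [0.0] * len(samples[0]); b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return "forward" if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else "backward"

X = [fault_window("forward") for _ in range(40)] + \
    [fault_window("backward") for _ in range(40)]
y = [1] * 40 + [-1] * 40
w, b = train_perceptron(X, y)
acc = sum(predict(w, b, x) == ("forward" if t == 1 else "backward")
          for x, t in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```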

  2. Detection of microaneurysms in retinal images using an ensemble classifier

    Directory of Open Access Journals (Sweden)

    M.M. Habib

    2017-01-01

Full Text Available This paper introduces, and reports on the performance of, a novel combination of algorithms for automated microaneurysm (MA) detection in retinal images. The presence of MAs in retinal images is a pathognomonic sign of Diabetic Retinopathy (DR), one of the leading causes of blindness amongst the working-age population. An extensive survey of the literature is presented and current techniques in the field are summarised. The proposed technique first detects an initial set of candidates using a Gaussian Matched Filter and then classifies this set to reduce the number of false positives. A Tree Ensemble classifier is used with a set of 70 features (the most common features in the literature). A new set of 32 MA ground-truth images (with a total of 256 labelled MAs), based on images from the MESSIDOR dataset, is introduced as a public dataset for benchmarking MA detection algorithms. We evaluate our algorithm on this dataset as well as on another public dataset (DIARETDB1 v2.1) and compare it against the best available alternative. Results show that the proposed classifier is superior in terms of eliminating false positive MA detections from the initial set of candidates. The proposed method achieves an ROC score of 0.415 compared to 0.2636 achieved by the best available technique. Furthermore, results show that the classifier model maintains consistent performance across datasets, illustrating the generalisability of the classifier and that overfitting does not occur.

  3. Towards One Health disease surveillance: The Southern African Centre for Infectious Disease Surveillance approach

    Directory of Open Access Journals (Sweden)

    Esron D. Karimuribo

    2012-06-01

Full Text Available Africa has the highest burden of infectious diseases in the world and yet the least capacity for its risk management. It has therefore become increasingly important to search for 'fit-for-purpose' approaches to infectious disease surveillance and thereby targeted disease control. The fact that the majority of human infectious diseases are originally of animal origin means we have to consider One Health (OH) approaches, which require inter-sectoral collaboration, for custom-made infectious disease surveillance in the endemic settings of Africa. A baseline survey was conducted to assess the current status and performance of human and animal health surveillance systems, and subsequently a strategy towards an OH surveillance system was developed. The strategy focused on combining participatory epidemiological approaches with the deployment of mobile technologies to enhance the effectiveness of disease alerts and surveillance at the point of occurrence, which often lies in remote areas. We selected three study sites, namely the Ngorongoro, Kagera River basin and Zambezi River basin ecosystems. We have piloted and introduced next-generation Android mobile phones running the EpiCollect application developed by Imperial College to aid geo-spatial and clinical data capture and to transmit these data from the field to remote Information Technology (IT) servers at the research hubs for storage, analysis, feedback and reporting. We expect that the combination of participatory epidemiology and technology will significantly improve OH disease surveillance in southern Africa.

  4. World Alliance for Risk Factor Surveillance White Paper on Surveillance and Health Promotion

    Directory of Open Access Journals (Sweden)

    Stefano Campostrini

    2015-02-01

Full Text Available This is not a research paper on risk factor surveillance. It is an effort by a key group of researchers and practitioners of risk factor surveillance to define the current state of the art and to identify the key issues involved in the current practice of behavioral risk factor surveillance. Those of us who are the principal authors have worked and carried out research in this area for some three decades. As a result of a series of global meetings beginning in 1999 and continuing every two years since then, a collective working group of the International Union for Health Promotion and Education (IUHPE) was formed under the name World Alliance for Risk Factor Surveillance (WARFS). Under this banner the organization sought to write a comprehensive statement on the importance of surveillance to health promotion and public health. This paper, which has been revised and reviewed by established peers in the field, is the result. It provides the reader with a clear summary of the major issues that need to be considered by any and all seeking to carry out behavioral risk factor surveillance.

  5. Cyber Surveillance for Flood Disasters

    Directory of Open Access Journals (Sweden)

    Shi-Wei Lo

    2015-01-01

    Full Text Available Regional heavy rainfall is usually caused by the influence of extreme weather conditions. Instant heavy rainfall often results in the flooding of rivers and the neighboring low-lying areas, which is responsible for a large number of casualties and considerable property loss. The existing precipitation forecast systems mostly focus on the analysis and forecast of large-scale areas but do not provide precise instant automatic monitoring and alert feedback for individual river areas and sections. Therefore, in this paper, we propose an easy method to automatically monitor the flood object of a specific area, based on the currently widely used remote cyber surveillance systems and image processing methods, in order to obtain instant flooding and waterlogging event feedback. The intrusion detection mode of these surveillance systems is used in this study, wherein a flood is considered a possible invasion object. Through the detection and verification of flood objects, automatic flood risk-level monitoring of specific individual river segments, as well as the automatic urban inundation detection, has become possible. The proposed method can better meet the practical needs of disaster prevention than the method of large-area forecasting. It also has several other advantages, such as flexibility in location selection, no requirement of a standard water-level ruler, and a relatively large field of view, when compared with the traditional water-level measurements using video screens. The results can offer prompt reference for appropriate disaster warning actions in small areas, making them more accurate and effective.
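The core idea in this record (treating a flood as an "intrusion object" detected by a surveillance camera) amounts to change detection against a dry reference frame within a region of interest. A minimal sketch on toy grayscale frames; the thresholds and pixel values are illustrative assumptions, not the paper's parameters:

```python
def flood_alert(reference, current, roi, diff_thresh=30, area_thresh=0.5):
    """Treat flooding as an 'intrusion': compare the current frame against a
    dry reference frame inside a region of interest (ROI), and raise an alert
    when enough pixels have changed. Frames are 2-D lists of grayscale values;
    roi is (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    changed = total = 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            total += 1
            if abs(current[r][c] - reference[r][c]) > diff_thresh:
                changed += 1
    return changed / total >= area_thresh

# Toy frames: a 'dry' riverbank (value 100) partially covered by water (value 160)
dry = [[100] * 8 for _ in range(8)]
wet = [row[:] for row in dry]
for r in range(4, 8):                # water rises over the lower half
    for c in range(8):
        wet[r][c] = 160

roi = (2, 8, 0, 8)                   # monitor the bank below the safe line
print(flood_alert(dry, wet, roi))    # True: most of the ROI has changed
```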

  6. Radioactivity surveillance in Peruvian fishmeal

    International Nuclear Information System (INIS)

    Lopez, Edith; Osores, Jose; Gonzales, Susana; Martinez, Jorge; Jara, Raul

    2008-01-01

Full text: Fishmeal is a product derived from fish that is widely used to feed livestock. It is the brown flour obtained after cooking, pressing, drying and milling whole fish and food fish trimmings. The whole fish used are almost exclusively small, bony species of pelagic fish (generally living in the surface waters or middle depths of the sea), for which there is little or no demand for human consumption. In many cases, fishmeal constitutes the main source of protein in the diet of livestock. Traditionally, Peru has been a producer and exporter of fish and its derived products and is considered one of the top fish producers worldwide. In Peru, anchovy (Engraulis ringens) is by far the most important species for fishmeal production. As part of the Peruvian national program of environmental surveillance, samples of fishmeal taken from different sampling locations (production plants located on the northern coast of Peru) were measured and analyzed by HPGe gamma spectrometry. This study presents the results of radioactivity surveillance in Peruvian fishmeal, focusing on the content of 137Cs, and indicates that the levels of this radionuclide in the samples are below the minimum detectable concentration (Bq/kg). These results are consistent with those obtained by the UK Food Standards Agency in 1999. According to many international regulations, the level of 137Cs in foodstuffs must be below 600 Bq/kg. (author)

  7. Scintillation mitigation for long-range surveillance video

    CSIR Research Space (South Africa)

    Delport, JP

    2010-09-01

    Full Text Available Atmospheric turbulence is a naturally occurring phenomenon that can severely degrade the quality of long-range surveillance video footage. Major effects include image blurring, image warping and temporal wavering of objects in the scene. Mitigating...

  8. Arabic Handwriting Recognition Using Neural Network Classifier

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... an OCR using Neural Network classifier preceded by a set of preprocessing .... Artificial Neural Networks (ANNs), which we adopt in this research, consist of ... advantage and disadvantages of each technique. In [9],. Khemiri ...

  9. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
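Classification with decision rules, as discussed in this record, can be sketched as voting among the rules that match a given row, with each matching rule's vote weighted by its coverage. The toy rules below are illustrative and are not produced by the paper's dynamic-programming optimization:

```python
# Each rule: (conditions, label, coverage), where conditions maps an
# attribute index to a required value and coverage counts the training
# rows the rule matches. Toy rules for a weather-style dataset.
RULES = [
    ({0: "sunny", 1: "high"}, "no",  3),   # length 2
    ({0: "rain"},             "yes", 5),   # length 1
    ({1: "normal"},           "yes", 6),   # length 1
]

def matches(conditions, row):
    return all(row[attr] == value for attr, value in conditions.items())

def classify(rules, row):
    """Vote among matching rules, weighting each vote by the rule's coverage."""
    votes = {}
    for conditions, label, coverage in rules:
        if matches(conditions, row):
            votes[label] = votes.get(label, 0) + coverage
    if not votes:
        return None  # no rule fires; a real classifier would fall back to the majority class
    return max(votes, key=votes.get)

print(classify(RULES, {0: "sunny", 1: "high"}))    # "no"
print(classify(RULES, {0: "rain",  1: "normal"}))  # "yes" (two rules agree)
```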

  10. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  11. Neural Network Classifiers for Local Wind Prediction.

    Science.gov (United States)

    Kretzschmar, Ralf; Eckert, Pierre; Cattani, Daniel; Eggimann, Fritz

    2004-05-01

This paper evaluates the quality of neural network classifiers for wind speed and wind gust prediction with prediction lead times between +1 and +24 h. The predictions were based on local time series and model data. The selection of appropriate input features was initiated by time series analysis and completed by empirical comparison of neural network classifiers trained on several choices of input features. The selected input features involved time of day, day of the year, features from a single wind observation device at the site of interest, and features derived from model data. The quality of the resulting classifiers was benchmarked against persistence for two different sites in Switzerland. The neural network classifiers exhibited superior quality when compared with persistence, judged on a specific performance measure (hit and false-alarm rates).
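The benchmark described above rests on two ingredients: the persistence forecast (tomorrow's prediction is simply today's observation) and the hit/false-alarm rates as performance measures. A sketch of both on a toy 0/1 gust-event series (the data are invented for illustration):

```python
def hit_and_false_alarm_rates(forecast, observed):
    """Hit rate = fraction of observed events that were forecast;
    false-alarm rate = fraction of non-events that were (wrongly) forecast.
    Both series are sequences of 0/1 event flags."""
    hits = sum(f and o for f, o in zip(forecast, observed))
    misses = sum((not f) and o for f, o in zip(forecast, observed))
    false_alarms = sum(f and (not o) for f, o in zip(forecast, observed))
    correct_neg = sum((not f) and (not o) for f, o in zip(forecast, observed))
    return hits / (hits + misses), false_alarms / (false_alarms + correct_neg)

# Persistence baseline: the forecast for t+1 is simply the observation at t
observed = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
persistence = [observed[0]] + observed[:-1]
hr, far = hit_and_false_alarm_rates(persistence[1:], observed[1:])
print(f"hit rate {hr:.2f}, false-alarm rate {far:.2f}")  # 0.50, 0.40
```

A learned classifier is judged "superior" when it improves on these persistence scores.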

  12. SpectraClassifier 1.0: a user friendly, automated MRS-based classifier-development system

    Directory of Open Access Journals (Sweden)

    Julià-Sapé Margarida

    2010-02-01

Full Text Available Abstract Background SpectraClassifier (SC) is a Java solution for designing and implementing Magnetic Resonance Spectroscopy (MRS)-based classifiers. The main goal of SC is to allow users with minimum background knowledge of multivariate statistics to perform a fully automated pattern recognition analysis. SC incorporates feature selection (greedy stepwise approach, either forward or backward) and feature extraction (PCA). Fisher Linear Discriminant Analysis is the method of choice for classification. Classifier evaluation is performed through various methods: display of the confusion matrix of the training and testing datasets; K-fold cross-validation, leave-one-out and bootstrapping, as well as Receiver Operating Characteristic (ROC) curves. Results SC is composed of the following modules: Classifier design, Data exploration, Data visualisation, Classifier evaluation, Reports, and Classifier history. It is able to read low-resolution in-vivo MRS (single-voxel and multi-voxel) and high-resolution tissue MRS (HRMAS), processed with existing tools (jMRUI, INTERPRET, 3DiCSI or TopSpin). In addition, to facilitate exchanging data between applications, a standard format capable of storing all the information needed for a dataset was developed. Each functionality of SC has been specifically validated with real data for the purpose of bug-testing and methods validation. Data from the INTERPRET project was used. Conclusions SC is user-friendly software designed to fulfil the needs of potential users in the MRS community. It accepts all kinds of pre-processed MRS data types and classifies them semi-automatically, allowing spectroscopists to concentrate on interpretation of results with the use of its visualisation tools.
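The classification method named in this record, Fisher Linear Discriminant Analysis, can be sketched in the two-class, two-feature case: project onto w = Sw⁻¹(mₐ − m_b) and threshold at the midpoint of the projected class means. The clusters below are synthetic stand-ins, not MRS spectra:

```python
import random
random.seed(1)

def fisher_lda(class_a, class_b):
    """Two-class Fisher discriminant in 2-D. Points are (x, y) tuples."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    ma, mb = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix Sw (2x2)
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            dx, dy = p[0] - m[0], p[1] - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det], [-s[1][0] / det, s[0][0] / det]]
    d = (ma[0] - mb[0], ma[1] - mb[1])
    w = (inv[0][0] * d[0] + inv[0][1] * d[1], inv[1][0] * d[0] + inv[1][1] * d[1])
    threshold = (w[0] * (ma[0] + mb[0]) + w[1] * (ma[1] + mb[1])) / 2
    return w, threshold

def lda_predict(w, threshold, p):
    return "A" if w[0] * p[0] + w[1] * p[1] > threshold else "B"

# Synthetic clusters standing in for two spectral feature distributions
a = [(random.gauss(2, 0.5), random.gauss(2, 0.5)) for _ in range(30)]
b = [(random.gauss(-2, 0.5), random.gauss(-2, 0.5)) for _ in range(30)]
w, t = fisher_lda(a, b)
correct = sum(lda_predict(w, t, p) == "A" for p in a) + \
          sum(lda_predict(w, t, p) == "B" for p in b)
print(f"{correct} of 60 training points correctly classified")
```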

  13. Classification of protein-protein interaction full-text documents using text and citation network features.

    Science.gov (United States)

    Kolchinsky, Artemy; Abi-Haidar, Alaa; Kaur, Jasleen; Hamed, Ahmed Abdeen; Rocha, Luis M

    2010-01-01

    We participated (as Team 9) in the Article Classification Task of the Biocreative II.5 Challenge: binary classification of full-text documents relevant for protein-protein interaction. We used two distinct classifiers for the online and offline challenges: 1) the lightweight Variable Trigonometric Threshold (VTT) linear classifier we successfully introduced in BioCreative 2 for binary classification of abstracts and 2) a novel Naive Bayes classifier using features from the citation network of the relevant literature. We supplemented the supplied training data with full-text documents from the MIPS database. The lightweight VTT classifier was very competitive in this new full-text scenario: it was a top-performing submission in this task, taking into account the rank product of the Area Under the interpolated precision and recall Curve, Accuracy, Balanced F-Score, and Matthew's Correlation Coefficient performance measures. The novel citation network classifier for the biomedical text mining domain, while not a top performing classifier in the challenge, performed above the central tendency of all submissions, and therefore indicates a promising new avenue to investigate further in bibliome informatics.

  14. Consistency Analysis of Nearest Subspace Classifier

    OpenAIRE

    Wang, Yi

    2015-01-01

The Nearest Subspace Classifier (NSS) finds an estimate of the underlying subspace within each class and assigns data points to the class corresponding to the nearest subspace. This paper mainly studies how well NSS generalizes to new samples. It is proved that NSS is strongly consistent under certain assumptions. For completeness, NSS is evaluated through experiments on various simulated and real data sets, in comparison with some other linear-model-based classifiers. It is also ...
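The NSS rule itself is compact: build a subspace basis per class from its training vectors and assign a query point to the class with the smallest residual after projection. A small sketch using Gram-Schmidt orthonormalization on toy 3-D data:

```python
def orthonormal_basis(vectors, eps=1e-10):
    """Gram-Schmidt orthonormalization of a list of vectors (lists of floats)."""
    basis = []
    for v in vectors:
        w = v[:]
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = sum(x * x for x in w) ** 0.5
        if norm > eps:                      # skip linearly dependent vectors
            basis.append([x / norm for x in w])
    return basis

def residual_norm(x, basis):
    """Distance from x to the subspace spanned by an orthonormal basis."""
    r = x[:]
    for b in basis:
        proj = sum(a * c for a, c in zip(r, b))
        r = [a - proj * c for a, c in zip(r, b)]
    return sum(a * a for a in r) ** 0.5

def nss_classify(x, class_samples):
    """Nearest Subspace classification: pick the class whose training
    subspace is closest to x. class_samples: {label: [vectors]}."""
    bases = {label: orthonormal_basis(v) for label, v in class_samples.items()}
    return min(bases, key=lambda label: residual_norm(x, bases[label]))

# Toy 3-D data: class "u" spans the x-y plane, class "v" spans the z axis
train = {"u": [[1, 0, 0], [0, 1, 0]], "v": [[0, 0, 1]]}
print(nss_classify([0.9, 0.8, 0.1], train))  # "u": tiny residual off the x-y plane
print(nss_classify([0.05, 0.1, 2.0], train))  # "v": lies almost on the z axis
```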

  15. E-text

    DEFF Research Database (Denmark)

    Finnemann, Niels Ole

    2018-01-01

text can be defined by taking as point of departure the digital format in which everything is represented in the binary alphabet. While the notion of text, in most cases, lends itself to be independent of medium and embodiment, it is also often tacitly assumed that it is, in fact, modeled around the print medium, rather than written text or speech. In the late 20th century, the notion of text was subject to increasing criticism, as in the question raised within literary text theory: is there a text in this class? At the same time, the notion was expanded by including extra-linguistic sign modalities

  16. Texting on the Move

    Science.gov (United States)

    ... text. What's the Big Deal? The problem is multitasking. No matter how young and agile we are, ... on something other than the road. In fact, driving while texting (DWT) can be more dangerous than ...

  17. Text Coherence in Translation

    Science.gov (United States)

    Zheng, Yanping

    2009-01-01

    In the thesis a coherent text is defined as a continuity of senses of the outcome of combining concepts and relations into a network composed of knowledge space centered around main topics. And the author maintains that in order to obtain the coherence of a target language text from a source text during the process of translation, a translator can…

  18. Surface-water surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Saldi, K.A.; Dirkes, R.L.; Blanton, M.L.

    1995-06-01

This section of the 1994 Hanford Site Environmental Report summarizes surface-water surveillance. Surface water on and near the Hanford Site is monitored to determine the potential effects of Hanford operations. Surface water at Hanford includes the Columbia River, riverbank springs, ponds located on the Hanford Site, and offsite water systems directly east and across the Columbia River from the Hanford Site. Columbia River sediments are also included in this discussion. Tables 5.3.1 and 5.3.2 summarize the sampling locations, sample types, sampling frequencies, and sample analyses included in surface-water surveillance activities during 1994. Sample locations are also identified in Figure 5.3.1. This section describes the surveillance effort and summarizes the results for these aquatic environments. Detailed analytical results are reported by Bisping (1995).

  19. Water radiological surveillance (II)

    International Nuclear Information System (INIS)

    Pablo San Martin de, M.

    2008-01-01

This paper summarizes the characteristics of the Environmental Surveillance Radiological Networks (ESRN) currently operated by CEDEX. The first part presented the Spanish Continental Waters ESRN. This second part describes the Spanish Coastal Waters ESRN and the High Sensitivity Networks in Continental and Marine Waters. It also presents the radiological surveillance of drinking water that CEDEX carries out on waters for public consumption managed by the Canal de Isabel II (CYII) and by the Mancomunidad de los Canales del Taibilla (M.C.T.). The legislation applicable in each case is reviewed as well. Due to its length, the article has been divided into two parts. As the Spanish Continental Waters ESRN was reviewed in the first part, the other ESRNs are discussed in this second one. (Author) 10 refs

  20. Disaster prevention surveillance system

    International Nuclear Information System (INIS)

    Nara, Satoru; Kamiya, Eisei

    2001-01-01

Fuji Electric Co., Ltd. has supplied many management systems to nuclear reactor institutions. When the special measures law on nuclear disaster countermeasures was enforced, nuclear operators devised measures for preventing the spread of a disaster and for restoration, in addition to their efforts to prevent nuclear disasters from occurring. Our company supplied the 'disaster prevention surveillance system' to the Japan Atomic Energy Research Institute Tokai Research Establishment, aiming to strengthen the post-accident monitoring function as one of the above-mentioned measures. When a serious accident happens, the disaster prevention surveillance system can share information from the accident site with the on-site command post, the activity headquarters, and support organizations. The system is composed of various sensors (temperature, pressure and radiation), cameras, computers and a network. (author)

  1. Surface-water surveillance

    International Nuclear Information System (INIS)

    Saldi, K.A.; Dirkes, R.L.; Blanton, M.L.

    1995-01-01

This section of the 1994 Hanford Site Environmental Report summarizes surface-water surveillance. Surface water on and near the Hanford Site is monitored to determine the potential effects of Hanford operations. Surface water at Hanford includes the Columbia River, riverbank springs, ponds located on the Hanford Site, and offsite water systems directly east and across the Columbia River from the Hanford Site. Columbia River sediments are also included in this discussion. Tables 5.3.1 and 5.3.2 summarize the sampling locations, sample types, sampling frequencies, and sample analyses included in surface-water surveillance activities during 1994. Sample locations are also identified in Figure 5.3.1. This section describes the surveillance effort and summarizes the results for these aquatic environments. Detailed analytical results are reported by Bisping (1995)

  2. Medical Surveillance Monthly Report

    Science.gov (United States)

    2016-12-01

Topics in this issue include heat illness prevention and sun safety ("Sun Safety," https://phc.amedd.army.mil/topics/discond/hipss/Pages/SunSafety.aspx, accessed 7 December 2016) and febrile illness following its widespread introduction into immunologically naïve populations. MSMR Vol. 23 No. 12, December 2016; surveillance data through October 2016 (data as of 22 November 2016). The Medical Surveillance Monthly Report (MSMR) invites readers to submit topics for publication.

  3. Internet and Surveillance

    DEFF Research Database (Denmark)

The Internet has been transformed in the past years from a system primarily oriented on information provision into a medium for communication and community-building. The notion of "Web 2.0", social software, and social networking sites such as Facebook, Twitter and MySpace have emerged in this context. ... institutions have a growing interest in accessing this personal data. Here, contributors explore this changing landscape by addressing topics such as commercial data collection by advertising, consumer sites and interactive media; self-disclosure in the social web; surveillance of file-sharers; privacy in the age of the internet; civil watch-surveillance on social networking sites; and networked interactive surveillance in transnational space. This book is a result of a research action launched by the intergovernmental network COST (European Cooperation in Science and Technology).

  4. A systems biology-based classifier for hepatocellular carcinoma diagnosis.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

Full Text Available AIM: The diagnosis of hepatocellular carcinoma (HCC) in the early stage is crucial to the application of curative treatments which are the only hope for increasing the life expectancy of patients. Recently, several large-scale studies have shed light on this problem through analysis of gene expression profiles to identify markers correlated with HCC progression. However, those marker sets shared few genes in common and were poorly validated using independent data. Therefore, we developed a systems biology based classifier by combining the differential gene expression with topological features of human protein interaction networks to enhance the ability of HCC diagnosis. METHODS AND RESULTS: In the Oncomine platform, genes differentially expressed in HCC tissues relative to their corresponding normal tissues were filtered by a corrected Q value cut-off and Concept filters. The identified genes that are common to different microarray datasets were chosen as the candidate markers. Then, their networks were analyzed by GeneGO Meta-Core software and the hub genes were chosen. After that, an HCC diagnostic classifier was constructed by Partial Least Squares modeling based on the microarray gene expression data of the hub genes. Validations of diagnostic performance showed that this classifier had high predictive accuracy (85.88∼92.71%) and area under ROC curve (approximating 1.0), and that the network topological features integrated into this classifier contribute greatly to improving the predictive performance. Furthermore, it has been demonstrated that this modeling strategy is not only applicable to HCC, but also to other cancers. CONCLUSION: Our analysis suggests that the systems biology-based classifier that combines the differential gene expression and topological features of human protein interaction network may enhance the diagnostic performance of HCC classifier.

  5. Evaluation of health surveillance activities of hajj 2013 in the hajj embarkation Palangkaraya

    Directory of Open Access Journals (Sweden)

    Elvan Virgo Hoesea

    2014-05-01

Full Text Available ABSTRACT Meningococcal meningitis and MERS-CoV are diseases that pilgrims must be wary of, considering the high incidence of both diseases in the Middle East region. This study was conducted to evaluate the surveillance activities carried out at the Palangkaraya hajj embarkation in 2013, and to assess those activities based on the attributes of surveillance and the barriers that occurred during their implementation. The study used a descriptive design with a quantitative approach. Questionnaires were completed by the 6 implementers of surveillance activities. Interviews were conducted to obtain information about the variables under study, including data collection, processing, analysis and interpretation, dissemination of information, and surveillance attributes such as simplicity, flexibility, acceptability, sensitivity, positive predictive value, representativeness, timeliness, data quality and data stability. The implementation of health surveillance at the Palangkaraya hajj embarkation in 2013 showed that all stages of the surveillance activities were conducted in accordance with the procedures; evaluation against the surveillance attributes showed that all attributes could be assessed, except sensitivity and positive predictive value, because there were no cases of meningococcal meningitis. The conclusion is that the implementation of hajj health surveillance activities has run quite well, based on the surveillance approach and surveillance attributes. The resulting reports have been used by the agencies involved in hajj embarkation activities. There is a need to increase the quantity and quality of manpower resources and facilities. Keywords: disease transmission, hajj health surveillance, assessment attributes

  6. Vocabulary Constraint on Texts

    Directory of Open Access Journals (Sweden)

    C. Sutarsyah

    2008-01-01

Full Text Available This case study was carried out in the English Education Department of the State University of Malang. The aim of the study was to identify and describe the vocabulary in the reading texts and to determine whether the texts are useful for reading skill development. A descriptive qualitative design was applied to obtain the data, using available computer programs to profile the vocabulary of the texts. It was found that the 20 texts, containing 7,945 words, are dominated by low-frequency words, which account for 16.97% of the words in the texts. The high-frequency words occurring in the texts were dominated by function words. Regarding word levels, the texts have a very limited number of words from the GSL (General Service List of English Words; West, 1953): the first 1,000 words of the GSL account for only 44.6%. The data also show that the texts contain too large a proportion of words outside the three levels (the first 2,000 and the UWL); these words account for 26.44% of the running words in the texts. It is believed that the constraints are due to the selection of the texts, which consist of a series of short, unrelated passages. Such texts are subject to an accumulation of low-frequency words, especially content words, and contain few words from the GSL. They could also hinder the development of students' reading skills and vocabulary enrichment.
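
Frequency profiling of this kind is straightforward to sketch. The snippet below counts running words and measures how much of a text a given high-frequency word list covers; the toy sentence and the five-word list are hypothetical stand-ins for a real corpus and the GSL.

```python
from collections import Counter

def vocabulary_profile(text, high_freq_list):
    """Proportion of running words covered by a given word list,
    plus the most frequent tokens."""
    tokens = [w.strip(".,!?;:\"'()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    counts = Counter(tokens)
    covered = sum(n for w, n in counts.items() if w in high_freq_list)
    return covered / len(tokens), counts.most_common(3)

# toy corpus and a hypothetical high-frequency word list
text = "The cat sat on the mat. The dog sat on the log."
coverage, top3 = vocabulary_profile(text, {"the", "on", "a", "of", "and"})
print(round(coverage, 2))  # → 0.5
```

The same coverage figure, computed against the first 1,000 GSL words, is what the 44.6% in the abstract refers to.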

  7. History and evolution of surveillance in public health

    Directory of Open Access Journals (Sweden)

    Varun Kumar

    2014-01-01

Full Text Available The modern concept of surveillance has evolved over the centuries. Public health surveillance provides the scientific database essential for decision making and appropriate public health action. It is considered the best public health tool to prevent the occurrence of epidemics; it is the backbone of public health programs and provides information so that effective action can be taken to control and prevent diseases of public health importance. This article reviews the evolution of public health surveillance from a historical perspective: from Hippocrates, the Black Death and quarantine, the first recording of vital events, and the first field investigation, through the legislation developed over time, to modern concepts in public health surveillance. The eradication of smallpox is an important achievement of public health surveillance, but the recent Severe Acute Respiratory Syndrome (SARS) and influenza pandemics suggest there is still room for improvement. Recently, new global disease surveillance networks such as FluNet and DengueNet were developed as internet sites for monitoring influenza and dengue information. In spite of these developments, global public health surveillance remains unevenly distributed, and increased international cooperation is needed to address its global needs.

  8. Evaluation of two surveillance methods for surgical site infection

    Directory of Open Access Journals (Sweden)

    M. Haji Abdolbaghi

    2006-08-01

Full Text Available Background: Surgical wound infection surveillance is an important facet of hospital infection control processes. There are several surveillance methods for surgical site infections. The objective of this study is to evaluate the accuracy of two different surgical site infection surveillance methods. Methods: In this prospective cross-sectional study, 3020 patients undergoing surgery in the general surgical wards of Imam Khomeini hospital were included. The surveillance methods consisted of a review of medical records for postoperative fever and a review of daily nursing notes for antibiotics prescribed postoperatively and at discharge. A review of the patient's history and daily records, together with interviews with the patient's surgeon and the head nurse of the ward, was considered the gold standard for surveillance. Results: Postoperative antibiotic consumption, especially when its duration is considered, is a proper method for surgical wound infection surveillance. Conclusion: The results of this study showed that the postoperative antibiotic surveillance method, especially with consideration of the duration of antibiotic usage, is a proper method for surgical site infection surveillance in general surgery wards. A prospective study with post-discharge follow-up until 30 days after surgery is recommended.

  9. Surveillance, Snowden, and Big Data: Capacities, consequences, critique

    Directory of Open Access Journals (Sweden)

    David Lyon

    2014-07-01

    Full Text Available The Snowden revelations about National Security Agency surveillance, starting in 2013, along with the ambiguous complicity of internet companies and the international controversies that followed provide a perfect segue into contemporary conundrums of surveillance and Big Data. Attention has shifted from late C20th information technologies and networks to a C21st focus on data, currently crystallized in “Big Data.” Big Data intensifies certain surveillance trends associated with information technology and networks, and is thus implicated in fresh but fluid configurations. This is considered in three main ways: One, the capacities of Big Data (including metadata intensify surveillance by expanding interconnected datasets and analytical tools. Existing dynamics of influence, risk-management, and control increase their speed and scope through new techniques, especially predictive analytics. Two, while Big Data appears to be about size, qualitative change in surveillance practices is also perceptible, accenting consequences. Important trends persist – the control motif, faith in technology, public-private synergies, and user-involvement – but the future-orientation increasingly severs surveillance from history and memory and the quest for pattern-discovery is used to justify unprecedented access to data. Three, the ethical turn becomes more urgent as a mode of critique. Modernity's predilection for certain definitions of privacy betrays the subjects of surveillance who, so far from conforming to the abstract, disembodied image of both computing and legal practices, are engaged and embodied users-in-relation whose activities both fuel and foreclose surveillance.

  10. Dictionaries for text production

    DEFF Research Database (Denmark)

    Fuertes-Olivera, Pedro; Bergenholtz, Henning

    2018-01-01

Dictionaries for Text Production are information tools that are designed and constructed to help users produce (i.e. encode) texts, both oral and written. These can be broadly divided into two groups: (a) specialized text production dictionaries, i.e., dictionaries that only offer a small amount of lexicographic data, most or all of which are typically used in a production situation, e.g. synonym dictionaries, grammar and spelling dictionaries, collocation dictionaries, and concept dictionaries such as the Longman Language Activator, which is advertised as the World's First Production Dictionary; (b) general text production dictionaries, i.e., dictionaries that offer all or most of the lexicographic data that are typically used in a production situation. A review of existing production dictionaries reveals that there are many specialized text production dictionaries but only a few general ones.

  11. Instant Sublime Text starter

    CERN Document Server

    Haughee, Eric

    2013-01-01

    A starter which teaches the basic tasks to be performed with Sublime Text with the necessary practical examples and screenshots. This book requires only basic knowledge of the Internet and basic familiarity with any one of the three major operating systems, Windows, Linux, or Mac OS X. However, as Sublime Text 2 is primarily a text editor for writing software, many of the topics discussed will be specifically relevant to software development. That being said, the Sublime Text 2 Starter is also suitable for someone without a programming background who may be looking to learn one of the tools of

  12. Wavelet classifier used for diagnosing shock absorbers in cars

    Directory of Open Access Journals (Sweden)

    Janusz GARDULSKI

    2007-01-01

Full Text Available The paper discusses some commonly used methods of hydraulic absorber testing. Disadvantages of these methods are described. A vibro-acoustic method is presented and recommended for practical use on existing test rigs. The method is based on continuous wavelet analysis combined with a neural classifier: a 25-neuron, one-way, three-layer back-propagation network. The analysis satisfies the intended aim.
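
The paper's method rests on continuous wavelet analysis feeding a neural classifier; its exact feature set is not given in the abstract. As a rough, hedged illustration of wavelet-derived features, the sketch below uses a level-by-level Haar transform and takes the energy in each band as a feature vector (the Haar choice and the energy features are assumptions for illustration, not the paper's design).

```python
def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform."""
    assert len(signal) % 2 == 0
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def band_energies(signal, levels=2):
    """Energy per wavelet band -- a simple feature vector for a classifier."""
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(sum(x * x for x in d))  # detail-band energy
    feats.append(sum(x * x for x in a))      # remaining approximation energy
    return feats

# a signal with one sharp transient (hypothetical vibration sample)
sig = [1.0, 1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0]
feats = band_energies(sig)
print(feats)
```

Because the Haar transform is orthonormal, the band energies sum to the signal energy, so the features partition the signal's energy across scales.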

  13. Linguistics in Text Interpretation

    DEFF Research Database (Denmark)

    Togeby, Ole

    2011-01-01

A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, and a definition of the concepts 'presupposition' and 'implicature'.

  14. LocText

    DEFF Research Database (Denmark)

    Cejuela, Juan Miguel; Vinchurkar, Shrikant; Goldberg, Tatyana

    2018-01-01

    trees and was trained and evaluated on a newly improved LocTextCorpus. Combined with an automatic named-entity recognizer, LocText achieved high precision (P = 86%±4). After completing development, we mined the latest research publications for three organisms: human (Homo sapiens), budding yeast...

  15. Systematic text condensation

    DEFF Research Database (Denmark)

    Malterud, Kirsti

    2012-01-01

To present background, principles, and procedures for a strategy for qualitative analysis called systematic text condensation and discuss this approach compared with related strategies.

  16. The Perfect Text.

    Science.gov (United States)

    Russo, Ruth

    1998-01-01

    A chemistry teacher describes the elements of the ideal chemistry textbook. The perfect text is focused and helps students draw a coherent whole out of the myriad fragments of information and interpretation. The text would show chemistry as the central science necessary for understanding other sciences and would also root chemistry firmly in the…

  17. Text 2 Mind Map

    OpenAIRE

    Iona, John

    2017-01-01

This is a review of the web resource 'Text 2 Mind Map' (www.Text2MindMap.com). It covers what the resource is and how it might be used in library and education contexts, in particular by school librarians.

  18. Text File Comparator

    Science.gov (United States)

    Kotler, R. S.

    1983-01-01

File Comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level.

  19. Classifier Fusion With Contextual Reliability Evaluation.

    Science.gov (United States)

    Liu, Zhunga; Pan, Quan; Dezert, Jean; Han, Jun-Wei; He, You

    2018-05-01

Classifier fusion is an efficient strategy to improve classification performance for complex pattern recognition problems. In practice, the multiple classifiers to be combined can have different reliabilities, and proper reliability evaluation plays an important role in the fusion process for getting the best classification performance. We propose a new method for classifier fusion with contextual reliability evaluation (CF-CRE) based on the concepts of inner reliability and relative reliability. The inner reliability, represented by a matrix, characterizes the probability of an object belonging to one class when it is classified to another class. The elements of this matrix are estimated from the k-nearest neighbors of the object. A cautious discounting rule is developed under the belief functions framework to revise the classification result according to the inner reliability. The relative reliability is evaluated based on a new incompatibility measure which allows the level of conflict between the classifiers to be reduced by applying the classical evidence discounting rule to each classifier before their combination. The inner reliability and relative reliability capture different aspects of classification reliability. The discounted classification results are combined with Dempster-Shafer's rule for the final class decision-making support. The performance of CF-CRE has been evaluated and compared with those of the main classical fusion methods using real data sets. The experimental results show that CF-CRE can produce substantially higher accuracy than other fusion methods in general. Moreover, CF-CRE is robust to changes in the number of nearest neighbors chosen for estimating the reliability matrix, which is appealing for applications.
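
The paper's cautious discounting rule is not reproduced here, but the classical ingredients it builds on are easy to sketch: Shafer's discounting of a mass function by a reliability factor, and Dempster's rule of combination (shown below for singleton hypotheses plus the whole frame, labeled THETA; the mass values and the reliability factor are invented).

```python
def discount(m, alpha):
    """Shafer discounting: scale masses by reliability alpha and
    move the remainder to total ignorance (the frame, 'THETA')."""
    out = {h: alpha * v for h, v in m.items() if h != "THETA"}
    out["THETA"] = 1 - alpha + alpha * m.get("THETA", 0.0)
    return out

def dempster(m1, m2):
    """Dempster's rule for singleton hypotheses plus the frame THETA."""
    joint, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == "THETA":
                h = h2
            elif h2 == "THETA" or h1 == h2:
                h = h1
            else:                      # incompatible singletons
                conflict += v1 * v2
                continue
            joint[h] = joint.get(h, 0.0) + v1 * v2
    k = 1.0 - conflict                 # normalize away the conflict
    return {h: v / k for h, v in joint.items()}

# two classifiers; the second is judged less reliable (alpha = 0.6)
m1 = {"A": 0.8, "B": 0.2}
m2 = discount({"A": 0.3, "B": 0.7}, 0.6)
fused = dempster(m1, m2)
print(fused)
```

Discounting the weaker classifier before combination is exactly the role the relative reliability plays in the abstract: it softens a conflicting source instead of letting it veto the stronger one.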

  20. Classifying sows' activity types from acceleration patterns

    DEFF Research Database (Denmark)

    Cornou, Cecile; Lundbye-Christensen, Søren

    2008-01-01

An automated method of classifying sow activity using acceleration measurements would allow the individual sow's behavior to be monitored throughout the reproductive cycle; applications for detecting behaviors characteristic of estrus and farrowing, or for monitoring illness and welfare, can be foreseen. This article suggests a method of classifying five types of activity exhibited by group-housed sows. The method involves the measurement of acceleration in three dimensions. The five activities are: feeding, walking, rooting, lying laterally and lying sternally. Four time series of acceleration (the three…

  1. A survey of decision tree classifier methodology

    Science.gov (United States)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods for DTC design and the various existing issues is presented. After considering the potential advantages of DTCs over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.
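
The "collection of simpler decisions" idea can be illustrated with the smallest tree component: a single internal node (a decision stump) chosen by exhaustive search over features and thresholds, using Gini impurity as the splitting criterion (one common choice among those surveyed; the toy data are invented).

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_stump(X, y):
    """Pick the (feature, threshold) split minimizing the weighted
    Gini impurity -- one internal node of a decision tree."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [y[i] for i, row in enumerate(X) if row[f] <= t]
            right = [y[i] for i, row in enumerate(X) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

# toy data: feature 0 separates the classes perfectly at threshold 2
X = [[1, 10], [2, 11], [3, 30], [4, 31]]
y = ["neg", "neg", "pos", "pos"]
score, feature, threshold = best_stump(X, y)
print(feature, threshold)  # → 0 2
```

A full tree applies this search recursively to each resulting subset, which is what makes the final decision path so easy to read off.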

  2. Bayesian Classifier for Medical Data from Doppler Unit

    Directory of Open Access Journals (Sweden)

    J. Málek

    2006-01-01

Full Text Available Nowadays, hand-held ultrasonic Doppler units (probes) are often used for noninvasive screening of atherosclerosis in the arteries of the lower limbs. The mean velocity of blood flow over time and blood pressures are measured at several positions on each lower limb. By listening to the acoustic signal generated by the device or by reading the signal displayed on screen, a specialist can detect peripheral arterial disease (PAD). This project aims to design software able to analyze data from such a device and classify it into several diagnostic classes. At the Department of Functional Diagnostics at the Regional Hospital in Liberec, a database of several hundred signals was collected. In cooperation with the specialist, the signals were manually classified into four classes. For each class, selected signal features were extracted and then used to train a Bayesian classifier. Another set of signals was used for evaluating and optimizing the parameters of the classifier. A success rate slightly above 84% in recognizing diagnostic states was recently achieved on the test data.
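
The abstract does not specify the exact form of the Bayesian classifier; a common concrete choice is a Gaussian naive Bayes model, sketched below on invented two-dimensional features (the feature names in the comments are hypothetical).

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Estimate per-class feature means/variances and class priors."""
    groups = defaultdict(list)
    for row, label in zip(X, y):
        groups[label].append(row)
    model = {}
    for label, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / n + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[label] = (n / len(X), means, variances)
    return model

def predict_gnb(model, row):
    """Pick the class with the largest log posterior."""
    def log_post(prior, means, variances):
        lp = math.log(prior)
        for v, m, s2 in zip(row, means, variances):
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        return lp
    return max(model, key=lambda c: log_post(*model[c]))

# hypothetical (mean velocity, pulsatility) feature vectors
X = [[1.0, 0.2], [1.1, 0.25], [3.0, 0.9], [3.2, 1.0]]
y = ["PAD", "PAD", "normal", "normal"]
model = fit_gnb(X, y)
print(predict_gnb(model, [1.05, 0.22]))  # → PAD
```

With class-conditional Gaussians, training reduces to computing means and variances per class, which suits a small clinical feature set like the one described.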

  3. Data Stream Classification Based on the Gamma Classifier

    Directory of Open Access Journals (Sweden)

    Abril Valeria Uriarte-Arcia

    2015-01-01

Full Text Available Ever-increasing data generation confronts us with the problem of handling massive amounts of information online. One of the biggest challenges is how to extract valuable information from these massive continuous data streams during a single scan. In a data stream context, data arrive continuously at high speed; therefore the algorithms developed for this context must be efficient regarding memory and time management, and capable of detecting changes over time in the underlying distribution that generated the data. This work describes a novel method for pattern classification over a continuous data stream based on an associative model. The proposed method is based on the Gamma classifier, which is inspired by the Alpha-Beta associative memories; both are supervised pattern recognition models. The proposed method is capable of handling the space and time constraints inherent in data stream scenarios. The Data Streaming Gamma classifier (DS-Gamma classifier) implements a sliding window approach to provide concept drift detection and a forgetting mechanism. In order to test the classifier, several experiments were performed using different data stream scenarios with real and synthetic data streams. The experimental results show that the method exhibits competitive performance when compared to other state-of-the-art algorithms.
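
The sliding-window-with-forgetting idea can be sketched independently of the Gamma classifier itself. The toy stream classifier below keeps only the most recent labeled examples and classifies by nearest neighbour, so old concepts are forgotten as the window slides (1-NN is a stand-in for the associative model, not the DS-Gamma classifier).

```python
from collections import deque

class SlidingWindow1NN:
    """Stream classifier sketch: keep only the last `window` labeled
    examples (a forgetting mechanism) and classify by the nearest
    stored neighbour."""
    def __init__(self, window=4):
        self.buffer = deque(maxlen=window)  # oldest examples fall out

    def learn(self, x, label):
        self.buffer.append((x, label))

    def predict(self, x):
        dist = lambda ex: sum((u - v) ** 2 for u, v in zip(ex[0], x))
        return min(self.buffer, key=dist)[1]

clf = SlidingWindow1NN(window=4)
for x, label in [([0.0], "a"), ([1.0], "b"), ([0.1], "a"), ([0.9], "b")]:
    clf.learn(x, label)
before_drift = clf.predict([0.2])

# concept drift: the region around 0 now belongs to class "b";
# the new examples push the old concept out of the window
for x in ([0.05], [0.15], [0.0], [0.2]):
    clf.learn(x, "b")
after_drift = clf.predict([0.2])
print(before_drift, after_drift)  # → a b
```

The bounded buffer gives the constant-memory behaviour the abstract asks of stream algorithms, and the change in prediction after drift is the forgetting mechanism at work.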

  4. Building an automated SOAP classifier for emergency department reports.

    Science.gov (United States)

    Mowery, Danielle; Wiebe, Janyce; Visweswaran, Shyam; Harkema, Henk; Chapman, Wendy W

    2012-02-01

Information extraction applications that extract structured event and entity information from unstructured text can leverage knowledge of clinical report structure to improve performance. The Subjective, Objective, Assessment, Plan (SOAP) framework, used to structure progress notes to facilitate problem-specific, clinical decision making by physicians, is one example of a well-known, canonical structure in the medical domain. Although its applicability to structuring data is understood, its contribution to information extraction tasks has not yet been determined. The first step to evaluating the SOAP framework's usefulness for clinical information extraction is to apply the model to clinical narratives and develop an automated SOAP classifier that classifies sentences from clinical reports. In this quantitative study, we applied the SOAP framework to sentences from emergency department reports, and trained and evaluated SOAP classifiers built with various linguistic features. We found the SOAP framework can be applied manually to emergency department reports with high agreement (Cohen's kappa coefficients over 0.70). Using a variety of features, we found classifiers for each SOAP class can be created with moderate to outstanding performance with F1 scores of 93.9 (subjective), 94.5 (objective), 75.7 (assessment), and 77.0 (plan). We look forward to expanding the framework and applying the SOAP classification to clinical information extraction tasks. Copyright © 2011. Published by Elsevier Inc.
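
The agreement and performance figures quoted above come from standard formulas; for reference, this is how Cohen's kappa and an F1 score are computed (the annotation sequences are invented).

```python
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

def f1(gold, pred, positive):
    """Harmonic mean of precision and recall for one class."""
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# two hypothetical annotators labeling six sentences with SOAP classes
ann1 = ["S", "S", "O", "A", "P", "O"]
ann2 = ["S", "S", "O", "A", "O", "O"]
print(round(cohens_kappa(ann1, ann2), 2))  # → 0.76
```

A kappa above 0.70, as reported, means the annotators agree far more than the class distribution alone would predict.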

  5. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
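
The non-iterative training the authors highlight is characteristic of polynomial classifiers fit by least squares: expand the features into polynomial terms, then solve the normal equations in closed form. The sketch below does this for a second-order expansion on an XOR-like toy problem (the small ridge term and the toy data are assumptions for illustration, not the paper's setup).

```python
def poly_expand(x1, x2):
    """Second-order polynomial basis of a 2-D feature vector."""
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def train_poly(X, y):
    """Least-squares fit of the expanded features -- no iterative training."""
    Phi = [poly_expand(*x) for x in X]
    d = len(Phi[0])
    AtA = [[sum(p[i] * p[j] for p in Phi) + (1e-6 if i == j else 0.0)
            for j in range(d)] for i in range(d)]
    Atb = [sum(p[i] * t for p, t in zip(Phi, y)) for i in range(d)]
    return solve(AtA, Atb)

# XOR-like toy problem: not linearly separable; the x1*x2 term handles it
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
y = [1, -1, -1, 1]
w = train_poly(X, y)
score = lambda x: sum(wi * pi for wi, pi in zip(w, poly_expand(*x)))
preds = [1 if score(x) > 0 else -1 for x in X]
print(preds)  # → [1, -1, -1, 1]
```

Solving one linear system per class is also what makes the approach scale easily with the number of classes, as the abstract notes.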

  6. A Gene Expression Classifier of Node-Positive Colorectal Cancer

    Directory of Open Access Journals (Sweden)

    Paul F. Meeh

    2009-10-01

Full Text Available We used digital long serial analysis of gene expression to discover gene expression differences between node-negative and node-positive colorectal tumors and developed a multigene classifier able to discriminate between these two tumor types. We prepared and sequenced long serial analysis of gene expression libraries from one node-negative and one node-positive colorectal tumor, sequenced to a depth of 26,060 unique tags, and identified 262 tags significantly differentially expressed between these two tumors (P < 2 × 10^-6. We confirmed the tag-to-gene assignments and differential expression of 31 genes by quantitative real-time polymerase chain reaction, 12 of which were elevated in the node-positive tumor. We analyzed the expression levels of these 12 upregulated genes in a validation panel of 23 additional tumors and developed an optimized seven-gene logistic regression classifier. The classifier discriminated between node-negative and node-positive tumors with 86% sensitivity and 80% specificity. Receiver operating characteristic analysis of the classifier revealed an area under the curve of 0.86. Experimental manipulation of the function of one classification gene, Fibronectin, caused profound effects on invasion and migration of colorectal cancer cells in vitro. These results suggest that the development of node-positive colorectal cancer occurs in part through elevated epithelial FN1 expression and suggest novel strategies for the diagnosis and treatment of advanced disease.
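
The reported 86% sensitivity, 80% specificity, and 0.86 AUC are standard evaluation quantities; the sketch below computes them from invented labels and classifier scores, using the rank (Mann-Whitney) formulation of the AUC.

```python
def sensitivity_specificity(gold, pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(g and p for g, p in zip(gold, pred))
    tn = sum(not g and not p for g, p in zip(gold, pred))
    fp = sum(not g and p for g, p in zip(gold, pred))
    fn = sum(g and not p for g, p in zip(gold, pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(gold, scores):
    """Area under the ROC curve: the probability that a random positive
    outscores a random negative (ties count half)."""
    pos = [s for g, s in zip(gold, scores) if g]
    neg = [s for g, s in zip(gold, scores) if not g]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# invented node-positive labels and classifier scores
gold = [True, True, True, False, False]
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
sens, spec = sensitivity_specificity(gold, [s > 0.45 for s in scores])
a = auc(gold, scores)
print(round(sens, 2), round(spec, 2), round(a, 3))
```

Sensitivity and specificity depend on the chosen threshold (0.45 here), whereas the AUC summarizes performance over all thresholds, which is why the paper reports both.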

  7. Localizing genes to cerebellar layers by classifying ISH images.

    Directory of Open Access Journals (Sweden)

    Lior Kirsch

Full Text Available Gene expression controls how the brain develops and functions. Understanding control processes in the brain is particularly hard since they involve numerous types of neurons and glia, and very little is known about which genes are expressed in which cells and brain layers. Here we describe an approach to detect genes whose expression is primarily localized to a specific brain layer and apply it to the mouse cerebellum. We learn typical spatial patterns of expression from a few markers that are known to be localized to specific layers, and use these patterns to predict localization for new genes. We analyze images of in-situ hybridization (ISH) experiments, which we represent using histograms of local binary patterns (LBP), and train image classifiers and gene classifiers for four layers of the cerebellum: the Purkinje, granular, molecular and white matter layer. On held-out data, the layer classifiers achieve accuracy above 94% (AUC) by representing each image at multiple scales and by combining multiple image scores into a single gene-level decision. When applied to the full mouse genome, the classifiers predict specific layer localization for hundreds of new genes in the Purkinje and granular layers. Many genes localized to the Purkinje layer are likely to be expressed in astrocytes, and many others are involved in lipid metabolism, possibly due to the unusual size of Purkinje cells.
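
A local-binary-pattern histogram of the kind used to represent the ISH images can be sketched in a few lines: each interior pixel is encoded by comparing its eight neighbours to it, and the 256-bin histogram of codes describes the image texture (the neighbour ordering and the >= convention below are one common variant, not necessarily the paper's).

```python
def lbp_histogram(img):
    """Histogram of 8-neighbour local binary patterns over interior pixels.
    Each pixel gets an 8-bit code: bit set where a neighbour >= the centre."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr][c + dc] >= img[r][c]:
                    code |= 1 << bit
            hist[code] += 1
    return hist

# 4x4 toy "image": a bright band across the middle two rows
img = [[0, 0, 0, 0],
       [9, 9, 9, 9],
       [9, 9, 9, 9],
       [0, 0, 0, 0]]
hist = lbp_histogram(img)
print(sum(hist))  # one code per interior pixel → 4
```

Because the codes depend only on local intensity ordering, the histogram is robust to overall staining intensity, which makes it a reasonable texture descriptor for ISH images.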

  8. Zum Bildungspotenzial biblischer Texte (On the Educational Potential of Biblical Texts)

    Directory of Open Access Journals (Sweden)

    Theis, Joachim

    2017-11-01

Full Text Available Biblical education as a holistic process goes far beyond biblical learning. It must be understood as a lifelong process in which both biblical texts and their interpreters operate, each appropriating its counterpart in a dialogical way. Neither is the recipient's horizon of understanding an empty room that merely has to be filled with the text, nor is the text dead material that can only be examined cognitively. The recipient discovers the meaning of the biblical text by recomposing it through existential appropriation, and so the text is brought to life in each individual reality. Both scientific insights and subjective structures, as well as the community of interpreters, must be included to avoid potential one-sidedness. Unfortunately, a particular negative association often obscures the approach to the Bible: biblical work in religious education still appears in a cognitively oriented guise that respects neither the vitality and sovereignty of the biblical texts nor the students' desire for meaning. Moreover, the Bible is misused for teaching moral precepts or pontifications. Such pitfalls can be avoided by a biblical didactics of empowerment. Respecting the sovereignty of biblical texts, such didactics assist interpreters in their individuation by opening the texts with a focus on the interpreter's otherness. Thus both the text and the recipient become subjects in a dialogue. This Biblical-Enabling-Didactics approach allows the Bible to become, ever anew, a book of life. Understood from within its hermeneutics, empowerment didactics could be raised to the principle of biblical didactics in general and grow into an essential element of holistic education.

  9. Wallops Ship Surveillance System

    Science.gov (United States)

    Smith, Donna C.

    2011-01-01

Approved as a Wallops control center backup system, the Wallops Ship Surveillance Software is a day-of-launch risk analysis tool for spaceport activities. The system calculates impact probabilities and displays ship locations relative to boundary lines. It enables rapid analysis of possible flight paths to preclude the need to cancel launches and allow execution of launches in a timely manner. Its design is based on low-cost, large-customer-base elements including personal computers, the Windows operating system, C/C++ object-oriented software, and network interfaces. In conformance with the NASA software safety standard, the system is designed to ensure that it does not falsely report a safe-for-launch condition. To improve the current ship surveillance method, the system is designed to prevent delay of launch under a safe-for-launch condition. A single workstation is designated the controller of the official ship information and the official risk analysis. Copies of this information are shared with other networked workstations. The program design is divided into five subsystems areas: 1. Communication Link -- threads that control the networking of workstations; 2. Contact List -- a thread that controls a list of protected item (ocean vessel) information; 3. Hazard List -- threads that control a list of hazardous item (debris) information and associated risk calculation information; 4. Display -- threads that control operator inputs and screen display outputs; and 5. Archive -- a thread that controls archive file read and write access. Currently, most of the hazard list thread and parts of other threads are being reused as part of a new ship surveillance system, under the SureTrak project.

  10. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L.E.

    1993-01-01

Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). Samples are routinely collected and analyzed to determine the quality of air, surface water, ground water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and in surrounding communities. This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP), the Drinking Water Project, and the Ground-Water Surveillance Project.

  11. EST: Evading Scientific Text.

    Science.gov (United States)

    Ward, Jeremy

    2001-01-01

    Examines chemical engineering students' attitudes to text and other parts of English language textbooks. A questionnaire was administered to a group of undergraduates. Results reveal one way students get around the problem of textbook reading. (Author/VWL)

  12. nal Sesotho texts

    African Journals Online (AJOL)

    with literary texts written in indigenous South African languages. The project ... Homi Bhabha uses the words of Salman Rushdie to underline the fact that new .... I could not conceptualise an African-language-to-African-language dictionary. An.

  13. Self-surveillance

    DEFF Research Database (Denmark)

    Albrechtslund, Anders

Gadgets and applications are increasingly being developed and used for tracking, quantifying, and documenting everyday life activities; health and fitness devices such as GPS-enabled sports watches are especially well known and popular. However, self-surveillance practices involving networked … pressure, fitness activities, sleep cycles, etc. can be broadcast, e.g. as tweets on Twitter or status updates on Facebook. Such quantification practices with monitoring technologies become co-producing when individuals constitute themselves as subjects engaging in self-tracking, self-care, and self…

  14. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. The approach consists of three main levels. The first is the component level, which serves as a rough estimate of the optimal STI and can be calculated analytically by differentiating an equation for mean unavailability. The second and third levels give more representative results: they take into account the results of probabilistic risk assessment (PRA), calculated by a personal computer (PC) based code, and are based on system unavailability at the system level and on core damage frequency at the plant level.
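
The component-level analytic estimate can be illustrated with a common first-order model (an assumption here, not necessarily the authors' exact equation): mean unavailability U(T) = lam*T/2 + tau/T, where failures accumulate between tests while testing itself takes the component out of service; setting dU/dT = 0 gives the optimal interval in closed form.

```python
import math

def mean_unavailability(T, lam, tau):
    """First-order model: undetected-failure term lam*T/2 plus
    test-downtime term tau/T."""
    return lam * T / 2 + tau / T

def optimal_sti(lam, tau):
    """dU/dT = lam/2 - tau/T**2 = 0  =>  T* = sqrt(2*tau/lam)."""
    return math.sqrt(2 * tau / lam)

lam, tau = 1e-4, 2.0   # hypothetical failure rate [1/h] and test duration [h]
T_star = optimal_sti(lam, tau)
print(round(T_star))  # → 200
```

Testing too often wastes availability on the test itself; testing too rarely leaves failures undetected. T* is where the two penalties balance, which is the trade-off the component level of the approach captures.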

  15. GSFC Supplier Surveillance

    Science.gov (United States)

    Kelly, Michael P.

    2011-01-01

Topics covered include developing Program/Project Quality Assurance Surveillance Plans. The work activities performed by the developer and/or his suppliers are subject to evaluation and audit by government-designated representatives. The CSO supports the project by selecting on-site supplier representatives by one of several methods: (1) a Defense Contract Management Agency (DCMA) person via a Letter of Delegation (LOD), or (2) an independent assurance contractor (IAC) via a contract: Audits, Assessments, and Assurance (A3) Contract, Code 300 Mission Assurance Support Contract (MASC)

  16. Plagiarism in Academic Texts

    Directory of Open Access Journals (Sweden)

    Marta Eugenia Rojas-Porras

    2012-08-01

    Full Text Available The ethical and social responsibility of citing the sources in a scientific or artistic work is undeniable. This paper explores, in a preliminary way, academic plagiarism in its various forms. It includes findings based on a forensic analysis. The purpose of this paper is to raise awareness on the importance of considering these details when writing and publishing a text. Hopefully, this analysis may put the issue under discussion.

  17. Machine Translation from Text

    Science.gov (United States)

    Habash, Nizar; Olive, Joseph; Christianson, Caitlin; McCary, John

    Machine translation (MT) from text, the topic of this chapter, is perhaps the heart of the GALE project. Beyond being a well defined application that stands on its own, MT from text is the link between the automatic speech recognition component and the distillation component. The focus of MT in GALE is on translating from Arabic or Chinese to English. The three languages represent a wide range of linguistic diversity and make the GALE MT task rather challenging and exciting.

  18. 75 FR 707 - Classified National Security Information

    Science.gov (United States)

    2010-01-05

    ... classified at one of the following three levels: (1) ``Top Secret'' shall be applied to information, the... exercise this authority. (2) ``Top Secret'' original classification authority may be delegated only by the... official has been delegated ``Top Secret'' original classification authority by the agency head. (4) Each...

  19. Neural Network Classifier Based on Growing Hyperspheres

    Czech Academy of Sciences Publication Activity Database

    Jiřina Jr., Marcel; Jiřina, Marcel

    2000-01-01

    Roč. 10, č. 3 (2000), s. 417-428 ISSN 1210-0552. [Neural Network World 2000. Prague, 09.07.2000-12.07.2000] Grant - others:MŠMT ČR(CZ) VS96047; MPO(CZ) RP-4210 Institutional research plan: AV0Z1030915 Keywords: neural network * classifier * hyperspheres * big-dimensional data Subject RIV: BA - General Mathematics

  20. Histogram deconvolution - An aid to automated classifiers

    Science.gov (United States)

    Lorre, J. J.

    1983-01-01

    It is shown that N-dimensional histograms are convolved by the addition of noise in the picture domain. Three methods are described which provide the ability to deconvolve such noise-affected histograms. The purpose of the deconvolution is to provide automated classifiers with a higher quality N-dimensional histogram from which to obtain classification statistics.
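
    The excerpt does not name the three deconvolution methods; as one standard illustration of the idea, a minimal pure-Python Richardson-Lucy iteration on a 1-D histogram might look as follows (the kernel and data are invented, and a real classifier would work on N-dimensional histograms):

```python
# Illustrative 1-D sketch (not necessarily one of the paper's three methods):
# Richardson-Lucy deconvolution of a noise-blurred histogram.
def conv_same(x, k):
    """Same-size linear convolution with zero padding; len(k) must be odd."""
    r = len(k) // 2
    return [sum(x[i + j - r] * k[j]
                for j in range(len(k)) if 0 <= i + j - r < len(x))
            for i in range(len(x))]

def richardson_lucy(observed, kernel, iters=50):
    """Iteratively sharpen `observed`, assuming it equals the true
    histogram convolved with `kernel` (the noise spread function)."""
    est = [sum(observed) / len(observed)] * len(observed)
    for _ in range(iters):
        blurred = conv_same(est, kernel)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        corr = conv_same(ratio, kernel[::-1])  # correlate with flipped kernel
        est = [e * c for e, c in zip(est, corr)]
    return est

# A single-bin histogram blurred by a symmetric noise kernel, then recovered.
blurred = conv_same([0, 0, 0, 10, 0, 0, 0], [0.25, 0.5, 0.25])
sharp = richardson_lucy(blurred, [0.25, 0.5, 0.25])
```

    After deconvolution the mass re-concentrates in the original bin, which is the "higher quality histogram" an automated classifier would draw its statistics from.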

  1. Classifying web pages with visual features

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Lupascu, T.; Filipe, J.; Cordeiro, J.

    2010-01-01

    To automatically classify and process web pages, current systems use the textual content of those pages, including both the displayed content and the underlying (HTML) code. However, a very important feature of a web page is its visual appearance. In this paper, we show that using generic visual

  2. Active prospective surveillance study with post-discharge surveillance of surgical site infections in Cambodia

    Directory of Open Access Journals (Sweden)

    José Guerra

    2015-05-01

    Full Text Available Summary: Barriers to the implementation of the Centers for Disease Control and Prevention (CDC) guidelines for surgical site infection (SSI) surveillance have been described in resource-limited settings. This study aimed to estimate the SSI incidence rate in a Cambodian hospital and to compare different modalities of SSI surveillance. We performed an active prospective study with post-discharge surveillance. During the hospital stay, trained surveyors applied the CDC criteria to identify SSIs by direct examination of the surgical site. After discharge, a card was given to each included patient to be presented to all practitioners examining the surgical site. Among 167 patients, direct examination of the surgical site identified a cumulative incidence rate of 14 infections per 100 patients. An independent review of medical charts presented a sensitivity of 16%. The sensitivity of the purulent drainage criterion to detect SSIs was 83%. After hospital discharge, 87% of the patients provided follow-up data, and nine purulent drainages were reported by a practitioner (cumulative incidence rate: 20%). Overall, the incidence rate was dependent on the surveillance modalities. The review of medical charts to identify SSIs during hospitalization was not effective; the use of a follow-up card with phone calls for post-discharge surveillance was effective. Keywords: Surgical wound infection, Cambodia, Infection control, Developing countries, Follow-up studies, Feasibility studies
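
    The rates quoted above reduce to simple proportions. A small sketch with hypothetical counts chosen to echo the abstract's figures (the study's raw data are not reproduced here):

```python
# Illustrative helpers only; the counts below are assumptions chosen to
# reproduce the abstract's reported rates, not the paper's raw data.
def cumulative_incidence(cases, patients, per=100):
    """Cases per `per` patients over the surveillance period."""
    return per * cases / patients

def sensitivity(detected, total_true):
    """Fraction of true SSIs a surveillance modality detects."""
    return detected / total_true

# Roughly 23 SSIs among 167 patients gives the reported ~14 per 100 patients.
incidence = cumulative_incidence(23, 167)
# A modality finding 4 of 25 true infections has sensitivity 0.16.
chart_review_sens = sensitivity(4, 25)
```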

  3. Mapping HIV/STI behavioural surveillance in Europe

    Directory of Open Access Journals (Sweden)

    Lert France

    2010-10-01

    Full Text Available Abstract Background Used in conjunction with biological surveillance, behavioural surveillance provides data allowing for a more precise definition of HIV/STI prevention strategies. In 2008, a mapping of behavioural surveillance in EU/EFTA countries was performed on behalf of the European Centre for Disease Prevention and Control. Method Nine questionnaires were sent to all 31 Member States and EEA/EFTA countries requesting data on the overall behavioural and second generation surveillance system and on surveillance in the general population, youth, men having sex with men (MSM), injecting drug users (IDU), sex workers (SW), migrants, people living with HIV/AIDS (PLWHA), and sexually transmitted infection (STI) clinic patients. Requested data included information on system organisation (e.g. sustainability, funding, institutionalisation), topics covered in surveys and main indicators. Results Twenty-eight of the 31 countries contacted supplied data. Sixteen countries reported an established behavioural surveillance system, and 13 a second generation surveillance system (combination of biological surveillance of HIV/AIDS and STI with behavioural surveillance). There were wide differences as regards the year of survey initiation, number of populations surveyed, data collection methods used, organisation of surveillance and coordination with biological surveillance. The populations most regularly surveyed are the general population, youth, MSM and IDU. SW, patients of STI clinics and PLWHA are surveyed less regularly and in only a small number of countries, and few countries have undertaken behavioural surveys among migrant or ethnic minority populations. In many cases, the identification of populations with risk behaviour and the selection of populations to be included in a BS system have not been formally conducted, or are incomplete. Topics most frequently covered are similar across countries, although many different indicators are used. In most

  4. Surface Environmental Surveillance Procedures Manual

    International Nuclear Information System (INIS)

    Hanf, Robert W.; Poston, Ted M.

    2000-01-01

    This manual presents and explains the procedures used for surface environmental surveillance. Hanford Site environmental surveillance is conducted by the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy (DOE) under the Surface Environmental Surveillance Project (SESP). The basic requirements for site surveillance are set forth in DOE Order 5400.1, General Environmental Protection Program Requirements. Guidance for the SESP is provided in DOE Order 5484.1, Environmental Protection, Safety, and Health Protection Information Reporting Requirements and DOE Order 5400.5, Radiation Protection of the Public and Environment. Guidelines for environmental surveillance activities are provided in DOE/EH-0173T, Environmental Regulatory Guide for Radiological Effluent Monitoring and Environmental Surveillance. An environmental monitoring plan for the Hanford Site is outlined in DOE/RL 91-50 Rev. 2, Environmental Monitoring Plan, United States Department of Energy, Richland Operations Office. Environmental surveillance data are used in assessing the impact of current and past site operations on human health and the environment, demonstrating compliance with applicable local, state, and federal environmental regulations, and verifying the adequacy of containment and effluent controls. SESP sampling schedules are reviewed, revised, and published each calendar year in the Hanford Site Environmental Surveillance Master Sampling Schedule. Environmental samples are collected by SESP staff in accordance with the approved sample collection procedures documented in this manual. Personnel training requirements are documented in SESP-TP-01 Rev.2, Surface Environmental Surveillance Project Training Program.

  5. Malaria Surveillance - United States, 2015.

    Science.gov (United States)

    Mace, Kimberly E; Arguin, Paul M; Tan, Kathrine R

    2018-05-04

    The number of malaria cases diagnosed in the United States has been increasing since the mid-1970s, but the number of cases decreased by 208 from 2014 to 2015. Among the regions of acquisition (Africa, West Africa, Asia, Central America, the Caribbean, South America, Oceania, and the Middle East), the only region with significantly fewer imported cases in 2015 compared with 2014 was West Africa (781 versus 969). Plasmodium falciparum, P. vivax, P. ovale, and P. malariae were identified in 67.4%, 11.7%, 4.1%, and 3.1% of cases, respectively. Less than 1% of patients were infected by two species. The infecting species was unreported or undetermined in 12.9% of cases. CDC provided diagnostic assistance for 13.1% of patients with confirmed cases and tested 15.0% of P. falciparum specimens for antimalarial resistance markers. Of the U.S. resident patients who reported purpose of travel, 68.4% were visiting friends or relatives. A lower proportion of U.S. residents with malaria reported taking any chemoprophylaxis in 2015 (26.5%) compared with 2014 (32.5%), and adherence was poor in this group. Among the U.S. residents for whom information on chemoprophylaxis use and travel region were known, 95.3% of patients with malaria did not adhere to or did not take a CDC-recommended chemoprophylaxis regimen. Among women with malaria, 32 were pregnant, and none had adhered to chemoprophylaxis. A total of 23 malaria cases occurred among U.S. military personnel in 2015. Three cases of malaria were imported from the approximately 3,000 military personnel deployed to an Ebola-affected country; two of these were not P. falciparum species, and one species was unspecified. Among all reported cases in 2015, 17.1% were classified as severe illnesses and 11 persons died, compared with an average of 6.1 deaths per year during 2000-2014. In 2015, CDC received 153 P. falciparum-positive samples for surveillance of antimalarial resistance markers (although certain loci were untestable for some samples); genetic

  6. Power and Surveillance in Video Games

    Directory of Open Access Journals (Sweden)

    Héctor Puente Bienvenido

    2014-08-01

    Full Text Available In this article we explore the history of video games (focusing on multiplayer ones) from the perspective of power relationships and the ways in which authority has been exercised by the game industry and game players over time. From a hierarchical system of power and domain to the increasing flatness of the current structure, we address the systems of control and surveillance. We conclude by assessing the emergent forms of production and the relationships between players and developers.

  7. Extended surveillance as a support to PLIM

    International Nuclear Information System (INIS)

    Walle, Eric van

    2002-01-01

    Full text: The safe exploitation of the reactor pressure vessel was and remains a major concern in nuclear power plant life management. At present, issues like plant life extension, where utilities look into the possibility of license renewal after 40 years of operation, are becoming relevant in the USA. In other countries, PLIM beyond the design life of the NPP could also be desirable from the economic viewpoint. The limiting factor could, however, be the integrity of the reactor pressure vessel. The reactor pressure vessel surveillance procedures defined by regulatory legislation are limited and can be supplemented with valuable information that can be extracted in parallel to conventional surveillance testing or through additional testing on surveillance material. This is justified for several reasons: 1. The current methodology is semi-empirical, contains flaws, and is in a number of cases overly conservative. Without giving in on safety, we need to try to understand the material behavior more fundamentally; 2. Some reactor surveillance materials demonstrate inconsistent behavior with respect to the overall trend. These materials are called 'outlier' materials. But are they really outliers, or is this connected to the indexing methodology used? 3. Additional data, for example the results of instrumented Charpy-V impact tests, have been obtained on many surveillance test specimens and are not adequately exploited in the actual surveillance methodology; 4. Scientific research provides substantial information and understanding of degradation mechanisms in reactor pressure vessel steels. Although we will not concentrate on this topic, the development of powerful microscopic investigation techniques, like FEGSTEM, APFIM, SANS, positron annihilation, internal friction, ... led to an intensified development of radiation damage modelling and are an input to micromechanical modelling. Moreover, due to the ever-increasing computer power, additional multi-scale (time and

  8. Classifying features in CT imagery: accuracy for some single- and multiple-species classifiers

    Science.gov (United States)

    Daniel L. Schmoldt; Jing He; A. Lynn Abbott

    1998-01-01

    Our current approach to automatically label features in CT images of hardwood logs classifies each pixel of an image individually. These feature classifiers use a back-propagation artificial neural network (ANN) and feature vectors that include a small, local neighborhood of pixels and the distance of the target pixel to the center of the log. Initially, this type of...

  9. N-CDAD in Canada: Results of the Canadian Nosocomial Infection Surveillance Program 1997 N-CDAD Prevalence Surveillance Project

    Directory of Open Access Journals (Sweden)

    Meaghen Hyland

    2001-01-01

    Full Text Available BACKGROUND: A 1996 preproject survey among Canadian Hospital Epidemiology Committee (CHEC) sites revealed variations in the prevention, detection, management and surveillance of Clostridium difficile-associated diarrhea (CDAD). Facilities wanted to establish national rates of nosocomially acquired CDAD (N-CDAD) to understand the impact of control or prevention measures, and the burden of N-CDAD on health care resources. The CHEC, in collaboration with the Laboratory Centre for Disease Control (Health Canada) and under the Canadian Nosocomial Infection Surveillance Program, undertook a prevalence surveillance project among selected hospitals throughout Canada.

  10. Catheter Associated Urinary Tract Infection Based on Surveillance Attributes in RSU Haji Surabaya

    Directory of Open Access Journals (Sweden)

    Spica Redina Vebrilian

    2017-03-01

    Full Text Available A surveillance system is instrumental in reducing the incidence of nosocomial infection, and its implementation is necessary in the hospital. CAUTI surveillance was one focus of the infection prevention and control program at RSU Haji Surabaya in 2015. The success of a surveillance system highly depends on the attributes associated with it. Surveillance attributes are indicators that describe the characteristics of a surveillance system. In 2015, there were delays in the collection of data reports which exceeded the prescribed time limit, and there were also many blank spaces in the confirmation sheet. This affects the surveillance system in RSU Haji Surabaya. The purpose of this research is to evaluate CAUTI surveillance based on the surveillance attributes in RSU Haji Surabaya in 2015. This is a descriptive evaluative study. Subjects in this study are the surveillance attributes (simplicity, flexibility, acceptability, sensitivity, positive predictive value, representativeness, timeliness, data quality, and stability) of CAUTI surveillance in RSU Haji Surabaya, while survey respondents are the IPCN, IPCLN, and head nurse. Data were collected by interview and documentation study. The results showed that the surveillance system already has simplicity, high acceptability, high sensitivity, high positive predictive value, representativeness, and high stability. However, it is not flexible, not timely, and has low data quality. Alternative solutions are to improve the regulatory function in every unit, establish standardization of hospital data, and manage a reward and punishment system. Keywords: surveillance system, surveillance attributes, evaluation, nosocomial infections, CAUTI

  11. Sonoma Persistent Surveillance System

    Energy Technology Data Exchange (ETDEWEB)

    Pennington, D M

    2006-03-24

    Sonoma offers the first cost-effective, broad-area, high-resolution, real-time motion imagery system for surveillance applications. Sonoma is unique in its ability to provide continuous, real-time video imagery of an area the size of a small city with resolutions sufficient to track 8,000 moving objects in the field of view. At higher resolutions and over smaller areas, Sonoma can even track the movement of individual people. The visual impact of the data available from Sonoma is already causing a paradigm shift in the architecture and operation of other surveillance systems. Sonoma is expected to cost just one-tenth the price of comparably sized sensor systems. Cameras mounted on an airborne platform constantly monitor an area, feeding data to the ground for real-time analysis. Sonoma was designed to provide real-time data for actionable intelligence in situations such as monitoring traffic, special events, border security, and harbors. If a Sonoma system had been available in the aftermath of the Katrina and Rita hurricanes, emergency responders would have had real-time information on roads, water levels, and traffic conditions, perhaps saving many lives.

  12. TEXT Energy Storage System

    International Nuclear Information System (INIS)

    Weldon, W.F.; Rylander, H.G.; Woodson, H.H.

    1977-01-01

    The Texas Experimental Tokamak (TEXT) Energy Storage System, designed by the Center for Electromechanics (CEM), consists of four 50 MJ, 125 V homopolar generators and their auxiliaries and is designed to power the toroidal and poloidal field coils of TEXT on a two-minute duty cycle. The four 50 MJ generators connected in series were chosen because they represent the minimum cost configuration and also represent a minimal scale up from the successful 5.0 MJ homopolar generator designed, built, and operated by the CEM.

  13. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, at the cost of only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
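
    A minimal sketch of the block-classification step, assuming a running-average background model and invented thresholds (the actual BMAP procedure operates inside a video codec and is not reproduced here):

```python
# Illustrative sketch: label each block by the fraction of pixels that
# deviate from a modeled background (all thresholds are assumptions).
def update_background(bg, frame, alpha=0.05):
    """Running-average background model, updated pixel by pixel."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def classify_block(block, bg_block, diff_thresh=20, lo=0.1, hi=0.9):
    """Mostly unchanged -> background (BRP-style long-term reference);
    mostly changed -> foreground; otherwise hybrid (BDP-style difference)."""
    changed = sum(abs(p - b) > diff_thresh for p, b in zip(block, bg_block))
    frac = changed / len(block)
    if frac < lo:
        return "background"
    if frac > hi:
        return "foreground"
    return "hybrid"

bg = [100] * 16  # flattened 4x4 background block, hypothetical intensities
static  = classify_block([101] * 16, bg)
moving  = classify_block([200] * 16, bg)
partial = classify_block([200] * 8 + [100] * 8, bg)
```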

  14. Disassembly and Sanitization of Classified Matter

    International Nuclear Information System (INIS)

    Stockham, Dwight J.; Saad, Max P.

    2008-01-01

    The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components were required to have long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options exist for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high quality materials and once this material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team and the DSO is responding to these requests by expanding its scope to include Work-for-Other projects. For example

  15. New mathematical cuneiform texts

    CERN Document Server

    Friberg, Jöran

    2016-01-01

    This monograph presents in great detail a large number of both unpublished and previously published Babylonian mathematical texts in the cuneiform script. It is a continuation of the work A Remarkable Collection of Babylonian Mathematical Texts (Springer 2007) written by Jöran Friberg, the leading expert on Babylonian mathematics. Focussing on the big picture, Friberg explores in this book several Late Babylonian arithmetical and metro-mathematical table texts from the sites of Babylon, Uruk and Sippar, collections of mathematical exercises from four Old Babylonian sites, as well as a new text from Early Dynastic/Early Sargonic Umma, which is the oldest known collection of mathematical exercises. A table of reciprocals from the end of the third millennium BC, differing radically from well-documented but younger tables of reciprocals from the Neo-Sumerian and Old-Babylonian periods, as well as a fragment of a Neo-Sumerian clay tablet showing a new type of a labyrinth are also discussed. The material is presen...

  16. The Emar Lexical Texts

    NARCIS (Netherlands)

    Gantzert, Merijn

    2011-01-01

    This four-part work provides a philological analysis and a theoretical interpretation of the cuneiform lexical texts found in the Late Bronze Age city of Emar, in present-day Syria. These word and sign lists, commonly dated to around 1100 BC, were almost all found in the archive of a single school.

  17. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word

  18. Texts and Readers.

    Science.gov (United States)

    Iser, Wolfgang

    1980-01-01

    Notes that, since fictional discourse need not reflect prevailing systems of meaning and norms or values, readers gain detachment from their own presuppositions; by constituting and formulating text-sense, readers are constituting and formulating their own cognition and becoming aware of the operations for doing so. (FL)

  19. Documents and legal texts

    International Nuclear Information System (INIS)

    2017-01-01

    This section treats of the following documents and legal texts: 1 - Belgium 29 June 2014 - Act amending the Act of 22 July 1985 on Third-Party Liability in the Field of Nuclear Energy; 2 - Belgium, 7 December 2016. - Act amending the Act of 22 July 1985 on Third-Party Liability in the Field of Nuclear Energy

  20. Scoring and Classifying Examinees Using Measurement Decision Theory

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2009-04-01

    Full Text Available This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the classification accuracy of tests scored using decision theory; (2) the effectiveness of different sequential testing procedures; and (3) the number of items needed to make a classification. A large percentage of examinees can be classified accurately with very few items using decision theory. A Java Applet for self-instruction and software for generating, calibrating and scoring MDT data are provided.
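
    The scoring rule described above can be sketched directly: multiply each mastery state's prior by the conditional probabilities of the observed responses, then pick the maximum-posterior state. All numbers below are hypothetical, not taken from the paper's evaluation.

```python
# Sketch of MDT scoring: posterior over mastery states from conditional
# item-response probabilities (priors and probabilities are invented).
def classify_examinee(responses, priors, p_correct):
    """responses: 0/1 per item; priors: P(state); p_correct[state][i]:
    P(correct on item i | state). Returns (max-posterior state, posterior)."""
    posterior = {}
    for state, prior in priors.items():
        like = prior
        for x, p in zip(responses, p_correct[state]):
            like *= p if x == 1 else (1.0 - p)
        posterior[state] = like
    total = sum(posterior.values())
    post = {s: v / total for s, v in posterior.items()}
    return max(post, key=post.get), post

priors = {"master": 0.5, "nonmaster": 0.5}
p_correct = {"master": [0.9, 0.8, 0.9], "nonmaster": [0.3, 0.4, 0.3]}
state, post = classify_examinee([1, 1, 1], priors, p_correct)
```

    Three correct answers already push the posterior heavily toward "master", which matches the paper's finding that very few items can suffice for classification.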

  1. MAMMOGRAMS ANALYSIS USING SVM CLASSIFIER IN COMBINED TRANSFORMS DOMAIN

    Directory of Open Access Journals (Sweden)

    B.N. Prathibha

    2011-02-01

    Full Text Available Breast cancer is a primary cause of mortality and morbidity in women. Reports reveal that the earlier abnormalities are detected, the better the chances of survival. Digital mammograms are one of the most effective means for detecting possible breast anomalies at early stages. Digital mammograms supported with Computer Aided Diagnostic (CAD) systems help radiologists take reliable decisions. The proposed CAD system extracts wavelet features and spectral features for better classification of mammograms. A Support Vector Machines classifier is used to analyze 206 mammogram images from the MIAS database pertaining to the severity of abnormality, i.e., benign and malignant. The proposed system gives 93.14% accuracy for discrimination between normal-malignant samples, 87.25% accuracy for normal-benign samples, and 89.22% accuracy for benign-malignant samples. The study reveals that features extracted in the hybrid transform domain with an SVM classifier prove to be a promising tool for the analysis of mammograms.
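
    As a sketch of the simplest instance of a wavelet feature, a one-level Haar decomposition splits a pixel row into coarse shape and local contrast; the paper's actual transforms, feature selection, and SVM training are not reproduced here.

```python
# Illustrative one-level Haar wavelet step (an assumption about the kind
# of wavelet feature extracted, not the paper's exact transform).
def haar_step(x):
    """Split an even-length signal into approximation and detail
    coefficients: pairwise averages and pairwise half-differences."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

# A row of pixel intensities -> the kind of feature vector an SVM consumes.
approx, detail = haar_step([4, 2, 6, 8])
```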

  2. Security Enrichment in Intrusion Detection System Using Classifier Ensemble

    Directory of Open Access Journals (Sweden)

    Uma R. Salunkhe

    2017-01-01

    Full Text Available In the era of the Internet, with an increasing number of people as its end users, a large number of attack categories are introduced daily. Hence, effective detection of various attacks with the help of Intrusion Detection Systems is an emerging trend in research. Existing studies show the effectiveness of machine learning approaches in handling Intrusion Detection Systems. In this work, we aim to enhance the detection rate of an Intrusion Detection System by using machine learning techniques. We propose a novel classifier-ensemble-based IDS that is constructed using a hybrid approach combining data-level and feature-level methods. Classifier ensembles combine the opinions of different experts and improve the intrusion detection rate. Experimental results show the improved detection rates of our system compared to the reference technique.
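
    A minimal sketch of the "combine the opinions of different experts" idea: majority voting over base-classifier labels. The paper's actual ensemble also mixes data-level and feature-level construction, which is not shown, and the labels below are invented.

```python
from collections import Counter

# Illustrative ensemble combination rule (majority vote only; the paper's
# hybrid data/feature-level construction is not reproduced here).
def majority_vote(labels):
    """Most common label among base-classifier predictions; ties resolve
    to the label encountered first."""
    return Counter(labels).most_common(1)[0][0]

def ensemble_predict(per_classifier_preds):
    """per_classifier_preds[c][i]: label from classifier c for sample i."""
    return [majority_vote(votes) for votes in zip(*per_classifier_preds)]

preds = ensemble_predict([
    ["attack", "normal", "attack"],   # base classifier 1
    ["attack", "attack", "normal"],   # base classifier 2
    ["normal", "normal", "attack"],   # base classifier 3
])
```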

  3. Lung Nodule Detection in CT Images using Neuro Fuzzy Classifier

    Directory of Open Access Journals (Sweden)

    M. Usman Akram

    2013-07-01

    Full Text Available Automated lung cancer detection using computer aided diagnosis (CAD) is an important area in clinical applications. As manual nodule detection is very time consuming and costly, computerized systems can be helpful for this purpose. In this paper, we propose a computerized system for lung nodule detection in CT scan images. The automated system consists of two stages, i.e., lung segmentation and enhancement, and feature extraction and classification. The segmentation process separates lung tissue from the rest of the image, and only the lung tissues under examination are considered as candidate regions for detecting malignant nodules in the lung portion. A feature vector for possible abnormal regions is calculated and the regions are classified using a neuro fuzzy classifier. It is a fully automatic system that does not require any manual intervention, and experimental results show the validity of our system.

  4. A Bayesian Classifier for X-Ray Pulsars Recognition

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2016-01-01

    Full Text Available Recognition of X-ray pulsars is important for the problem of spacecraft attitude determination by X-ray Pulsar Navigation (XPNAV). Using the nonhomogeneous Poisson model of the received photons and the minimum recognition error criterion, a classifier based on the Bayes theorem is proposed. For X-ray pulsar recognition with unknown Doppler frequency and initial phase, the features of every X-ray pulsar are extracted and the unknown parameters are estimated using the Maximum Likelihood (ML) method. In addition, a method to recognize unknown X-ray pulsars or X-ray disturbances is proposed. Simulation results confirm the validity of the proposed Bayesian classifier.
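
    A hedged sketch of the decision rule only: photon counts per phase bin are modelled as independent Poisson variables with a per-pulsar template rate, and the maximum-posterior class is chosen. The Doppler/phase ML estimation step from the paper is omitted, and the templates are invented.

```python
import math

# Illustrative Bayes decision rule under a Poisson photon model
# (pulsar templates and priors are hypothetical).
def log_poisson(k, lam):
    """Log-probability of observing k photons with mean rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def classify_pulsar(counts, templates, priors):
    """Maximum-posterior class under independent Poisson phase bins."""
    best, best_score = None, -math.inf
    for name, rates in templates.items():
        score = math.log(priors[name]) + sum(
            log_poisson(k, lam) for k, lam in zip(counts, rates))
        if score > best_score:
            best, best_score = name, score
    return best

templates = {"PSR-A": [12.0, 2.0, 2.0], "PSR-B": [2.0, 2.0, 12.0]}
priors = {"PSR-A": 0.5, "PSR-B": 0.5}
label = classify_pulsar([10, 3, 1], templates, priors)
```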

  5. Lung Nodule Image Classification Based on Local Difference Pattern and Combined Classifier

    Directory of Open Access Journals (Sweden)

    Keming Mao

    2016-01-01

    Full Text Available This paper proposes a novel lung nodule classification method for low-dose CT images. The method includes two stages. First, Local Difference Pattern (LDP) is proposed to encode the feature representation, which is extracted by comparing intensity differences along circular regions centered at the lung nodule. Then, a single-center classifier is trained based on LDP. Due to the diversity of the feature distribution across classes, the training images are further clustered into multiple cores and a multicenter classifier is constructed. The two classifiers are combined to make the final decision. Experimental results on a public dataset show the superior performance of LDP and the combined classifier.
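
    An illustrative encoding in the spirit of the descriptor: one bit per circular ring, set when the ring's mean intensity exceeds the intensity at the nodule centre. The ring extraction from pixels is omitted, and the thresholding rule is an assumption, not the paper's exact definition.

```python
# Hypothetical ring-vs-centre encoding (an assumption about LDP's spirit,
# not the paper's exact descriptor).
def local_difference_pattern(center, rings):
    """rings: list of pixel-intensity lists, innermost ring first.
    Bit i is set when ring i's mean intensity exceeds the centre value."""
    code = 0
    for i, ring in enumerate(rings):
        if sum(ring) / len(ring) > center:
            code |= 1 << i
    return code

# Bright inner ring, dark middle ring, bright outer ring -> bits 0 and 2.
code = local_difference_pattern(100, [[120, 130], [80, 90], [150, 160]])
```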

  6. Comparing cosmic web classifiers using information theory

    International Nuclear Information System (INIS)

    Leclercq, Florent; Lavaux, Guilhem; Wandelt, Benjamin; Jasche, Jens

    2016-01-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  7. Design of Robust Neural Network Classifiers

    DEFF Research Database (Denmark)

    Larsen, Jan; Andersen, Lars Nonboe; Hintz-Madsen, Mads

    1998-01-01

    This paper addresses a new framework for designing robust neural network classifiers. The network is optimized using the maximum a posteriori technique, i.e., the cost function is the sum of the log-likelihood and a regularization term (prior). In order to perform robust classification, we present a modified likelihood function which incorporates the potential risk of outliers in the data. This leads to the introduction of a new parameter, the outlier probability. Designing the neural classifier involves optimization of the network weights as well as the outlier probability and regularization parameters. We suggest adapting the outlier probability and regularisation parameters by minimizing the error on a validation set, and a simple gradient descent scheme is derived. In addition, the framework allows for constructing a simple outlier detector. Experiments with artificial data demonstrate the potential...
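    A likelihood of this kind is commonly written as a mixture of the network posterior and a uniform outlier term. The sketch below is an assumption about the form of that mixture (the names and the softmax parameterization are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def robust_class_probs(logits, outlier_prob):
        """Mix network posteriors with a uniform outlier component.

        With outlier probability eps, a label is assumed to come from the
        network posterior with probability (1 - eps) and to be drawn
        uniformly over the C classes otherwise.
        """
        # numerically stable softmax over the class axis
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)
        C = p.shape[1]
        return (1.0 - outlier_prob) * p + outlier_prob / C
    ```

    Training then maximizes the log of these mixed probabilities, so a single mislabeled sample can no longer dominate the gradient.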

  8. Comparing cosmic web classifiers using information theory

    Energy Technology Data Exchange (ETDEWEB)

    Leclercq, Florent [Institute of Cosmology and Gravitation (ICG), University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth PO1 3FX (United Kingdom); Lavaux, Guilhem; Wandelt, Benjamin [Institut d' Astrophysique de Paris (IAP), UMR 7095, CNRS – UPMC Université Paris 6, Sorbonne Universités, 98bis boulevard Arago, F-75014 Paris (France); Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: lavaux@iap.fr, E-mail: j.jasche@tum.de, E-mail: wandelt@iap.fr [Excellence Cluster Universe, Technische Universität München, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  9. Detection of Fundus Lesions Using Classifier Selection

    Science.gov (United States)

    Nagayoshi, Hiroto; Hiramatsu, Yoshitaka; Sako, Hiroshi; Himaga, Mitsutoshi; Kato, Satoshi

    A system for detecting fundus lesions caused by diabetic retinopathy from fundus images is being developed. The system can screen the images in advance in order to reduce the inspection workload on doctors. One of the difficulties that must be addressed in completing this system is how to remove false positives (which tend to arise near blood vessels) without decreasing the detection rate of lesions in other areas. To overcome this difficulty, we developed classifier selection according to the position of a candidate lesion, and we introduced new features that can distinguish true lesions from false positives. A system incorporating classifier selection and these new features was tested in experiments using 55 fundus images with some lesions and 223 images without lesions. The results of the experiments confirm the effectiveness of the proposed system, namely, degrees of sensitivity and specificity of 98% and 81%, respectively.
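    Classifier selection by candidate position amounts to a simple dispatch. The sketch below is purely illustrative; the function and the two classifiers are hypothetical stand-ins for the paper's region-specific models.

    ```python
    def classify_candidate(features, near_vessel, vessel_clf, other_clf):
        """Route a candidate lesion to a position-specific classifier.

        Candidates near blood vessels, where false positives concentrate,
        get their own model; all other candidates use the default one.
        """
        clf = vessel_clf if near_vessel else other_clf
        return clf(features)
    ```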

  10. Classifying objects in LWIR imagery via CNNs

    Science.gov (United States)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained, and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. Although the otherwise highly accurate CNN struggles to recognise long range target classes due to low signal quality, robust target discrimination is achieved for the remaining challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.

  11. 2012 Sexually Transmitted Diseases Surveillance

    Science.gov (United States)


  12. [Population surveillance of coronary heart disease].

    Science.gov (United States)

    Ben Romdhane, Habiba; Bougatef, Souha; Skhiri, Hajer; Gharbi, Donia; Haouala, Habib; Achour, Noureddine

    2005-05-01

    A cross-sectional population survey was carried out in the Ariana region in 2000-01. The aim of this study is to report the prevalence of CHD as indicated by ECG Minnesota coding. A randomly selected sample included 1837 adults aged 40-70 years. Data on socio-economic status, demographics, medical history, health behaviour, and clinical and biological investigations were recorded. Risk factors (hypertension, dyslipidemia, obesity, diabetes) were defined according to WHO criteria. Standard supine 12-lead ECGs were recorded. All ECGs were read and classified according to the Minnesota code criteria as probable CHD or possible CHD, and as major or minor abnormalities. CHD prevalence was higher in women. Major abnormalities were more common in women (20.6% vs 13%), while the prevalence of minor abnormalities was higher in men (15.5% vs 7.5%) (p<0.0001). The prevalence increased with age in both genders. This study tested the feasibility of the population approach to CVD surveillance. It highlighted the burden of cardiovascular diseases and supports the conclusion that women are as much at risk as men. ECG findings must be integrated into cardiovascular disease surveillance to identify high-risk populations.

  13. Monitors for the surveillance of NPP components

    International Nuclear Information System (INIS)

    Giera, H.D.; Grabner, A.; Hessel, G.; Koeppen, H.E.; Liewers, P.; Schumann, P.; Weiss, F.P.; Kunze, U.; Pfeiffer, G.

    1985-01-01

    Noise diagnostics have reached a level where it is possible and efficient to integrate this method as far as possible into the control and safety system of the NPP. The communication between the noise diagnostic system and the plant operator is the main problem of integration. It is necessary to refine the diagnostic results in such a manner that the operator can use them without being skilled in noise analysis or without contacting a noise specialist. Moreover, in this way the noise specialist can be released from routine surveillance. For selected processes which have already been intensively investigated because of their inherent risk, this can be achieved by means of autonomously working monitors. The monitors perform signal processing and diagnosis independently. In general this means that they classify the technical condition of the monitored component into one of two categories: ''normal'' or ''anomalous''. The result is annunciated to the plant operator, who, in the first step of the development, will contact the noise specialist only if anomalies have occurred, in order to clarify the cause. At the NPP ''Bruno Leuschner'' Greifswald, three hardware monitors for loose parts detection, control rod surveillance and main coolant pump diagnosis are being tested. Additionally, a so-called software monitor for diagnosing pressure vessel vibrations is in preparation. The techniques and the hardware used for the monitors, as well as planned further improvements of the integration of noise diagnostics into the control and safety system, are discussed in this paper. (author)

  14. Video sensor architecture for surveillance applications.

    Science.gov (United States)

    Sánchez, Jordi; Benet, Ginés; Simó, José E

    2012-01-01

    This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.

  15. Automated intelligent video surveillance system for ships

    Science.gov (United States)

    Wei, Hai; Nguyen, Hieu; Ramu, Prakash; Raju, Chaitanya; Liu, Xiaoqing; Yadegar, Jacob

    2009-05-01

    To protect naval and commercial ships from attack by terrorists and pirates, it is important to have automatic surveillance systems able to detect, identify, track and alert the crew to small watercraft that might pursue malicious intentions, while ruling out non-threat entities. Radar systems have limitations on the minimum detectable range and lack high-level classification power. In this paper, we present an innovative Automated Intelligent Video Surveillance System for Ships (AIVS3) as a vision-based solution for ship security. Capitalizing on advanced computer vision algorithms and practical machine learning methodologies, the developed AIVS3 is not only capable of efficiently and robustly detecting, classifying, and tracking various maritime targets, but also able to fuse heterogeneous target information to interpret scene activities, associate targets with levels of threat, and issue the corresponding alerts/recommendations to the man-in-the-loop (MITL). AIVS3 has been tested in various maritime scenarios and has shown accurate and effective threat detection performance. By reducing the reliance on human eyes to monitor cluttered scenes, AIVS3 will save manpower while increasing the accuracy in detection and identification of asymmetric attacks for ship protection.

  16. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have made surveillance video a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Surveillance video footage offers very strong support for solving criminal cases; therefore, creating an effective policy and applying useful methods to the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and describe a super-resolution reconstruction method based on manual feature registration, maximum a posteriori estimation, and projection onto convex sets, which improves the quality of surveillance video. With this method, we can make optimal use of the information contained in the LR video image while also clearly controlling image edges and the convergence of the algorithm. Finally, we make a suggestion on how to adjust the algorithm's adaptability by analyzing the prior information of the target image.

  17. An Autonomous Mobile Robotic System for Surveillance of Indoor Environments

    Directory of Open Access Journals (Sweden)

    Donato Di Paola

    2010-02-01

    Full Text Available The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot, and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle autonomously general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, like scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums) and buildings, and monitoring of safety equipment.

  18. An Autonomous Mobile Robotic System for Surveillance of Indoor Environments

    Directory of Open Access Journals (Sweden)

    Donato Di Paola

    2010-03-01

    Full Text Available The development of intelligent surveillance systems is an active research area. In this context, mobile and multi-functional robots are generally adopted as means to reduce the environment structuring and the number of devices needed to cover a given area. Nevertheless, the number of different sensors mounted on the robot, and the number of complex tasks related to exploration, monitoring, and surveillance make the design of the overall system extremely challenging. In this paper, we present our autonomous mobile robot for surveillance of indoor environments. We propose a system able to handle autonomously general-purpose tasks and complex surveillance issues simultaneously. It is shown that the proposed robotic surveillance scheme successfully addresses a number of basic problems related to environment mapping, localization and autonomous navigation, as well as surveillance tasks, like scene processing to detect abandoned or removed objects and people detection and following. The feasibility of the approach is demonstrated through experimental tests using a multisensor platform equipped with a monocular camera, a laser scanner, and an RFID device. Real world applications of the proposed system include surveillance of wide areas (e.g. airports and museums) and buildings, and monitoring of safety equipment.

  19. Evaluation of the national Notifiable Diseases Surveillance System for dengue fever in Taiwan, 2010-2012.

    Directory of Open Access Journals (Sweden)

    Caoimhe McKerr

    2015-03-01

    Full Text Available In Taiwan, around 1,500 cases of dengue fever are reported annually and incidence has been increasing over time. A national web-based Notifiable Diseases Surveillance System (NDSS) has been in operation since 1997 to monitor incidence and trends and support case and outbreak management. We present the findings of an evaluation of the NDSS to ascertain the extent to which dengue fever surveillance objectives are being achieved. We extracted the NDSS data on all laboratory-confirmed dengue fever cases reported during 1 January 2010 to 31 December 2012 to assess and describe key system attributes based on the Centers for Disease Control and Prevention surveillance evaluation guidelines. The system's structure and processes were delineated and operational staff interviewed using a semi-structured questionnaire. Crude and age-adjusted incidence rates were calculated and key demographic variables were summarised to describe reporting activity. Data completeness and validity were described across several variables. Of 5,072 laboratory-confirmed dengue fever cases reported during 2010-2012, 4,740 (93%) were reported during July to December. The system was judged to be simple due to its minimal reporting steps. Data collected on key variables were correctly formatted and usable in > 90% of cases, demonstrating good data completeness and validity. The information collected was considered relevant by users, with high acceptability. Adherence to guidelines for 24-hour reporting was 99%. Of 720 cases (14%) recorded as travel-related, 111 (15%) had an onset > 14 days after return, highlighting the potential for misclassification. Information on hospitalization was missing for 22% of cases. The calculated PVP was 43%. The NDSS for dengue fever surveillance is a robust, well maintained and acceptable system that supports the collection of complete and valid data needed to achieve the surveillance objectives.
The simplicity of the system engenders compliance leading to

  20. Learning for VMM + WTA Embedded Classifiers

    Science.gov (United States)

    2016-03-31

    Learning for VMM + WTA Embedded Classifiers. Jennifer Hasler and Sahil Shah, Electrical and Computer Engineering, Georgia Institute of Technology. ...enabling correct classification of each novel acoustic signal (generator, idle car, and idle truck). The classification structure requires, after... measured on our SoC FPAA IC. The test input is composed of signals from an urban environment for 3 objects (generator, idle car, and idle truck).

  1. Bayes classifiers for imbalanced traffic accidents datasets.

    Science.gov (United States)

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced: the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This poses a challenging problem for classification algorithms and may yield a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared on the imbalanced and balanced data sets (Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks) in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created with oversampling, together with Bayesian networks improved the classification of traffic accidents by severity and reduced the misclassification of killed and severe injuries instances. The following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. To the knowledge of the authors, this work is the first to analyze historical records of traffic accidents occurring in Jordan and the first to apply balancing techniques to the analysis of injury severity of traffic accidents. Copyright © 2015 Elsevier Ltd. All rights reserved.
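    The oversampling step described above can be sketched with plain numpy: minority-class rows are duplicated at random until every class matches the majority count. This is a minimal illustration of the general technique, not the study's exact procedure (which also compares under-sampling and a mixed scheme).

    ```python
    import numpy as np

    def random_oversample(X, y, seed=0):
        """Duplicate minority-class rows until all classes are balanced."""
        rng = np.random.default_rng(seed)
        classes, counts = np.unique(y, return_counts=True)
        n_max = counts.max()
        Xs, ys = [], []
        for c, n in zip(classes, counts):
            idx = np.flatnonzero(y == c)
            # sample with replacement to make up the shortfall for class c
            extra = rng.choice(idx, size=n_max - n, replace=True)
            keep = np.concatenate([idx, extra])
            Xs.append(X[keep])
            ys.append(y[keep])
        return np.concatenate(Xs), np.concatenate(ys)
    ```

    The balanced (X, y) would then be fed to the Bayes classifier of choice; libraries such as imbalanced-learn provide the same operation off the shelf.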

  2. A Bayesian classifier for symbol recognition

    OpenAIRE

    Barrat, Sabine; Tabbone, Salvatore; Nourrissier, Patrick

    2007-01-01

    URL: http://www.buyans.com/POL/UploadedFile/134_9977.pdf; International audience; We present in this paper an original adaptation of Bayesian networks to the symbol recognition problem. More precisely, we present a descriptor combination method which significantly improves the recognition rate compared to the rates obtained by each descriptor alone. In this perspective, we use a simple Bayesian classifier, called naive Bayes. In fact, probabilistic graphical models, more spec...

  3. Smart sensing surveillance system

    Science.gov (United States)

    Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen

    2010-04-01

    An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design element for raising awareness levels, improving the performance of the system, and adapting to changing scenarios and environments. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7 and all-weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from near omni-directions. These distributed sensor nodes can work cooperatively to send immediate security information when new objects appear. When new objects are detected, the S4 smartly selects an available node with a Pan-Tilt-Zoom (PTZ) electro-optical/infrared (EO/IR) camera to track the objects and capture associated imagery. The S4 provides applicable advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detect) wireless mesh network using Ultra-wideband (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The Service Oriented Architecture of the S4 enables remote applications to interact with the S4

  4. Smart sensing surveillance system

    Science.gov (United States)

    Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen

    2010-04-01

    Unattended ground sensor (UGS) networks have been widely used in remote battlefield and other tactical applications over the last few decades due to advances in digital signal processing. UGS networks can be applied in a variety of areas including border surveillance, special force operations, perimeter and building protection, target acquisition, situational awareness, and force protection. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7 and all-weather security operation in a situation management environment. The S4 is composed of a number of distributed nodes to collect, process, and disseminate heterogeneous sensor data. Nearly all S4 nodes have passive sensors to provide rapid omnidirectional detection. In addition, Pan-Tilt-Zoom (PTZ) electro-optical/infrared (EO/IR) cameras are integrated into selected nodes to track objects and capture associated imagery. These camera-connected S4 nodes provide applicable advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers, etc. In the S4, all the nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detect) wireless mesh network using Ultra-wideband (UWB) RF technology, which can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The S4 utilizes a Service Oriented Architecture such that remote applications can interact with the S4 network and use specific presentation methods.
The S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded

  5. [Entomological surveillance in Mauritius].

    Science.gov (United States)

    Gopaul, R

    1995-01-01

    Entomological surveillance is an essential link in the fight against malaria in Mauritius. Because of the large number of malaria-infected travellers arriving in Mauritius and the presence of the vector Anopheles arabiensis, the risk of local transmission is very real. The medical entomology division, together with the malaria control unit and the health officers, exerts rigorous entomological surveillance of malaria. Field agents carry out entomological investigations of pilot villages, of the areas around the harbour and airport, of places where there have been cases of malaria, and of a few randomly chosen regions. All inhabited regions are accessible thanks to a good highway infrastructure, which enables complete coverage for entomological prospecting. Entomological controls are also conducted in airplanes and ships. All captured mosquitoes and harvested larvae are transferred to a laboratory for identification, dissection, susceptibility tests, etc. The larvae of A. arabiensis have not yet developed resistance to Temephos and the adults are still sensitive to DDT. Thus, larval habitats are treated with Temephos, and DDT is sprayed in residences where there have been indigenous cases of malaria. The entomology division studies the ecology and evolution of larval habitats, as well as the impact of larval control on anopheline density. In addition to chemical control, biological control is being tried with larvivorous fish such as Lebistes and Tilapia. In general, the anopheline density in Mauritius is low, but after the big summer rains, especially during the cyclone season, there is a considerable increase in larval habitats and consequently a higher number of A. arabiensis. During this season it is therefore necessary to exert an even more rigorous entomological surveillance. A. arabiensis has a strong exophilic tendency even though it is both endophagic and exophagic. This mosquito is zoophilic, mostly towards cattle, and the

  6. Pixel Classification of SAR ice images using ANFIS-PSO Classifier

    Directory of Open Access Journals (Sweden)

    G. Vasumathi

    2016-12-01

    Full Text Available Synthetic Aperture Radar (SAR) plays a vital role in acquiring extremely high resolution radar images. It is widely used to monitor ice-covered ocean regions. Sea monitoring is important for various purposes, including global climate systems and ship navigation. Classification of the ice-infested area yields important features that are further useful for various monitoring processes around the ice regions. The main objective of this paper is to classify SAR ice images so as to identify the regions around ice-infested areas. Three stages are considered in the classification of SAR ice images. It starts with preprocessing, in which the speckled SAR ice images are denoised using various speckle removal filters; a comparison is made of all these filters to find the best filter for speckle removal. The second stage includes segmentation, in which different regions are segmented using K-means and watershed segmentation algorithms; a comparison is made between these two algorithms to find the better one for segmenting SAR ice images. The last stage includes pixel-based classification, which identifies and classifies the segmented regions using various supervised learning classifiers. The algorithms include Back Propagation Neural networks (BPN), a Fuzzy Classifier, an Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier, and the proposed ANFIS with Particle Swarm Optimization (PSO) classifier; a comparison is made of all these classifiers to propose which classifier is best suited for classifying SAR ice images. Various evaluation metrics are computed separately at all three stages.
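    The K-means segmentation stage can be illustrated on a vector of pixel intensities. This bare-bones, numpy-only sketch (the function name and defaults are assumptions) stands in for the paper's segmentation step; the watershed alternative is not shown.

    ```python
    import numpy as np

    def kmeans_labels(pixels, k=2, iters=20, seed=0):
        """Cluster 1-D pixel intensities with a minimal K-means."""
        rng = np.random.default_rng(seed)
        x = np.asarray(pixels, dtype=float).reshape(-1, 1)
        # initialize centers at k distinct data points
        centers = x[rng.choice(len(x), size=k, replace=False)]
        for _ in range(iters):
            # assign each pixel to the nearest center
            labels = np.argmin(np.abs(x - centers.T), axis=1)
            # move each center to the mean of its members
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = x[labels == j].mean()
        return labels
    ```

    On a real SAR image one would run this on the flattened, despeckled intensities and reshape the label vector back to the image grid.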

  7. Strategy as Texts

    DEFF Research Database (Denmark)

    Obed Madsen, Søren

    This article shows empirically how managers translate a strategy plan at an individual level. By analysing how managers in three organizations translate strategies, it identifies that the translation happens in two steps: First, the managers decipher the strategy by coding the different parts of the strategy into four categories. Second, the managers produce new texts based on the original strategy document by using four different ways of translation models. The study's findings contribute to three areas. Firstly, it shows that translation is more than a sociological process. It is also a craftsmanship that requires knowledge and skills, which unfortunately seems to be overlooked in both the literature and in practice. Secondly, it shows that even though a strategy text is in singular, the translation makes strategy plural. Thirdly, the article proposes a way to open up the black box of what...

  8. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    Directory of Open Access Journals (Sweden)

    Yinan Zhang

    2016-01-01

    Full Text Available Diabetic retinopathy (DR) screening systems raise a financial problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high dimensional space. Both active learning and an ensemble technique elevate the performance of KELM when using a small training dataset. The committee proposes only the necessary manual work to the doctor, saving cost. On the publicly available Messidor database, our classifier is trained with 20%-35% of the labeled retinal images, while comparative classifiers are trained with 80%. Results show that our classifier achieves better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.
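    Query selection in active learning is often implemented as least-confidence sampling: only the samples the current committee is least sure about are sent to the doctor for labeling. The helper below is a generic sketch of that selection step, not the paper's KELM committee itself.

    ```python
    import numpy as np

    def least_confident(probs, n_query):
        """Return indices of the n_query samples with the lowest top-class probability.

        `probs` is an (n_samples, n_classes) array of committee-averaged
        class probabilities; low maximum probability means high uncertainty.
        """
        confidence = probs.max(axis=1)
        return np.argsort(confidence)[:n_query]
    ```

    In a full loop, the returned samples would be labeled, added to the training set, and the committee retrained until the labeling budget is exhausted.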

  9. Surveillance theory and its implications for law

    NARCIS (Netherlands)

    Timan, Tjerk; Galic, Masa; Koops, Bert-Jaap; Brownsword, Roger; Scotford, Eloise; Yeung, Karen

    2017-01-01

    This chapter provides an overview of key surveillance theories and their implications for law and regulation. It presents three stages of theories that characterise changes in thinking about surveillance in society and the disciplining, controlling, and entertaining functions of surveillance.

  10. Reporting and Surveillance for Norovirus Outbreaks

    Science.gov (United States)


  11. Optimization of short amino acid sequences classifier

    Science.gov (United States)

    Barcz, Aleksy; Szymański, Zbigniew

    This article describes processing methods used for short amino acid sequence classification. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each containing samples labeled as reacting or not reacting with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data is used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with the best values of the performance measures.
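    The second-phase idea (a correlation-style ranking combined with Gram-Schmidt orthogonalization so that redundant features are not re-selected) can be sketched as follows. The toy feature vectors and the squared-correlation score are illustrative assumptions, not the authors' exact formula:

```python
# Repeatedly pick the feature scoring highest against the target, then
# orthogonalize the remaining features against it (Gram-Schmidt step).
# Vectors below are centred toy data.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_out(v, u):
    """Remove from v its component along u."""
    c = dot(v, u) / dot(u, u)
    return [x - c * y for x, y in zip(v, u)]

def rank_features(features, target, k):
    feats = {name: vec[:] for name, vec in features.items()}
    ranked = []
    for _ in range(k):
        # squared-correlation-style score on centred data
        name = max(feats, key=lambda n: dot(feats[n], target) ** 2 / dot(feats[n], feats[n]))
        u = feats.pop(name)
        ranked.append(name)
        feats = {n: project_out(v, u) for n, v in feats.items()}
    return ranked

target = [-1.5, -0.5, 0.5, 1.5]
features = {"f_good": [-1.5, -0.5, 0.5, 1.5], "f_weak": [1.0, -1.0, 1.0, -1.0]}
order = rank_features(features, target, k=2)
```

    The orthogonalization step is what distinguishes this from a plain correlation ranking: a feature nearly collinear with an already-selected one contributes little after projection.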

  12. SVM classifier on chip for melanoma detection.

    Science.gov (United States)

    Afifi, Shereen; GholamHosseini, Hamid; Sinha, Roopak

    2017-07-01

    Support Vector Machine (SVM) is a common classifier used for efficient classification with high accuracy. SVM shows high accuracy for classifying melanoma (skin cancer) clinical images within computer-aided diagnosis systems used by skin cancer specialists to detect melanoma early and save lives. We aim to develop a medical low-cost handheld device that runs a real-time embedded SVM-based diagnosis system for use in primary care for early detection of melanoma. In this paper, an optimized SVM classifier is implemented on a recent FPGA platform using the latest design methodology, to be embedded into the proposed device for realizing efficient online melanoma detection on a single system-on-chip/device. The hardware implementation results demonstrate a high classification accuracy of 97.9% and a significant acceleration factor of 26 over an equivalent software implementation on an embedded processor, with 34% resource utilization and 2 W power consumption. Consequently, the implemented system meets the crucial embedded-system constraints of high performance and low cost, resource utilization and power consumption, while achieving high classification accuracy.

  13. Total process surveillance: (TOPS)

    International Nuclear Information System (INIS)

    Millar, J.H.P.

    1992-01-01

    A Total Process Surveillance system is under development which can provide, in real time, additional process information from a limited number of raw measurement signals. This is achieved by using a robust model-based observer to generate estimates of the process's internal states. The observer utilises the analytical redundancy among a diverse range of transducers and can thus accommodate off-normal conditions which lead to transducer loss or damage. The modular hierarchical structure of the system enables the maximum amount of information to be assimilated from the available instrument signals, no matter how diverse. This structure also constitutes a data reduction path, thus reducing operator cognitive overload from a large number of varying, and possibly contradictory, raw plant signals. (orig.)
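    A minimal scalar sketch of such a model-based observer, in the Luenberger style: the state estimate is propagated through the plant model and corrected by the innovation (measured output minus predicted output). The plant model and gain below are illustrative toy values, not TOPS parameters:

```python
# Scalar toy model: x[k+1] = a*x[k] + b*u[k], y[k] = c*x[k].
# The observer gain is chosen so the error dynamics (a - gain*c) are stable.

def observer_step(x_hat, u, y, a=0.9, b=0.1, c=1.0, gain=0.5):
    innovation = y - c * x_hat          # mismatch between measured and predicted output
    return a * x_hat + b * u + gain * innovation

x_true, x_hat = 1.0, 0.0                # observer starts with the wrong state
for _ in range(50):
    y = 1.0 * x_true                    # transducer reading of the true state
    x_hat = observer_step(x_hat, u=1.0, y=y)
    x_true = 0.9 * x_true + 0.1 * 1.0   # plant evolves with the same dynamics
```

    The estimation error shrinks by a factor |a - gain*c| = 0.4 per step, so after a few dozen steps the estimate tracks the unmeasured state; this is the mechanism by which a limited set of raw signals yields extra internal-state information.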

  14. Surveillance of the environmental radioactivity

    International Nuclear Information System (INIS)

    Schneider, Th.; Gitzinger, C.; Jaunet, P.; Eberbach, F.; Clavel, B.; Hemidy, P.Y.; Perrier, G.; Kiper, Ch.; Peres, J.M.; Josset, M.; Calvez, M.; Leclerc, M.; Leclerc, E.; Aubert, C.; Levelut, M.N.; Debayle, Ch.; Mayer, St.; Renaud, Ph.; Leprieur, F.; Petitfrere, M.; Catelinois, O.; Monfort, M.; Baron, Y.; Target, A.

    2008-01-01

    The objective of these meeting days was to present the organisation of the surveillance of environmental radioactivity and to allow experience sharing and dialogue on this subject between the different actors of radiation protection in France. The presentations were as follows: evolution and stakes of the surveillance of radioactivity in the environment; the role of the European Commission, regulatory aspects; the implementation of the surveillance: the case of Germany; strategy and logic of environmental surveillance around the EDF national centres of energy production; environmental surveillance: the F.B.F.C. site of Romans-sur-Isère; steps of the implementation of the 'analysis for release' decree at the F.B.F.C./C.E.R.C.A. laboratory of Romans; I.R.S.N. and environmental surveillance: situation and perspectives; the role of a non-institutional actor, the citizen surveillance done by A.C.R.O.; harmonisation of sampling methods: the results of the inter-operator sampling working group; sustainable observatory of the environment: data traceability and sample conservation; inter-laboratory tests of radioactivity measurements; national network of environmental radioactivity measurement: laboratory approvals; the networks of environmental radioactivity telemetry: modernisation positioning; programme of observation and surveillance of the surface environment and installations of the H.A.-M.A.V.L. project (high-activity and long-lived medium-activity waste); evolution of radionuclide concentrations in the environment and adaptation of measurement techniques to surveillance needs; the national network of radioactivity measurement in the environment; modes of data restoration of surveillance: the results of the Loire environment pilot action; method of estimating health impacts in the area of ionising radiation; the radiological impact of atmospheric nuclear tests in French Polynesia; validation of models by measurement; network of measurement and alert management of the atmospheric …

  15. Layout-aware text extraction from full-text PDF of scientific articles

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Cartic

    2012-05-01

    Full Text Available Abstract Background The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Results Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with Precision = 0.96, Recall = 0.89 and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 2. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central. We then compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF

  16. Malaria Surveillance - United States, 2014.

    Science.gov (United States)

    Mace, Kimberly E; Arguin, Paul M

    2017-05-26

    … Less than 1.0% of patients were infected with two species. The infecting species was unreported or undetermined in 11.7% of cases. CDC provided diagnostic assistance for 14.2% of confirmed cases and tested 12.0% of P. falciparum specimens for antimalarial resistance markers. Of patients who reported purpose of travel, 57.5% were visiting friends and relatives (VFR). Among U.S. residents for whom information on chemoprophylaxis use and travel region was known, 7.8% reported that they initiated and adhered to a chemoprophylaxis drug regimen recommended by CDC for the regions to which they had traveled. Thirty-two cases were among pregnant women, none of whom had adhered to chemoprophylaxis. Among all reported cases, 17.0% were classified as severe illness, and five persons with malaria died. CDC received 137 P. falciparum-positive samples for the detection of antimalarial resistance markers (although some loci for chloroquine and mefloquine were untestable for up to nine samples). Of the 137 samples tested, 131 (95.6%) had genetic polymorphisms associated with pyrimethamine drug resistance, 96 (70.0%) with sulfadoxine resistance, 77 (57.5%) with chloroquine resistance, three (2.3%) with mefloquine drug resistance, and one (…). Malaria infections can be fatal if not diagnosed and treated promptly with antimalarial medications appropriate for the patient's age and medical history, likely country of malaria acquisition, and previous use of antimalarial chemoprophylaxis. Recent molecular laboratory advances have enabled CDC to identify and conduct molecular surveillance of antimalarial drug resistance markers (https://www.cdc.gov/malaria/features/ars.html) and improve the ability of CDC to track, guide treatment, and manage drug resistance in malaria parasites both domestically and globally. For this effort to be successful, specimens should be submitted for all cases diagnosed in the United States. Clinicians should consult the CDC Guidelines for Treatment of Malaria in the …

  17. Secure surveillance videotapes

    International Nuclear Information System (INIS)

    Resnik, W.M.; Kadner, S.P.; Olsen, R.; Chitumbo, K.; Pepper, S.

    1995-01-01

    With assistance from the US Program for Technical Assistance to IAEA Safeguards (POTAS), Aquila Technologies Group developed the Tamper-Resistant Analog Media (TRAM-1000) system to provide standard VHS surveillance video tapes with an enhanced tamper-indicating capability. This project represents a further implementation of the partnership approach in facilities including light water reactors with MOX facilities. These facilities use Uniplex Digiquad system video tapes. The partnership approach ensures that one organization can exchange the tapes in a machine without the presence of the other, without losing continuity of information. The TRAM-1000 system development project was accomplished in two stages. In the first stage of the project, the original system delivered to the IAEA consisted of three parts: (1) the tamper detection unit, (2) a specially augmented VHS video tape, and (3) an HP-95 reader. The tamper detection unit houses a VACOSS active fiber-optic seal and an electronic identification tag (E-TAG) reader. In the second stage of the project, the original TRAM-1000 was modified to its current design based on agency input. After delivery of the original TRAM-1000 system to the IAEA, it was reviewed by inspectors. The inspectors felt that the initial system's tape storage/transport method could be simplified. Rather than threading the fiber through the tape spindles, the inspectors suggested that the tape be placed in a bag capable of being sealed. Also, a more flexible fiber-optic cable was recommended. As a result of these suggestions, Aquila developed a tamper-proof bag specifically for holding a surveillance video tape, sealable with a VACOSS fiber-optic seal.

  18. Reading Authentic Texts

    DEFF Research Database (Denmark)

    Balling, Laura Winther

    2013-01-01

    Most research on cognates has focused on words presented in isolation that are easily defined as cognate between L1 and L2. In contrast, this study investigates what counts as cognate in authentic texts and how such cognates are read. Participants with L1 Danish read news articles in their highly proficient L2, English, while their eye-movements were monitored. The experiment shows a cognate advantage for morphologically simple words, but only when cognateness is defined relative to translation equivalents that are appropriate in the context. For morphologically complex words, a cognate disadvantage … word predictability indexed by the conditional probability of each word.

  19. Documents and legal texts

    International Nuclear Information System (INIS)

    2016-01-01

    This section treats of the following documents and legal texts: 1 - Brazil: Law No. 13,260 of 16 March 2016 (To regulate the provisions of item XLIII of Article 5 of the Federal Constitution on terrorism, dealing with investigative and procedural provisions and redefining the concept of a terrorist organisation; and amends Laws No. 7,960 of 21 December 1989 and No. 12,850 of 2 August 2013); 2 - India: The Atomic Energy (Amendment) Act, 2015; Department Of Atomic Energy Notification (Civil Liability for Nuclear Damage); 3 - Japan: Act on Subsidisation, etc. for Nuclear Damage Compensation Funds following the implementation of the Convention on Supplementary Compensation for Nuclear Damage

  20. Journalistic Text Production

    DEFF Research Database (Denmark)

    Haugaard, Rikke Hartmann

    … a multiple case study investigated three professional text producers' practices as they unfolded in their natural setting at the Spanish newspaper El Mundo, in Madrid. The study applied a combination of quantitative and qualitative methods, i.e. keystroke logging, participant observation and retrospective interviews. Results indicate that journalists' revisions are related to form markedly more often than to content (approx. three …). Results also suggest two writing phases serving …

  1. Value of syndromic surveillance within the Armed Forces for early warning during a dengue fever outbreak in French Guiana in 2006

    Directory of Open Access Journals (Sweden)

    Jefferson Henry

    2008-07-01

    Full Text Available Abstract Background A dengue fever outbreak occurred in French Guiana in 2006. The objectives were to study the value of a syndromic surveillance system set up within the armed forces, compared to the traditional clinical surveillance system, during this outbreak; to highlight issues involved in comparing military and civilian surveillance systems; and to discuss the interest of syndromic surveillance for public health response. Methods Military syndromic surveillance allows the surveillance of suspected dengue fever cases among the 3,000 armed forces personnel. Within the same population, clinical surveillance uses several definition criteria for dengue fever cases, depending on the epidemiological situation. Civilian laboratory surveillance allows the surveillance of biologically confirmed cases within the 200,000 inhabitants. Results Syndromic surveillance detected the dengue fever outbreak several weeks before clinical surveillance, allowing quick and effective enhancement of vector control within the armed forces. Syndromic surveillance was also found to have detected the outbreak before civilian laboratory surveillance. Conclusion Military syndromic surveillance allowed an early warning for this outbreak to be issued, enabling a quicker public health response by the armed forces. The civilian surveillance system has since introduced syndromic surveillance as part of its surveillance strategy. This should enable quicker public health responses in the future.

  2. Reliability of case definitions for public health surveillance assessed by Round-Robin test methodology

    Directory of Open Access Journals (Sweden)

    Claus Hermann

    2006-05-01

    Full Text Available Abstract Background Case definitions have been recognized to be important elements of public health surveillance systems. They are meant to assure comparability and consistency of surveillance data and have a crucial impact on the sensitivity and the positive predictive value of a surveillance system. The reliability of case definitions has rarely been investigated systematically. Methods We conducted a Round-Robin test by asking all 425 local health departments (LHD) and the 16 state health departments (SHD) in Germany to classify a selection of 68 case examples using case definitions. By multivariate analysis we investigated factors linked to classification agreement with a gold standard, which was defined by an expert panel. Results A total of 7870 classifications were done by 396 LHD (93%) and all SHD. Reporting sensitivity was 90.0%, positive predictive value 76.6%. Polio case examples had the lowest reporting precision, salmonellosis case examples the highest (OR = 0.008; CI: 0.005–0.013). Case definitions with a check-list format of clinical criteria resulted in higher reporting precision than case definitions with a narrative description (OR = 3.08; CI: 2.47–3.83). Reporting precision was higher among SHD compared to LHD (OR = 1.52; CI: 1.14–2.02). Conclusion Our findings led to a systematic revision of the German case definitions and build the basis for general recommendations for the creation of case definitions. These include, among others, that testable yes/no criteria in a check-list format are likely to improve reliability, and that software used for data transmission should be designed in strict accordance with the case definitions. The findings of this study are largely applicable to case definitions in many other countries or international networks, as they share the same structural and editorial characteristics of the case definitions evaluated in this study before their revision.
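    The two headline measures reported above can be recomputed from a standard 2×2 confusion table. The counts below are illustrative toy values, not the study's raw data:

```python
# Sensitivity = TP / (TP + FN): share of true cases correctly reported.
# Positive predictive value = TP / (TP + FP): share of reported cases that are true.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def positive_predictive_value(tp, fp):
    return tp / (tp + fp)

tp, fp, fn = 90, 30, 10                     # toy counts
sens = sensitivity(tp, fn)                  # 90 / 100
ppv = positive_predictive_value(tp, fp)     # 90 / 120
```

    A surveillance system can score high on one measure and poorly on the other, which is why the study reports both.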

  3. Layout-aware text extraction from full-text PDF of scientific articles.

    Science.gov (United States)

    Ramakrishnan, Cartic; Patnia, Abhishek; Hovy, Eduard; Burns, Gully Apc

    2012-05-28

    The Portable Document Format (PDF) is the most commonly used file format for online scientific publications. The absence of effective means to extract text from these PDF files in a layout-aware manner presents a significant challenge for developers of biomedical text mining or biocuration informatics systems that use published literature as an information source. In this paper we introduce the 'Layout-Aware PDF Text Extraction' (LA-PDFText) system to facilitate accurate extraction of text from PDF files of research articles for use in text mining applications. Our paper describes the construction and performance of an open source system that extracts text blocks from PDF-formatted full-text research articles and classifies them into logical units based on rules that characterize specific sections. The LA-PDFText system focuses only on the textual content of the research articles and is meant as a baseline for further experiments into more advanced extraction methods that handle multi-modal content, such as images and graphs. The system works in a three-stage process: (1) detecting contiguous text blocks using spatial layout processing to locate and identify blocks of contiguous text, (2) classifying text blocks into rhetorical categories using a rule-based method and (3) stitching classified text blocks together in the correct order, resulting in the extraction of text from section-wise grouped blocks. We show that our system can identify text blocks and classify them into rhetorical categories with Precision = 0.96, Recall = 0.89 and F1 = 0.91. We also present an evaluation of the accuracy of the block detection algorithm used in step 2. Additionally, we have compared the accuracy of the text extracted by LA-PDFText to the text from the Open Access subset of PubMed Central. We then compared this accuracy with that of the text extracted by the PDF2Text system, commonly used to extract text from PDF. Finally, we discuss preliminary error analysis for …
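    The stage-2 idea (rule-based assignment of text blocks to rhetorical categories) can be sketched with simple keyword rules. The cue lists below are invented for illustration and are not LA-PDFText's actual rule set:

```python
# First matching rule wins; blocks matching no rule fall through to "other".
RULES = [
    ("methods", ("method", "procedure", "protocol")),
    ("results", ("result", "we found", "figure")),
    ("introduction", ("in this paper", "background", "motivat")),
]

def classify_block(text):
    """Assign a rhetorical category to one contiguous text block."""
    t = text.lower()
    for label, cues in RULES:
        if any(cue in t for cue in cues):
            return label
    return "other"

blocks = [
    "In this paper we introduce LA-PDFText ...",
    "The results show high precision ...",
    "Acknowledgements",
]
labels = [classify_block(b) for b in blocks]
```

    Real section-classification rules would also use layout features (font size, position, block order), which is exactly what the spatial-layout stage supplies.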

  4. Weitere Texte physiognomischen Inhalts

    Directory of Open Access Journals (Sweden)

    Böck, Barbara

    2004-12-01

    Full Text Available The present article offers the edition of three cuneiform texts belonging to the Akkadian handbook of omens drawn from the physical appearance as well as the morals and behaviour of man. The book, comprising up to 27 chapters with more than 100 omens each, was entitled in antiquity Alamdimmû. The edition of the three cuneiform tablets thus completes the author's monographic study on the ancient Mesopotamian divinatory discipline of physiognomy (Die babylonisch-assyrische Morphoskopie, Wien 2000 [= AfO Beih. 27]).

    This article presents the editio princeps of three cuneiform texts preserved in the British Museum (London) and the Vorderasiatisches Museum (Berlin), which belong to the Assyro-Babylonian book of physiognomic omens. This book, originally entitled Alamdimmû ('form, figure'), consists of 27 chapters, each with more than one hundred omens written in the Akkadian language. The three texts thus complete the author's monographic study on the divinatory discipline of physiognomy in the ancient Near East (Die babylonisch-assyrische Morphoskopie, Wien 2000 [= AfO Beih. 27]).

  5. Utah Text Retrieval Project

    Energy Technology Data Exchange (ETDEWEB)

    Hollaar, L A

    1983-10-01

    The Utah Text Retrieval project seeks well-engineered solutions to the implementation of large, inexpensive, rapid text information retrieval systems. The project has three major components. Perhaps the best known is the work on the specialized processors, particularly search engines, necessary to achieve the desired performance and cost. The other two concern the user interface to the system and the system's internal structure. The work on user interface development is not only concentrating on the syntax and semantics of the query language, but also on the overall environment the system presents to the user. Environmental enhancements include convenient ways to browse through retrieved documents, access to other information retrieval systems through gateways supporting a common command interface, and interfaces to word processing systems. The system's internal structure is based on a high-level data communications protocol linking the user interface, index processor, search processor, and other system modules. This allows them to be easily distributed in a multi- or specialized-processor configuration. It also allows new modules, such as a knowledge-based query reformulator, to be added. 15 references.

  6. The Protection of Classified Information: The Legal Framework

    National Research Council Canada - National Science Library

    Elsea, Jennifer K

    2006-01-01

    Recent incidents involving leaks of classified information have heightened interest in the legal framework that governs security classification, access to classified information, and penalties for improper disclosure...

  7. Arabic Text Categorization Using Improved k-Nearest neighbour Algorithm

    Directory of Open Access Journals (Sweden)

    Wail Hamood KHALED

    2014-10-01

    Full Text Available The quantity of text information published in Arabic on the net requires the implementation of effective techniques for the extraction and classification of relevant information contained in large corpora of texts. In this paper we present an implementation of an enhanced k-NN Arabic text classifier. We apply the traditional k-NN and Naive Bayes classifiers from the Weka Toolkit for comparison purposes. Our proposed modified k-NN algorithm features an improved decision rule that skips the classes that are less similar and identifies the right class from the k nearest neighbours, which increases the accuracy. The study evaluates the improved decision rule using the standard recall, precision and F-measure metrics as the basis of comparison. We conclude that the effectiveness of the proposed classifier is promising and that it outperforms the classical k-NN classifier.
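    The improved decision rule (drop classes that are less similar overall before deciding among the k nearest neighbours) can be sketched like this. The similarity values, class labels and threshold are toy assumptions, not the paper's data or exact rule:

```python
from collections import defaultdict

def improved_knn_decision(neighbours, min_share=0.2):
    """neighbours: (similarity, class) pairs for the k nearest training texts."""
    score = defaultdict(float)
    for sim, label in neighbours:
        score[label] += sim
    total = sum(score.values())
    # skip classes whose share of the total similarity is below the threshold
    kept = {lbl: s for lbl, s in score.items() if s / total >= min_share}
    # decide among the remaining, more similar classes
    return max(kept, key=kept.get)

nbrs = [(0.9, "sports"), (0.8, "sports"), (0.3, "politics"), (0.1, "arts")]
winner = improved_knn_decision(nbrs)
```

    Compared with plain majority voting, aggregating similarity and pruning weak classes makes a single stray neighbour from an unrelated class much less likely to flip the decision.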

  8. Theorizing Surveillance in the UK Crime Control Field

    Directory of Open Access Journals (Sweden)

    Michael McCahill

    2015-09-01

    Full Text Available Drawing upon the work of Pierre Bourdieu and Loïc Wacquant, this paper argues that the demise of the Keynesian Welfare State (KWS) and the rise of neo-liberal economic policies in the UK has placed new surveillance technologies at the centre of a reconfigured “crime control field” (Garland, 2001) designed to control the problem populations created by neo-liberal economic policies (Wacquant, 2009a). The paper also suggests that field theory could be usefully deployed in future research to explore how wider global trends or social forces, such as neo-liberalism or bio-power, are refracted through the crime control field in different national jurisdictions. We conclude by showing how this approach provides a bridge between society-wide analysis and micro-sociology by exploring how the operation of new surveillance technologies is mediated by the “habitus” of surveillance agents working in the crime control field and contested by surveillance subjects.

  9. Secure and Efficient Reactive Video Surveillance for Patient Monitoring

    Directory of Open Access Journals (Sweden)

    An Braeken

    2016-01-01

    Full Text Available Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side.

  10. Formal and informal surveillance systems: how to build links

    Directory of Open Access Journals (Sweden)

    S. Desvaux

    2015-11-01

    Full Text Available Within the framework of highly pathogenic avian influenza (HPAI) surveillance in Vietnam, interviews were carried out with poultry farmers and local animal health operators in two municipalities of the Red River delta with a view to documenting the circulation of health information concerning poultry (content of the information; method, scope and speed of circulation; actors involved; actions triggered as a result of the information received; economic and social incentives for disseminating or withholding information). The main results show that (i) active informal surveillance networks exist, (ii) the alert levels vary and the measures applied by the poultry farmers are myriad and often far removed from the official recommendations, and (iii) the municipal veterinarian is at the interface between the formal and the informal surveillance systems. The conclusions emphasize the need for the authorities to clearly separate surveillance and control activities, and to regionalize control strategies, taking into account epidemiological specificities and social dynamics at the local level.

  11. Deconstructing Cross-Entropy for Probabilistic Binary Classifiers

    Directory of Open Access Journals (Sweden)

    Daniel Ramos

    2018-03-01

    Full Text Available In this work, we analyze the cross-entropy function, widely used in classifiers both as a performance measure and as an optimization objective. We contextualize cross-entropy in the light of Bayesian decision theory, the formal probabilistic framework for making decisions, and we thoroughly analyze its motivation, meaning and interpretation from an information-theoretical point of view. In this sense, this article presents several contributions: First, we explicitly analyze the contribution to cross-entropy of (i) prior knowledge and (ii) the value of the features in the form of a likelihood ratio. Second, we introduce a decomposition of cross-entropy into two components: discrimination and calibration. This decomposition enables the measurement of different performance aspects of a classifier in a more precise way, and justifies previously reported strategies for obtaining reliable probabilities by means of the calibration of the output of a discriminating classifier. Third, we give different information-theoretical interpretations of cross-entropy, which can be useful in different application scenarios and which are related to the concept of reference probabilities. Fourth, we present an analysis tool, the Empirical Cross-Entropy (ECE) plot, a compact representation of cross-entropy and its aforementioned decomposition. We show the power of ECE plots, as compared to other classical performance representations, in two diverse experimental examples: a speaker verification system, and a forensic case where some glass findings are present.
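    The two ingredients discussed, the posterior built from a prior and a likelihood ratio, and the average logarithmic score (cross-entropy, here in bits), can be sketched as follows; the trial values are illustrative, not from the paper's experiments:

```python
import math

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = LR * prior odds."""
    odds = likelihood_ratio * prior / (1.0 - prior)
    return odds / (1.0 + odds)

def cross_entropy_bits(trials):
    """trials: (true_label, predicted P(label=1)) pairs; mean -log2 score."""
    return -sum(math.log2(p if y == 1 else 1.0 - p) for y, p in trials) / len(trials)

p = posterior(prior=0.5, likelihood_ratio=4.0)            # evidence favours label 1
ce = cross_entropy_bits([(1, 0.8), (0, 0.2), (1, 0.5)])   # lower is better
```

    A perfectly calibrated but non-informative classifier (always predicting the prior) yields a cross-entropy equal to the prior entropy, which is the reference point the ECE plot compares against.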

  12. General and Local: Averaged k-Dependence Bayesian Classifiers

    Directory of Open Access Journals (Sweden)

    Limin Wang

    2015-06-01

    Full Text Available The inference of a general Bayesian network has been shown to be an NP-hard problem, even for approximate solutions. Although the k-dependence Bayesian (KDB) classifier can construct classifiers at arbitrary points (values of k) along the attribute dependence spectrum, it cannot identify the changes of interdependencies when attributes take different values. Local KDB, which learns in the framework of KDB, is proposed in this study to describe the local dependencies implicated in each test instance. Based on the analysis of functional dependencies, substitution-elimination resolution, a new type of semi-naive Bayesian operation, is proposed to substitute or eliminate generalization to achieve accurate estimation of the conditional probability distribution while reducing computational complexity. The final classifier, the averaged k-dependence Bayesian (AKDB) classifier, averages the outputs of KDB and local KDB. Experimental results on the repository of machine learning databases from the University of California Irvine (UCI) showed that AKDB has significant advantages in zero-one loss and bias relative to naive Bayes (NB), tree-augmented naive Bayes (TAN), averaged one-dependence estimators (AODE), and KDB. Moreover, KDB and local KDB show mutually complementary characteristics with respect to variance.
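    The final averaging step, combining the class posteriors of the general KDB model and the local KDB model, is a per-class mean followed by an argmax. The posterior values below are made-up numbers for illustration:

```python
def average_posteriors(p_general, p_local):
    """Per-class average of two posterior distributions over the same classes."""
    return {c: (p_general[c] + p_local[c]) / 2.0 for c in p_general}

def predict(p_general, p_local):
    avg = average_posteriors(p_general, p_local)
    return max(avg, key=avg.get)

p_gen = {"spam": 0.7, "ham": 0.3}   # general model's posteriors (toy values)
p_loc = {"spam": 0.4, "ham": 0.6}   # local model's posteriors for this instance
label = predict(p_gen, p_loc)       # averaged: spam 0.55, ham 0.45
```

    Averaging two models that err in complementary ways is a standard variance-reduction move, consistent with the complementary variance behaviour the abstract reports for KDB and local KDB.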

  13. Evaluation of Polarimetric SAR Decomposition for Classifying Wetland Vegetation Types

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Hong

    2015-07-01

    Full Text Available The Florida Everglades is the largest subtropical wetland system in the United States and, as with subtropical and tropical wetlands elsewhere, has been threatened by severe environmental stresses. It is very important to monitor such wetlands to inform management on the status of these fragile ecosystems. This study aims to examine the applicability of TerraSAR-X quadruple polarimetric (quad-pol) synthetic aperture radar (PolSAR) data for classifying wetland vegetation in the Everglades. We processed quad-pol data using the Hong & Wdowinski four-component decomposition, which accounts for double bounce scattering in the cross-polarization signal. The calculated decomposition images consist of four scattering mechanisms (single, co-pol double, cross-pol double, and volume scattering). We applied an object-oriented image analysis approach to classify vegetation types with the decomposition results. We also used a high-resolution multispectral optical RapidEye image to compare statistics and classification results with synthetic aperture radar (SAR) observations. The calculated classification accuracy was higher than 85%, suggesting that the TerraSAR-X quad-pol SAR signal had a high potential for distinguishing different vegetation types. Scattering components from SAR acquisition were particularly advantageous for classifying mangroves along tidal channels. We conclude that the typical scattering behaviors from model-based decomposition are useful for discriminating among different wetland vegetation types.

  14. A Novel Cascade Classifier for Automatic Microcalcification Detection.

    Directory of Open Access Journals (Sweden)

    Seung Yeon Shin

    Full Text Available In this paper, we present a novel cascaded classification framework for automatic detection of individual microcalcifications (μCs) and clusters of μCs. Our framework comprises three classification stages: (i) a random forest (RF) classifier for simple features capturing the second-order local structure of individual μCs, where non-μC pixels in the target mammogram are efficiently eliminated; (ii) a more complex discriminative restricted Boltzmann machine (DRBM) classifier for μC candidates determined in the RF stage, which automatically learns the detailed morphology of μC appearances for improved discriminative power; and (iii) a detector to detect clusters of μCs from the individual μC detection results, using two different criteria. From the two-stage RF-DRBM classifier, we are able to distinguish μCs using explicitly computed features, as well as learn implicit features that are able to further discriminate between confusing cases. Experimental evaluation is conducted on the original Mammographic Image Analysis Society (MIAS) and mini-MIAS databases, as well as our own Seoul National University Bundang Hospital digital mammographic database. It is shown that the proposed method outperforms comparable methods in terms of receiver operating characteristic (ROC) and precision-recall curves for detection of individual μCs and the free-response receiver operating characteristic (FROC) curve for detection of clustered μCs.
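    The stage structure above can be sketched generically: a cheap first-stage scorer prunes most candidates, a costlier second-stage scorer confirms the survivors, and the surviving detections are grouped into clusters. The scorers, thresholds, and radius below are placeholders, not the paper's trained RF and DRBM models:

```python
import math

def cascade(stage1, stage2, candidates, t1=0.5, t2=0.5):
    """Two-stage cascade: stage 1 cheaply discards most negatives,
    and the more expensive stage 2 runs only on the survivors."""
    survivors = [c for c in candidates if stage1(c) >= t1]
    return [c for c in survivors if stage2(c) >= t2]

def cluster_detections(points, radius=10.0, min_size=3):
    """Greedy grouping of detected (x, y) points: a detection joins
    a cluster when it lies within `radius` of the cluster's seed."""
    clusters = []
    for p in sorted(points):
        for c in clusters:
            if math.dist(p, c[0]) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [c for c in clusters if len(c) >= min_size]
```

    The efficiency of a cascade comes from ordering: the fraction of candidates reaching the expensive stage is controlled by the first threshold.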

  15. Entropy based classifier for cross-domain opinion mining

    Directory of Open Access Journals (Sweden)

    Jyoti S. Deshmukh

    2018-01-01

    Full Text Available In recent years, the growth of social networks has increased people's interest in analyzing reviews and opinions about products before buying them. Consequently, domain adaptation has emerged as a prominent area of research in sentiment analysis: a classifier trained on one domain often gives poor results on data from another domain, because sentiment is expressed differently in every domain, and labeling data for each domain separately is costly and time consuming. Therefore, this study proposes an approach that extracts and classifies opinion words from one domain (the source domain) and predicts opinion words of another domain (the target domain) using a semi-supervised approach that combines modified maximum entropy and bipartite graph clustering. A comparison of opinion classification on reviews from four different product domains is presented. The results demonstrate that the proposed method performs relatively well in comparison to the other methods. Comparison against SentiWordNet for domain-specific and domain-independent words reveals that, on average, 72.6% and 88.4% of words, respectively, are correctly classified.

  16. Harnessing information from injury narratives in the 'big data' era: understanding and applying machine learning for injury surveillance.

    Science.gov (United States)

    Vallmuur, Kirsten; Marucci-Wellman, Helen R; Taylor, Jennifer A; Lehto, Mark; Corns, Helen L; Smith, Gordon S

    2016-04-01

    Vast amounts of injury narratives are collected daily, are available electronically in real time and have great potential for use in injury surveillance and evaluation. Machine learning algorithms have been developed to assist in identifying cases and classifying mechanisms leading to injury in a much timelier manner than is possible when relying on manual coding of narratives. The aim of this paper is to describe the background, growth, value, challenges and future directions of machine learning as applied to injury surveillance. This paper reviews key aspects of machine learning using injury narratives, providing a case study to demonstrate an application of an established human-machine learning approach. The range of applications and utility of narrative text has increased greatly with advancements in computing techniques over time. Practical and feasible methods exist for semiautomatic classification of injury narratives which are accurate, efficient and meaningful. The human-machine learning approach described in the case study achieved high sensitivity and positive predictive value (PPV) and reduced the need for human coding to less than a third of cases in one large occupational injury database. The last 20 years have seen a dramatic change in the potential for technological advancements in injury surveillance. Machine learning of 'big injury narrative data' opens up many possibilities for expanded sources of data which can provide more comprehensive, ongoing and timely surveillance to inform future injury prevention policy and practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
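    The semiautomatic, human-machine split described above can be illustrated with a toy bag-of-words naive Bayes coder that routes low-confidence narratives to manual review. The training narratives, codes, and threshold are invented for illustration and are not the production models referenced in the paper:

```python
import math
from collections import Counter, defaultdict

def train_coder(narratives, codes, alpha=1.0):
    """Multinomial naive Bayes over narrative words, with Laplace
    smoothing; returns a predict function."""
    class_counts = Counter(codes)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, code in zip(narratives, codes):
        for w in text.lower().split():
            word_counts[code][w] += 1
            vocab.add(w)

    def predict(text, review_threshold=0.9):
        logp = {}
        for c, nc in class_counts.items():
            total = sum(word_counts[c].values())
            lp = math.log(nc / len(codes))
            for w in text.lower().split():
                lp += math.log((word_counts[c][w] + alpha) /
                               (total + alpha * len(vocab)))
            logp[c] = lp
        m = max(logp.values())
        probs = {c: math.exp(v - m) for c, v in logp.items()}
        z = sum(probs.values())
        best = max(probs, key=probs.get)
        confidence = probs[best] / z
        # Human-machine split: uncertain cases go to a human coder.
        return (best, confidence) if confidence >= review_threshold \
               else ("REVIEW", confidence)

    return predict

coder = train_coder(
    ["fell from ladder", "fell off ladder", "burned by hot grease", "hot oil burn"],
    ["fall", "fall", "burn", "burn"])
```

    Raising the review threshold trades coding workload for accuracy, which is how the case study keeps manual coding to a minority of cases.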

  17. Documents and legal texts

    International Nuclear Information System (INIS)

    2013-01-01

    This section reprints a selection of recently published legislative texts and documents: - Russian Federation: Federal Law No.170 of 21 November 1995 on the use of atomic energy, Adopted by the State Duma on 20 October 1995; - Uruguay: Law No.19.056 On the Radiological Protection and Safety of Persons, Property and the Environment (4 January 2013); - Japan: Third Supplement to Interim Guidelines on Determination of the Scope of Nuclear Damage resulting from the Accident at the Tokyo Electric Power Company Fukushima Daiichi and Daini Nuclear Power Plants (concerning Damages related to Rumour-Related Damage in the Agriculture, Forestry, Fishery and Food Industries), 30 January 2013; - France and the United States: Joint Statement on Liability for Nuclear Damage (Aug 2013); - Franco-Russian Nuclear Power Declaration (1 November 2013)

  18. A profile of the online dissemination of national influenza surveillance data

    Directory of Open Access Journals (Sweden)

    Ho Lai

    2009-09-01

    Full Text Available Abstract Background Influenza surveillance systems provide important and timely information to health service providers on trends in the circulation of influenza virus and other upper respiratory tract infections. Online dissemination of surveillance data is useful for risk communication to health care professionals, the media and the general public. We reviewed national influenza surveillance websites from around the world to describe the main features of surveillance data dissemination. Methods We searched for national influenza surveillance websites for every country and reviewed the resulting sites where available during the period from November 2008 through February 2009. Literature about influenza surveillance was searched at MEDLINE for relevant hyperlinks to related websites. Non-English websites were translated into English using human translators or Google language tools. Results A total of 70 national influenza surveillance websites were identified. The percentage of developing countries with surveillance websites was lower than that of developed countries (22% versus 57%, respectively). Most of the websites (74%) were in English or provided an English version. The most common surveillance methods included influenza-like illness consultation rates in primary care settings (89%) and laboratory surveillance (44%). Most websites (70%) provided data within a static report format and 66% of the websites provided data with at least weekly resolution. Conclusion Appropriate dissemination of surveillance data is important to maximize the utility of collected data. There may be room for improvement in the style and content of the dissemination of influenza data to health care professionals and the general public.

  19. Classifying smoking urges via machine learning.

    Science.gov (United States)

    Dumortier, Antoine; Beckjord, Ellen; Shiffman, Saul; Sejdić, Ervin

    2016-12-01

    Smoking is the largest preventable cause of death and disease in the developed world, and advances in modern electronics and machine learning can help us deliver real-time intervention to smokers in novel ways. In this paper, we examine different machine learning approaches that use situational features associated with having or not having urges to smoke during a quit attempt in order to accurately classify high-urge states. To test our machine learning approaches, specifically naive Bayes, discriminant analysis and decision tree learning methods, we used a dataset collected from over 300 participants who had initiated a quit attempt. The three classification approaches were evaluated in terms of sensitivity, specificity, accuracy and precision. The outcome of the analysis showed that algorithms based on feature selection make it possible to obtain high classification rates with only a few features selected from the entire dataset. The classification tree method outperformed the naive Bayes and discriminant analysis methods, with an accuracy of the classifications up to 86%. These numbers suggest that machine learning may be a suitable approach to deal with smoking cessation matters, and to predict smoking urges, outlining a potential use for mobile health applications. In conclusion, machine learning classifiers can help identify smoking situations, and the search for the best features and classifier parameters significantly improves the algorithms' performance. In addition, this study also supports the usefulness of new technologies in improving the effect of smoking cessation interventions, the management of time and patients by therapists, and thus the optimization of available health care resources. Future studies should focus on providing more adaptive and personalized support to people who really need it, in a minimum amount of time by developing novel expert systems capable of delivering real-time interventions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Classifying spaces of degenerating polarized Hodge structures

    CERN Document Server

    Kato, Kazuya

    2009-01-01

    In 1970, Phillip Griffiths envisioned that points at infinity could be added to the classifying space D of polarized Hodge structures. In this book, Kazuya Kato and Sampei Usui realize this dream by creating a logarithmic Hodge theory. They use the logarithmic structures begun by Fontaine-Illusie to revive nilpotent orbits as a logarithmic Hodge structure. The book focuses on two principal topics. First, Kato and Usui construct the fine moduli space of polarized logarithmic Hodge structures with additional structures. Even for a Hermitian symmetric domain D, the present theory is a refinement...

  1. Cubical sets as a classifying topos

    DEFF Research Database (Denmark)

    Spitters, Bas

    Coquand’s cubical set model for homotopy type theory provides the basis for a computational interpretation of the univalence axiom and some higher inductive types, as implemented in the cubical proof assistant. We show that the underlying cube category is the opposite of the Lawvere theory of De Morgan algebras. The topos of cubical sets itself classifies the theory of ‘free De Morgan algebras’. This provides us with a topos with an internal ‘interval’. Using this interval we construct a model of type theory following van den Berg and Garner. We are currently investigating the precise relation...

  2. Double Ramp Loss Based Reject Option Classifier

    Science.gov (United States)

    2015-05-22

    of convex (DC) functions. To minimize it, we use the DC programming approach [1]. The proposed method has the following advantages: (1) the proposed loss L_DR ... space constraints. We see that L_DR does not put any restriction on ρ for it to be an upper bound of L_{0-d-1}. 2.2 Risk Formulation Using L_DR: Let S = {(x_n, ... classifier learnt using the L_DR-based approach (C = 100, μ = 1, d = 0.2). Filled circles and triangles represent the support vectors. 4 Experimental Results: We show

  3. Interconnectedness and Digital Texts

    Directory of Open Access Journals (Sweden)

    Detlev Doherr

    2013-04-01

    Full Text Available Summary Multimedia information services on the Internet are becoming ever more extensive and comprehensive, and even documents that exist only in printed form are being digitized by libraries and placed online. These documents can be located via online document management systems or search engines and then provided in common formats such as PDF. This article examines how the Humboldt Digital Library (HDL) works; for more than ten years it has made documents by Alexander von Humboldt freely available on the web in English translation. Unlike a conventional digital library, it does not merely provide digitized documents as scans or PDFs, but makes the text itself available in interconnected form. The system therefore resembles an information system more than a digital library, which is also reflected in its functions for locating texts in different versions and translations, comparing paragraphs across documents, and displaying images in their context. The development of dynamic hyperlinks based on the individual text paragraphs of Humboldt's works, in the form of media assets, enables use of the Google Maps programming interface for navigation both geographically and by textual content. Going beyond the services of a digital library, the HDL offers the prototype of a multidimensional information system that works with dynamic structures and enables extensive thematic analyses and comparisons.

  4. Can scientific journals be classified based on their citation profiles?

    Directory of Open Access Journals (Sweden)

    Sayed-Amir Marashi

    2015-03-01

    Full Text Available Classification of scientific publications is of great importance in biomedical research evaluation. However, accurate classification of research publications is challenging and is normally performed in a rather subjective way. In the present paper, we propose to classify biomedical publications into superfamilies by analysing their citation profiles, i.e. the location of citations in the structure of citing articles. Such a classification may help authors find the appropriate biomedical journal for publication, may make journal comparisons more rational, and may even help planners better track the consequences of their policies on biomedical research.

  5. DFRFT: A Classified Review of Recent Methods with Its Application

    Directory of Open Access Journals (Sweden)

    Ashutosh Kumar Singh

    2013-01-01

    Full Text Available In the literature, various algorithms are available for computing the discrete fractional Fourier transform (DFRFT). In this paper, all the existing methods are reviewed, classified into four categories, and subsequently compared to find the best alternative from the viewpoint of minimal computational error, computational complexity, transform features, and additional features such as security. Subsequently, the correlation theorem of the FRFT is utilized to significantly reduce the Doppler shift, caused by the motion of the receiver, in the DSB-SC AM signal. Finally, the role of the DFRFT in the area of steganography is investigated.

  6. Application of a naive Bayesian classifier in assessing the supplier

    Directory of Open Access Journals (Sweden)

    Mijailović Snežana

    2017-01-01

    Full Text Available The paper considers the class of interactive knowledge-based systems whose main purpose is to make proposals and assist customers in making decisions. The mathematical model provides a set of learning examples concerning a series of deliveries from three suppliers, as well as an analysis of an illustrative example of assessing a supplier using a naive Bayesian classifier. The model was developed on the basis of the analysis of subjective probabilities, which are later revised with the help of new empirical information and Bayes' theorem on posterior probability, i.e. by combining subjective and objective conditional probabilities in the choice of a reliable supplier.
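    The prior-revision step described above is Bayes' theorem applied once per observed delivery. A minimal sketch with invented outcome likelihoods (the hypotheses, outcomes, and numbers are illustrative only, not the paper's data):

```python
def bayes_update(prior, likelihoods):
    """One revision step: multiply each hypothesis' prior by the
    likelihood of the observed outcome, then renormalize."""
    post = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

def assess_supplier(prior, outcomes, model):
    """Sequentially revise beliefs over a series of deliveries;
    `model[h][outcome]` is the likelihood of an outcome under h."""
    belief = dict(prior)
    for outcome in outcomes:
        belief = bayes_update(belief, {h: model[h][outcome] for h in belief})
    return belief

# Hypothetical likelihoods of delivery outcomes under each hypothesis:
model = {"reliable":   {"ontime": 0.9, "late": 0.1},
         "unreliable": {"ontime": 0.5, "late": 0.5}}
belief = assess_supplier({"reliable": 0.5, "unreliable": 0.5},
                         ["ontime", "ontime", "late"], model)
```

    This is exactly the revision of subjective priors by empirical evidence that the abstract describes: each delivery outcome shifts the posterior toward or away from "reliable".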

  7. Classifying BCI signals from novice users with extreme learning machine

    Directory of Open Access Journals (Sweden)

    Rodríguez-Bermúdez Germán

    2017-07-01

    Full Text Available A brain computer interface (BCI) allows external devices to be controlled using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, algorithms are usually tested with standard BCI signals from expert users or from repositories available on the Internet. In this work, an extreme learning machine (ELM) has been tested with signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.

  8. Research on Classification of Chinese Text Data Based on SVM

    Science.gov (United States)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a very important technology within it. At present, there are many mature algorithms for text classification; KNN, NB, AB, SVM, decision trees and other classification methods all show good classification performance. The support vector machine (SVM) is a good classifier in machine learning research. This paper studies the classification effect of the SVM method on Chinese text data, applying the method to classify Chinese texts and aiming to combine academic research with practical application.

  9. Documents and legal texts

    International Nuclear Information System (INIS)

    2015-01-01

    This section treats of the following Documents and legal texts: 1 - Canada: Nuclear Liability and Compensation Act (An Act respecting civil liability and compensation for damage in case of a nuclear incident, repealing the Nuclear Liability Act and making consequential amendments to other acts); 2 - Japan: Act on Compensation for Nuclear Damage (The purpose of this act is to protect persons suffering from nuclear damage and to contribute to the sound development of the nuclear industry by establishing a basic system regarding compensation in case of nuclear damage caused by reactor operation etc.); Act on Indemnity Agreements for Compensation of Nuclear Damage; 3 - Slovak Republic: Act on Civil Liability for Nuclear Damage and on its Financial Coverage and on Changes and Amendments to Certain Laws (This Act regulates: a) The civil liability for nuclear damage incurred in the causation of a nuclear incident, b) The scope of powers of the Nuclear Regulatory Authority (hereinafter only as the 'Authority') in relation to the application of this Act, c) The competence of the National Bank of Slovakia in relation to the supervised financial market entities in the financial coverage of liability for nuclear damage; and d) The penalties for violation of this Act)

  10. Documents and legal texts

    International Nuclear Information System (INIS)

    2014-01-01

    This section of the Bulletin presents the recently published documents and legal texts sorted by country: - Brazil: Resolution No. 169 of 30 April 2014. - Japan: Act Concerning Exceptions to Interruption of Prescription Pertaining to Use of Settlement Mediation Procedures by the Dispute Reconciliation Committee for Nuclear Damage Compensation in relation to Nuclear Damage Compensation Disputes Pertaining to the Great East Japan Earthquake (Act No. 32 of 5 June 2013); Act Concerning Measures to Achieve Prompt and Assured Compensation for Nuclear Damage Arising from the Nuclear Plant Accident following the Great East Japan Earthquake and Exceptions to the Extinctive Prescription, etc. of the Right to Claim Compensation for Nuclear Damage (Act No. 97 of 11 December 2013); Fourth Supplement to Interim Guidelines on Determination of the Scope of Nuclear Damage Resulting from the Accident at the Tokyo Electric Power Company Fukushima Daiichi and Daini Nuclear Power Plants (Concerning Damages Associated with the Prolongation of Evacuation Orders, etc.); Outline of 'Fourth Supplement to Interim Guidelines (Concerning Damages Associated with the Prolongation of Evacuation Orders, etc.)'. - OECD Nuclear Energy Agency: Decision and Recommendation of the Steering Committee Concerning the Application of the Paris Convention to Nuclear Installations in the Process of Being Decommissioned; Joint Declaration on the Security of Supply of Medical Radioisotopes. - United Arab Emirates: Federal Decree No. (51) of 2014 Ratifying the Convention on Supplementary Compensation for Nuclear Damage; Ratification of the Federal Supreme Council of Federal Decree No. (51) of 2014 Ratifying the Convention on Supplementary Compensation for Nuclear Damage

  11. A LITERATURE SURVEY ON VARIOUS ILLUMINATION NORMALIZATION TECHNIQUES FOR FACE RECOGNITION WITH FUZZY K NEAREST NEIGHBOUR CLASSIFIER

    Directory of Open Access Journals (Sweden)

    A. Thamizharasi

    2015-05-01

    Full Text Available Face recognition is popular nowadays in video surveillance, social networks and criminal identification. The performance of face recognition is affected by variations in illumination, pose and aging, and by partial occlusion of the face by hats, scarves, glasses, etc. Illumination variation remains a challenging problem in face recognition. The aim here is to compare various illumination normalization techniques, which include: log transformations, power-law transformations, histogram equalization, adaptive histogram equalization, contrast stretching, Retinex, multi-scale Retinex, difference of Gaussians, DCT, DCT normalization, DWT, gradient faces, self quotient, multi-scale self quotient and homomorphic filtering. The proposed work consists of three steps. The first step is to preprocess the face image with the above illumination normalization techniques; the second step is to create the training and test databases from the preprocessed face images; and the third step is to recognize the face images using a fuzzy k-nearest-neighbour classifier. The face recognition accuracy of all preprocessing techniques is compared using the AR face database of color images.
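    Of the normalization techniques listed, histogram equalization is the simplest to sketch: remap each grey level through the image's normalized cumulative histogram. This is a generic textbook version, not the paper's exact preprocessing code:

```python
def histogram_equalize(pixels, levels=256):
    """Global histogram equalization of a flat list of grey levels
    in [0, levels): remap each level through the normalized CDF."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:                 # constant image: nothing to spread
        return list(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```

    Spreading the cumulative histogram over the full grey range flattens global illumination differences, which is why it serves as a baseline against the more elaborate techniques compared in the survey.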

  12. A Culture-Proven Case of Community-Acquired Legionella Pneumonia Apparently Classified as Nosocomial: Diagnostic and Public Health Implications

    Directory of Open Access Journals (Sweden)

    Annalisa Bargellini

    2013-01-01

    Full Text Available We report a case of Legionella pneumonia in a 78-year-old patient affected by cerebellar haemangioblastoma who had been continuously hospitalised for 24 days prior to the onset of overt symptoms. According to the established case definition, this woman should have been definitely classified as a nosocomial case (a patient spending all of the ten days before the onset of symptoms in hospital). Water samples from the oncology ward were negative, notably from the patient's room and the oxygen bubbler, and revision of the case history led us to check for possible contamination in water samples collected at her home. We found that the clinical strain had a rep-PCR fingerprint identical to that of the L. pneumophila serogroup 1 isolated at home. The description of this culture-proven case of Legionnaires' disease has major clinical, legal, and public health consequences, as the complexity of hospitalised patients poses limitations to the rule-of-thumb surveillance definition of nosocomial pneumonia based on a 2–10-day incubation period.

  13. STATISTICAL TOOLS FOR CLASSIFYING GALAXY GROUP DYNAMICS

    International Nuclear Information System (INIS)

    Hou, Annie; Parker, Laura C.; Harris, William E.; Wilman, David J.

    2009-01-01

    The dynamical state of galaxy groups at intermediate redshifts can provide information about the growth of structure in the universe. We examine three goodness-of-fit tests, the Anderson-Darling (A-D), Kolmogorov, and χ² tests, in order to determine which statistical tool is best able to distinguish between groups that are relaxed and those that are dynamically complex. We perform Monte Carlo simulations of these three tests and show that the χ² test is profoundly unreliable for groups with fewer than 30 members. Power studies of the Kolmogorov and A-D tests are conducted to test their robustness for various sample sizes. We then apply these tests to a sample of the second Canadian Network for Observational Cosmology Redshift Survey (CNOC2) galaxy groups and find that the A-D test is far more reliable and powerful at detecting real departures from an underlying Gaussian distribution than the more commonly used χ² and Kolmogorov tests. We use this statistic to classify a sample of the CNOC2 groups and find that 34 of 106 groups are inconsistent with an underlying Gaussian velocity distribution, and thus do not appear relaxed. In addition, we compute velocity dispersion profiles (VDPs) for all groups with more than 20 members and compare the overall features of the Gaussian and non-Gaussian groups, finding that the VDPs of the non-Gaussian groups are distinct from those classified as Gaussian.
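    The A-D statistic favoured above can be computed directly from the sorted, standardized member velocities. A compact sketch of the A² statistic against a Gaussian with mean and dispersion estimated from the data (without any small-sample correction factor the authors may apply):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def anderson_darling(xs):
    """A^2 statistic against a Gaussian whose mean and dispersion are
    estimated from the data; larger values indicate a stronger
    departure from Gaussianity."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    z = sorted((x - mu) / sigma for x in xs)
    s = sum((2 * i + 1) * (math.log(normal_cdf(z[i])) +
                           math.log(1.0 - normal_cdf(z[n - 1 - i])))
            for i in range(n))
    return -n - s / n
```

    In practice the statistic is compared against critical values, or, as in the paper, against Monte Carlo simulations, which account for the estimated parameters and the group size.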

  14. Mercury⊕: An evidential reasoning image classifier

    Science.gov (United States)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package are described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
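    Dempster-Shafer combination, the core operation such a system applies to evidence from each source, can be sketched in a few lines. This is a generic implementation of Dempster's rule, not MERCURY⊕'s Knowledge Look-up Table machinery, and the class labels are invented:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset hypotheses to mass;
    mass assigned to conflicting (disjoint) pairs is normalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two sources weighing land-cover classes A and B:
m1 = {frozenset({"A"}): 0.6, frozenset({"A", "B"}): 0.4}
m2 = {frozenset({"B"}): 0.3, frozenset({"A", "B"}): 0.7}
fused = dempster_combine(m1, m2)
```

    Unlike a Bayesian posterior, mass can sit on sets of hypotheses such as {A, B}, which is how the framework represents sources that cannot distinguish certain classes.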

  15. A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard

    Directory of Open Access Journals (Sweden)

    Keith Jonathan M

    2012-07-01

    Full Text Available Abstract Background Many problems in bioinformatics involve classification based on features such as sequence, structure or morphology. Given multiple classifiers, two crucial questions arise: how does their performance compare, and how can they best be combined to produce a better classifier? A classifier can be evaluated in terms of sensitivity and specificity using benchmark, or gold standard, data, that is, data for which the true classification is known. However, a gold standard is not always available. Here we demonstrate that a Bayesian model for comparing medical diagnostics without a gold standard can be successfully applied in the bioinformatics domain, to genomic scale data sets. We present a new implementation, which unlike previous implementations is applicable to any number of classifiers. We apply this model, for the first time, to the problem of finding the globally optimal logical combination of classifiers. Results We compared three classifiers of protein subcellular localisation, and evaluated our estimates of sensitivity and specificity against estimates obtained using a gold standard. The method overestimated sensitivity and specificity with only a small discrepancy, and correctly ranked the classifiers. Diagnostic tests for swine flu were then compared on a small data set. Lastly, classifiers for a genome-wide association study of macular degeneration with 541094 SNPs were analysed. In all cases, run times were feasible, and results precise. The optimal logical combination of classifiers was also determined for all three data sets. Code and data are available from http://bioinformatics.monash.edu.au/downloads/. Conclusions The examples demonstrate the methods are suitable for both small and large data sets, applicable to the wide range of bioinformatics classification problems, and robust to dependence between classifiers. 
In all three test cases, the globally optimal logical combination of the classifiers was found to be

  16. Revised surveillance case definition for HIV infection--United States, 2014.

    Science.gov (United States)

    2014-04-11

    Following extensive consultation and peer review, CDC and the Council of State and Territorial Epidemiologists have revised and combined the surveillance case definitions for human immunodeficiency virus (HIV) infection into a single case definition for persons of all ages (i.e., adults and adolescents aged ≥13 years and children aged <13 years). The criteria for a confirmed case now accommodate new multitest algorithms, including criteria for differentiating between HIV-1 and HIV-2 infection and for recognizing early HIV infection. A confirmed case can be classified in one of five HIV infection stages (0, 1, 2, 3, or unknown); early infection, recognized by a negative HIV test within 6 months of HIV diagnosis, is classified as stage 0, and acquired immunodeficiency syndrome (AIDS) is classified as stage 3. Criteria for stage 3 have been simplified by eliminating the need to differentiate between definitive and presumptive diagnoses of opportunistic illnesses. Clinical (nonlaboratory) criteria for defining a case for surveillance purposes have been made more practical by eliminating the requirement for information about laboratory tests. The surveillance case definition is intended primarily for monitoring the HIV infection burden and planning for prevention and care on a population level, not as a basis for clinical decisions for individual patients. CDC and the Council of State and Territorial Epidemiologists recommend that all states and territories conduct case surveillance of HIV infection using this revised surveillance case definition.
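    The staging rules summarized above can be expressed as a small decision function. The stage 0 and stage 3 branches follow the text; the CD4-based split between stages 1 and 2 is not given in this summary, so the cutoffs below are assumptions for illustration only, not clinical guidance:

```python
def classify_stage(neg_test_within_6_months, stage3_illness, cd4=None):
    """Toy staging sketch: stage 0 (early infection) and stage 3
    (AIDS) follow the summary above; the CD4 cutoffs for stages 1-2
    are illustrative assumptions, not the official definition."""
    if neg_test_within_6_months:
        return 0          # early infection takes precedence
    if stage3_illness:
        return 3          # stage-3-defining opportunistic illness
    if cd4 is None:
        return "unknown"  # no staging information available
    if cd4 >= 500:
        return 1          # assumed cutoff
    if cd4 >= 200:
        return 2          # assumed cutoff
    return 3
```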

  17. Inferring epidemic network topology from surveillance data.

    Directory of Open Access Journals (Sweden)

    Xiang Wan

Full Text Available The transmission of infectious diseases can be affected by many, even hidden, factors, making it difficult to accurately predict when and where outbreaks may emerge. One current approach is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible, enabling policy makers to modify and implement strategies for the control of transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful, as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases.

  18. WORKPLACE SURVEILLANCE: BIG BROTHER IS WATCHING YOU?

    Directory of Open Access Journals (Sweden)

    Corneliu BÎRSAN

    2018-05-01

Full Text Available Only recently has workplace surveillance become a real concern of the international community. We often hear about employers who monitor and record the actions of their employees in order to check for breaches of company policies or procedures, to ensure that appropriate behaviour standards are being met, and to ensure that company property, confidential information and intellectual property are not being damaged. Surveillance at the workplace may include, inter alia, monitoring of telephone and internet use, opening of personal files stored on a professional computer, and video surveillance. But what if this monitoring or recording breaches human rights? To give practical examples of these means, we proceed to a chronological analysis of the most relevant cases dealt with by the European Court of Human Rights over time, in which the Strasbourg judges decided that the measures taken by the employers exceeded the limits set by Article 8 of the Convention. After providing the most relevant examples from the Court’s case-law in this field, we analyse the outcome of the recent Grand Chamber Barbulescu v. Romania judgment. The purpose of this study is to offer interested legal professionals and the domestic authorities of the Member States the information needed to adequately protect the right of each individual to respect for his or her private life and correspondence under the European Convention on Human Rights.

  19. 36 CFR 1256.46 - National security-classified information.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National Security...

  20. Radioisotopic Thermoelectric Generator (RTG) Surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Mulford, Roberta Nancy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-29

This lecture discusses stockpile stewardship efforts and the role surveillance plays in the process. Performance of the RTGs is described, and the question of the absence of the anticipated helium is addressed.

  1. Surveillance of nuclear power reactors

    International Nuclear Information System (INIS)

    Marini, J.

    1983-01-01

Surveillance of nuclear power reactors is now a necessity imposed by such regulatory documents as USNRC Regulatory Guide 1.133. In addition to regulatory requirements, however, nuclear reactor surveillance offers plant operators significant economic advantages insofar as a single day's outage is very costly. The economic worth of a reactor surveillance system can be stated in terms of the improved plant availability provided through its capability to detect incidents before they cause serious damage. Furthermore, the TMI accident has demonstrated the need for monitoring certain components to provide operators with clear information on their functional status. In response to the above considerations, Framatome has developed a line of products which includes pressure vessel leakage detection systems, loose part detection systems, component vibration monitoring systems, and crack detection and monitoring systems. Some of the surveillance systems developed by Framatome are described in this paper.

  2. Youth Risk Behavior Surveillance System

    Science.gov (United States)

    ... Youth Risk Behavior Surveillance System (YRBSS) monitors six types of health-risk behaviors that contribute to the leading causes of death and disability among youth and adults, including— Behaviors that contribute ...

  3. Text segmentation in degraded historical document images

    Directory of Open Access Journals (Sweden)

    A.S. Kavitha

    2016-07-01

Full Text Available Text segmentation from degraded historical Indus script images helps an Optical Character Recognizer (OCR) achieve good recognition rates for Indus scripts; however, it is challenging due to the complex background in such images. In this paper, we present a new method for segmenting text and non-text in Indus documents based on the fact that text components are less cursive compared to non-text ones. To achieve this, we propose a new combination of Sobel and Laplacian for enhancing degraded low contrast pixels. Then the proposed method generates skeletons for text components in enhanced images to reduce computational burdens, which in turn helps in studying component structures efficiently. We propose to study the cursiveness of components based on branch information to remove false text components. The proposed method introduces the nearest neighbor criterion for grouping components in the same line, which results in clusters. Furthermore, the proposed method classifies these clusters into text and non-text clusters based on characteristics of text components. We evaluate the proposed method on a large dataset containing varieties of images. The results are compared with the existing methods to show that the proposed method is effective in terms of recall and precision.
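
The enhancement step can be illustrated with a small sketch. How the paper actually combines the two operators is not specified here, so summing the Sobel gradient magnitude with the absolute Laplacian response is an assumption, as are the toy image and the plain-Python convolution:

```python
# 3x3 kernels. Combining Sobel magnitude and |Laplacian| by summation is
# an assumption; the paper's exact combination rule is not given here.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def conv_at(img, y, x, kernel):
    """3x3 convolution response centred on pixel (y, x)."""
    return sum(img[y + i - 1][x + j - 1] * kernel[i][j]
               for i in range(3) for j in range(3))

def enhance(img):
    """Sum of Sobel gradient magnitude and absolute Laplacian response,
    boosting low-contrast edges in a degraded document image."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = conv_at(img, y, x, SOBEL_X)
            gy = conv_at(img, y, x, SOBEL_Y)
            lap = conv_at(img, y, x, LAPLACIAN)
            out[y][x] = (gx * gx + gy * gy) ** 0.5 + abs(lap)
    return out

# A faint vertical stroke on a dark background: the stroke's flanks get
# a much stronger response than the flat background.
img = [[0, 0, 10, 0, 0]] * 5
enhanced = enhance(img)
print(enhanced[2][1], enhanced[2][2])  # → 50.0 20.0
```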

  4. A surveillance sector review applied to infectious diseases at a country level

    Directory of Open Access Journals (Sweden)

    Easther Sally

    2010-06-01

Full Text Available Abstract Background The new International Health Regulations (IHR require World Health Organization (WHO member states to assess their core capacity for surveillance. Such reviews also have the potential to identify important surveillance gaps, improve the organisation of disparate surveillance systems and to focus attention on upstream hazards, determinants and interventions. Methods We developed a surveillance sector review method for evaluating all of the surveillance systems and related activities across a sector, in this case those concerned with infectious diseases in New Zealand. The first stage was a systematic description of these surveillance systems using a newly developed framework and classification system. Key informant interviews were conducted to validate the available information on the systems identified. Results We identified 91 surveillance systems and related activities in the 12 coherent categories of infectious diseases examined. The majority (n = 40; 44% were disease surveillance systems. They covered all categories, particularly for more severe outcomes including those resulting in death or hospitalisations. Except for some notifiable diseases and influenza, surveillance of less severe, but important infectious diseases occurring in the community was largely absent. There were 31 systems (34% for surveillance of upstream infectious disease hazards, including risk and protective factors. This area tended to have many potential gaps and lack integration, partly because such systems were operated by a range of different agencies, often outside the health sector. There were fewer surveillance systems for determinants, including population size and characteristics (n = 9, and interventions (n = 11. Conclusions It was possible to create and populate a workable framework for describing all the infectious diseases surveillance systems and related activities in a single developed country and to identify potential

  5. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
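
The binary LQAS classification rule lends itself to a compact illustration. A minimal sketch, assuming an exact-binomial search for the design; the prevalence thresholds, error rates, and design effect below are hypothetical, and the paper's nonparametric clustering adjustment is more sophisticated than the simple design-effect inflation shown here:

```python
from math import comb, ceil

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_design(p_upper, p_lower, alpha=0.10, beta=0.10, n_max=300):
    """Smallest (n, d) such that an area at the 'poor' prevalence p_upper
    is wrongly classified acceptable (<= d cases seen) with probability
    <= alpha, and an area at the 'acceptable' prevalence p_lower is
    wrongly classified poor (> d cases seen) with probability <= beta."""
    for n in range(1, n_max + 1):
        for d in range(n):
            if binom_cdf(d, n, p_upper) <= alpha and \
               1.0 - binom_cdf(d, n, p_lower) <= beta:
                return n, d
    return None

n, d = lqas_design(p_upper=0.20, p_lower=0.10)
# For a two-stage cluster design, inflate n by an assumed design effect
# (1.5 here is purely illustrative):
n_cluster = ceil(1.5 * n)
print(n, d, n_cluster)
```

The survey then classifies an area as "poor" whenever more than `d` cases appear among the `n` sampled individuals.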

  6. Health surveillance - myth and reality

    International Nuclear Information System (INIS)

    Sharp, C.

    1998-01-01

    This paper discusses the principles, health benefit and cost-effectiveness of health surveillance in the occupational setting, which apply to exposure to ionising radiations in the same manner as to other hazards in the workplace. It highlights the techniques for undertaking health surveillance, discusses their relative advantages and disadvantages and illustrates these in relation to specific hazards. The responsibilities of the medical staff and of the worker are also discussed. (author)

  7. Surface Environmental Surveillance Procedures Manual

    Energy Technology Data Exchange (ETDEWEB)

    RW Hanf; TM Poston

    2000-09-20

    Environmental surveillance data are used in assessing the impact of current and past site operations on human health and the environment, demonstrating compliance with applicable local, state, and federal environmental regulations, and verifying the adequacy of containment and effluent controls. SESP sampling schedules are reviewed, revised, and published each calendar year in the Hanford Site Environmental Surveillance Master Sampling Schedule. Environmental samples are collected by SESP staff in accordance with the approved sample collection procedures documented in this manual.

  8. Privacy Implications of Surveillance Systems

    DEFF Research Database (Denmark)

    Thommesen, Jacob; Andersen, Henning Boje

    2009-01-01

This paper presents a model for assessing the privacy 'cost' of a surveillance system. Surveillance systems collect and provide personal information or observations of people by means of surveillance technologies such as databases, video or location tracking. Such systems can be designed for various purposes, even as a service for those being observed, but in any case they will to some degree invade their privacy. The model provided here can indicate how invasive any particular system may be, and be used to compare the invasiveness of different systems. Applying a functional approach, the model is established by first considering the social function of privacy in everyday life, which in turn lets us determine which different domains will be considered as private, and finally identify the different types of privacy invasion. This underlying model (function – domain – invasion) then serves...

  9. Two channel EEG thought pattern classifier.

    Science.gov (United States)

    Craig, D A; Nguyen, H T; Burchey, H A

    2006-01-01

This paper presents a real-time electro-encephalogram (EEG) identification system with the goal of achieving hands-free control. With two EEG electrodes placed on the scalp of the user, EEG signals are amplified and digitised directly using a ProComp+ encoder and transferred to the host computer through the RS232 interface. Using a real-time multilayer neural network, the actual classification for the control of a powered wheelchair has a very fast response. It can detect changes in the user's thought pattern in 1 second. Using only two EEG electrodes at positions O(1) and C(4), the system can classify three mental commands (forward, left and right) with an accuracy of more than 79%.
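
As a rough illustration of the classification step, the sketch below substitutes a nearest-centroid rule for the paper's multilayer neural network; the two-channel band-power features and training values are entirely hypothetical:

```python
import math

# Hypothetical band-power features from electrodes O1 and C4 for each
# command; a real system would extract these from digitised EEG epochs
# and feed them to a trained multilayer neural network.
TRAINING = {
    "forward": [(0.9, 0.2), (1.0, 0.3), (0.8, 0.25)],
    "left":    [(0.3, 0.9), (0.2, 1.0), (0.25, 0.85)],
    "right":   [(0.6, 0.6), (0.7, 0.55), (0.65, 0.7)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

CENTROIDS = {cmd: centroid(pts) for cmd, pts in TRAINING.items()}

def classify(features):
    """Assign a 2-channel feature vector to the nearest command centroid."""
    return min(CENTROIDS, key=lambda cmd: math.dist(features, CENTROIDS[cmd]))

print(classify((0.95, 0.22)))  # → forward
```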

  10. Classifying Drivers' Cognitive Load Using EEG Signals.

    Science.gov (United States)

    Barua, Shaibal; Ahmed, Mobyen Uddin; Begum, Shahina

    2017-01-01

A growing traffic safety issue is the effect of cognitively loading activities on traffic safety and driving performance. To monitor drivers' mental state, understanding cognitive load is important, because performing cognitively loading secondary tasks while driving, such as talking on the phone, can degrade performance in the primary task, i.e. driving. Electroencephalography (EEG) is one of the reliable measures of cognitive load that can detect changes in instantaneous load and the effect of a cognitively loading secondary task. In this driving simulator study, a 1-back task is carried out while the driver performs three different simulated driving scenarios. This paper presents an EEG-based approach to classify a driver's level of cognitive load using Case-Based Reasoning (CBR). The results show that for each individual scenario, as well as for data combined from the different scenarios, the CBR-based system achieved over 70% classification accuracy.

  11. Classifying prion and prion-like phenomena.

    Science.gov (United States)

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  12. What are the benefits of medical screening and surveillance?

    Directory of Open Access Journals (Sweden)

    D. Wilken

    2012-06-01

Full Text Available Pre-employment examination is considered to be an important practice and is commonly performed in several countries within the European Union. The benefits of medical surveillance programmes are not generally accepted and their structure is often inconsistent. The aim of this review was to evaluate, on the basis of the available literature, the usefulness of medical screening and surveillance. MEDLINE was searched from its inception up to March 2010. Retrieved literature was evaluated in a peer-review process and relevant data were collected following a systematic extraction schema. Pre-placement screening identifies subjects who are at an increased risk of developing work-related allergic disease, but its predictive value is too low for it to be used as an exclusion criterion. Medical surveillance programmes can identify workers who have, or who are developing, work-related asthma. These programmes can also be used to avoid worsening of symptoms by implementing preventive measures. A combination of different tools within the surveillance programme, adjusted for the risk of the individual worker, improves the predictive value. Medical surveillance programmes provide medical as well as socioeconomic benefits. However, pre-employment screening cannot be used to exclude workers; it may instead act as a starting point for surveillance strategies. A stratified approach can increase the effectiveness and reduce the costs of such programmes.

  13. Just-in-time adaptive classifiers-part II: designing the classifier.

    Science.gov (United States)

    Alippi, Cesare; Roveri, Manuel

    2008-12-01

Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationary hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computation-free training phase, the possibility of easily estimating the model complexity k, and the ability to keep the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change targets the process generating the data in a new stationary state, as is the case in many real applications.
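
A minimal sketch of the JIT idea, assuming an external change-detection test (the companion paper's contribution) has already fired; the class names, the fixed k, and the wholesale KB-replacement policy are simplifying assumptions:

```python
import math
from collections import Counter

class JITKNN:
    """Sketch of a just-in-time adaptive k-NN classifier: the knowledge
    base (KB) is extended with supervised samples while the process is
    stationary, and replaced with post-change samples when a
    change-detection test (not shown here) signals a process deviation."""

    def __init__(self, k=3):
        self.k = k
        self.kb = []  # list of (features, label) pairs

    def update(self, x, y):
        # Stationary conditions: accumulate knowledge to improve accuracy.
        self.kb.append((x, y))

    def react_to_change(self, recent_samples):
        # Nonstationary conditions: drop pre-change knowledge and keep
        # only samples drawn from the new process state.
        self.kb = list(recent_samples)

    def predict(self, x):
        nearest = sorted(self.kb, key=lambda s: math.dist(x, s[0]))[:self.k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

clf = JITKNN(k=3)
for x, y in [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
             ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]:
    clf.update(x, y)
print(clf.predict((0.5, 0.5)))  # → a

# After a detected drift, the same region now carries the other label.
clf.react_to_change([((0, 0), "b"), ((1, 1), "b"), ((0, 1), "b")])
print(clf.predict((0.5, 0.5)))  # → b
```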

  14. The politics of surveillance policy: UK regulatory dynamics after Snowden

    Directory of Open Access Journals (Sweden)

    Arne Hintz

    2016-09-01

    Full Text Available The revelations by NSA whistleblower Edward Snowden have illustrated the scale and extent of digital surveillance carried out by different security and intelligence agencies. The publications have led to a variety of concerns, public debate, and some diplomatic fallout regarding the legality of the surveillance, the extent of state interference in civic life, and the protection of civil rights in the context of security. Debates about the policy environment of surveillance emerged quickly after the leaks began, but actual policy change is only starting. In the UK, a draft law (Investigatory Powers Bill has been proposed and is currently discussed. In this paper, we will trace the forces and dynamics that have shaped this particular policy response. Addressing surveillance policy as a site of struggle between different social forces and drawing on different fields across communication policy research, we suggest eight dynamics that, often in conflicting ways, have shaped the regulatory framework of surveillance policy in the UK since the Snowden leaks. These include the governmental context; national and international norms; court rulings; civil society advocacy; technical standards; private sector interventions; media coverage; and public opinion. We investigate how state surveillance has been met with criticism by parts of the technology industry and civil society, and that policy change was required as a result of legal challenges, review commissions and normative interventions. However a combination of specific government compositions, the strong role of security agendas and discourses, media justification and a muted reaction by the public have hindered a more fundamental review of surveillance practices so far and have moved policy debate towards the expansion, rather than the restriction, of surveillance in the aftermath of Snowden.

  15. Descriptive review of tuberculosis surveillance systems across the circumpolar regions

    Directory of Open Access Journals (Sweden)

    Annie-Claude Bourgeois

    2016-04-01

    Full Text Available Background: Tuberculosis is highly prevalent in many Arctic areas. Members of the International Circumpolar Surveillance Tuberculosis (ICS-TB Working Group collaborate to increase knowledge about tuberculosis in Arctic regions. Objective: To establish baseline knowledge of tuberculosis surveillance systems used by ICS-TB member jurisdictions. Design: Three questionnaires were developed to reflect the different surveillance levels (local, regional and national; all 3 were forwarded to the official representative of each of the 15 ICS-TB member jurisdictions in 2013. Respondents self-identified the level of surveillance conducted in their region and completed the applicable questionnaire. Information collected included surveillance system objectives, case definitions, data collection methodology, storage and dissemination. Results: Thirteen ICS-TB jurisdictions [Canada (Labrador, Northwest Territories, Nunavik, Nunavut, Yukon, Finland, Greenland, Norway, Sweden, Russian Federation (Arkhangelsk, Khanty-Mansiysk Autonomous Okrug, Yakutia (Sakha Republic, United States (Alaska] voluntarily completed the survey – representing 2 local, 7 regional and 4 national levels. Tuberculosis reporting is mandatory in all jurisdictions, and case definitions are comparable across regions. The common objectives across systems are to detect outbreaks, and inform the evaluation/planning of public health programmes and policies. All jurisdictions collect data on confirmed active tuberculosis cases and treatment outcomes; 11 collect contact tracing results. Faxing of standardized case reporting forms is the most common reporting method. Similar core data elements are collected; 8 regions report genotyping results. Data are stored using customized programmes (n=7 and commercial software (n=6. Nine jurisdictions provide monthly, bi-annual or annual reports to principally government and/or scientific/medical audiences. Conclusion: This review successfully establishes

  16. Ebola virus disease surveillance and response preparedness in northern Ghana

    Directory of Open Access Journals (Sweden)

    Martin N. Adokiya

    2016-05-01

    Full Text Available Background: The recent Ebola virus disease (EVD outbreak has been described as unprecedented in terms of morbidity, mortality, and geographical extension. It also revealed many weaknesses and inadequacies for disease surveillance and response systems in Africa due to underqualified staff, cultural beliefs, and lack of trust for the formal health care sector. In 2014, Ghana had high risk of importation of EVD cases. Objective: The objective of this study was to assess the EVD surveillance and response system in northern Ghana. Design: This was an observational study conducted among 47 health workers (district directors, medical, disease control, and laboratory officers in all 13 districts of the Upper East Region representing public, mission, and private health services. A semi-structured questionnaire with focus on core and support functions (e.g. detection, confirmation was administered to the informants. Their responses were recorded according to specific themes. In addition, 34 weekly Integrated Disease Surveillance and Response reports (August 2014 to March 2015 were collated from each district. Results: In 2014 and 2015, a total of 10 suspected Ebola cases were clinically diagnosed from four districts. Out of the suspected cases, eight died and the cause of death was unexplained. All the 10 suspected cases were reported, none was confirmed. The informants had knowledge on EVD surveillance and data reporting. However, there were gaps such as delayed reporting, low quality protective equipment (e.g. gloves, aprons, inadequate staff, and lack of laboratory capacity. The majority (38/47 of the respondents were not satisfied with EVD surveillance system and response preparedness due to lack of infrared thermometers, ineffective screening, and lack of isolation centres. Conclusion: EVD surveillance and response preparedness is insufficient and the epidemic is a wake-up call for early detection and response preparedness. 
Ebola surveillance remains

  17. Accuracy Evaluation of C4.5 and Naive Bayes Classifiers Using Attribute Ranking Method

    Directory of Open Access Journals (Sweden)

    S. Sivakumari

    2009-03-01

Full Text Available This paper intends to classify the Ljubljana Breast Cancer dataset using C4.5 Decision Tree and Naïve Bayes classifiers. In this work, classification is carried out using two methods. In the first method, the dataset is analysed using all the attributes in the dataset. In the second method, attributes are ranked using the information gain ranking technique and only the high-ranked attributes are used to build the classification model. We evaluate the results of the C4.5 Decision Tree and Naïve Bayes classifiers in terms of classifier accuracy for various folds of cross validation. Our results show that both classifiers achieve good accuracy on the dataset.
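
The information gain ranking used in the second method can be sketched as follows; the two-attribute toy dataset stands in for the Ljubljana Breast Cancer data and is purely illustrative:

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Information gain of splitting on one categorical attribute:
    base entropy minus the weighted entropy of each value's subset."""
    base = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for value in set(r[attr_index] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr_index] == value]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# Hypothetical records: attribute 1 ("size") perfectly predicts the label,
# attribute 0 ("age") carries no information.
rows = [("young", "small"), ("young", "large"), ("old", "small"), ("old", "large")]
labels = ["no", "yes", "no", "yes"]

ranking = sorted(range(2), key=lambda i: info_gain(rows, labels, i), reverse=True)
print(ranking)  # → [1, 0]
```

Only the attributes at the top of `ranking` would then be kept to build the classification model.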

  18. Elementary Surveillance (ELS) and Enhanced Surveillance (EHS) Validation via Mode S Secondary Radar Surveillance

    National Research Council Canada - National Science Library

    Grappel, Robert D; Harris, Garrett S; Kozar, Mark J; Wiken, Randall T

    2008-01-01

...) and Enhanced Surveillance (EHS) data link applications. The intended audience for this report is an engineering staff assigned the task of implementing a monitoring system used to determine ELS and EHS compliance...

  19. Enhanced surveillance program FY97 accomplishments. Progress report

    Energy Technology Data Exchange (ETDEWEB)

Mauzy, A. [ed.]; Laake, B. [comp.]

    1997-10-01

    This annual report is one volume of the Enhanced Surveillance Program (ESP) FY97 Accomplishments. The complete accomplishments report consists of 11 volumes. Volume 1 includes an ESP overview and a summary of selected unclassified FY97 program highlights. Volume 1 specifically targets a general audience, reflecting about half of the tasks conducted in FY97 and emphasizing key program accomplishments and contributions. The remaining volumes of the accomplishments report are classified, organized by program focus area, and present in technical detail the progress achieved in each of the 104 FY97 program tasks. Focus areas are as follows: pits; high explosives; organics; dynamics; diagnostics; systems; secondaries; nonnuclear materials; nonnuclear components; and Surveillance Test Program upgrades.

  20. Evaluation of surveillance of dengue fever cases in the public health centre of Putat Jaya based on attribute surveillance

    Directory of Open Access Journals (Sweden)

    Zumaroh Zumaroh

    2015-01-01

Full Text Available Dengue Hemorrhagic Fever (DHF) is a public health problem in the village of Putat Jaya, which is an endemic area. Surveillance is the most important activity in the DHF control program for controlling and monitoring disease progression. The program is expected to achieve an incidence rate of 55/100,000 population. This study aimed to evaluate the implementation of case surveillance in the health centre of Putat Jaya based on attribute surveillance. Attribute surveillance is an indicator that describes the characteristics of a surveillance system. This research was an evaluation study with a descriptive design. The informants were clinic staff who deal specifically with cases of dengue hemorrhagic fever, and laboratory workers. Data were collected through interviews and document study. The variables of this study were simplicity, flexibility, acceptability, sensitivity, positive predictive value, representativeness, timeliness, data quality and data stability. The incidence rate in 2013 reached 133/100,000 population, showing that the surveillance activity in the village of Putat Jaya, viewed from the perspective of disease control program management, did not succeed in decreasing the DHF incidence rate. Therefore, the dengue control program in the Putat Jaya health centre needs cross-sector and cross-program cooperation, and a strengthened case reporting system through greater use of electronic information and communication technology. Keywords: case surveillance, dengue hemorrhagic fever, evaluation, attribute surveillance, Putat Jaya

  1. Optimizing community-level surveillance data for pediatric asthma management

    Directory of Open Access Journals (Sweden)

    Wande O. Benka-Coker

    2018-06-01

Full Text Available Community-level approaches for pediatric asthma management rely on locally collected information derived primarily from two sources: claims records and school-based surveys. We combined claims and school-based surveillance data, and examined the asthma-related risk patterns among adolescent students. Symptom data collected from school-based asthma surveys conducted in Oakland, CA were used for case identification and determination of severity levels (high and low) for students. Survey data were matched to Medicaid claims data for all asthma-related health care encounters for the year prior to the survey. We then employed recursive partitioning to develop classification trees that identified patterns of demographics and healthcare utilization associated with severity. A total of 561 students had complete matched data; 86.1% were classified as high-severity, and 13.9% as low-severity asthma. The classification tree consisted of eight subsets: three indicating high severity and five indicating low severity. The risk subsets highlighted varying combinations of non-specific demographic and socioeconomic predictors of asthma prevalence, morbidity and severity. For example, the subset with the highest class-prior probability (92.1%) predicted high-severity asthma and consisted of students without prescribed rescue medication, but with at least one in-clinic nebulizer treatment. The predictive accuracy of the tree-based model was approximately 66.7%, with an estimated 91.1% of high-severity cases and 42.3% of low-severity cases correctly predicted. Our analysis draws on the strengths of two complementary datasets to provide community-level information on children with asthma, and demonstrates the utility of recursive partitioning methods to explore a combination of features that convey asthma severity. Keywords: Asthma, Classification, Risk stratification, Statistical data analysis, Disease management
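
Recursive partitioning of the kind described can be sketched with a tiny Gini-based tree grower; the binary features (prescribed rescue medication, in-clinic nebulizer treatment) and the labels below are hypothetical stand-ins for the study's data:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Pick the binary feature whose split most reduces weighted impurity."""
    n = len(labels)
    best = None
    for j in range(len(rows[0])):
        left = [lab for r, lab in zip(rows, labels) if r[j]]
        right = [lab for r, lab in zip(rows, labels) if not r[j]]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if best is None or score < best[1]:
            best = (j, score)
    return best

def grow(rows, labels, depth=2):
    """Recursively partition until pure, out of useful splits, or depth 0."""
    majority = Counter(labels).most_common(1)[0][0]
    if depth == 0 or gini(labels) == 0.0:
        return majority
    split = best_split(rows, labels)
    if split is None:
        return majority
    j = split[0]
    def branch(keep):
        sub = [(r, lab) for r, lab in zip(rows, labels) if bool(r[j]) is keep]
        return grow([r for r, _ in sub], [lab for _, lab in sub], depth - 1)
    return (j, branch(True), branch(False))

def predict(tree, row):
    while isinstance(tree, tuple):
        j, yes, no = tree
        tree = yes if row[j] else no
    return tree

# Hypothetical rows: (has_rescue_medication, had_nebulizer_treatment).
rows = [(0, 1), (0, 1), (1, 0), (1, 1), (0, 0)]
labels = ["high", "high", "low", "low", "low"]
tree = grow(rows, labels)
print(predict(tree, (0, 1)))  # → high
```

Each leaf corresponds to one risk subset, mirroring how the study's tree isolated, e.g., students without rescue medication but with a nebulizer treatment.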

  2. Identification of flooded area from satellite images using Hybrid Kohonen Fuzzy C-Means sigma classifier

    Directory of Open Access Journals (Sweden)

    Krishna Kant Singh

    2017-06-01

Full Text Available A novel neuro-fuzzy classifier, Hybrid Kohonen Fuzzy C-Means-σ (HKFCM-σ), is proposed in this paper. The proposed classifier is a hybridization of the Kohonen Clustering Network (KCN) with the FCM-σ clustering algorithm. The network architecture of HKFCM-σ is similar to a simple KCN network, having only two layers, i.e., input and output. However, the selection of the winner neuron is done based on the FCM-σ algorithm, thus embedding the features of both a neural network and a fuzzy clustering algorithm in the classifier. This hybridization results in a more efficient, less complex and faster classifier for classifying satellite images. HKFCM-σ is used to identify the flooding that occurred in the Kashmir area in September 2014. The HKFCM-σ classifier is applied on pre- and post-flooding Landsat 8 OLI images of Kashmir to detect the areas that were flooded due to the heavy rainfall of September 2014. The classifier is trained using the mean values of various spectral indices, such as NDVI, NDWI, NDBI, and the first component of Principal Component Analysis. The error matrix was computed to test the performance of the method. The method yields high producer's accuracy, consumer's accuracy and kappa coefficient values, indicating that the proposed classifier is highly effective and efficient.
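The spectral indices used to train the classifier are simple normalized band ratios. A sketch of two of them follows, assuming per-pixel reflectances already extracted from the usual Landsat 8 OLI bands (NIR = band 5, red = band 4, green = band 3); the sample reflectance values are illustrative:

```python
# Hedged sketch of two of the spectral indices named above.
# Inputs are per-pixel reflectances from the common Landsat 8 OLI bands.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high over vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized Difference Water Index: high over open water."""
    return (green - nir) / (green + nir)

# Water absorbs NIR strongly, so after flooding NDWI rises and NDVI falls.
print(ndvi(nir=0.03, red=0.05))   # negative over water
print(ndwi(green=0.10, nir=0.03)) # positive over water
```

Comparing per-class mean index values between the pre- and post-flood images is what lets the classifier separate flooded from non-flooded pixels.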

  3. Text

    International Nuclear Information System (INIS)

    Anon.

    2009-01-01

The purpose of this act is to safeguard against the dangers and harmful effects of radioactive waste and to contribute to public safety and environmental protection by laying down requirements for the safe and efficient management of radioactive waste. The act covers definitions, interrelation with other legislation, responsibilities of the state and local governments, responsibilities of radioactive waste management companies and generators, formulation of the basic plan for the control of radioactive waste, radioactive waste management (with public information, financing and part of spent fuel management), the Korea Radioactive Waste Management Corporation (business activities, budget), the establishment of a radioactive waste fund in order to secure the financial resources required for radioactive waste management, and penalties in case of improper operation of radioactive waste management. (N.C.)

  4. Drug overdose surveillance using hospital discharge data.

    Science.gov (United States)

    Slavova, Svetla; Bunn, Terry L; Talbert, Jeffery

    2014-01-01

    We compared three methods for identifying drug overdose cases in inpatient hospital discharge data on their ability to classify drug overdoses by intent and drug type(s) involved. We compared three International Classification of Diseases, Ninth Revision, Clinical Modification code-based case definitions using Kentucky hospital discharge data for 2000-2011. The first definition (Definition 1) was based on the external-cause-of-injury (E-code) matrix. The other two definitions were based on the Injury Surveillance Workgroup on Poisoning (ISW7) consensus recommendations for national and state poisoning surveillance using the principal diagnosis or first E-code (Definition 2) or any diagnosis/E-code (Definition 3). Definition 3 identified almost 50% more drug overdose cases than did Definition 1. The increase was largely due to cases with a first-listed E-code describing a drug overdose but a principal diagnosis that was different from drug overdose (e.g., mental disorders, or respiratory or circulatory system failure). Regardless of the definition, more than 53% of the hospitalizations were self-inflicted drug overdoses; benzodiazepines were involved in about 30% of the hospitalizations. The 2011 age-adjusted drug overdose hospitalization rate in Kentucky was 146/100,000 population using Definition 3 and 107/100,000 population using Definition 1. The ISW7 drug overdose definition using any drug poisoning diagnosis/E-code (Definition 3) is potentially the highest sensitivity definition for counting drug overdose hospitalizations, including by intent and drug type(s) involved. As the states enact policies and plan for adequate treatment resources, standardized drug overdose definitions are critical for accurate reporting, trend analysis, policy evaluation, and state-to-state comparison.
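The difference between the narrower and broader case definitions described above comes down to which code positions are scanned. A minimal sketch of Definitions 2 and 3 follows; the code stems are illustrative placeholders, not the actual ISW7 ICD-9-CM code lists:

```python
# Hedged sketch of Definitions 2 and 3 described above. The code stems are
# illustrative placeholders, not the actual ISW7 ICD-9-CM code lists.
POISONING_DX = {"965", "969"}      # e.g. opioid / psychotropic poisoning stems
OVERDOSE_ECODES = {"E850", "E853"}

def is_drug_overdose(dx_codes, e_codes, definition):
    if definition == 2:  # principal diagnosis or first-listed E-code only
        return (dx_codes[0][:3] in POISONING_DX
                or bool(e_codes) and e_codes[0][:4] in OVERDOSE_ECODES)
    if definition == 3:  # any diagnosis or any E-code
        return (any(d[:3] in POISONING_DX for d in dx_codes)
                or any(e[:4] in OVERDOSE_ECODES for e in e_codes))
    raise ValueError(definition)

# A record with a respiratory-failure principal diagnosis and the overdose
# codes in non-first positions is caught by Definition 3 but not Definition 2:
record_dx, record_e = ["51881", "96500"], ["E8497", "E8502"]
print(is_drug_overdose(record_dx, record_e, 2))  # False
print(is_drug_overdose(record_dx, record_e, 3))  # True
```

This is why Definition 3 yields a markedly higher case count and hospitalization rate from the same discharge data.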

  5. Drug Overdose Surveillance Using Hospital Discharge Data

    Science.gov (United States)

    Bunn, Terry L.; Talbert, Jeffery

    2014-01-01

    Objectives We compared three methods for identifying drug overdose cases in inpatient hospital discharge data on their ability to classify drug overdoses by intent and drug type(s) involved. Methods We compared three International Classification of Diseases, Ninth Revision, Clinical Modification code-based case definitions using Kentucky hospital discharge data for 2000–2011. The first definition (Definition 1) was based on the external-cause-of-injury (E-code) matrix. The other two definitions were based on the Injury Surveillance Workgroup on Poisoning (ISW7) consensus recommendations for national and state poisoning surveillance using the principal diagnosis or first E-code (Definition 2) or any diagnosis/E-code (Definition 3). Results Definition 3 identified almost 50% more drug overdose cases than did Definition 1. The increase was largely due to cases with a first-listed E-code describing a drug overdose but a principal diagnosis that was different from drug overdose (e.g., mental disorders, or respiratory or circulatory system failure). Regardless of the definition, more than 53% of the hospitalizations were self-inflicted drug overdoses; benzodiazepines were involved in about 30% of the hospitalizations. The 2011 age-adjusted drug overdose hospitalization rate in Kentucky was 146/100,000 population using Definition 3 and 107/100,000 population using Definition 1. Conclusion The ISW7 drug overdose definition using any drug poisoning diagnosis/E-code (Definition 3) is potentially the highest sensitivity definition for counting drug overdose hospitalizations, including by intent and drug type(s) involved. As the states enact policies and plan for adequate treatment resources, standardized drug overdose definitions are critical for accurate reporting, trend analysis, policy evaluation, and state-to-state comparison. PMID:25177055

  6. Possibilities of Applying Video Surveillance and other ICT Tools and Services in the Production Process

    Directory of Open Access Journals (Sweden)

    Adis Rahmanović

    2018-02-01

Full Text Available The paper presents the possibilities of applying video surveillance and other ICT tools and services in the production process. The first part of the paper presents the system for controlling video surveillance and the opportunities for applying video surveillance to the security of employees and assets. The second part of the paper gives an analysis of the system for controlling production, followed by video surveillance of a work excavator. The next part of the paper presents the integration of video surveillance and the accompanying tools. At the end of the paper, suggestions are also given for further work in the field of data protection and cryptography in the use of video surveillance.

  7. Classifying Adverse Events in the Dental Office.

    Science.gov (United States)

    Kalenderian, Elsbeth; Obadan-Udoh, Enihomo; Maramaldi, Peter; Etolue, Jini; Yansane, Alfa; Stewart, Denice; White, Joel; Vaderhobli, Ram; Kent, Karla; Hebballi, Nutan B; Delattre, Veronique; Kahn, Maria; Tokede, Oluwabunmi; Ramoni, Rachel B; Walji, Muhammad F

    2017-06-30

    Dentists strive to provide safe and effective oral healthcare. However, some patients may encounter an adverse event (AE) defined as "unnecessary harm due to dental treatment." In this research, we propose and evaluate two systems for categorizing the type and severity of AEs encountered at the dental office. Several existing medical AE type and severity classification systems were reviewed and adapted for dentistry. Using data collected in previous work, two initial dental AE type and severity classification systems were developed. Eight independent reviewers performed focused chart reviews, and AEs identified were used to evaluate and modify these newly developed classifications. A total of 958 charts were independently reviewed. Among the reviewed charts, 118 prospective AEs were found and 101 (85.6%) were verified as AEs through a consensus process. At the end of the study, a final AE type classification comprising 12 categories, and an AE severity classification comprising 7 categories emerged. Pain and infection were the most common AE types representing 73% of the cases reviewed (56% and 17%, respectively) and 88% were found to cause temporary, moderate to severe harm to the patient. Adverse events found during the chart review process were successfully classified using the novel dental AE type and severity classifications. Understanding the type of AEs and their severity are important steps if we are to learn from and prevent patient harm in the dental office.

  8. Is it important to classify ischaemic stroke?

    LENUS (Irish Health Repository)

    Iqbal, M

    2012-02-01

Thirty-five percent of all ischaemic events remain classified as cryptogenic. This study was conducted to ascertain the accuracy of diagnosis of ischaemic stroke based on information given in the medical notes. It was tested by applying the clinical information to the TOAST criteria. One hundred and five patients presented with acute stroke between Jan-Jun 2007. Data were collected on 90 patients. The male to female ratio was 39:51, with an age range of 47-93 years. Sixty (67%) patients had total/partial anterior circulation stroke; 5 (5.6%) had a lacunar stroke and in 25 (28%) the mechanism of stroke could not be identified. Four (4.4%) patients with small vessel disease were anticoagulated; 5 (5.6%) with atrial fibrillation received antiplatelet therapy and 2 (2.2%) patients with atrial fibrillation underwent CEA. This study revealed deficiencies in the clinical assessment of patients, and treatment was not tailored to the mechanism of stroke in some patients.

  9. Stress fracture development classified by bone scintigraphy

    International Nuclear Information System (INIS)

    Zwas, S.T.; Elkanovich, R.; Frank, G.; Aharonson, Z.

    1985-01-01

There is no consensus on classifying stress fractures (SF) appearing on bone scans. The authors present a system of classification based on grading the severity and development of bone lesions by visual inspection, according to three main scintigraphic criteria: focality and size, intensity of uptake compared to adjacent bone, and local medullary extension. Four grades of development (I-IV) were ranked, ranging from ill-defined, slightly increased cortical uptake to well-defined regions with markedly increased uptake extending transversely bicortically. 310 male subjects aged 19-2, suffering for several weeks from leg pains occurring during intensive physical training, underwent bone scans of the pelvis and lower extremities using Tc-99m-MDP. 76% of the scans were positive with 354 lesions, of which 88% were in the mild (I-II) grades and 12% in the moderate (III) and severe (IV) grades. Post-treatment scans were obtained in 65 cases having 78 lesions during 1- to 6-month intervals. Complete resolution was found after 1-2 months in 36% of the mild lesions but in only 12% of the moderate and severe ones, and after 3-6 months in 55% of the mild lesions and 15% of the severe ones. 75% of the moderate and severe lesions showed residual uptake in various stages throughout the follow-up period. Early recognition and treatment of mild SF lesions in this study prevented protracted disability and progression of the lesions and facilitated complete healing.

  10. Surveillance of healthcare-associated infection in hospitalised South African children: Which method performs best?

    Directory of Open Access Journals (Sweden)

    A Dramowski

    2017-01-01

Full Text Available Background. In 2012, the South African (SA) National Department of Health mandated surveillance of healthcare-associated infection (HAI), but made no recommendations of appropriate surveillance methods. Methods. Prospective clinical HAI surveillance (the reference method) was conducted at Tygerberg Children’s Hospital, Cape Town, from 1 May to 31 October 2015. Performance of three surveillance methods (point prevalence surveys (PPSs), laboratory surveillance and tracking of antimicrobial prescriptions) was compared with the reference method using surveillance evaluation guidelines. Factors associated with failure to detect HAI were identified by logistic regression analysis. Results. The reference method detected 417 HAIs among 1 347 paediatric hospitalisations (HAI incidence of 31/1000 patient days; 95% confidence interval (CI) 28.2 - 34.2). Surveillance methods had variable sensitivity (S) and positive predictive value (PPV): PPS S = 24.9% (95% CI 21 - 29.3), PPV = 100%; laboratory surveillance S = 48.4% (95% CI 43.7 - 53.2), PPV = 55.2% (95% CI 50.1 - 60.2); and antimicrobial prescriptions S = 66.4% (95% CI 61.8 - 70.8), PPV = 88.5% (95% CI 84.5 - 91.6). Combined laboratory-antimicrobial surveillance achieved superior HAI detection (S = 84.7% (95% CI 80.9 - 87.8), PPV = 97% (95% CI 94.6 - 98.4)). Factors associated with failure to detect HAI included patient transfer (odds ratio (OR) 2.0), single HAI event (OR 2.8), age category 1 - 5 years (OR 2.1) and hospitalisation in a general ward (OR 2.3). Conclusions. Repeated PPSs, laboratory surveillance and/or antimicrobial prescription tracking are feasible HAI surveillance methods for low-resource settings. Combined laboratory-antimicrobial surveillance achieved the best sensitivity and PPV. SA paediatric healthcare facilities should individualise HAI surveillance, selecting a method suited to available resources and practice context.
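The sensitivity and positive predictive value figures above follow from standard counts of true and false detections against the reference method. A minimal sketch, with counts chosen to reproduce the reported laboratory-surveillance figures rather than taken from the paper:

```python
# Minimal sketch: sensitivity (S) and positive predictive value (PPV)
# of a surveillance method against reference-method HAI events.
def sensitivity(tp, fn):
    return tp / (tp + fn)   # detected events / all true HAI events

def ppv(tp, fp):
    return tp / (tp + fp)   # true detections / all flagged events

# Illustrative counts: a method detecting 202 of the 417 reference HAIs
# while also flagging 164 non-HAI events (not the paper's raw counts).
tp, fn, fp = 202, 417 - 202, 164
print(f"S = {sensitivity(tp, fn):.1%}, PPV = {ppv(tp, fp):.1%}")  # S = 48.4%, PPV = 55.2%
```

Combining two methods raises sensitivity because an event missed by one method can still be caught by the other.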

  11. Applied learning-based color tone mapping for face recognition in video surveillance system

    Science.gov (United States)

    Yew, Chuu Tian; Suandi, Shahrel Azmin

    2012-04-01

In this paper, we present an applied learning-based color tone mapping technique for video surveillance systems. This technique can be applied to both color and grayscale surveillance images. The basic idea is to learn the color or intensity statistics from a training dataset of photorealistic images of the candidates appearing in the surveillance images, and remap the color or intensity of the input image so that its color or intensity statistics match those in the training dataset. It is well known that differences in commercial surveillance camera models and in the signal processing chipsets used by different manufacturers cause the color and intensity of the images to differ from one another, creating additional challenges for face recognition in video surveillance systems. Using multi-class Support Vector Machines as the classifier on a publicly available video surveillance camera database, namely the SCface database, this approach is validated and compared to the results of using a holistic approach on grayscale images. The results show that this technique is suitable for improving the color or intensity quality of video surveillance systems for face recognition.

  12. Project Surveillance and Maintenance Plan

    International Nuclear Information System (INIS)

    1985-09-01

The Project Surveillance and Maintenance Plan (PSMP) describes the procedures that will be used by the US Department of Energy (DOE), or other agency as designated by the President, to verify that inactive uranium tailings disposal facilities remain in compliance with licensing requirements and US Environmental Protection Agency (EPA) standards for remedial actions. The PSMP will be used as a guide for the development of individual Site Surveillance and Maintenance Plans (part of a license application) for each of the UMTRA Project sites. The PSMP is not intended to provide minimum requirements but rather to provide guidance in the selection of surveillance measures. For example, the plan acknowledges that ground-water monitoring may or may not be required and provides the guidance to make this decision. The Site Surveillance and Maintenance Plans (SSMPs) will form the basis for the licensing of the long-term surveillance and maintenance of each UMTRA Project site by the NRC. Therefore, the PSMP is a key milestone in the licensing process of all UMTRA Project sites. The Project Licensing Plan (DOE, 1984a) describes the licensing process. 11 refs., 22 figs., 8 tabs

  13. Teaching Text Structure: Examining the Affordances of Children's Informational Texts

    Science.gov (United States)

    Jones, Cindy D.; Clark, Sarah K.; Reutzel, D. Ray

    2016-01-01

    This study investigated the affordances of informational texts to serve as model texts for teaching text structure to elementary school children. Content analysis of a random sampling of children's informational texts from top publishers was conducted on text structure organization and on the inclusion of text features as signals of text…

  14. Text Classification and Distributional features techniques in Datamining and Warehousing

    OpenAIRE

    Bethu, Srikanth; Babu, G Charless; Vinoda, J; Priyadarshini, E; rao, M Raghavendra

    2013-01-01

Text categorization is traditionally done by using the term frequency and inverse document frequency. This type of method is not very good because some words which are not so important may appear in the document. The term frequency of unimportant words may increase, and the document may be classified in the wrong category. To reduce the error of classifying documents in the wrong category, the distributional features are introduced. In the distributional features, the distribution of the words in ...
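The term frequency-inverse document frequency weighting that the abstract contrasts with distributional features can be sketched in a few lines; the toy documents are illustrative:

```python
import math

# Minimal TF-IDF sketch: a word appearing in every document gets idf = 0,
# so frequent-but-uninformative words are down-weighted automatically.
def tfidf_weights(docs):
    n = len(docs)
    weights = []
    for doc in docs:
        tf = {w: doc.count(w) / len(doc) for w in set(doc)}
        weights.append({w: tf[w] * math.log(n / sum(w in d for d in docs))
                        for w in tf})
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
w = tfidf_weights(docs)
print(w[0]["the"])               # 0.0 -- appears in all three documents
print(w[0]["cat"] > w[0]["the"]) # True -- "cat" is more discriminative
```

Distributional features extend this by also looking at where in the document a word occurs, not just how often.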

  15. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    ved for the isolated English text, but for the handwritten Devanagari script it is not ... characters, lack of standard benchmarking and ground truth dataset, lack of ..... theory, proposed by Glen Shafer as a way to represent cognitive knowledge.

  16. SVM Classifiers: The Objects Identification on the Base of Their Hyperspectral Features

    Directory of Open Access Journals (Sweden)

    Demidova Liliya

    2017-01-01

Full Text Available The problem of identifying objects on the basis of their hyperspectral features has been considered. We propose using SVM classifiers based on a modified PSO algorithm, adapted to the specifics of this object identification problem. The results of identifying objects on the basis of their hyperspectral features using the SVM classifiers are presented.

  17. Schema-Based Text Comprehension

    Science.gov (United States)

    Ensar, Ferhat

    2015-01-01

    Schema is one of the most common terms used for classifying and constructing knowledge. Therefore, a schema is a pre-planned set of concepts. It usually contains social information and is used to represent chain of events, perceptions, situations, relationships and even objects. For example, Kant initially defines the idea of schema as some…

  18. REPTREE CLASSIFIER FOR IDENTIFYING LINK SPAM IN WEB SEARCH ENGINES

    Directory of Open Access Journals (Sweden)

    S.K. Jayanthi

    2013-01-01

Full Text Available Search engines are used for retrieving information from the web. Most of the time, importance is laid on the top 10 results, which may sometimes shrink to the top 5, because of time constraints and reliance on the search engines. Users believe that the top 10 or 5 of the total results are more relevant. Here comes the problem of spamdexing, a method of deceiving search result quality. Falsified metrics, such as inserting enormous amounts of keywords or links in a website, may take that website to the top 10 or 5 positions. This paper proposes a classifier based on the REPTree (regression tree representative). As an initial step, link-based features such as neighbors, PageRank, truncated PageRank, TrustRank and assortativity-related attributes are inferred. Based on these features, a tree is constructed. The tree uses the feature inference to differentiate spam sites from legitimate sites. The WEBSPAM-UK-2007 dataset is taken as a base. It is preprocessed and converted into five datasets: FEATA, FEATB, FEATC, FEATD and FEATE. Only link-based features are taken for the experiments; this paper focuses on link spam alone. Finally, a representative tree is created which will more precisely classify the web spam entries. Results are given. Regression tree classification seems to perform well, as shown through the experiments.
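A tree induced over such link-based features reduces to nested threshold tests. The sketch below shows a hand-coded tree of that shape; the thresholds and the particular feature combination are hypothetical, not learned from the WEBSPAM-UK-2007 dataset:

```python
# Hedged sketch of the kind of rule a regression/decision tree induces over
# link-based features; thresholds are hypothetical, not learned from data.
def classify_site(features):
    if features["trustrank"] < 0.2:
        # Spam farms often inflate PageRank via close-by supporter pages,
        # which truncated PageRank (ignoring short paths) discounts.
        if features["truncated_pagerank"] / features["pagerank"] < 0.8:
            return "spam"
        return "legitimate"
    return "legitimate"

print(classify_site({"trustrank": 0.05, "pagerank": 1.0,
                     "truncated_pagerank": 0.3}))   # spam
print(classify_site({"trustrank": 0.9, "pagerank": 1.0,
                     "truncated_pagerank": 0.95}))  # legitimate
```

The learning algorithm's job is to discover which feature tests and thresholds best separate the labeled spam and legitimate hosts.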

  19. On the statistical assessment of classifiers using DNA microarray data

    Directory of Open Access Journals (Sweden)

    Carella M

    2006-08-01

Full Text Available Abstract Background In this paper we present a method for the statistical assessment of cancer predictors which make use of gene expression profiles. The methodology is applied to a new data set of microarray gene expression data collected in Casa Sollievo della Sofferenza Hospital, Foggia, Italy. The data set is made up of normal (22) and tumor (25) specimens extracted from 25 patients affected by colon cancer. We propose to give answers to some questions which are relevant for the automatic diagnosis of cancer, such as: Is the size of the available data set sufficient to build accurate classifiers? What is the statistical significance of the associated error rates? In what ways can accuracy be considered dependent on the adopted classification scheme? How many genes are correlated with the pathology and how many are sufficient for an accurate colon cancer classification? The method we propose answers these questions whilst avoiding the potential pitfalls hidden in the analysis and interpretation of microarray data. Results We estimate the generalization error, evaluated through the Leave-K-Out Cross Validation error, for three different classification schemes by varying the number of training examples and the number of the genes used. The statistical significance of the error rate is measured by using a permutation test. We provide a statistical analysis in terms of the frequencies of the genes involved in the classification. Using the whole set of genes, we found that the Weighted Voting Algorithm (WVA) classifier learns the distinction between normal and tumor specimens with 25 training examples, providing e = 21% (p = 0.045) as an error rate. This remains constant even when the number of examples increases. Moreover, Regularized Least Squares (RLS) and Support Vector Machines (SVM) classifiers can learn with only 15 training examples, with an error rate of e = 19% (p = 0.035) and e = 18% (p = 0.037) respectively. Moreover, the error rate
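A label-permutation test of the kind described above asks how often a classifier's error rate would arise by chance; a stdlib sketch with hypothetical toy labels:

```python
import random

def error_rate(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def permutation_p(y_true, y_pred, n_perm=2000, seed=42):
    """Fraction of label shufflings matching the classifier's error rate
    or better; a small p means the error is unlikely under chance."""
    observed = error_rate(y_true, y_pred)
    rng, labels, hits = random.Random(seed), list(y_true), 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        hits += error_rate(labels, y_pred) <= observed
    return (hits + 1) / (n_perm + 1)

# Toy example: predictions that separate 10 balanced specimens perfectly.
y = ["normal"] * 5 + ["tumor"] * 5
p = permutation_p(y, y)
print(p < 0.05)  # True: perfect separation is rare under permuted labels
```

In the paper this machinery is applied to cross-validated error rates, yielding the p-values quoted alongside each classifier.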

  20. Hot complaint intelligent classification based on text mining

    Directory of Open Access Journals (Sweden)

    XIA Haifeng

    2013-10-01

Full Text Available The complaint recognizer system plays an important role in ensuring the correct classification of hot complaints, improving the service quality of the telecommunications industry. Customers' complaints in the telecommunications industry have a special particularity: they must be handled in limited time, which causes errors in the classification of hot complaints. The paper presents a model of hot complaint intelligent classification based on text mining, which can classify a hot complaint at the correct level of the complaint navigation. The examples show that the model can efficiently classify the text of the complaint.
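At its simplest, routing a complaint to a level of the complaint navigation is a text-scoring problem. The sketch below uses keyword weights; the categories, terms and weights are hypothetical, not drawn from the paper:

```python
# Hedged sketch of text-mining-based complaint routing. Categories and
# keyword weights are hypothetical, not the paper's model.
CATEGORY_TERMS = {
    "billing": {"bill": 2.0, "charge": 2.0, "refund": 1.5},
    "network": {"signal": 2.0, "dropped": 1.5, "coverage": 1.5},
}

def classify_complaint(text):
    tokens = text.lower().split()
    scores = {cat: sum(wt for term, wt in terms.items() if term in tokens)
              for cat, terms in CATEGORY_TERMS.items()}
    return max(scores, key=scores.get)

print(classify_complaint("wrong charge on my bill this month"))  # billing
print(classify_complaint("dropped calls and weak signal"))       # network
```

A production system would learn these weights from labeled complaints (e.g. with TF-IDF features and a trained classifier) instead of hand-coding them.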

  1. 41 CFR 105-62.102 - Authority to originally classify.

    Science.gov (United States)

    2010-07-01

    ... originally classify. (a) Top secret, secret, and confidential. The authority to originally classify information as Top Secret, Secret, or Confidential may be exercised only by the Administrator and is delegable...

  2. Fast Most Similar Neighbor (MSN) classifiers for Mixed Data

    OpenAIRE

    Hernández Rodríguez, Selene

    2010-01-01

    The k nearest neighbor (k-NN) classifier has been extensively used in Pattern Recognition because of its simplicity and its good performance. However, in large datasets applications, the exhaustive k-NN classifier becomes impractical. Therefore, many fast k-NN classifiers have been developed; most of them rely on metric properties (usually the triangle inequality) to reduce the number of prototype comparisons. Hence, the existing fast k-NN classifiers are applicable only when the comparison f...

  3. Containment and Surveillance Equipment Compendium

    International Nuclear Information System (INIS)

    Luetters, F.O.

    1980-02-01

    The Containment and Surveillance Equipment Compendium contains information sections describing the application and status of seals, optical surveillance systems, and monitors for international safeguards systems. The Compendium is a collection of information on equipment in use (generally by the IAEA) or under development in the US in diverse programs being conducted at numerous facilities under different sponsors. The Compendium establishes a baseline for the status and applications of C/S equipment and is a tool to assist in the planning of future C/S hardware development activities. The Appendix contains design concepts which can be developed to meet future goals

  4. Interval algebra: an effective means of scheduling surveillance radar networks

    CSIR Research Space (South Africa)

    Focke, RW

    2015-05-01

    Full Text Available Interval Algebra provides an effective means to schedule surveillance radar networks, as it is a temporal ordering constraint language. Thus it provides a solution to a part of resource management, which is included in the revised Data Fusion...

  5. Interval algebra - an effective means of scheduling surveillance radar networks

    CSIR Research Space (South Africa)

    Focke, RW

    2015-05-01

    Full Text Available Interval Algebra provides an effective means to schedule surveillance radar networks, as it is a temporal ordering constraint language. Thus it provides a solution to a part of resource management, which is included in the revised Data Fusion...

  6. On infectious intestinal disease surveillance using social media content

    DEFF Research Database (Denmark)

    Zou, Bin; Lampos, Vasileios; Gorton, Russell

    2016-01-01

    by traditional health surveillance methods. We employ a deep learning approach for creating a topical vocabulary, and then apply a regularised linear (Elastic Net) as well as a nonlinear (Gaussian Process) regression function for inference. We show that like previous text regression tasks, the nonlinear approach...

  7. Three data partitioning strategies for building local classifiers (Chapter 14)

    NARCIS (Netherlands)

    Zliobaite, I.; Okun, O.; Valentini, G.; Re, M.

    2011-01-01

    Divide-and-conquer approach has been recognized in multiple classifier systems aiming to utilize local expertise of individual classifiers. In this study we experimentally investigate three strategies for building local classifiers that are based on different routines of sampling data for training.

  8. 32 CFR 2400.28 - Dissemination of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Dissemination of classified information. 2400.28... SECURITY PROGRAM Safeguarding § 2400.28 Dissemination of classified information. Heads of OSTP offices... originating official may prescribe specific restrictions on dissemination of classified information when...

  9. Developing regional workplace health and hazard surveillance in the Americas

    Directory of Open Access Journals (Sweden)

    Choi Bernard C. K.

    2001-01-01

Full Text Available An objective of the Workers' Health Program at the Pan American Health Organization (PAHO) is to strengthen surveillance in workers' health in the Region of the Americas in order to implement prevention and control strategies. To date, four phases of projects have been organized to develop multinational workplace health and hazard surveillance in the Region. Phase 1 was a workshop held in 1999 in Washington, D.C., for the purpose of developing a methodology for identifying and prioritizing the top three occupational sentinel health events to be incorporated into the surveillance systems in the Region. Three surveillance protocols were developed, one each for fatal occupational injuries, pesticide poisoning, and low back pain, which were identified in the workshop as the most important occupational health problems. Phase 2 comprised projects to disseminate the findings and recommendations of the Washington workshop, including publications, pilot projects, software development, electronic communication, and meetings. Phase 3 was a sub-regional meeting in 2000 in Rosario, Argentina, to follow up on the progress in carrying out the recommendations of the Washington workshop and to create a Virtual Regional Center for Latin America that could coordinate the efforts of member countries. Currently, phase 4 includes a number of projects to achieve the objectives of this Center, such as pilot projects, capacity building, editing a compact disk, analyzing legal systems and intervention strategies, software training, and developing an internet course on surveillance. By documenting the joint efforts made to initiate and develop Regional multinational surveillance of occupational injuries and diseases in the Americas, this paper aims to provide experience and guidance for others wishing to initiate and develop regional multinational surveillance for other diseases or in other regions.

  10. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

Full text: Video surveillance is a very crucial component in safeguard and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities like better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguard requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguard surveillance systems. Today's safeguard systems can incorporate intelligent motion detection with a very low rate of false alarms and less archiving volume, embedded image processing capability for object behavior and event-based indexing, object recognition, efficient querying and report generation, etc. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure, tamper-indicating transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, human motion analysis, etc. are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools like automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, gesture recognition, etc. makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor design make low-cost, omni-directional-vision-capable, all-weather, day/night surveillance a reality

  11. Important Text Characteristics for Early-Grades Text Complexity

    Science.gov (United States)

    Fitzgerald, Jill; Elmore, Jeff; Koons, Heather; Hiebert, Elfrieda H.; Bowen, Kimberly; Sanford-Moore, Eleanor E.; Stenner, A. Jackson

    2015-01-01

    The Common Core set a standard for all children to read increasingly complex texts throughout schooling. The purpose of the present study was to explore text characteristics specifically in relation to early-grades text complexity. Three hundred fifty primary-grades texts were selected and digitized. Twenty-two text characteristics were identified…

  12. Least Square Support Vector Machine Classifier vs a Logistic Regression Classifier on the Recognition of Numeric Digits

    Directory of Open Access Journals (Sweden)

    Danilo A. López-Sarmiento

    2013-11-01

    Full Text Available This paper compares the performance of a multi-class least squares support vector machine (LS-SVM) against a multi-class logistic regression classifier on the problem of recognizing handwritten numeric digits (0-9). The comparison used a data set of 5000 images of handwritten digits (500 images for each number from 0 to 9), each image 20 x 20 pixels. The inputs to each system were 400-dimensional vectors corresponding to each image (no feature extraction was performed). Both classifiers used the one-vs-all strategy to enable multi-class classification, and random cross-validation in the process of minimizing the cost function. The comparison metrics were precision and training time under the same computational conditions. Both techniques achieved a precision above 95%, with LS-SVM slightly more accurate. In computational cost, however, there was a marked difference: LS-SVM training required 16.42% less time than the logistic regression model under the same conditions.
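
    The one-vs-all strategy used by both classifiers can be sketched in a few lines: train one binary logistic regression per class and predict with the highest-scoring classifier. The sketch below uses plain NumPy and a small synthetic three-class data set instead of the 5000 handwritten-digit images, so the data, learning rate, and epoch count are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_one_vs_all(X, y, n_classes, lr=0.5, epochs=300):
    """Train one binary logistic regression per class (one-vs-all)."""
    W = np.zeros((n_classes, X.shape[1]))
    b = np.zeros(n_classes)
    for c in range(n_classes):
        t = (y == c).astype(float)        # 1 for class c, 0 for the rest
        for _ in range(epochs):
            p = sigmoid(X @ W[c] + b[c])
            grad = p - t                  # gradient of the cross-entropy loss
            W[c] -= lr * (X.T @ grad) / len(t)
            b[c] -= lr * grad.mean()
    return W, b

def predict(X, W, b):
    """Assign each sample to the class whose binary classifier scores highest."""
    return np.argmax(X @ W.T + b, axis=1)

# Toy stand-in for the digit vectors: three well-separated Gaussian clusters.
rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)

W, b = train_one_vs_all(X, y, n_classes=3)
acc = (predict(X, W, b) == y).mean()
```

    On the real data the inputs would simply be the 400-dimensional pixel vectors, exactly as the paper feeds them in without feature extraction.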

  13. Higher School Marketing Strategy Formation: Classifying the Factors

    Directory of Open Access Journals (Sweden)

    N. K. Shemetova

    2012-01-01

    Full Text Available The paper deals with the main trends in higher school management strategy formation. The author specifies the educational changes in the modern information society that determine the strategy options. For each professional training level, the author denotes the set of strategic factors affecting the consumers of educational services and, therefore, the effectiveness of higher school marketing. The given factors are classified from the standpoints of the providers and consumers of educational services (enrollees, students, graduates and postgraduates). The research methods include statistical analysis and the general methods of scientific analysis, synthesis, induction, deduction, comparison, and classification. The author is convinced that university management should develop the necessary prerequisites for raising graduates’ competitiveness in the labor market, and stimulate active marketing policies in the relevant subdivisions and departments. In the author’s opinion, the above classification of marketing strategy factors can be used as a system of values for educational service providers.

  14. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    Full Text Available The article outlines the application of a process-based approach to the functional description of a designed IT system supporting the operations of a secret office that processes classified documents. It describes the application of the method of incremental business process modeling, according to the BPMN notation, both to the processes currently implemented manually (“as is”) and to the target processes (“to be”) that use RFID technology for automation. Additionally, examples of applying structural and dynamic process analysis (process simulation) to verify the correctness and efficiency of the processes are presented. A possible extension of the process analysis method is the application of a process warehouse and process mining methods.

  15. The Motivation of Betrayal by Leaking of Classified Information

    Directory of Open Access Journals (Sweden)

    Lăzăroiu Laurențiu-Leonard

    2017-03-01

    Full Text Available Forecasting human behavior requires knowledge of motivational theories, applied to the profile of each organization and, in particular, to each individual’s style. Anticipating personal attitudes aims not only at passive monitoring of professional activity, but also at improving risk avoidance, in accordance with a specific organizational environment. The emergence and development of motivational forms and values whose projections lead to social crimes are risk factors, affecting not only the professional activity of the person but also the performance and stability of the institution. Moreover, if such motivation produces attitudes aimed at compromising classified information, the resulting actions may be considered threats to national security. The prevention of such threats can only be achieved by understanding the motivational mechanisms and external conditions that make it possible for personnel to transform intentions into real actions.

  16. ARC: Automated Resource Classifier for agglomerative functional ...

    Indian Academy of Sciences (India)

    PRAKASH KUMAR

    prokaryotic proteins using annotation texts; J. Biosci. 32 937–945] .... processor server running on RedHat Linux Enterprise version 4. The Web server was ... File Formats: Three types of file formats can be accepted. An input file with (i) ...

  17. Surveillant militaire, j’ai vu la fin du bagne

    Directory of Open Access Journals (Sweden)

    Marc Renneville

    2006-01-01

    Full Text Available Surveillant principal (rank equivalent to lieutenant), Émile Demaret is perhaps the last survivor of the corps of military wardens of the colonial penitentiary services of French Guiana. Born on 26 June 1918 in Toulouse, he entered the École des Apprentis marins in Brest as a ship’s boy on 4 April 1934, enlisted in the French Navy for five years from 17 August 1934, served aboard the battleships Jean Bart and Paris, and became a helmsman on 1 October 1935 aboard the torpedo boat Enseigne Roux at Bizerte (Tunisia…

  18. Analysis of a Pareto Mixture Distribution for Maritime Surveillance Radar

    Directory of Open Access Journals (Sweden)

    Graham V. Weinberg

    2012-01-01

    Full Text Available The Pareto distribution has been shown to be an excellent model for X-band high-resolution maritime surveillance radar clutter returns. Given the success of mixture distributions in radar, it is thus of interest to consider the effect of Pareto mixture models. This paper introduces a formulation of a Pareto intensity mixture distribution and investigates coherent multilook radar detector performance using this new clutter model. Clutter parameter estimates are derived from data sets produced by the Defence Science and Technology Organisation's Ingara maritime surveillance radar.
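
    A Pareto intensity mixture of the kind investigated here can be simulated by first choosing a mixture component and then drawing a Pareto variate from it. The sketch below uses NumPy; the weights, shape, and scale parameters are illustrative placeholders, not the estimates obtained from the Ingara data.

```python
import numpy as np

def pareto_mixture(n, weights, shapes, scales, seed=None):
    """Draw n samples from a K-component classical Pareto mixture.

    Component k has tail (shape) parameter shapes[k] and scale
    (minimum value) scales[k], and is chosen with probability weights[k].
    """
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalise mixing weights
    comp = rng.choice(len(weights), size=n, p=weights)
    # numpy's pareto() draws Lomax (Pareto II) samples; adding 1 and
    # scaling converts them to the classical Pareto distribution.
    return scales[comp] * (1.0 + rng.pareto(shapes[comp]))

# Two-component mixture: a heavier-tailed and a lighter-tailed clutter component.
samples = pareto_mixture(
    100_000,
    weights=[0.7, 0.3],
    shapes=np.array([4.0, 10.0]),
    scales=np.array([1.0, 1.0]),
    seed=0,
)
```

    Fitting such a model to measured clutter would then amount to estimating the weights and shape parameters from the radar returns.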

  19. Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2015-01-01

    Full Text Available For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that avoids obstacles using three ultrasonic distance sensors and a backpropagation neural network, with a camera for face recognition. A 2.4 GHz video transmitter is used so that the operator/user can direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system.

  20. The Role of MRI in Prostate Cancer Active Surveillance

    Directory of Open Access Journals (Sweden)

    Linda M. Johnson

    2014-01-01

    Full Text Available Prostate cancer is the most common cancer diagnosis in American men, excluding skin cancer. The clinical behavior of prostate cancer varies from low-grade, slow-growing tumors to high-grade aggressive tumors that may ultimately progress to metastases and cause death. Given the high incidence of men diagnosed with prostate cancer, conservative treatment strategies such as active surveillance are critical in the management of prostate cancer to reduce the therapeutic complications of radiation therapy or radical prostatectomy. In this review, we discuss the role of multiparametric MRI in the selection and follow-up of patients on active surveillance.

  1. Peat classified as slowly renewable biomass fuel

    International Nuclear Information System (INIS)

    2001-01-01

    thousands of years. The report states also that peat should be classified as a biomass fuel, distinct both from biofuels such as wood and from fossil fuels such as coal. According to the report, peat is a renewable biomass fuel like biofuels, but due to its slow accumulation it should be considered a slowly renewable fuel. The report estimates that the binding of carbon in both virgin and forest-drained peatlands is high enough to compensate for the emissions formed in the combustion of energy peat

  2. Use of emergency department electronic medical records for automated epidemiological surveillance of suicide attempts: a French pilot study.

    Science.gov (United States)

    Metzger, Marie-Hélène; Tvardik, Nastassia; Gicquel, Quentin; Bouvry, Côme; Poulet, Emmanuel; Potinet-Pagliaroli, Véronique

    2017-06-01

    The aim of this study was to determine whether an expert system based on automated processing of electronic health records (EHRs) could provide a more accurate estimate of the annual rate of emergency department (ED) visits for suicide attempts in France, as compared to the current national surveillance system based on manual coding by emergency practitioners. A feasibility study was conducted at Lyon University Hospital, using data for all ED patient visits in 2012. After automatic data extraction and pre-processing, including automatic coding of medical free-text through use of the Unified Medical Language System, seven different machine-learning methods were used to classify the reasons for ED visits into "suicide attempts" versus "other reasons". The performance of these different methods was compared by using the F-measure. In a test sample of 444 patients admitted to the ED in 2012 (98 suicide attempts, 48 cases of suicidal ideation, and 292 controls with no recorded non-fatal suicidal behaviour), the F-measure for automatic detection of suicide attempts ranged from 70.4% to 95.3%. The random forest and naïve Bayes methods performed best. This study demonstrates that machine-learning methods can improve the quality of epidemiological indicators as compared to current national surveillance of suicide attempts. Copyright © 2016 John Wiley & Sons, Ltd.
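
    The F-measure used above to compare the seven machine-learning methods is the harmonic mean of precision and recall. A minimal sketch, with made-up labels standing in for the 444-patient test sample:

```python
def f_measure(y_true, y_pred, beta=1.0):
    """F-measure over binary labels; beta=1 gives the usual F1 score."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# 1 = visit classified as "suicide attempt", 0 = "other reasons".
truth = [1, 1, 1, 0, 0, 0, 0, 1]
pred  = [1, 1, 0, 0, 0, 1, 0, 1]
score = f_measure(truth, pred)   # precision 3/4, recall 3/4 -> F1 = 0.75
```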

  3. Evaluation of a Spotted Fever Group Rickettsia Public Health Surveillance System in Tennessee.

    Science.gov (United States)

    Fill, Mary-Margaret A; Moncayo, Abelardo C; Bloch, Karen C; Dunn, John R; Schaffner, William; Jones, Timothy F

    2017-09-01

    Spotted fever group (SFG) rickettsioses are endemic in Tennessee, with ∼2,500 cases reported during 2000-2012. Because of this substantial burden of disease, we performed a three-part evaluation of Tennessee's routine surveillance for SFG rickettsioses cases and deaths to assess the system's effectiveness. Tennessee Department of Health (TDH) SFG rickettsioses surveillance records were matched to three patient series: 1) patients with positive serologic specimens from a commercial reference laboratory during 2010-2011, 2) tertiary medical center patients with positive serologic tests during 2007-2013, and 3) patients identified from death certificates issued during 1995-2014 with SFG rickettsiosis-related causes of death. Chart reviews were performed and patients were classified according to the Council of State and Territorial Epidemiologists' case definition. Of 254 SFG Rickettsia -positive serologic specimens from the reference laboratory, 129 (51%) met the case definition for confirmed or probable cases of rickettsial disease after chart review. The sensitivity of the TDH surveillance system to detect cases was 45%. Of the 98 confirmed or probable cases identified from the medical center, the sensitivity of the TDH surveillance system to detect cases was 34%. Of 27 patients identified by death certificates, 12 (44%) were classified as confirmed or probable cases; four (33%) were reported to TDH, but none were correctly identified as deceased. Cases of SFG rickettsioses were underreported and fatalities not correctly identified. Efforts are needed to improve SFG rickettsiosis surveillance in Tennessee.

  4. Regional Disease Surveillance Meeting - Final Paper

    Energy Technology Data Exchange (ETDEWEB)

    Lesperance, Ann M.; Mahy, Heidi A.

    2006-08-08

    On June 1, 2006, public health officials working in surveillance, epidemiological modeling, and information technology communities from the Seattle/Tacoma area and State of Washington met with members of the Pacific Northwest National Laboratory (PNNL) to discuss the current state of disease surveillance and gaps and needs to improve the current systems. The meeting also included a discussion of PNNL initiatives that might be appropriate to enhance disease surveillance and the current tools being used for disease surveillance. Participants broke out into two groups to identify critical gaps and needs for improving a surveillance system, and discuss the requirements for developing improved surveillance. Each group developed a list of key priorities summarizing the requirements for improved surveillance. The objective of this meeting was to work towards the development of an improved disease surveillance system.

  5. Inappropriate colonoscopic surveillance of hyperplastic polyps.

    LENUS (Irish Health Repository)

    Keane, R A

    2011-11-15

    Colonoscopic surveillance of hyperplastic polyps alone is controversial and may be inappropriate. The colonoscopy surveillance register at a university teaching hospital was audited to determine the extent of such hyperplastic polyp surveillance. The surveillance endoscopy records were reviewed, patients with hyperplastic polyps were identified, their clinical records were examined, and contact was made with each patient. Of the 483 patients undergoing surveillance for colonic polyps, 113 (23%) had hyperplastic polyps alone on last colonoscopy. 104 patients remained after exclusion of those under appropriate surveillance. 87 of the 104 patients (84%) were successfully contacted. 37 patients (8%) were under appropriate colonoscopic surveillance for a significant family history of colorectal carcinoma. 50 (10%) patients with hyperplastic polyps alone and no other clinical indication for colonoscopic surveillance were booked for follow-up colonoscopy. This represents not only a budgetary but, more importantly, a clinical opportunity cost, the removal of which could free up valuable colonoscopy time for more appropriate indications.

  6. National Cardiac Device Surveillance Program Database

    Data.gov (United States)

    Department of Veterans Affairs — The National Cardiac Device Surveillance Program Database supports the Eastern Pacemaker Surveillance Center (EPSC) staff in its function of monitoring some 11,000...

  7. Strengthening Injury Surveillance System in Iran

    Directory of Open Access Journals (Sweden)

    Motevalian Seyed Abbas

    2012-02-01

    Full Text Available Objective: To strengthen the current Injury Surveillance System (IS System) in order to better monitor injury conditions, improve protection, and promote safety. Methods: First we carried out a study to evaluate the frameworks of IS Systems in developed countries. Then all the available documents concerning Iran from the World Health Organization, the Eastern Mediterranean Regional Office, and the Ministry of Health and Medical Education were reviewed. Later, a national stakeholders' consultation was held to collect opinions and views. A national workshop was also held for provincial representatives from 41 universities to identify the barriers and limitations of the existing program and to further strengthen injury surveillance. Results: The evaluation of the current IS System revealed many problems, mainly the lack of an accurate pre- and post-hospital death registry, the need for precise injury data registration in outpatient medical centers, incomplete injury data registration in hospitals, and imprecise definition of variables in the injury registry. The five main characteristics of the current IS System, including flexibility, acceptability, simplicity, usefulness and timeliness, were evaluated as moderate by experts. Conclusions: Major revisions must be considered in the current IS System in Iran. The following elements should be added to the questionnaire: identifier, manner of arrival at the hospital, situation of the injured patient, consumption of alcohol and opioids, other participants involved in the accident, intention, severity and site of injury, side effects of surgery and medication, as well as one-month follow-up results. Data should be collected from 10% of all hospitals in Iran and analyzed every 3 months. Simultaneously, data should be available online to be retrieved by researchers. Key words: Wounds and injuries; Population surveillance; Registries; Iran

  8. Modeling of Food and Nutrition Surveillance in Primary Health Care

    Directory of Open Access Journals (Sweden)

    Santuzza Arreguy Silva VITORINO

    Full Text Available Objective: To describe the modeling stages of food and nutrition surveillance in the Primary Health Care of the Unified Health Care System, considering its activities, objectives, and goals. Methods: Document analysis and semi-structured interviews were used to identify the components, describe the intervention, and identify potential assessment users. Results: The results include identification of the objectives and goals of the intervention, the required inputs, activities, and expected effects. The intervention was then modeled based on these data. The use of the theoretical logic model optimizes time and resources, the definition of the indicators that require monitoring, and the aspects that require assessment, identifying more clearly the contribution of the intervention to the results. Conclusion: Modeling enabled the description of food and nutrition surveillance based on its components and may guide the development of viable plans to monitor food and nutrition surveillance actions, so that modeling can be established as a local intersectoral planning instrument.

  9. Child injury surveillance capabilities in NSW: informing policy and practice

    Directory of Open Access Journals (Sweden)

    Rebecca Mitchell

    2017-10-01

    Full Text Available Injury is one of the most common reasons why a child is hospitalised. Information gained from injury surveillance activities provides an estimate of the injury burden, describes injury event circumstances, can be used to monitor injury trends over time, and is used to design and evaluate injury prevention activities. This perspective article provides an overview of child injury surveillance capabilities within New South Wales (NSW, Australia, following a stocktake of population-based injury-related data collections using the Evaluation Framework for Injury Surveillance Systems. Information about childhood injury in NSW is obtained from multiple administrative data collections that were not specifically designed to conduct injury surveillance. Obtaining good information for child injury surveillance in NSW will involve better coordination of information from agencies that record information about childhood injury. Regular reporting about childhood injury to provide a comprehensive profile of injuries of children and young people in the state should be considered, along with the provision and/or linkage of child injury information from multiple data collections. This could support the development of a suite of injury performance indicators to monitor childhood injury reduction strategies across NSW.

  10. Epidemic Intelligence. Langmuir and the Birth of Disease Surveillance

    Directory of Open Access Journals (Sweden)

    Lyle Fearnley

    2010-12-01

    Full Text Available In the wake of the SARS and influenza epidemics of the past decade, one public health solution has become a refrain: surveillance systems for detection of disease outbreaks. This paper is an effort to understand how disease surveillance for outbreak detection gained such paramount rationality in contemporary public health. The epidemiologist Alexander Langmuir is well known as the creator of modern disease surveillance. But less well known is how he imagined disease surveillance as one part of what he called “epidemic intelligence.” Langmuir developed the practice of disease surveillance during an unprecedented moment in which the threat of biological warfare brought civil defense experts and epidemiologists together around a common problem. In this paper, I describe how Langmuir navigated this world, experimenting with new techniques and rationales of epidemic control. Ultimately, I argue, Langmuir’s experiments resulted in a set of techniques and infrastructures – a system of epidemic intelligence – that transformed the epidemic as an object of human art.

  11. Web-based infectious disease surveillance systems and public health perspectives: a systematic review

    Directory of Open Access Journals (Sweden)

    Jihye Choi

    2016-12-01

    Full Text Available Abstract Background Emerging and re-emerging infectious diseases are a significant public health concern, and early detection and immediate response is crucial for disease control. These challenges have led to the need for new approaches and technologies to reinforce the capacity of traditional surveillance systems for detecting emerging infectious diseases. In the last few years, the availability of novel web-based data sources has contributed substantially to infectious disease surveillance. This study explores the burgeoning field of web-based infectious disease surveillance systems by examining their current status, importance, and potential challenges. Methods A systematic review framework was applied to the search, screening, and analysis of web-based infectious disease surveillance systems. We searched PubMed, Web of Science, and Embase databases to extensively review the English literature published between 2000 and 2015. Eleven surveillance systems were chosen for evaluation according to their high frequency of application. Relevant terms, including newly coined terms, development and classification of the surveillance systems, and various characteristics associated with the systems were studied. Results Based on a detailed and informative review of the 11 web-based infectious disease surveillance systems, it was evident that these systems exhibited clear strengths, as compared to traditional surveillance systems, but with some limitations yet to be overcome. The major strengths of the newly emerging surveillance systems are that they are intuitive, adaptable, low-cost, and operated in real-time, all of which are necessary features of an effective public health tool. The most apparent potential challenges of the web-based systems are those of inaccurate interpretation and prediction of health status, and privacy issues, based on an individual’s internet activity. Conclusion Despite being in a nascent stage with further modification

  12. Surveillance by diagnostic microbiology laboratories

    African Journals Online (AJOL)

    account for almost three-quarters of all Acinetobacter baumannii bloodstream infections, supporting the decision to include colistin or tobramycin as empirical treatment options for ICU patients with suspected Gram-negative sepsis. The dissemination and utilisation of surveillance data is crucial if they are to impact on patient ...

  13. Symbolic power, robotting, and surveilling

    DEFF Research Database (Denmark)

    Skovsmose, Ole

    2012-01-01

    describes as it prioritises is discussed with reference to robotting and surveillance. In general, the symbolic power of mathematics and formal languages is summarised through the observations: that mathematics treats parts and properties as autonomous, that it dismembers what it addresses and destroys...

  14. A Constrained Multi-Objective Learning Algorithm for Feed-Forward Neural Network Classifiers

    Directory of Open Access Journals (Sweden)

    M. Njah

    2017-06-01

    Full Text Available This paper proposes a new approach to the optimal design of a feed-forward neural network (FNN) based classifier. The originality of the proposed methodology, called CMOA, lies in the use of a new constraint-handling technique based on a self-adaptive penalty procedure, in order to direct the entire search effort towards finding only acceptable Pareto-optimal solutions. Neurons and connections of the FNN classifier are dynamically built during the learning process. The approach uses differential evolution to create new individuals and then keeps only the non-dominated ones as the basis for the next generation. The designed FNN classifier is applied to six binary classification benchmark problems obtained from the UCI repository, and results indicated the advantages of the proposed approach over other existing multi-objective evolutionary neural network classifiers reported recently in the literature.
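
    The step of keeping only the non-dominated individuals can be illustrated with a minimal Pareto filter. The objective pairs below (classification error, network complexity) are invented for illustration; CMOA's actual objectives and constraint handling are richer than this sketch.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Keep only the Pareto-optimal objective vectors, as the algorithm keeps
    non-dominated individuals as the basis for the next generation."""
    return [p for i, p in enumerate(population)
            if not any(dominates(q, p) for j, q in enumerate(population) if j != i)]

# Hypothetical (error, complexity) pairs for four candidate networks.
pop = [(0.10, 12), (0.08, 20), (0.15, 8), (0.12, 30)]
front = non_dominated(pop)   # (0.12, 30) is dominated by (0.10, 12)
```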

  15. Approaches to canine health surveillance.

    Science.gov (United States)

    O'Neill, Dan G; Church, David B; McGreevy, Paul D; Thomson, Peter C; Brodbelt, Dave C

    2014-01-01

    Effective canine health surveillance systems can be used to monitor disease in the general population, prioritise disorders for strategic control and focus clinical research, and to evaluate the success of these measures. The key attributes for optimal data collection systems that support canine disease surveillance are representativeness of the general population, validity of disorder data and sustainability. Limitations in these areas present as selection bias, misclassification bias and discontinuation of the system respectively. Canine health data sources are reviewed to identify their strengths and weaknesses for supporting effective canine health surveillance. Insurance data benefit from large and well-defined denominator populations but are limited by selection bias relating to the clinical events claimed and animals covered. Veterinary referral clinical data offer good reliability for diagnoses but are limited by referral bias for the disorders and animals included. Primary-care practice data have the advantage of excellent representation of the general dog population and recording at the point of care by veterinary professionals but may encounter misclassification problems and technical difficulties related to management and analysis of large datasets. Questionnaire surveys offer speed and low cost but may suffer from low response rates, poor data validation, recall bias and ill-defined denominator population information. Canine health scheme data benefit from well-characterised disorder and animal data but reflect selection bias during the voluntary submissions process. Formal UK passive surveillance systems are limited by chronic under-reporting and selection bias. It is concluded that active collection systems using secondary health data provide the optimal resource for canine health surveillance.

  16. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1995-02-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the U.S. Department of Energy (DOE). This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at the Hanford Site and surrounding communities. The responsibility for monitoring onsite drinking water falls outside the scope of the SESP. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control, and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site. Sampling is indicated as annual, semi-annual, quarterly, or monthly in the sampling schedule. Some samples are collected and analyzed as part of ground-water monitoring and characterization programs at Hanford (e.g. the Resource Conservation and Recovery Act (RCRA), the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), or Operational programs). The number of samples planned by other programs is identified in the sampling schedule by a number in the analysis column and a project designation in the Cosample column. Well sampling events may be merged to avoid redundancy in cases where sampling is planned by both environmental surveillance and another program.

  18. Sunglass detection method for automation of video surveillance system

    Science.gov (United States)

    Sikandar, Tasriva; Samsudin, Wan Nur Azhani W.; Hawari Ghazali, Kamarul; Mohd, Izzeldin I.; Fazle Rabbi, Mohammad

    2018-04-01

    Wearing sunglasses to hide the face from surveillance cameras is common in criminal incidents. Therefore, sunglass detection from surveillance video has become a pressing issue in the automation of security systems. In this paper, we propose an image processing method to detect sunglasses in surveillance images. Specifically, a unique feature based on facial height and width is employed to identify the covered region of the face. The presence of an area covered by sunglasses is evaluated using the facial height-width ratio, and a threshold on the covered-area percentage is used to classify a glass-wearing face. Two different types of glasses are considered, i.e., eyeglasses and sunglasses. The results of this study demonstrate that the proposed method is able to detect sunglasses under two different illumination conditions: room illumination and natural sunlight. In addition, owing to the multi-level checking of the facial region, the method achieved 100% accuracy in detecting sunglasses. However, in the exceptional case where fabric surrounding the face has a color similar to skin, the correct detection rate for eyeglasses was 93.33%.
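
    The covered-area threshold rule described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the pixel-darkness cutoff and the two area thresholds (`glass_t`, `sunglass_t`) are assumed values, and the eye-region extraction from facial height and width is omitted.

```python
def covered_fraction(region):
    """Fraction of pixels in the eye region considered 'covered' (dark).

    `region` is a 2-D list of grayscale values; 60 is an assumed darkness cutoff.
    """
    dark = sum(1 for row in region for px in row if px < 60)
    total = sum(len(row) for row in region)
    return dark / total if total else 0.0

def classify_eyewear(region, glass_t=0.25, sunglass_t=0.60):
    """Return 'none', 'eyeglasses', or 'sunglasses' from the covered-area ratio."""
    f = covered_fraction(region)
    if f >= sunglass_t:
        return "sunglasses"
    if f >= glass_t:
        return "eyeglasses"
    return "none"
```

    A mostly dark eye region classifies as sunglasses; a thin dark band (frames only) falls between the two thresholds and classifies as eyeglasses.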

  19. Using multivariate machine learning methods and structural MRI to classify childhood onset schizophrenia and healthy controls

    Directory of Open Access Journals (Sweden)

    Deanna eGreenstein

    2012-06-01

    Full Text Available Introduction: Multivariate machine learning methods can be used to classify groups of schizophrenia patients and controls using structural magnetic resonance imaging (MRI). However, machine learning methods to date have not been extended beyond classification and contemporaneously applied in a meaningful way to clinical measures. We hypothesized that brain measures would classify groups, and that increased likelihood of being classified as a patient using regional brain measures would be positively related to illness severity, developmental delays, and genetic risk. Methods: Using 74 anatomic brain MRI subregions and Random Forest, we classified 98 childhood onset schizophrenia (COS) patients and 99 age-, sex-, and ethnicity-matched healthy controls. We also used Random Forest to determine the likelihood of being classified as a schizophrenia patient based on MRI measures. We then explored relationships between brain-based probability of illness and symptoms, premorbid development, and presence of copy number variation (CNV) associated with schizophrenia. Results: Brain regions jointly classified COS and control groups with 73.7% accuracy. Greater brain-based probability of illness was associated with worse functioning (p = 0.0004) and fewer developmental delays (p = 0.02). Presence of CNV was associated with lower probability of being classified as schizophrenia (p = 0.001). The regions that were most important in classifying groups included the left temporal lobes, bilateral dorsolateral prefrontal regions, and left medial parietal lobes. Conclusions: Schizophrenia and control groups can be well classified using Random Forest and anatomic brain measures, and brain-based probability of illness has a positive relationship with illness severity and a negative relationship with developmental delays/problems and CNV-based risk.
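
    The second step of the analysis above, relating a classifier's predicted probability of illness to clinical scores, can be sketched in a few lines. The Random Forest itself is omitted here, and the data and the use of Pearson correlation are illustrative assumptions of this sketch, not taken from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# probs: a model's predicted probability of being a patient (e.g. from
# predict_proba of an ensemble classifier); severity: a clinical score.
probs = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
severity = [8, 7, 7, 3, 2, 1]
r = pearson(probs, severity)  # strongly positive, mirroring the reported relationship
```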

  20. Text analysis methods, text analysis apparatuses, and articles of manufacture

    Science.gov (United States)

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
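
    One simple way to operationalize "identifying a presence of a new topic", not necessarily the patented method, is to compare a new document's term vector against documents of the known topics and flag it when it resembles none of them. The bag-of-words representation and the similarity threshold are assumptions of this sketch.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors (Counters)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def is_new_topic(doc, topic_docs, threshold=0.2):
    """Flag `doc` as a new topic if it is dissimilar to every known topic document."""
    vec = Counter(doc.lower().split())
    sims = [cosine(vec, Counter(t.lower().split())) for t in topic_docs]
    return max(sims, default=0.0) < threshold
```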

  1. Classifying network attack scenarios using an ontology

    CSIR Research Space (South Africa)

    Van Heerden, RP

    2012-03-01

    Full Text Available ) or to the target?s reputation. The Residue sub-phase refers to damage or artefacts of the attack that occur after the attack goal has been achieved, and occurs because the attacker loses control of some systems. For example after the launch of a DDOS..., A. (1995). Hacking theft of $10 million from citibank revealed. Retrieved 10/10, 2011, from http://articles.latimes.com/1995-08-19/business/fi-36656_1_citibank-system Hurley, E. (2004). SCO site succumbs to DDoS attack. Retrieved 10/10, 2011, from...

  2. Classroom Texting in College Students

    Science.gov (United States)

    Pettijohn, Terry F.; Frazier, Erik; Rieser, Elizabeth; Vaughn, Nicholas; Hupp-Wilds, Bobbi

    2015-01-01

    A 21-item survey on texting in the classroom was given to 235 college students. Overall, 99.6% of students owned a cellphone and 98% texted daily. Of the 138 students who texted in the classroom, most texted friends or significant others, and indicated that the reason for classroom texting was boredom or work. Students who texted sent a mean of 12.21…

  3. Surveillance in the Information Age: Text Quantification, Anomaly Detection, and Empirical Evaluation

    Science.gov (United States)

    Lu, Hsin-Min

    2010-01-01

    Deep penetration of personal computers, data communication networks, and the Internet has created a massive platform for data collection, dissemination, storage, and retrieval. Large amounts of textual data are now available at a very low cost. Valuable information, such as consumer preferences, new product developments, trends, and opportunities,…

  4. Syndromic surveillance: hospital emergency department participation during the Kentucky Derby Festival.

    Science.gov (United States)

    Carrico, Ruth; Goss, Linda

    2005-01-01

    Electronic syndromic surveillance may have value in detecting emerging pathogens or a biological weapons release. Hospitals that have an agile process for evaluating the chief complaints of patients seeking emergency care may be able to identify subtle changes in the community's health more quickly. An easily adaptable prototype system was developed to monitor emergency department patient visits during the Kentucky Derby Festival in Louisville, Kentucky, from April 16-May 14, 2002. Use of the system was continued during the same festival periods in 2003 and 2004. Twelve area hospitals in Louisville, Kentucky, participated in a prospective analysis of the chief symptoms of patients who sought emergency department care during the 2002 Kentucky Derby Festival. Six hospitals were classified as computer record group (CRG) hospitals and used their existing computerized record capabilities. The other 6 hospitals used a personal digital assistant (PDA) with customized software (PDA group). Data were evaluated by the health department epidemiologist using SaTScan, a modified version of a cancer cluster detection program, to look for clusters of cases above baseline over time and by Zip code. All 12 hospitals were able to collect and provide data elements during the study period. The 6 CRG hospitals were able to perform daily data transmission; however, 3 CRG hospitals were unable to interpret their data because it was transmitted in plain text format. In contrast, data from all 6 PDA group hospitals were interpretable. Real-time data analysis was compared with post-event data, and the real-time evaluation correctly identified no unusual disease activity during the study period. The 12 hospitals participating in this study demonstrated that community-wide surveillance using computerized data was possible and that the 6 study hospitals using a PDA could quickly interpret emergency department patients' chief complaints.
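
    A minimal stand-in for the cluster-detection idea (flagging case counts above baseline per Zip code) can be sketched as below. This is not SaTScan, which uses scan statistics over space and time; it is just a normal-approximation exceedance check on Poisson-like counts to illustrate the principle, and the 3-sigma threshold is an assumption.

```python
import math

def flag_anomalies(baseline_mean, today_counts):
    """Flag Zip codes whose daily count exceeds mean + 3*sqrt(mean).

    Uses the normal approximation to a Poisson baseline: the standard
    deviation of a Poisson count equals the square root of its mean.
    """
    flags = {}
    for zip_code, n in today_counts.items():
        mu = baseline_mean.get(zip_code, 0.0)
        flags[zip_code] = n > mu + 3.0 * math.sqrt(mu)
    return flags
```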

  5. Preferential sampling in veterinary parasitological surveillance

    Directory of Open Access Journals (Sweden)

    Lorenzo Cecconi

    2016-04-01

    Full Text Available In parasitological surveillance of livestock, prevalence surveys are conducted on a sample of farms using several sampling designs; opportunistic surveys and informative sampling designs, for example, are very common. Preferential sampling refers to any situation in which the spatial process and the sampling locations are not independent. Most examples of preferential sampling in the spatial statistics literature come from environmental statistics, with a focus on pollutant monitors, and it has been shown that, if preferential sampling is present and is not accounted for in the statistical modelling and data analysis, statistical inference can be misleading. In this paper, working in the context of veterinary parasitology, we propose and use geostatistical models to predict the continuous and spatially-varying risk of a parasite infection. Specifically, breaking with the common practice in veterinary parasitological surveillance of ignoring preferential sampling even though informative or opportunistic samples are very common, we specify a two-stage hierarchical Bayesian model that adjusts for preferential sampling, and we apply it to data on Fasciola hepatica infection in sheep farms in the Campania region (Southern Italy) in the years 2013-2014.

  6. Robust Behavior Recognition in Intelligent Surveillance Environments

    Directory of Open Access Journals (Sweden)

    Ganbayar Batchuluun

    2016-06-01

    Full Text Available Intelligent surveillance systems have been studied by many researchers. These systems should operate in both daytime and nighttime, but objects are invisible in images captured by a visible light camera during the night. Therefore, near infrared (NIR) cameras and thermal cameras (based on medium-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) light) have been considered as alternatives for nighttime use. Because the system must operate in both daytime and nighttime, and because NIR cameras require an additional NIR illuminator (which would have to illuminate a wide area over a great distance) during the night, a dual system of visible light and thermal cameras is used in our research, and we propose a new behavior recognition method for intelligent surveillance environments. Twelve datasets were compiled by collecting data in various environments, and they were used to obtain experimental results. The recognition accuracy of our method was found to be 97.6%, confirming its ability to outperform previous methods.

  7. Chinese social media analysis for disease surveillance

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [Wuhan Univ., Wuhan (China); Yang, Nanhai [Wuhan Univ., Wuhan (China); Wang, Zhibo [Wuhan Univ., Wuhan (China); East China Institute of Technology, Nanchang (China); Hu, Cheng [Wuhan Univ., Wuhan (China); Zhu, Weiping [Wuhan Univ., Wuhan (China); Li, Hanjie [Wuhan Univ., Wuhan (China); Ji, Yujie [Wuhan Univ., Wuhan (China); Liu, Cheng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-09-11

    Here, it is reported that there are hundreds of thousands of deaths caused by seasonal flu around the world every year. Other diseases, such as chickenpox and malaria, are also serious threats to people's physical and mental health. There are 250,000–500,000 such deaths every year around the world. Therefore, proper techniques for disease surveillance are in high demand. Recently, social media analysis has been regarded as an efficient way to achieve this goal, which is feasible since a growing number of people post their health information on social media such as blogs and personal websites. Previous work on social media analysis mainly focused on English materials and hardly considered Chinese materials, which hinders the application of such techniques to Chinese people. In this paper, we propose a new method of Chinese social media analysis for disease surveillance. More specifically, we compared different classification methods and then proposed a new way to process Chinese text data. Chinese Sina micro-blog data collected from September to December 2013 are used to validate the effectiveness of the proposed method. The results show that a high average classification precision of 87.49% was obtained. Compared with data from the authority, the Chinese National Influenza Center, we can predict the outbreak time of flu 5 days earlier.
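
    The kind of text classifier compared in work like this can be illustrated with a segmentation-free character-bigram Naive Bayes, which handles Chinese without a word segmenter. This is a generic sketch, not the authors' method; the training snippets and labels below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def bigrams(text):
    """Character bigrams: a simple segmentation-free feature for Chinese text."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

class NaiveBayes:
    """Multinomial Naive Bayes over character bigrams with add-one smoothing."""

    def fit(self, docs, labels):
        self.counts = defaultdict(Counter)  # per-class bigram counts
        self.totals = Counter()             # per-class total bigram count
        self.priors = Counter(labels)       # per-class document count
        self.vocab = set()
        for doc, y in zip(docs, labels):
            feats = bigrams(doc)
            self.counts[y].update(feats)
            self.totals[y] += len(feats)
            self.vocab.update(feats)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        v = len(self.vocab)
        n_docs = sum(self.priors.values())
        for y in self.priors:
            lp = math.log(self.priors[y] / n_docs)
            for f in bigrams(doc):
                lp += math.log((self.counts[y][f] + 1) / (self.totals[y] + v))
            if lp > best_lp:
                best, best_lp = y, lp
        return best
```

    Trained on a handful of micro-blog-style snippets, the classifier separates flu-related posts from everyday chatter via shared bigrams such as 发烧 ("fever").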

  8. Environmental health surveillance system; Kankyo hoken surveillance system

    Energy Technology Data Exchange (ETDEWEB)

    Ono, M. [National Inst. for Environmental Studies, Tsukuba (Japan)

    1998-02-01

    The Central Environmental Pollution Prevention Council pointed out the necessity of establishing an environmental health surveillance system (hereinafter referred to as the System) in its 1986 report "On the first type district specified by the Environmental Pollution Caused Health Damages Compensation Act." A study team, established in the Environment Agency, has been discussing the establishment of the System since 1986. This paper outlines the System and some of the pilot surveillance results. The System is aimed not at elucidating the cause-effect relationships between health and air pollution but at discovering problems: these relationships in a district population are monitored periodically and continuously from long-term and prospective viewpoints, in order to help take necessary measures at an early stage. The System is now collecting data on chronic obstructive lung diseases on a nationwide scale through health examinations of 3-year-old and preschool children and daily air pollution monitoring. 6 refs., 3 figs., 1 tab.

  9. Integrating air-related health surveillance into air quality management: perceptions and practicalities

    CSIR Research Space (South Africa)

    Wright, C

    2012-06-01

    Full Text Available Health surveillance is presently not an integral part of air quality management in South Africa, although ambient air pollution standards are derived from the health effects of personal exposure. In a survey of air quality officials and environmental...

  10. Validation of vertical refractivity profiles as required for performance prediction of coastal surveillance radars

    CSIR Research Space (South Africa)

    Naicker, K

    2011-04-01

    Full Text Available Maritime border safeguarding is a vital component in the protection of a country's resources and interests against illegal activities. With the increasingly asymmetric nature of today's threats, a primary requirement of any coastal surveillance system...

  11. Surveillance Patterns After Curative-Intent Colorectal Cancer Surgery in Ontario

    Directory of Open Access Journals (Sweden)

    Jensen Tan

    2014-01-01

    Full Text Available BACKGROUND: Postoperative surveillance following curative-intent resection of colorectal cancer (CRC) is performed variably, owing to differences among existing guidelines and to the limited data supporting different strategies.

  12. Using a data fusion-based activity recognition framework to determine surveillance system requirements

    CSIR Research Space (South Africa)

    Le Roux, WH

    2007-07-01

    Full Text Available A technique is proposed to extract system requirements for a maritime area surveillance system, based on an activity recognition framework originally intended for the characterisation, prediction and recognition of intentional actions for threat...

  13. Surveillance of surgical site infection after cholecystectomy using the hospital in Europe link for infection control through surveillance protocol.

    Science.gov (United States)

    Bogdanic, Branko; Bosnjak, Zrinka; Budimir, Ana; Augustin, Goran; Milosevic, Milan; Plecko, Vanda; Kalenic, Smilja; Fiolic, Zlatko; Vanek, Maja

    2013-06-01

    The third most common healthcare-associated infection is surgical site infection (SSI), accounting for 14%-16% of infections. These SSIs are associated with high morbidity, numerous deaths, and greater cost. A prospective study was conducted to assess the incidence of SSI in a single university hospital in Croatia. We used the Hospital in Europe Link for Infection Control through Surveillance (HELICS) protocol for surveillance. The SSIs were classified using the standard definition of the National Nosocomial Infections Surveillance (NNIS) system. The overall incidence of SSI was 1.44%. The incidence of infection in the open cholecystectomy group was 6.06%, whereas in the laparoscopic group it was only 0.60%. The incidence density of in-hospital SSIs per 1,000 post-operative days was 5.76. Patients who underwent laparoscopic cholecystectomy were significantly younger (53.65±14.65 vs. 64.42±14.17 years; p<…). The HELICS protocol is a useful concept for the monitoring of SSI, but in the case of cholecystectomy, additional factors such as antibiotic appropriateness, gallbladder entry, empyema of the gallbladder, and obstructive jaundice must be considered.

  14. Observation of [Formula: see text] and [Formula: see text] decays.

    Science.gov (United States)

    Aaij, R; Adeva, B; Adinolfi, M; Ajaltouni, Z; Akar, S; Albrecht, J; Alessio, F; Alexander, M; Ali, S; Alkhazov, G; Alvarez Cartelle, P; Alves, A A; Amato, S; Amerio, S; Amhis, Y; An, L; Anderlini, L; Andreassi, G; Andreotti, M; Andrews, J E; Appleby, R B; Archilli, F; d'Argent, P; Arnau Romeu, J; Artamonov, A; Artuso, M; Aslanides, E; Auriemma, G; Baalouch, M; Babuschkin, I; Bachmann, S; Back, J J; Badalov, A; Baesso, C; Baker, S; Baldini, W; Barlow, R J; Barschel, C; Barsuk, S; Barter, W; Baszczyk, M; Batozskaya, V; Batsukh, B; Battista, V; Bay, A; Beaucourt, L; Beddow, J; Bedeschi, F; Bediaga, I; Bel, L J; Bellee, V; Belloli, N; Belous, K; Belyaev, I; Ben-Haim, E; Bencivenni, G; Benson, S; Benton, J; Berezhnoy, A; Bernet, R; Bertolin, A; Betancourt, C; Betti, F; Bettler, M-O; van Beuzekom, M; Bezshyiko, Ia; Bifani, S; Billoir, P; Bird, T; Birnkraut, A; Bitadze, A; Bizzeti, A; Blake, T; Blanc, F; Blouw, J; Blusk, S; Bocci, V; Boettcher, T; Bondar, A; Bondar, N; Bonivento, W; Bordyuzhin, I; Borgheresi, A; Borghi, S; Borisyak, M; Borsato, M; Bossu, F; Boubdir, M; Bowcock, T J V; Bowen, E; Bozzi, C; Braun, S; Britsch, M; Britton, T; Brodzicka, J; Buchanan, E; Burr, C; Bursche, A; Buytaert, J; Cadeddu, S; Calabrese, R; Calvi, M; Calvo Gomez, M; Camboni, A; Campana, P; Campora Perez, D H; Capriotti, L; Carbone, A; Carboni, G; Cardinale, R; Cardini, A; Carniti, P; Carson, L; Carvalho Akiba, K; Casse, G; Cassina, L; Castillo Garcia, L; Cattaneo, M; Cauet, Ch; Cavallero, G; Cenci, R; Charles, M; Charpentier, Ph; Chatzikonstantinidis, G; Chefdeville, M; Chen, S; Cheung, S-F; Chobanova, V; Chrzaszcz, M; Cid Vidal, X; Ciezarek, G; Clarke, P E L; Clemencic, M; Cliff, H V; Closier, J; Coco, V; Cogan, J; Cogneras, E; Cogoni, V; Cojocariu, L; Collazuol, G; Collins, P; Comerma-Montells, A; Contu, A; Cook, A; Coombs, G; Coquereau, S; Corti, G; Corvo, M; Costa Sobral, C M; Couturier, B; Cowan, G A; Craik, D C; Crocombe, A; Cruz Torres, M; Cunliffe, S; Currie, R; D'Ambrosio, C; 
Da Cunha Marinho, F; Dall'Occo, E; Dalseno, J; David, P N Y; Davis, A; De Aguiar Francisco, O; De Bruyn, K; De Capua, S; De Cian, M; De Miranda, J M; De Paula, L; De Serio, M; De Simone, P; Dean, C-T; Decamp, D; Deckenhoff, M; Del Buono, L; Demmer, M; Dendek, A; Derkach, D; Deschamps, O; Dettori, F; Dey, B; Di Canto, A; Dijkstra, H; Dordei, F; Dorigo, M; Dosil Suárez, A; Dovbnya, A; Dreimanis, K; Dufour, L; Dujany, G; Dungs, K; Durante, P; Dzhelyadin, R; Dziurda, A; Dzyuba, A; Déléage, N; Easo, S; Ebert, M; Egede, U; Egorychev, V; Eidelman, S; Eisenhardt, S; Eitschberger, U; Ekelhof, R; Eklund, L; Ely, S; Esen, S; Evans, H M; Evans, T; Falabella, A; Farley, N; Farry, S; Fay, R; Fazzini, D; Ferguson, D; Fernandez Prieto, A; Ferrari, F; Ferreira Rodrigues, F; Ferro-Luzzi, M; Filippov, S; Fini, R A; Fiore, M; Fiorini, M; Firlej, M; Fitzpatrick, C; Fiutowski, T; Fleuret, F; Fohl, K; Fontana, M; Fontanelli, F; Forshaw, D C; Forty, R; Franco Lima, V; Frank, M; Frei, C; Fu, J; Furfaro, E; Färber, C; Gallas Torreira, A; Galli, D; Gallorini, S; Gambetta, S; Gandelman, M; Gandini, P; Gao, Y; Garcia Martin, L M; García Pardiñas, J; Garra Tico, J; Garrido, L; Garsed, P J; Gascon, D; Gaspar, C; Gavardi, L; Gazzoni, G; Gerick, D; Gersabeck, E; Gersabeck, M; Gershon, T; Ghez, Ph; Gianì, S; Gibson, V; Girard, O G; Giubega, L; Gizdov, K; Gligorov, V V; Golubkov, D; Golutvin, A; Gomes, A; Gorelov, I V; Gotti, C; Govorkova, E; Grabalosa Gándara, M; Graciani Diaz, R; Granado Cardoso, L A; Graugés, E; Graverini, E; Graziani, G; Grecu, A; Griffith, P; Grillo, L; Gruberg Cazon, B R; Grünberg, O; Gushchin, E; Guz, Yu; Gys, T; Göbel, C; Hadavizadeh, T; Hadjivasiliou, C; Haefeli, G; Haen, C; Haines, S C; Hall, S; Hamilton, B; Han, X; Hansmann-Menzemer, S; Harnew, N; Harnew, S T; Harrison, J; Hatch, M; He, J; Head, T; Heister, A; Hennessy, K; Henrard, P; Henry, L; Hernando Morata, J A; van Herwijnen, E; Heß, M; Hicheur, A; Hill, D; Hombach, C; Hopchev, H; Hulsbergen, W; Humair, T; Hushchyn, 
M; Hussain, N; Hutchcroft, D; Idzik, M; Ilten, P; Jacobsson, R; Jaeger, A; Jalocha, J; Jans, E; Jawahery, A; Jiang, F; John, M; Johnson, D; Jones, C R; Joram, C; Jost, B; Jurik, N; Kandybei, S; Kanso, W; Karacson, M; Kariuki, J M; Karodia, S; Kecke, M; Kelsey, M; Kenyon, I R; Kenzie, M; Ketel, T; Khairullin, E; Khanji, B; Khurewathanakul, C; Kirn, T; Klaver, S; Klimaszewski, K; Koliiev, S; Kolpin, M; Komarov, I; Koopman, R F; Koppenburg, P; Kosmyntseva, A; Kozachuk, A; Kozeiha, M; Kravchuk, L; Kreplin, K; Kreps, M; Krokovny, P; Kruse, F; Krzemien, W; Kucewicz, W; Kucharczyk, M; Kudryavtsev, V; Kuonen, A K; Kurek, K; Kvaratskheliya, T; Lacarrere, D; Lafferty, G; Lai, A; Lanfranchi, G; Langenbruch, C; Latham, T; Lazzeroni, C; Le Gac, R; van Leerdam, J; Lees, J-P; Leflat, A; Lefrançois, J; Lefèvre, R; Lemaitre, F; Lemos Cid, E; Leroy, O; Lesiak, T; Leverington, B; Li, Y; Likhomanenko, T; Lindner, R; Linn, C; Lionetto, F; Liu, B; Liu, X; Loh, D; Longstaff, I; Lopes, J H; Lucchesi, D; Lucio Martinez, M; Luo, H; Lupato, A; Luppi, E; Lupton, O; Lusiani, A; Lyu, X; Machefert, F; Maciuc, F; Maev, O; Maguire, K; Malde, S; Malinin, A; Maltsev, T; Manca, G; Mancinelli, G; Manning, P; Maratas, J; Marchand, J F; Marconi, U; Marin Benito, C; Marino, P; Marks, J; Martellotti, G; Martin, M; Martinelli, M; Martinez Santos, D; Martinez Vidal, F; Martins Tostes, D; Massacrier, L M; Massafferri, A; Matev, R; Mathad, A; Mathe, Z; Matteuzzi, C; Mauri, A; Maurin, B; Mazurov, A; McCann, M; McCarthy, J; McNab, A; McNulty, R; Meadows, B; Meier, F; Meissner, M; Melnychuk, D; Merk, M; Merli, A; Michielin, E; Milanes, D A; Minard, M-N; Mitzel, D S; Mogini, A; Molina Rodriguez, J; Monroy, I A; Monteil, S; Morandin, M; Morawski, P; Mordà, A; Morello, M J; Moron, J; Morris, A B; Mountain, R; Muheim, F; Mulder, M; Mussini, M; Müller, D; Müller, J; Müller, K; Müller, V; Naik, P; Nakada, T; Nandakumar, R; Nandi, A; Nasteva, I; Needham, M; Neri, N; Neubert, S; Neufeld, N; Neuner, M; Nguyen, A D; 
Nguyen, T D; Nguyen-Mau, C; Nieswand, S; Niet, R; Nikitin, N; Nikodem, T; Novoselov, A; O'Hanlon, D P; Oblakowska-Mucha, A; Obraztsov, V; Ogilvy, S; Oldeman, R; Onderwater, C J G; Otalora Goicochea, J M; Otto, A; Owen, P; Oyanguren, A; Pais, P R; Palano, A; Palombo, F; Palutan, M; Panman, J; Papanestis, A; Pappagallo, M; Pappalardo, L L; Parker, W; Parkes, C; Passaleva, G; Pastore, A; Patel, G D; Patel, M; Patrignani, C; Pearce, A; Pellegrino, A; Penso, G; Pepe Altarelli, M; Perazzini, S; Perret, P; Pescatore, L; Petridis, K; Petrolini, A; Petrov, A; Petruzzo, M; Picatoste Olloqui, E; Pietrzyk, B; Pikies, M; Pinci, D; Pistone, A; Piucci, A; Playfer, S; Plo Casasus, M; Poikela, T; Polci, F; Poluektov, A; Polyakov, I; Polycarpo, E; Pomery, G J; Popov, A; Popov, D; Popovici, B; Poslavskii, S; Potterat, C; Price, E; Price, J D; Prisciandaro, J; Pritchard, A; Prouve, C; Pugatch, V; Puig Navarro, A; Punzi, G; Qian, W; Quagliani, R; Rachwal, B; Rademacker, J H; Rama, M; Ramos Pernas, M; Rangel, M S; Raniuk, I; Ratnikov, F; Raven, G; Redi, F; Reichert, S; Dos Reis, A C; Remon Alepuz, C; Renaudin, V; Ricciardi, S; Richards, S; Rihl, M; Rinnert, K; Rives Molina, V; Robbe, P; Rodrigues, A B; Rodrigues, E; Rodriguez Lopez, J A; Rodriguez Perez, P; Rogozhnikov, A; Roiser, S; Rollings, A; Romanovskiy, V; Romero Vidal, A; Ronayne, J W; Rotondo, M; Rudolph, M S; Ruf, T; Ruiz Valls, P; Saborido Silva, J J; Sadykhov, E; Sagidova, N; Saitta, B; Salustino Guimaraes, V; Sanchez Mayordomo, C; Sanmartin Sedes, B; Santacesaria, R; Santamarina Rios, C; Santimaria, M; Santovetti, E; Sarti, A; Satriano, C; Satta, A; Saunders, D M; Savrina, D; Schael, S; Schellenberg, M; Schiller, M; Schindler, H; Schlupp, M; Schmelling, M; Schmelzer, T; Schmidt, B; Schneider, O; Schopper, A; Schubert, K; Schubiger, M; Schune, M-H; Schwemmer, R; Sciascia, B; Sciubba, A; Semennikov, A; Sergi, A; Serra, N; Serrano, J; Sestini, L; Seyfert, P; Shapkin, M; Shapoval, I; Shcheglov, Y; Shears, T; Shekhtman, L; 
Shevchenko, V; Siddi, B G; Silva Coutinho, R; Silva de Oliveira, L; Simi, G; Simone, S; Sirendi, M; Skidmore, N; Skwarnicki, T; Smith, E; Smith, I T; Smith, J; Smith, M; Snoek, H; Sokoloff, M D; Soler, F J P; Souza De Paula, B; Spaan, B; Spradlin, P; Sridharan, S; Stagni, F; Stahl, M; Stahl, S; Stefko, P; Stefkova, S; Steinkamp, O; Stemmle, S; Stenyakin, O; Stevenson, S; Stoica, S; Stone, S; Storaci, B; Stracka, S; Straticiuc, M; Straumann, U; Sun, L; Sutcliffe, W; Swientek, K; Syropoulos, V; Szczekowski, M; Szumlak, T; T'Jampens, S; Tayduganov, A; Tekampe, T; Tellarini, G; Teubert, F; Thomas, E; van Tilburg, J; Tilley, M J; Tisserand, V; Tobin, M; Tolk, S; Tomassetti, L; Tonelli, D; Topp-Joergensen, S; Toriello, F; Tournefier, E; Tourneur, S; Trabelsi, K; Traill, M; Tran, M T; Tresch, M; Trisovic, A; Tsaregorodtsev, A; Tsopelas, P; Tully, A; Tuning, N; Ukleja, A; Ustyuzhanin, A; Uwer, U; Vacca, C; Vagnoni, V; Valassi, A; Valat, S; Valenti, G; Vallier, A; Vazquez Gomez, R; Vazquez Regueiro, P; Vecchi, S; van Veghel, M; Velthuis, J J; Veltri, M; Veneziano, G; Venkateswaran, A; Vernet, M; Vesterinen, M; Viaud, B; Vieira, D; Vieites Diaz, M; Viemann, H; Vilasis-Cardona, X; Vitti, M; Volkov, V; Vollhardt, A; Voneki, B; Vorobyev, A; Vorobyev, V; Voß, C; de Vries, J A; Vázquez Sierra, C; Waldi, R; Wallace, C; Wallace, R; Walsh, J; Wang, J; Ward, D R; Wark, H M; Watson, N K; Websdale, D; Weiden, A; Whitehead, M; Wicht, J; Wilkinson, G; Wilkinson, M; Williams, M; Williams, M P; Williams, M; Williams, T; Wilson, F F; Wimberley, J; Wishahi, J; Wislicki, W; Witek, M; Wormser, G; Wotton, S A; Wraight, K; Wyllie, K; Xie, Y; Xing, Z; Xu, Z; Yang, Z; Yin, H; Yu, J; Yuan, X; Yushchenko, O; Zarebski, K A; Zavertyaev, M; Zhang, L; Zhang, Y; Zhang, Y; Zhelezov, A; Zheng, Y; Zhokhov, A; Zhu, X; Zhukov, V; Zucchelli, S

    2017-01-01

    The decays [Formula: see text] and [Formula: see text] are observed for the first time using a data sample corresponding to an integrated luminosity of 3.0 fb[Formula: see text], collected by the LHCb experiment in proton-proton collisions at the centre-of-mass energies of 7 and 8[Formula: see text]. The branching fractions relative to that of [Formula: see text] are measured to be [Formula: see text], where the first uncertainties are statistical and the second are systematic.

  15. Mining the Text: 34 Text Features that Can Ease or Obstruct Text Comprehension and Use

    Science.gov (United States)

    White, Sheida

    2012-01-01

    This article presents 34 characteristics of texts and tasks ("text features") that can make continuous (prose), noncontinuous (document), and quantitative texts easier or more difficult for adolescents and adults to comprehend and use. The text features were identified by examining the assessment tasks and associated texts in the national…

  16. Lightweight Active Object Retrieval with Weak Classifiers

    Directory of Open Access Journals (Sweden)

    László Czúni

    2018-03-01

    Full Text Available In the last few years, there has been a steadily growing interest in autonomous vehicles and robotic systems. While many of these agents are expected to have limited resources, they should be able to interact dynamically with other objects in their environment. We present an approach in which lightweight sensory and processing techniques, requiring very limited memory and processing power, can be successfully applied to the task of object retrieval using sensors of different modalities. We use the Hough framework to fuse optical and orientation information from the different views of the objects. In the presented spatio-temporal perception technique, we apply active vision: based on the analysis of initial measurements, the direction of the next view is determined to increase the hit-rate of retrieval. The performance of the proposed methods is shown on three datasets affected by heavy noise.
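
    The Hough-style fusion step, accumulating weighted votes from weak classifiers of different modalities and keeping the best-supported object hypothesis, can be sketched as below. The object labels and vote weights are placeholders, not the paper's actual features.

```python
from collections import Counter

def hough_vote(weak_outputs):
    """Fuse weak-classifier outputs in a Hough-style accumulator.

    Each weak output is an (object_id, weight) vote; the hypothesis with
    the largest accumulated weight wins.
    """
    acc = Counter()
    for obj, w in weak_outputs:
        acc[obj] += w
    return acc.most_common(1)[0][0]
```

    Two moderately confident votes for the same hypothesis can outweigh one strong vote for another, which is the point of accumulating evidence across views and modalities.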

  17. Multivariate analysis of quantitative traits can effectively classify rapeseed germplasm

    Directory of Open Access Journals (Sweden)

    Jankulovska Mirjana

    2014-01-01

    Full Text Available In this study, the use of different multivariate approaches to classify rapeseed genotypes based on quantitative traits is presented. Tree regression analysis, principal component analysis (PCA), and two-way cluster analysis were applied in order to describe and understand the extent of genetic variability in spring rapeseed genotype-by-trait data. The traits that highly influenced seed and oil yield in rapeseed were successfully identified by the tree regression analysis. The principal predictor for both response variables was number of pods per plant (NP). NP and 1000-seed weight could help in the selection of high-yielding genotypes, and high values for both traits together with oil content could lead to high oil-yielding genotypes. These traits may serve as indirect selection criteria and can lead to improvement of seed and oil yield in rapeseed. Quantitative traits that explained most of the variability in the studied germplasm were classified using principal component analysis. In this data set, five PCs were identified, of which the first three explained 63% of the total variance. This facilitated the choice of variables on which the genotypes' clustering could be based. The two-way cluster analysis simultaneously clustered genotypes and quantitative traits, with the final number of clusters determined using a bootstrapping technique. This approach provided a clear overview of the variability of the analyzed genotypes: genotypes with similar performance on the studied traits can be easily detected on the heatmap. Genotypes grouped in clusters 1 and 8 had high values for seed and oil yield and a relatively short vegetative growth duration, and those in cluster 9 combined moderate to low values for vegetative growth duration with moderate to high seed and oil yield. These genotypes should be further exploited and implemented in the rapeseed breeding program. The combined application of these multivariate methods
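
    Choosing how many principal components to keep from the share of variance they explain (the abstract retains three PCs for 63% of the total variance) can be sketched as below; the eigenvalues in the example are invented for illustration.

```python
def components_for_variance(eigenvalues, target=0.63):
    """Smallest number of leading principal components whose cumulative
    share of total variance reaches the target fraction."""
    total = sum(eigenvalues)
    cum = 0.0
    for k, ev in enumerate(sorted(eigenvalues, reverse=True), start=1):
        cum += ev / total
        if cum >= target:
            return k
    return len(eigenvalues)
```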

  18. Classifying and Visualising Roman Pottery using Computer-scanned Typologies

    Directory of Open Access Journals (Sweden)

    Jacqueline Christmas

    2018-05-01

    Full Text Available For many archaeological assemblages and type-series, accurate drawings of standardised pottery vessels have been recorded in consistent styles. This provides the opportunity to extract individual pot drawings and derive from them data that can be used for analysis and visualisation. Starting from PDF scans of the original pages of pot drawings, we have automated much of the process of locating, defining the boundaries of, extracting, and orientating each individual pot drawing. From these processed images, basic features such as width and height, the volume of the interior, the edges, and the shape of the cross-section outline are extracted and are then used to construct more complex features such as a measure of a pot's 'circularity'. Capturing these traits opens up new possibilities for (a) classifying vessel form in a way that is sensitive to the physical characteristics of pots relative to other vessels in an assemblage, and (b) visualising the results of quantifying assemblages using standard typologies. A frequently encountered problem when trying to compare pottery from different archaeological sites is that the pottery is classified into forms and labels using different standards. With a set of data from early Roman urban centres and related sites that has been labelled both with forms (e.g. 'platter' and 'bowl') and shape identifiers (based on the Camulodunum type-series), we use the extracted features from images to look both at how the pottery forms cluster for a given set of features, and at how the features may be used to compare finds from different sites.
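
    A common 'circularity' measure for a closed outline is the isoperimetric ratio 4πA/P², which equals 1 for a perfect circle and is smaller for any other shape. The article does not give its exact formula, so this is one plausible reading, sketched for a polygonal outline.

```python
import math

def polygon_area_perimeter(pts):
    """Shoelace area and perimeter of a closed polygon given as (x, y) points."""
    n = len(pts)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perim

def circularity(pts):
    """Isoperimetric ratio 4*pi*A / P**2: 1.0 for a circle, smaller otherwise."""
    area, perim = polygon_area_perimeter(pts)
    return 4.0 * math.pi * area / (perim ** 2)
```

    A unit square scores π/4 (about 0.785), while a fine polygonal approximation of a circle scores just under 1.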

  19. Immunohistochemical analysis of breast tissue microarray images using contextual classifiers

    Directory of Open Access Journals (Sweden)

    Stephen J McKenna

    2013-01-01

    Full Text Available Background: Tissue microarrays (TMAs) are an important tool in translational research for examining multiple cancers for molecular and protein markers. Automatic immunohistochemical (IHC) scoring of breast TMA images remains a challenging problem. Methods: A two-stage approach that involves localization of regions of invasive and in-situ carcinoma followed by ordinal IHC scoring of nuclei in these regions is proposed. The localization stage classifies locations on a grid as tumor or non-tumor based on local image features. These classifications are then refined using an auto-context algorithm called spin-context. Spin-context uses a series of classifiers to integrate image feature information with spatial context information in the form of estimated class probabilities. This is achieved in a rotationally invariant manner. The second stage estimates ordinal IHC scores in terms of the strength of staining and the proportion of nuclei stained. These estimates take the form of posterior probabilities, enabling images with uncertain scores to be referred for pathologist review. Results: The method was validated against manual pathologist scoring on two nuclear markers, progesterone receptor (PR) and estrogen receptor (ER). Errors for PR data were consistently lower than those achieved with ER data. Scoring was in terms of the estimated proportion of cells that were positively stained (scored on an ordinal scale of 0-6) and the perceived strength of staining (scored on an ordinal scale of 0-3). Average absolute differences between predicted scores and pathologist-assigned scores were 0.74 for proportion of cells and 0.35 for strength of staining (PR). Conclusions: The use of context information via spin-context improved the precision and recall of tumor localization. The combination of the spin-context localization method with the automated scoring method resulted in reduced IHC scoring errors.
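The referral mechanism described above, where scores are expressed as posterior probabilities so that uncertain images can be sent to a pathologist, can be sketched generically; the confidence threshold and the 0-3 score range below are illustrative assumptions, not the paper's values:

```python
def triage(posteriors, threshold=0.8):
    """Accept the most probable ordinal score when the classifier is
    confident; otherwise refer the image for pathologist review."""
    results = []
    for probs in posteriors:
        best = max(range(len(probs)), key=probs.__getitem__)
        if probs[best] >= threshold:
            results.append(("auto", best))
        else:
            results.append(("refer", best))
    return results

# Hypothetical posterior distributions over staining-strength scores 0-3.
cases = [
    [0.05, 0.05, 0.85, 0.05],  # confident: score 2 accepted automatically
    [0.30, 0.35, 0.25, 0.10],  # uncertain: referred for review
]
print(triage(cases))  # -> [('auto', 2), ('refer', 1)]
```

Raising the threshold trades automation rate for scoring reliability, which is the design choice the abstract alludes to.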

  20. An efficient approach for surveillance of childhood diabetes by type derived from electronic health record data: the SEARCH for Diabetes in Youth Study

    Science.gov (United States)

    Zhong, Victor W; Obeid, Jihad S; Craig, Jean B; Pfaff, Emily R; Thomas, Joan; Jaacks, Lindsay M; Beavers, Daniel P; Carey, Timothy S; Lawrence, Jean M; Dabelea, Dana; Hamman, Richard F; Bowlby, Deborah A; Pihoker, Catherine; Saydah, Sharon H

    2016-01-01

    Objective To develop an efficient surveillance approach for childhood diabetes by type across 2 large US health care systems, using phenotyping algorithms derived from electronic health record (EHR) data. Materials and Methods Presumptive diabetes cases were identified from diabetes-related billing codes, patient problem lists, and outpatient anti-diabetic medications. EHRs of all presumptive cases were manually reviewed, and true diabetes status and diabetes type were determined. Algorithms for identifying diabetes cases overall and classifying diabetes type were either prespecified or derived from classification and regression tree analysis. A surveillance approach was developed based on the best algorithms identified. Results We developed a stepwise surveillance approach using billing code–based prespecified algorithms and targeted manual EHR review, which efficiently and accurately ascertained and classified diabetes cases by type in both health care systems. The sensitivity and positive predictive values in both systems were approximately 90% or higher for ascertaining diabetes cases overall and classifying cases with type 1 or type 2 diabetes. About 80% of the cases with “other” type were also correctly classified. This stepwise surveillance approach resulted in a >70% reduction in the number of cases requiring manual validation compared to traditional surveillance methods. Conclusion EHR data may be used to establish an efficient approach for large-scale surveillance for childhood diabetes by type, although some manual effort is still needed. PMID:27107449
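A stepwise, prespecified, billing-code-based case-finding rule of the general kind the abstract describes might look like the following sketch; the code sets, the insulin flag, and the tie-breaking logic are invented for illustration and are not the SEARCH study's algorithms:

```python
# Hypothetical code sets -- placeholders, not validated phenotyping codes.
T1D_CODES = {"250.01", "250.03", "E10.9"}  # "type 1"-leaning billing codes
T2D_CODES = {"250.00", "250.02", "E11.9"}  # "type 2"-leaning billing codes

def classify_case(billing_codes, on_insulin_only):
    """Stepwise rule: count type-specific codes, break ties with a
    medication flag, and send ambiguous records to manual chart review."""
    t1 = sum(c in T1D_CODES for c in billing_codes)
    t2 = sum(c in T2D_CODES for c in billing_codes)
    if t1 == t2 == 0:
        return "not a case"
    if t1 > t2 or (t1 == t2 and on_insulin_only):
        return "type 1"
    if t2 > t1:
        return "type 2"
    return "manual review"  # ambiguous: targeted manual EHR review

print(classify_case(["E10.9", "E10.9", "E11.9"], on_insulin_only=True))  # -> type 1
```

Routing only the ambiguous residue to chart review is what yields the large reduction in manual validation effort reported above.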

  1. A Chinese text classification system based on Naive Bayes algorithm

    Directory of Open Access Journals (Sweden)

    Cui Wei

    2016-01-01

    Full Text Available In this paper, aiming at the characteristics of Chinese text classification, the ICTCLAS (Chinese lexical analysis system of the Chinese Academy of Sciences) is used for document segmentation, followed by data cleaning and stop-word filtering; information gain and document frequency feature selection algorithms are applied for document feature selection. On this basis, a text classifier is implemented using the Naive Bayes algorithm, and the system is evaluated experimentally on the Chinese corpus of Fudan University.
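The core of such a system, a multinomial Naive Bayes text classifier with Laplace smoothing, can be sketched in a few lines. This toy example uses English tokens rather than ICTCLAS-segmented Chinese and omits the feature-selection step:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (token_list, label). Returns counts for NB."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict(model, words):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in class_counts.items():
        lp = math.log(n / total_docs)               # log class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:                             # Laplace-smoothed likelihood
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["stock", "market", "shares"], "finance"),
        (["match", "goal", "team"], "sport"),
        (["bank", "stock", "profit"], "finance"),
        (["team", "league", "score"], "sport")]
model = train(docs)
print(predict(model, ["stock", "profit"]))  # -> finance
```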

  2. Investigation into Text Classification With Kernel Based Schemes

    Science.gov (United States)

    2010-03-01

    Document Matrix; TDMs Term-Document Matrices; TMG Text to Matrix Generator; TN True Negative; TP True Positive; VSM Vector Space Model ... are represented as a term-document matrix, common evaluation metrics, and the software package Text to Matrix Generator (TMG). The classifier ... AND METRICS This chapter introduces the indexing capabilities of the Text to Matrix Generator (TMG) Toolbox. Specific attention is placed on the

  3. An automated, broad-based, near real-time public health surveillance system using presentations to hospital Emergency Departments in New South Wales, Australia

    Directory of Open Access Journals (Sweden)

    Chiu Clayton

    2005-12-01

    Full Text Available Abstract Background In a climate of concern over bioterrorism threats and emergent diseases, public health authorities are trialling more timely surveillance systems. The 2003 Rugby World Cup (RWC) provided an opportunity to test the viability of a near real-time syndromic surveillance system in metropolitan Sydney, Australia. We describe the development and early results of this largely automated system that used data routinely collected in Emergency Departments (EDs). Methods Twelve of 49 EDs in the Sydney metropolitan area automatically transmitted surveillance data from their existing information systems to a central database in near real-time. Information captured for each ED visit included patient demographic details, presenting problem and nursing assessment entered as free text at triage time, physician-assigned provisional diagnosis codes, and status at departure from the ED. Both diagnoses from the EDs and triage text were used to assign syndrome categories. The text information was automatically classified into one or more of 26 syndrome categories using automated "naïve Bayes" text categorisation techniques. Automated processes were used to analyse both diagnosis and free text-based syndrome data and to produce web-based statistical summaries for daily review. An adjusted cumulative sum (cusum) was used to assess the statistical significance of trends. Results During the RWC the system did not identify any major public health threats associated with the tournament, mass gatherings or the influx of visitors. This was consistent with evidence from other sources, although two known outbreaks were already in progress before the tournament. Limited baseline data in the early monitoring period prevented the system from automatically identifying these ongoing outbreaks. Data capture was invisible to clinical staff in EDs and did not add to their workload. Conclusion We have demonstrated the feasibility and potential utility of syndromic surveillance using
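The cumulative sum (cusum) monitoring mentioned in the Methods can be illustrated with a basic one-sided CUSUM over daily syndrome counts; the reference mean, allowance k and decision threshold h below are invented for the example, not the system's adjusted parameters:

```python
def cusum(counts, mean, k=1.0, h=5.0):
    """One-sided upper CUSUM: accumulate each day's excess over
    (mean + k) and flag days where the statistic exceeds h."""
    s = 0.0
    alarms = []
    for day, c in enumerate(counts):
        s = max(0.0, s + (c - mean - k))  # reset at zero, never negative
        if s > h:
            alarms.append(day)
    return alarms

baseline_mean = 10.0
daily = [9, 11, 10, 12, 10, 16, 17, 18, 15, 11]  # toy daily syndrome counts
print(cusum(daily, baseline_mean))  # -> [6, 7, 8, 9]
```

Because the statistic accumulates small sustained excesses, CUSUM can signal a gradual outbreak earlier than a fixed per-day threshold would.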

  4. From Text to Political Positions: Text analysis across disciplines

    NARCIS (Netherlands)

    Kaal, A.R.; Maks, I.; van Elfrinkhof, A.M.E.

    2014-01-01

    ABSTRACT From Text to Political Positions addresses cross-disciplinary innovation in political text analysis for party positioning. Drawing on political science, computational methods and discourse analysis, it presents a diverse collection of analytical models including pure quantitative and

  5. Pressure Ulcers Surveillance Report

    Directory of Open Access Journals (Sweden)

    Zehra Esin Gencer

    2015-04-01

    Full Text Available Objective: Pressure ulcer is a chronic wound. It reduces the quality of life of the elderly and individuals with restricted range of motion, prolongs hospital stay, increases the risk of complications, and its cost is quite high. Preventive actions for the prevention of pressure ulcers should be developed; planning protocols and standards of care are among the main targets. Material and Method: Research was conducted over a one-year period between May 2012 and May 2013 on patients with pressure ulcers who were followed up in Akdeniz University Hospital clinics and intensive care unit. The research population consisted of 569 patients. Patient data were recorded in SPSS 16 for Windows. Statistical analyses were performed with retrospective methods. The demographic characteristics of patients with pressure ulcers were analyzed with frequency and descriptive statistics. One-year prevalence and incidence were calculated. Results: Of the patients, 58% were males and 42% were females. Of the patients, 36% were in the age range of 61-80 years, and their average length of stay was 42.9 days. Of the patients, 70% were at stage 2 and 3. In 15% of patients pressure ulcers occurred on the first day of hospitalization. Pressure ulcers developed between days 2 and 10 in 59% of the patients. The overall prevalence rate was 2.5% and the incidence 1.9%; in the intensive care unit the prevalence rate was 5.9%. Conclusion: It is easier to prevent pressure ulcers than to treat them.

  6. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.
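The inter-rater and test-retest agreement statistics reported above (the k values) are Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with hypothetical ratings (the rater data below are invented, not the study's):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical Viking scale levels (I-IV) assigned by two raters.
rater1 = ["I", "II", "II", "III", "IV", "I", "II", "III"]
rater2 = ["I", "II", "III", "III", "IV", "I", "II", "II"]
print(round(cohen_kappa(rater1, rater2), 2))  # -> 0.65
```

By the conventional Landis and Koch labels, values above about 0.6 are "substantial", which is how the abstract characterises its k>.58 and k>.68 results.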

  7. Public involvement in environmental surveillance at Hanford

    International Nuclear Information System (INIS)

    Hanf, R.W. Jr.; Patton, G.W.; Woodruff, R.K.; Poston, T.M.

    1994-08-01

    Environmental surveillance at the Hanford Site began during the mid-1940s following the construction and start-up of the nation's first plutonium production reactor. Over the past approximately 45 years, surveillance operations on and off the Site have continued, with virtually all sampling being conducted by Hanford Site workers. Recently, the US Department of Energy Richland Operations Office directed that public involvement in Hanford environmental surveillance operations be initiated. Accordingly, three special radiological air monitoring stations were constructed offsite, near Hanford's perimeter. Each station is managed and operated by two local school teachers. These three stations are the beginning of a community-operated environmental surveillance program that will ultimately involve the public in most surveillance operations around the Site. The program was designed to stimulate interest in Hanford environmental surveillance operations, and to help the public better understand surveillance results. The program has also been used to enhance educational opportunities at local schools.

  8. Issues ignored in laboratory quality surveillance

    International Nuclear Information System (INIS)

    Zeng Jing; Li Xingyuan; Zhang Tingsheng

    2008-01-01

    According to the work requirements for laboratory quality surveillance in ISO 17025, this paper analyzed and discussed the issues ignored in laboratory quality surveillance. In order to solve the present problems, it is necessary to correctly understand the work responsibilities in quality surveillance, to establish effective working routines, and to conduct the quality surveillance work accordingly. The object of quality surveillance shall be 'the operator' directly engaged in examination/calibration in the laboratory, especially personnel in training who are engaged in examination/calibration. The quality supervisors shall be fully authorized, so that they can correctly understand their responsibilities in quality surveillance and have the right of 'full supervision'. The laboratory shall also arrange the necessary training for the quality supervisors, so that they can obtain sufficient guidance in time and have the required qualifications or occupational prerequisites. (authors)

  9. Risk based surveillance for vector borne diseases

    DEFF Research Database (Denmark)

    Bødker, Rene

    of samples and hence early detection of outbreaks. Models for vector borne diseases in Denmark have demonstrated dramatic variation in outbreak risk during the season and between years. The Danish VetMap project aims to make these risk based surveillance estimates available on the veterinarians smart phones...... in Northern Europe. This model approach may be used as a basis for risk based surveillance. In risk based surveillance limited resources for surveillance are targeted at geographical areas most at risk and only when the risk is high. This makes risk based surveillance a cost effective alternative...... sample to a diagnostic laboratory. Risk based surveillance models may reduce this delay. An important feature of risk based surveillance models is their ability to continuously communicate the level of risk to veterinarians and hence increase awareness when risk is high. This is essential for submission...

  10. Reactor Vessel Surveillance Program for Advanced Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kyeong-Hoon; Kim, Tae-Wan; Lee, Gyu-Mahn; Kim, Jong-Wook; Park, Keun-Bae; Kim, Keung-Koo

    2008-10-15

    This report provides the design requirements of an integral type reactor vessel surveillance program for an integral type reactor in accordance with the requirements of Korean MEST (Ministry of Education, Science and Technology Development) Notice 2008-18. This report covers the requirements for the design of surveillance capsule assemblies including their test specimens, test block materials, handling tools, and monitors of the surveillance capsule neutron fluence and temperature. In addition, this report provides design requirements for the program for irradiation surveillance of reactor vessel materials, a layout of specimens and monitors in the surveillance capsule, procedures of installation and retrieval of the surveillance capsule assemblies, and the layout of the surveillance capsule assemblies in the reactor.

  11. Surveillance guidelines for smallpox vaccine (vaccinia) adverse reactions.

    Science.gov (United States)

    Casey, Christine; Vellozzi, Claudia; Mootrey, Gina T; Chapman, Louisa E; McCauley, Mary; Roper, Martha H; Damon, Inger; Swerdlow, David L

    2006-02-03

    CDC and the U.S. Food and Drug Administration rely on state and local health departments, health-care providers, and the public to report the occurrence of adverse events after vaccination to the Vaccine Adverse Event Reporting System. With such data, trends can be accurately monitored, unusual occurrences of adverse events can be detected, and the safety of vaccination intervention activities can be evaluated. On January 24, 2003, the U.S. Department of Health and Human Services (DHHS) implemented a preparedness program in which smallpox (vaccinia) vaccine was administered to federal, state, and local volunteers who might be first responders during a biologic terrorism event. As part of the DHHS Smallpox Preparedness and Response Program, CDC, in consultation with experts, established surveillance case definitions for adverse events after smallpox vaccination. Adverse reactions after smallpox vaccination identified during the 1960s surveillance activities were classified on the basis of clinical description and included eczema vaccinatum; fetal vaccinia; generalized vaccinia; accidental autoinoculation, nonocular; ocular vaccinia; progressive vaccinia; erythema multiforme major; postvaccinial encephalitis or encephalomyelitis; and pyogenic infection of the vaccination site. This report provides the uniform criteria used for the surveillance case definition and classification of these previously recognized adverse reactions during the DHHS Smallpox Preparedness and Response Program. Inadvertent inoculation was changed to more precisely describe this event as inadvertent autoinoculation and contact transmission, nonocular and ocular vaccinia. Pyogenic infection also was renamed superinfection of the vaccination site or regional lymph nodes. Finally, case definitions were developed for a new cardiac adverse reaction (myo/pericarditis) and for a cardiac adverse event (dilated cardiomyopathy) and are included in this report. The smallpox vaccine surveillance case

  12. Sistemas de vigilancia de la salud pública: no pidamos peras al olmo Public health surveillance systems: let's not ask for the impossible

    Directory of Open Access Journals (Sweden)

    S. de Mateo

    2003-07-01

    Full Text Available The publication of the Decree creating the National Epidemiological Surveillance Network, 7 years ago now, invites us to reflect on public health surveillance systems in our country and to highlight those aspects that help or obstruct these systems in meeting their basic objective of providing information that can be used to facilitate disease control. Many of the events that have taken place in the health arena in recent years, labeled as «health crises» by the communications media, have been considered by the population as unacceptable risks that the health system should have avoided; defects in surveillance systems are one of the errors always mentioned in this respect. Some of these defects stem from the inherent limitations of the instruments used to measure and classify health problems, but others derive from an inadequate conception of surveillance that prevents assessment of the true impact of health problems. Discussing both will not solve the problems of surveillance, but it may help ensure that people stop asking of our surveillance systems what they cannot offer.

  13. Classifying emotion in Twitter using Bayesian network

    Science.gov (United States)

    Surya Asriadie, Muhammad; Syahrul Mubarok, Mohamad; Adiwijaya

    2018-03-01

    Language is used to express not only facts but also emotions. Emotions are noticeable in behavior and even in the social media statuses a person writes. Analysis of emotions in text is carried out on a variety of media such as Twitter. This paper studies the classification of emotions on Twitter using Bayesian networks because of their ability to model uncertainty and relationships between features. The result is two models based on Bayesian networks: the Full Bayesian Network (FBN) and the Bayesian Network with Mood Indicator (BNM). FBN is a massive Bayesian network in which each word is treated as a node. The study shows that the method used to train FBN is not very effective at producing the best model, and it performs worse than Naive Bayes: the F1-score for FBN is 53.71%, while for Naive Bayes it is 54.07%. BNM is proposed as an alternative method based on an improvement of Multinomial Naive Bayes, with much lower computational complexity than FBN. Even though it is not better than FBN, the resulting model successfully improves on the performance of Multinomial Naive Bayes: the F1-score for the Multinomial Naive Bayes model is 51.49%, while for BNM it is 52.14%.
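The F1-scores compared above combine precision and recall into a single figure; for a single emotion class treated as the positive label, the computation is straightforward. A small sketch with toy labels, not the paper's Twitter data:

```python
def f1_score(y_true, y_pred, positive):
    """F1 = harmonic mean of precision and recall for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy emotion labels for illustration only.
truth = ["joy", "anger", "joy", "sad", "joy", "anger"]
preds = ["joy", "joy",   "joy", "sad", "anger", "anger"]
print(round(f1_score(truth, preds, positive="joy"), 2))  # -> 0.67
```

Multi-class results like the paper's are typically an average of such per-class scores.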

  14. Text mining from ontology learning to automated text processing applications

    CERN Document Server

    Biemann, Chris

    2014-01-01

    This book comprises a set of articles that specify the methodology of text mining, describe the creation of lexical resources in the framework of text mining and use text mining for various tasks in natural language processing (NLP). The analysis of large amounts of textual data is a prerequisite to build lexical resources such as dictionaries and ontologies and also has direct applications in automated text processing in fields such as history, healthcare and mobile applications, just to name a few. This volume gives an update in terms of the recent gains in text mining methods and reflects

  15. Health effects and medical surveillance

    International Nuclear Information System (INIS)

    1998-01-01

    Sources of ionizing radiation have innumerable applications in the workplace. Usually, even where the work is performed safely, the employees involved inevitably receive small, regular exposures to radiation that are not manifestly harmful. This Module explains how ionizing radiations can interact with and affect human tissues, the various factors that influence the outcome, and the detrimental effects that may result. The medical surveillance that is appropriate for those working with radiation sources, depending on the degree of hazard of the work, is described. The Manual will be of most benefit if it forms part of more comprehensive training or is supplemented by the advice of a medically qualified expert. Where medical surveillance is appropriate for radiation employees, the services of a qualified doctor, occupational physician or other trained medical staff will be required.

  16. Bat Rabies Surveillance in Europe

    DEFF Research Database (Denmark)

    Schatz, J.; Fooks, A. R.; McElhinney, L.

    2013-01-01

    Rabies is the oldest known zoonotic disease and was also the first recognized bat associated infection in humans. To date, four different lyssavirus species are the causative agents of rabies in European bats: the European Bat Lyssaviruses type 1 and 2 (EBLV-1, EBLV-2), the recently discovered...... putative new lyssavirus species Bokeloh Bat Lyssavirus (BBLV) and the West Caucasian Bat Virus (WCBV). Unlike in the new world, bat rabies cases in Europe are comparatively less frequent, possibly as a result of varying intensity of surveillance. Thus, the objective was to provide an assessment of the bat...... rabies surveillance data in Europe, taking both reported data to the WHO Rabies Bulletin Europe and published results into account. In Europe, 959 bat rabies cases were reported to the RBE in the time period 1977–2010 with the vast majority characterized as EBLV-1, frequently isolated in the Netherlands...

  17. SCORPIO - VVER core surveillance system

    International Nuclear Information System (INIS)

    Zalesky, K.; Svarny, J.; Novak, L.; Rosol, J.; Horanes, A.

    1997-01-01

    The Halden Project has developed the core surveillance system SCORPIO, which has two parallel modes of operation: the Core Follow Mode and the Predictive Mode. The main motivation behind the development of SCORPIO is to provide a practical tool for reactor operators that can increase the quality and quantity of information presented on core status and dynamic behavior. This can first of all improve plant safety, as undesired core conditions are detected and prevented. Secondly, more flexible and efficient plant operation is made possible. So far the system has only been implemented on western PWRs, but the basic concept is applicable to a wide range of reactors, including WWERs. The main differences between WWERs and typical western PWRs with respect to core surveillance requirements are outlined. The development of a WWER version of SCORPIO was initiated in cooperation with the Nuclear Research Institute at Rez and industry partners in the Czech Republic. The first system will be installed at the Dukovany NPP. (author)

  18. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1997-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest National Laboratory (PNNL)(a) for the US Department of Energy (DOE). This document contains the planned 1997 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP) and Drinking Water Monitoring Project. In addition, Section 3.0, Biota, also reflects a rotating collection schedule identifying the year a specific sample is scheduled for collection. The purpose of these monitoring projects is to evaluate levels of radioactive and nonradioactive pollutants in the Hanford environs, as required in DOE Order 5400.1, General Environmental Protection Program, and DOE Order 5400.5, Radiation Protection of the Public and the Environment. The sampling methods will be the same as those described in the Environmental Monitoring Plan, US Department of Energy, Richland Operations Office, DOE/RL91-50, Rev. 1, US Department of Energy, Richland, Washington

  19. Mining Surveillance and Maintenance Dollars

    International Nuclear Information System (INIS)

    MARTINEZ, R.

    2000-01-01

    Accelerating site cleanup to reduce facility risks to the workers, the public and the environment during a time of declining federal budgets represents a significant technical and economic challenge to U.S. Department of Energy (DOE) Operations Offices and their respective contractors. A significant portion of a facility's recurring annual expenses are associated with routine, long-term surveillance and maintenance (S and M) activities. However, ongoing S and M activities do nothing to reduce risks and basically spend money that could be reallocated towards facility deactivation. This paper discusses the background around DOE efforts to reduce surveillance and maintenance costs, one approach used to perform cost reviews, lessons learned from field implementation and what assistance is available to assist DOE sites in performing these evaluations

  20. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L E

    1992-01-01

    Environmental surveillance of the Hanford Site and surrounding areas is conducted by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE). This document contains the planned schedule for routine sample collection for the Surface Environmental Surveillance Project (SESP) and Ground-Water Monitoring Project. Samples for radiological analyses include Air-Particulate Filter, gases and vapor; Water/Columbia River, Onsite Pond, Spring, Irrigation, and Drinking; Foodstuffs/Animal Products including Whole Milk, Poultry and Eggs, and Beef; Foodstuffs/Produce including Leafy Vegetables, Vegetables, and Fruit; Foodstuffs/Farm Products including Wine, Wheat and Alfalfa; Wildlife; Soil; Vegetation; and Sediment. Direct Radiation Measurements include Terrestrial Locations, Columbia River Shoreline Locations, and Onsite Roadway, Railway and Aerial, Radiation Surveys.

  1. Working with text tools, techniques and approaches for text mining

    CERN Document Server

    Tourte, Gregory J L

    2016-01-01

    Text mining tools and technologies have long been a part of the repository world, where they have been applied to a variety of purposes, from pragmatic aims to support tools. Research areas as diverse as biology, chemistry, sociology and criminology have seen effective use made of text mining technologies. Working With Text collects a subset of the best contributions from the 'Working with text: Tools, techniques and approaches for text mining' workshop, alongside contributions from experts in the area. Text mining tools and technologies in support of academic research include supporting research on the basis of a large body of documents, facilitating access to and reuse of extant work, and bridging between the formal academic world and areas such as traditional and social media. Jisc have funded a number of projects, including NaCTem (the National Centre for Text Mining) and the ResDis programme. Contents are developed from workshop submissions and invited contributions, including: Legal considerations in te...

  2. Performance indicators for rinderpest surveillance

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-12-01

    In 1986, the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture initiated a programme of assistance to FAO and IAEA Member States for the development of effective, quality assured veterinary laboratory diagnostic services. This programme introduced the use of standardized and internationally validated ELISA-based systems for the diagnosis and surveillance of the major transboundary diseases that affect livestock. This approach has proved of immense value in the monitoring of national, regional and global animal disease control and eradication programmes. One such programme focuses on the global elimination of rinderpest. Co-ordinated by FAO through the Global Rinderpest Eradication Programme (GREP) the joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture has developed critical diagnostic and epidemiological tools to assist this effort. As the final stages of the global eradication of rinderpest are reached, it is fitting that the Joint Division should again take the lead in providing guidance to Member States on how best to meet the criteria for quality assurance of national disease surveillance programmes - a prerequisite for international acceptance of freedom from a particular disease. This publication is intended to provide countries involved in rinderpest eradication with a detailed protocol for using performance indicators in evaluating their disease surveillance system and making, where necessary, adjustments to meet the criteria for acceptance specified in the OIE Rinderpest Pathway - a pathway that leads to international recognition of freedom from rinderpest. An initial publication (IAEA-TECDOC-1161) described guidelines for the use of performance indicators in rinderpest surveillance programmes. This publication now describes in detail the protocols and the linked indicators which have been developed and field validated through a series of FAO/IAEA meetings and through IAEA expert assignments to countries in Africa.

  3. Performance indicators for rinderpest surveillance

    International Nuclear Information System (INIS)

    2001-12-01

    In 1986, the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture initiated a programme of assistance to FAO and IAEA Member States for the development of effective, quality assured veterinary laboratory diagnostic services. This programme introduced the use of standardized and internationally validated ELISA-based systems for the diagnosis and surveillance of the major transboundary diseases that affect livestock. This approach has proved of immense value in the monitoring of national, regional and global animal disease control and eradication programmes. One such programme focuses on the global elimination of rinderpest. Co-ordinated by FAO through the Global Rinderpest Eradication Programme (GREP) the joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture has developed critical diagnostic and epidemiological tools to assist this effort. As the final stages of the global eradication of rinderpest are reached, it is fitting that the Joint Division should again take the lead in providing guidance to Member States on how best to meet the criteria for quality assurance of national disease surveillance programmes - a prerequisite for international acceptance of freedom from a particular disease. This publication is intended to provide countries involved in rinderpest eradication with a detailed protocol for using performance indicators in evaluating their disease surveillance system and making, where necessary, adjustments to meet the criteria for acceptance specified in the OIE Rinderpest Pathway - a pathway that leads to international recognition of freedom from rinderpest. An initial publication (IAEA-TECDOC-1161) described guidelines for the use of performance indicators in rinderpest surveillance programmes. This publication now describes in detail the protocols and the linked indicators which have been developed and field validated through a series of FAO/IAEA meetings and through IAEA expert assignments to countries in Africa

  4. Active epidemiological surveillance in the program of poliomyelitis eradication in Serbia

    Directory of Open Access Journals (Sweden)

    Jevremović Ivana

    2002-01-01

    Full Text Available The main strategy of the worldwide Program of Poliomyelitis Eradication is based on immunization with oral poliovirus vaccine and active epidemiological surveillance aimed to demonstrate the absence of wild poliovirus circulation. The specification of the surveillance in the program, reporting and investigation of certain syndrome – the acute flaccid paralysis - as a specific feature of surveillance of poliomyelitis, is a new experience both for clinicians and epidemiologists. Along with the achieved results, problems in conducting the active epidemiological surveillance in Serbia, applied measures, and suggestions for improving its quality were presented. This experience might help in implementing the active surveillance for some other diseases that could be prevented by vaccine immunization.

  5. Classifying the metal dependence of uncharacterized nitrogenases

    Directory of Open Access Journals (Sweden)

    Shawn E Mcglynn

    2013-01-01

    Full Text Available Nitrogenase enzymes have evolved complex iron-sulfur (Fe-S) containing cofactors that most commonly contain molybdenum (MoFe, Nif) as a heterometal but also exist as vanadium (VFe, Vnf) and heterometal-independent (Fe-only, Anf) forms. All three varieties are capable of the reduction of dinitrogen (N2) to ammonia (NH3) but exhibit differences in catalytic rates and substrate specificity unique to metal type. Recently, N2 reduction activity was observed in archaeal methanotrophs and methanogens that encode nitrogenase homologs which do not cluster phylogenetically with previously characterized nitrogenases. To gain insight into the metal cofactors of these uncharacterized nitrogenase homologs, predicted three-dimensional structures of the nitrogenase active site metal-cofactor binding subunits NifD, VnfD, and AnfD were generated and compared. Dendrograms based on structural similarity indicate nitrogenase homologs cluster based on heterometal content and that uncharacterized nitrogenase D homologs cluster with NifD, providing evidence that the structure of the enzyme has evolved in response to metal utilization. Characterization of the structural environment of the nitrogenase active site revealed amino acid variations that are unique to each class of nitrogenase as defined by heterometal cofactor content; uncharacterized nitrogenases contain amino acids near the active site most similar to NifD. Together, these results suggest that uncharacterized nitrogenase homologs present in numerous anaerobic methanogens, archaeal methanotrophs, and firmicutes bind FeMo-co in their active site, and add to growing evidence that diversification of metal utilization likely occurred in an anaerobic habitat.

  6. SAR Target Recognition Based on Multi-feature Multiple Representation Classifier Fusion

    Directory of Open Access Journals (Sweden)

    Zhang Xinzheng

    2017-10-01

    Full Text Available In this paper, we present a Synthetic Aperture Radar (SAR) image target recognition algorithm based on multi-feature multiple representation learning classifier fusion. First, it extracts three features from the SAR images: principal component analysis, wavelet transform, and Two-Dimensional Slice Zernike Moments (2DSZM) features. Second, we harness the sparse representation classifier and the cooperative representation classifier with the above-mentioned features to get six predictive labels. Finally, we adopt classifier fusion to obtain the final recognition decision. We researched three different classifier fusion algorithms in our experiments, and the results demonstrate that using Bayesian decision fusion gives the best recognition performance. The method based on multi-feature multiple representation learning classifier fusion integrates the discrimination of multi-features and combines the sparse and cooperative representation classification performance to gain complementary advantages and to improve recognition accuracy. The experiments are based on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, and they demonstrate the effectiveness of the proposed approach.
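    The final step described above, Bayesian decision fusion of several predicted labels, can be sketched as a naive-Bayes combination in which each classifier's reliability is read off its confusion probabilities. The classifier names, class labels, and reliability numbers below are invented for illustration; the paper's actual fusion rule may differ in detail:

```python
import math

def bayes_fusion(predictions, p_pred_given_true, classes, prior=None):
    """Pick the class c maximising prior(c) * prod_k P(pred_k | true = c),
    treating the classifiers' label outputs as conditionally independent."""
    prior = prior or {c: 1.0 / len(classes) for c in classes}
    best, best_lp = None, float("-inf")
    for c in classes:
        lp = math.log(prior[c])
        for clf, pred in predictions.items():
            lp += math.log(p_pred_given_true[clf][c][pred])
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Three hypothetical feature/classifier channels, each 90% reliable.
classes = ["tank", "truck"]
reliability = {clf: {"tank": {"tank": 0.9, "truck": 0.1},
                     "truck": {"tank": 0.1, "truck": 0.9}}
               for clf in ("pca", "wavelet", "zernike")}
decision = bayes_fusion({"pca": "tank", "wavelet": "tank", "zernike": "truck"},
                        reliability, classes)
```

    Here the two agreeing channels outweigh the dissenting one, so the fused decision is "tank"; with unequal reliabilities, a single highly reliable classifier could instead override two weak ones.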

  7. Thai Finger-Spelling Recognition Using a Cascaded Classifier Based on Histogram of Orientation Gradient Features

    Directory of Open Access Journals (Sweden)

    Kittasil Silanon

    2017-01-01

    Full Text Available Hand posture recognition is an essential module in applications such as human-computer interaction (HCI), games, and sign language systems, in which performance and robustness are the primary requirements. In this paper, we propose automatic classification to recognize 21 hand postures that represent letters in Thai finger-spelling, based on Histogram of Orientation Gradient (HOG) features (which focus on the information within a certain region of the image rather than on each single pixel) and the Adaptive Boost (AdaBoost) learning technique, used to select the best weak classifiers and to construct a strong classifier consisting of several weak classifiers cascaded in a detection architecture. We collected 21 static hand posture images from 10 subjects for testing and training in Thai letter finger-spelling. The parameters for the training process were adjusted in three experiments, covering false positive rates (FPR), true positive rates (TPR), and the number of training stages (N), to achieve the most suitable training model for each hand posture. All cascaded classifiers are loaded into the system simultaneously to classify different hand postures. A correlation coefficient is computed to distinguish hand postures that are similar. The system achieves approximately 78% accuracy on average across all classifier experiments.
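    AdaBoost's core loop, reweighting misclassified samples and combining weak learners into a weighted vote, can be sketched with one-dimensional threshold stumps standing in for the HOG-based weak classifiers. This is a deliberate simplification (the paper trains cascades of classifiers over HOG features of image windows), and the toy data below are invented:

```python
import math

def train_stump(xs, ys, weights):
    """Exhaustively pick the threshold stump minimising weighted error.
    Returns (error, threshold, sign); predicts +sign when x > threshold."""
    best = None
    for t in sorted(set(xs)):
        for sign in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if (sign if x - t > 0 else -sign) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(xs, ys, rounds=3):
    """Minimal AdaBoost: each round boosts the weight of misclassified samples."""
    n = len(xs)
    weights = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, sign = train_stump(xs, ys, weights)
        err = max(err, 1e-10)                      # avoid log(0) for perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)    # stump's vote strength
        model.append((alpha, t, sign))
        weights = [w * math.exp(-alpha * y * (sign if x - t > 0 else -sign))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]         # renormalise
    return model

def predict(model, x):
    vote = sum(a * (sign if x - t > 0 else -sign) for a, t, sign in model)
    return 1 if vote > 0 else -1

# Toy 1-D training set: negatives cluster low, positives cluster high.
xs = [1, 2, 3, 10, 11, 12]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys)
```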

  8. Environmental surveillance master sampling schedule

    International Nuclear Information System (INIS)

    Bisping, L.E.

    1994-02-01

    This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at Hanford Site and surrounding communities. The responsibility for monitoring the onsite drinking water falls outside the scope of the SESP. The Hanford Environmental Health Foundation is responsible for monitoring the nonradiological parameters as defined in the National Drinking Water Standards while PNL conducts the radiological monitoring of the onsite drinking water. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize the expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site

  9. Mobile Surveillance and Monitoring Robots

    International Nuclear Information System (INIS)

    Kimberly, Howard R.; Shipers, Larry R.

    1999-01-01

    Long-term nuclear material storage will require in-vault data verification, sensor testing, error and alarm response, inventory, and maintenance operations. System concept development efforts for a comprehensive nuclear material management system have identified the use of a small flexible mobile automation platform to perform these surveillance and maintenance operations. In order to have near-term wide-range application in the Complex, a mobile surveillance system must be small, flexible, and adaptable enough to allow retrofit into existing special nuclear material facilities. The objective of the Mobile Surveillance and Monitoring Robot project is to satisfy these needs by development of a human scale mobile robot to monitor the state of health, physical security and safety of items in storage and process; recognize and respond to alarms, threats, and off-normal operating conditions; and perform material handling and maintenance operations. The system will integrate a tool kit of onboard sensors and monitors, maintenance equipment and capability, and SNL developed non-lethal threat response technology with the intelligence to identify threats and develop and implement first response strategies for abnormal signals and alarm conditions. System versatility will be enhanced by incorporating a robot arm, vision and force sensing, robust obstacle avoidance, and appropriate monitoring and sensing equipment

  10. Environmental surveillance master sampling schedule

    Energy Technology Data Exchange (ETDEWEB)

    Bisping, L.E.

    1994-02-01

    This document contains the planned 1994 schedules for routine collection of samples for the Surface Environmental Surveillance Project (SESP), Drinking Water Project, and Ground-Water Surveillance Project. Samples are routinely collected for the SESP and analyzed to determine the quality of air, surface water, soil, sediment, wildlife, vegetation, foodstuffs, and farm products at Hanford Site and surrounding communities. The responsibility for monitoring the onsite drinking water falls outside the scope of the SESP. The Hanford Environmental Health Foundation is responsible for monitoring the nonradiological parameters as defined in the National Drinking Water Standards while PNL conducts the radiological monitoring of the onsite drinking water. PNL conducts the drinking water monitoring project concurrent with the SESP to promote efficiency and consistency, utilize the expertise developed over the years, and reduce costs associated with management, procedure development, data management, quality control and reporting. The ground-water sampling schedule identifies ground-water sampling events used by PNL for environmental surveillance of the Hanford Site.

  11. Monitoring influenza activity in the United States: a comparison of traditional surveillance systems with Google Flu Trends.

    Directory of Open Access Journals (Sweden)

    Justin R Ortiz

    2011-04-01

    Full Text Available Google Flu Trends was developed to estimate US influenza-like illness (ILI) rates from internet searches; however, ILI does not necessarily correlate with actual influenza virus infections. Influenza activity data from 2003-04 through 2007-08 were obtained from three US surveillance systems: Google Flu Trends, the CDC Outpatient ILI Surveillance Network (CDC ILI Surveillance), and the US Influenza Virologic Surveillance System (CDC Virus Surveillance). Pearson's correlation coefficients with 95% confidence intervals (95% CI) were calculated to compare surveillance data. An analysis was performed to investigate outlier observations and determine the extent to which they affected the correlations between surveillance data. Pearson's correlation coefficient describing Google Flu Trends and CDC Virus Surveillance over the study period was 0.72 (95% CI: 0.64, 0.79). The correlation between CDC ILI Surveillance and CDC Virus Surveillance over the same period was 0.85 (95% CI: 0.81, 0.89). Most of the outlier observations in both comparisons were from the 2003-04 influenza season. Exclusion of the outlier observations did not substantially improve the correlation between Google Flu Trends and CDC Virus Surveillance (0.82; 95% CI: 0.76, 0.87) or CDC ILI Surveillance and CDC Virus Surveillance (0.86; 95% CI: 0.82, 0.90). This analysis demonstrates that while Google Flu Trends is highly correlated with rates of ILI, it has a lower correlation with surveillance for laboratory-confirmed influenza. Most of the outlier observations occurred during the 2003-04 influenza season, which was characterized by early and intense influenza activity that potentially altered health care seeking behavior, physician testing practices, and internet search behavior.
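    The comparison above rests on Pearson's correlation coefficient with confidence intervals of the kind commonly obtained via the Fisher z-transform. As an illustration only (the weekly values below are hypothetical, not the study's data, and the study's exact CI method is not stated here), a minimal sketch:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fisher_ci(r, n, z=1.96):
    """Approximate 95% CI for r via the Fisher z-transform (n = number of pairs)."""
    zr = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# Hypothetical weekly series: search-based estimates vs. percent positive tests.
trends = [1.2, 1.5, 2.1, 3.8, 5.0, 4.2, 2.9, 1.8]
virus = [2.0, 2.4, 3.5, 6.1, 8.0, 6.5, 4.1, 2.5]
r = pearson_r(trends, virus)
lo, hi = fisher_ci(r, len(trends))
```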

  12. TEXT CLASSIFICATION USING NAIVE BAYES UPDATEABLE ALGORITHM IN SBMPTN TEST QUESTIONS

    Directory of Open Access Journals (Sweden)

    Ristu Saptono

    2017-01-01

    Full Text Available Document classification is a growing interest in text mining research. Classification can be based on topics, languages, and so on. This study was conducted to determine how Naive Bayes Updateable performs in classifying SBMPTN exam questions by theme. Naive Bayes Updateable is an incremental variant of the Naive Bayes classifier, an algorithm often used in text classification; it can learn from new data introduced to the system even after the classifier has been built from existing data. The Naive Bayes classifier assigns exam questions to a field of study by analyzing keywords that appear in the questions. DF-thresholding, a feature selection method, is implemented to improve classification performance. Evaluation of classification with the Naive Bayes classifier algorithm produces 84.61% accuracy.
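    The two ingredients named in the abstract, an incrementally updateable Naive Bayes classifier and DF-thresholding feature selection, can be sketched as follows. This is a generic illustration with an invented toy corpus, not the study's implementation or data:

```python
import math
from collections import Counter, defaultdict

def df_threshold(docs, min_df=2):
    """DF-thresholding: keep terms that occur in at least min_df documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    return {t for t, c in df.items() if c >= min_df}

class UpdateableNB:
    """Multinomial Naive Bayes with Laplace smoothing whose update() can be
    called again after training, mimicking incremental ('updateable') learning."""
    def __init__(self, vocab):
        self.vocab = vocab
        self.class_docs = Counter()               # documents seen per class
        self.term_counts = defaultdict(Counter)   # term frequencies per class

    def update(self, doc, label):
        self.class_docs[label] += 1
        self.term_counts[label].update(t for t in doc if t in self.vocab)

    def predict(self, doc):
        total = sum(self.class_docs.values())
        best, best_lp = None, float("-inf")
        for c in self.class_docs:
            lp = math.log(self.class_docs[c] / total)   # log prior
            denom = sum(self.term_counts[c].values()) + len(self.vocab)
            for t in doc:
                if t in self.vocab:                      # smoothed log likelihood
                    lp += math.log((self.term_counts[c][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy corpus: classify questions as "math" or "bio" by theme keywords.
docs = [["integral", "derivative", "function"], ["matrix", "vector", "function"],
        ["cell", "protein", "gene"], ["gene", "cell", "enzyme"]]
labels = ["math", "math", "bio", "bio"]
vocab = df_threshold(docs, min_df=2)
nb = UpdateableNB(vocab)
for doc, label in zip(docs, labels):
    nb.update(doc, label)
```

    Because training is a sequence of update() calls, new labelled questions can be folded in later without retraining from scratch, which is the point of the updateable variant.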

  13. An adaptive optimal ensemble classifier via bagging and rank aggregation with applications to high dimensional data

    Directory of Open Access Journals (Sweden)

    Datta Susmita

    2010-08-01

    Full Text Available Abstract Background Generally speaking, different classifiers tend to work well for certain types of data and, conversely, it is usually not known a priori which algorithm will be optimal in any given classification application. In addition, for most classification problems, selecting the best performing classification algorithm amongst a number of competing algorithms is a difficult task for various reasons; for example, the order of performance may depend on the performance measure employed for such a comparison. In this work, we present a novel adaptive ensemble classifier constructed by combining bagging and rank aggregation that is capable of adaptively changing its performance depending on the type of data that is being classified. The attractive feature of the proposed classifier is its multi-objective nature: the classification results can be simultaneously optimized with respect to several performance measures, for example accuracy, sensitivity, and specificity. We also show that our somewhat complex strategy has better predictive performance, as judged on test samples, than a more naive approach that attempts to directly identify the optimal classifier based on the training-data performances of the individual classifiers. Results We illustrate the proposed method with two simulated and two real-data examples. In all cases, the ensemble classifier performs at the level of the best individual classifier comprising the ensemble, or better. Conclusions For complex high-dimensional datasets resulting from present-day high-throughput experiments, it may be wise to consider a number of classification algorithms combined with dimension reduction techniques rather than a fixed standard algorithm set a priori.
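    The multi-objective selection step, ranking candidate classifiers by several performance measures and aggregating the ranks, can be illustrated with a Borda-count aggregation. The classifier names and scores below are invented, and Borda count is only one possible aggregation rule; the paper's own scheme may differ:

```python
def rank_by(measure, results):
    """Rank classifier names by one performance measure, best first."""
    return sorted(results, key=lambda name: results[name][measure], reverse=True)

def borda_aggregate(rankings):
    """Aggregate several rankings (best first) by Borda count: a classifier
    earns (n - position) points from each ranking; highest total wins."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, name in enumerate(ranking):
            scores[name] = scores.get(name, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical (e.g. bagged) performance estimates for three classifiers.
results = {
    "svm": {"accuracy": 0.90, "sensitivity": 0.80, "specificity": 0.95},
    "nb":  {"accuracy": 0.85, "sensitivity": 0.92, "specificity": 0.70},
    "knn": {"accuracy": 0.88, "sensitivity": 0.85, "specificity": 0.90},
}
rankings = [rank_by(m, results) for m in ("accuracy", "sensitivity", "specificity")]
best = borda_aggregate(rankings)[0]
```

    No single classifier wins on every measure here (nb has the best sensitivity), which is exactly the situation where rank aggregation gives a defensible overall choice.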

  14. Reading the Surface: Body Language and Surveillance

    Directory of Open Access Journals (Sweden)

    Mark Andrejevic

    2010-03-01

    Full Text Available This article explores the role played by body language in recent examples of popular culture and political news coverage as a means of highlighting the potentially deceptive character of speech and promising to bypass it altogether. It situates the promise of "visceral literacy" - the alleged ability to read inner emotions and dispositions - within emerging surveillance practices and the landscapes of risk they navigate. At the same time, it describes portrayals of body language analysis as characteristic of an emerging genre of "securitainment" that instructs viewers in monitoring techniques as it entertains and informs them. Body language ends up caught in the symbolic impasse it sought to avoid: as soon as it is portrayed as a language that can be learned and consciously "spoken" it falls prey to the potential for deceit. The article's conclusion considers the way in which emerging technologies attempt to address this impasse, bypassing the attempt to infer underlying signification altogether.

  15. USBcat - Towards an Intrusion Surveillance Toolset

    Directory of Open Access Journals (Sweden)

    Chris Chapman

    2014-10-01

    Full Text Available This paper identifies an intrusion surveillance framework which provides an analyst with the ability to investigate and monitor cyber-attacks in a covert manner. Where cyber-attacks are perpetrated for the purposes of espionage, the ability to understand an adversary's techniques and objectives is an important element in network and computer security. With the appropriate toolset, security investigators would be permitted to perform both live and stealthy counter-intelligence operations by observing the behaviour and communications of the intruder. Subsequently, a more complete picture of the attacker's identity, objectives, capabilities, and infiltration could be formulated than is possible with present technologies. This research focused on developing an extensible framework to permit the covert investigation of malware. Additionally, a Universal Serial Bus (USB) Mass Storage Device (MSD) based covert channel was designed to enable remote command and control of the framework. The work was validated through the design, implementation and testing of a toolset.

  16. Informational Text and the CCSS

    Science.gov (United States)

    Aspen Institute, 2012

    2012-01-01

    What constitutes an informational text covers a broad swath of different types of texts. Biographies & memoirs, speeches, opinion pieces & argumentative essays, and historical, scientific or technical accounts of a non-narrative nature are all included in what the Common Core State Standards (CCSS) envisions as informational text. Also included…

  17. The Only Safe SMS Texting Is No SMS Texting.

    Science.gov (United States)

    Toth, Cheryl; Sacopulos, Michael J

    2015-01-01

    Many physicians and practice staff use short messaging service (SMS) text messaging to communicate with patients. But SMS text messaging is unencrypted, insecure, and does not meet HIPAA requirements. In addition, the short and abbreviated nature of text messages creates opportunities for misinterpretation, and can negatively impact patient safety and care. Until recently, asking patients to sign a statement that they understand and accept these risks--as well as having policies, device encryption, and cyber insurance in place--would have been enough to mitigate the risk of using SMS text in a medical practice. But new trends and policies have made SMS text messaging unsafe under any circumstance. This article explains these trends and policies, as well as why only secure texting or secure messaging should be used for physician-patient communication.

  18. HIV surveillance in complex emergencies.

    Science.gov (United States)

    Salama, P; Dondero, T J

    2001-04-01

    Many studies have shown a positive association between both migration and temporary expatriation and HIV risk. This association is likely to be similar or even more pronounced for forced migrants. In general, HIV transmission in host-migrant or host-forced-migrant interactions depends on the maturity of the HIV epidemic in both the host and the migrant population, the relative seroprevalence of HIV in the host and the migrant population, the prevalence of other sexually transmitted infections (STIs) that may facilitate transmission, and the level of sexual interaction between the two communities. Complex emergencies are the major cause of mass population movement today. In complex emergencies, additional factors such as sexual interaction between forced-migrant populations and the military; sexual violence; increasing commercial sex work; psychological trauma; and disruption of preventive and curative health services may increase the risk for HIV transmission. Despite recent success in preventing HIV infection in stable populations in selected developing countries, internally displaced persons and refugees (or forced migrants) have not been systematically included in HIV surveillance systems, nor consequently in prevention activities. Standard surveillance systems that rely on functioning health services may not provide useful data in many complex emergency settings. Secondary sources can provide some information in these settings. Little attempt has been made, however, to develop innovative HIV surveillance systems in countries affected by complex emergencies. Consequently, data on the HIV epidemic in these countries are scarce and HIV prevention programs are either not implemented or interventions are not effectively targeted. Second generation surveillance methods such as cross-sectional, population-based surveys can provide rapid information on HIV, STIs, and sexual behavior. The risks for stigmatization and breaches of confidentiality must be recognized

  19. Predicting Prosody from Text for Text-to-Speech Synthesis

    CERN Document Server

    Rao, K Sreenivasa

    2012-01-01

    Predicting Prosody from Text for Text-to-Speech Synthesis covers the specific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.

  20. Monitoring interaction and collective text production through text mining

    Directory of Open Access Journals (Sweden)

    Macedo, Alexandra Lorandi

    2014-04-01

    Full Text Available This article presents the Concepts Network tool, developed using text mining technology. The main objective of this tool is to extract the terms of greatest incidence from a text, relate them, and exhibit the results in the form of a graph. The Network was implemented in the Collective Text Editor (CTE), an online tool that allows the production of texts in synchronized or non-synchronized forms. This article describes the application of the Network both to texts produced collectively and to texts produced in a forum. The purpose of the tool is to support the teacher in managing the high volume of data generated in the process of interaction amongst students and in the construction of the text. Specifically, the aim is to facilitate the teacher's job by allowing him/her to process data in a shorter time than is currently demanded. The results suggest that the Concepts Network can aid the teacher, as it provides indicators of the quality of the text produced. Moreover, messages posted in forums can be analyzed without their content necessarily having to be pre-read.
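    The core extraction step described above, finding the highest-incidence terms and relating those that co-occur, can be sketched as a term co-occurrence graph. This is illustrative only: the CTE's actual text-mining pipeline is not specified in the abstract, and the sample texts are invented:

```python
import re
from collections import Counter
from itertools import combinations

def concepts_network(texts, top_n=3):
    """Build a co-occurrence graph: nodes are the top_n most frequent terms,
    edge weights count how often two node terms appear in the same text.
    Edge keys are alphabetically ordered term pairs."""
    tokens = [re.findall(r"[a-z]+", t.lower()) for t in texts]
    freq = Counter(t for doc in tokens for t in doc)
    nodes = {t for t, _ in freq.most_common(top_n)}
    edges = Counter()
    for doc in tokens:
        present = sorted(set(doc) & nodes)
        edges.update(combinations(present, 2))
    return nodes, edges

texts = ["text mining extracts terms",
         "text mining relates terms",
         "graphs show terms"]
nodes, edges = concepts_network(texts)
```

    A real deployment would at least strip stop words and stem the tokens before counting; the resulting `edges` counter is what a graph-drawing layer would render.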

  1. Text recycling: acceptable or misconduct?

    Science.gov (United States)

    Harriman, Stephanie; Patel, Jigisha

    2014-08-16

    Text recycling, also referred to as self-plagiarism, is the reproduction of an author's own text from a previous publication in a new publication. Opinions on the acceptability of this practice vary, with some viewing it as acceptable and efficient, and others as misleading and unacceptable. In light of the lack of consensus, journal editors often have difficulty deciding how to act upon the discovery of text recycling. In response to these difficulties, we have created a set of guidelines for journal editors on how to deal with text recycling. In this editorial, we discuss some of the challenges of developing these guidelines, and how authors can avoid undisclosed text recycling.

  2. TEXT DEIXIS IN NARRATIVE SEQUENCES

    Directory of Open Access Journals (Sweden)

    Josep Rivera

    2007-06-01

    Full Text Available This study looks at demonstrative descriptions, regarding them as text-deictic procedures which contribute to weave discourse reference. Text deixis is thought of as a metaphorical referential device which maps the ground of utterance onto the text itself. Demonstrative expressions with textual antecedent-triggers, considered the most important text-deictic units, are identified in a narrative corpus consisting of J. M. Barrie’s Peter Pan and its translation into Catalan. Some linguistic and discourse variables related to DemNPs are analysed to characterise text deixis adequately. It is shown that this referential device is usually combined with abstract nouns, thus categorising and encapsulating (non-nominal) complex discourse entities as nouns, while performing a referential cohesive function by means of the text deixis + general noun type of lexical cohesion.

  3. Methods of nutrition surveillance in low-income countries

    Directory of Open Access Journals (Sweden)

    Veronica Tuffrey

    2016-03-01

    Full Text Available Abstract Background In 1974 a joint FAO/UNICEF/WHO Expert Committee met to develop methods for nutrition surveillance. There has been much interest and activity in this topic since then, however there is a lack of guidance for practitioners and confusion exists around the terminology of nutrition surveillance. In this paper we propose a classification of data collection activities, consider the technical issues for each category, and examine the potential applications and challenges related to information and communication technology. Analysis There are three major approaches used to collect primary data for nutrition surveillance: repeated cross-sectional surveys; community-based sentinel monitoring; and the collection of data in schools. There are three major sources of secondary data for surveillance: from feeding centres, health facilities, and community-based data collection, including mass screening for malnutrition in children. Surveillance systems involving repeated surveys are suitable for monitoring and comparing national trends and for planning and policy development. To plan at a local level, surveys at district level or in programme implementation areas are ideal, but given the usually high cost of primary data collection, data obtained from health systems are more appropriate provided they are interpreted with caution and with contextual information. For early warning, data from health systems and sentinel site assessments may be valuable, if consistent in their methods of collection and any systematic bias is deemed to be steady. For evaluation purposes, surveillance systems can only give plausible evidence of whether a programme is effective. However the implementation of programmes can be monitored as long as data are collected on process indicators such as access to, and use of, services. Surveillance systems also have an important role to provide information that can be used for advocacy and for promoting accountability for

  4. Comparing the Effects of Four Instructional Treatments on EFL Students' Achievement in Writing Classified Ads

    Science.gov (United States)

    Khodabandeh, Farzaneh

    2016-01-01

    The current study set out to compare the effect of traditional and non-traditional instructional treatments; i.e. explicit, implicit, task-based and no-instruction approaches on students' abilities to learn how to write classified ads. 72 junior students who have all taken a course in Reading Journalistic Texts at the Payame-Noor University…

  5. Ship detection in South African oceans using SAR, CFAR and a Haar-like feature classifier

    CSIR Research Space (South Africa)

    Schwegmann, CP

    2014-07-01

    Full Text Available Presented at the 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec, Canada, 13-18 July 2014: "Ship Detection in South African Oceans Using SAR, CFAR and a Haar-like Feature Classifier", C. P. Schwegmann, W. Kleynhans, B. P. Salmon...

  6. Text against Text: Counterbalancing the Hegemony of Assessment.

    Science.gov (United States)

    Cosgrove, Cornelius

    A study examined whether composition specialists can counterbalance the potential privileging of the assessment perspective, or of self-appointed interpreters of that perspective, through the study of assessment discourse as text. Fourteen assessment texts were examined, most of them journal articles and most of them featuring the common…

  7. 18 CFR 3a.12 - Authority to classify official information.

    Science.gov (United States)

    2010-04-01

    ... efficient administration. (b) The authority to classify information or material originally as Top Secret is... classify information or material originally as Secret is exercised only by: (1) Officials who have Top... information or material originally as Confidential is exercised by officials who have Top Secret or Secret...

  8. Using Neural Networks to Classify Digitized Images of Galaxies

    Science.gov (United States)

    Goderya, S. N.; McGuire, P. C.

    2000-12-01

    Automated classification of galaxies into Hubble types is of paramount importance for studying the large-scale structure of the Universe, particularly as survey projects like the Sloan Digital Sky Survey complete their data acquisition of one million galaxies. At present it is not possible to find robust and efficient artificial-intelligence-based galaxy classifiers. In this study we summarize progress made in the development of automated galaxy classifiers using neural networks as machine learning tools. We explore the Bayesian linear algorithm, the higher-order probabilistic network, the multilayer perceptron neural network, and the Support Vector Machine classifier. The performance of any machine classifier is dependent on the quality of the parameters that characterize the different groups of galaxies. Our effort is to develop geometric and invariant-moment-based parameters as input to the machine classifiers instead of the raw pixel data. Such an approach reduces the dimensionality of the classifier considerably, removes the effects of scaling and rotation, and makes it easier to solve for the unknown parameters in the galaxy classifier. To judge the quality of training and classification we develop the concept of Matthews coefficients for the galaxy classification community. Matthews coefficients are single numbers that quantify classifier performance even with unequal prior probabilities of the classes.
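    A single-number quality measure of the kind described, widely known in binary form as the Matthews correlation coefficient, can be computed directly from confusion-matrix counts; +1 is perfect agreement, 0 is chance level, -1 total disagreement. The galaxy counts below are invented for illustration:

```python
import math

def matthews_coefficient(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Unlike plain accuracy, it stays informative under unequal class priors."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# A classifier that labels everything "spiral" on a 90/10 imbalanced sample
# scores 90% accuracy but an MCC of 0 -- the coefficient exposes it.
mcc_trivial = matthews_coefficient(tp=90, tn=0, fp=10, fn=0)
mcc_perfect = matthews_coefficient(tp=90, tn=10, fp=0, fn=0)
```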

  9. Fisher classifier and its probability of error estimation

    Science.gov (United States)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
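The Fisher classifier the abstract analyzes projects patterns onto the direction w = Sw⁻¹(μ1 − μ0) and thresholds the projection. Below is a minimal two-class sketch; the midpoint threshold is an illustrative stand-in for the optimal threshold the paper derives, and the data are synthetic.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's discriminant direction w = Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    return np.linalg.solve(Sw, mu1 - mu0)

def classify(X, w, threshold):
    """Label 1 when the projection onto w exceeds the threshold."""
    return (X @ w > threshold).astype(int)

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 1.0, size=(200, 2))
X1 = rng.normal([4, 4], 1.0, size=(200, 2))
w = fisher_direction(X0, X1)
# Midpoint of the projected class means as a simple threshold:
t = (X0.mean(axis=0) @ w + X1.mean(axis=0) @ w) / 2
acc = (classify(X0, w, t) == 0).mean() / 2 + (classify(X1, w, t) == 1).mean() / 2
```

On well-separated Gaussian classes like these, the projected classes barely overlap and the accuracy approaches 1.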

  10. Performance of classification confidence measures in dynamic classifier systems

    Czech Academy of Sciences Publication Activity Database

    Štefka, D.; Holeňa, Martin

    2013-01-01

    Roč. 23, č. 4 (2013), s. 299-319 ISSN 1210-0552 R&D Projects: GA ČR GA13-17187S Institutional support: RVO:67985807 Keywords : classifier combining * dynamic classifier systems * classification confidence Subject RIV: IN - Informatics, Computer Science Impact factor: 0.412, year: 2013

  11. 32 CFR 2400.30 - Reproduction of classified information.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Reproduction of classified information. 2400.30... SECURITY PROGRAM Safeguarding § 2400.30 Reproduction of classified information. Documents or portions of... the originator or higher authority. Any stated prohibition against reproduction shall be strictly...

  12. Classifying spaces with virtually cyclic stabilizers for linear groups

    DEFF Research Database (Denmark)

    Degrijse, Dieter Dries; Köhl, Ralf; Petrosyan, Nansen

    2015-01-01

    We show that every discrete subgroup of GL(n, ℝ) admits a finite-dimensional classifying space with virtually cyclic stabilizers. Applying our methods to SL(3, ℤ), we obtain a four-dimensional classifying space with virtually cyclic stabilizers and a decomposition of the algebraic K-theory of its...

  13. Dynamic integration of classifiers in the space of principal components

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Puuronen, S.; Patterson, D.W.; Kalinichenko, L.A.; Manthey, R.; Thalheim, B.; Wloka, U.

    2003-01-01

    Recent research has shown the integration of multiple classifiers to be one of the most important directions in machine learning and data mining. It was shown that, for an ensemble to be successful, it should consist of accurate and diverse base classifiers. However, it is also important that the

  14. Single-Pol Synthetic Aperture Radar Terrain Classification using Multiclass Confidence for One-Class Classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Mark William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Steinbach, Ryan Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Moya, Mary M [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

Synthetic aperture radar (SAR) is a remote sensing technology that can operate day or night in all but the most extreme conditions. A SAR can provide surveillance over a long time period by making multiple passes over a wide area. For object-based intelligence it is convenient to segment and classify the SAR images into objects that identify various terrains and man-made structures that we call "static features." In this paper we introduce a novel SAR image product that captures how different regions decorrelate at different rates. Using superpixels and their first two moments we develop a series of one-class classification algorithms using a goodness-of-fit metric. P-value fusion is used to combine the results from different classes. We also show how to combine multiple one-class classifiers to obtain a confidence about a classification, which can be used by downstream algorithms such as a conditional random field to enforce spatial constraints.
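One standard way to fuse p-values from independent one-class tests, as the abstract describes, is Fisher's method; whether this is the exact combiner the authors use is not stated, so treat the sketch below as one plausible instantiation.

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom under
    the null; for even df the survival function has the closed form
    exp(-X/2) * sum_{i<k} (X/2)**i / i!."""
    k = len(pvalues)
    half = -sum(math.log(p) for p in pvalues)   # X / 2
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# A single p-value combines to itself; three weak detections of 0.1
# reinforce one another to roughly 0.032.
combined = fisher_combine([0.1, 0.1, 0.1])
```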

  15. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    International Nuclear Information System (INIS)

    Blanco, A; Rodriguez, R; Martinez-Maranon, I

    2014-01-01

Mackerel is an undervalued fish captured by European fishing vessels. A way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbours (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity

  16. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    Science.gov (United States)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

Mackerel is an undervalued fish captured by European fishing vessels. A way to add value to this species is to classify it according to its sex. Colour measurements were performed on gonads extracted from Mackerel females and males (fresh and defrosted) to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbours (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances, which fail to reflect accurately the sample proximities. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.
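The idea of an ensemble whose diversity comes from complementary dissimilarities can be sketched with 1-NN members voting under different metrics; the three metrics below are illustrative stand-ins for whatever dissimilarity set the paper actually uses.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, dissim):
    """1-NN prediction under an arbitrary dissimilarity function."""
    return np.array([y_train[np.argmin([dissim(x, t) for t in X_train])]
                     for x in X_test])

# Three complementary dissimilarities (stand-ins for the paper's set):
euclidean = lambda a, b: np.linalg.norm(a - b)
manhattan = lambda a, b: np.abs(a - b).sum()
chebyshev = lambda a, b: np.abs(a - b).max()

def ensemble_predict(X_train, y_train, X_test):
    """Majority vote over the dissimilarity-specific 1-NN members
    (labels assumed to be 0/1)."""
    votes = np.stack([knn_predict(X_train, y_train, X_test, d)
                      for d in (euclidean, manhattan, chebyshev)])
    return (votes.mean(axis=0) > 0.5).astype(int)
```

Each member misclassifies a different set of patterns, so the vote can correct errors that any single dissimilarity makes.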

  17. Just-in-time classifiers for recurrent concepts.

    Science.gov (United States)

    Alippi, Cesare; Boracchi, Giacomo; Roveri, Manuel

    2013-04-01

    Just-in-time (JIT) classifiers operate in evolving environments by classifying instances and reacting to concept drift. In stationary conditions, a JIT classifier improves its accuracy over time by exploiting additional supervised information coming from the field. In nonstationary conditions, however, the classifier reacts as soon as concept drift is detected; the current classification setup is discarded and a suitable one activated to keep the accuracy high. We present a novel generation of JIT classifiers able to deal with recurrent concept drift by means of a practical formalization of the concept representation and the definition of a set of operators working on such representations. The concept-drift detection activity, which is crucial in promptly reacting to changes exactly when needed, is advanced by considering change-detection tests monitoring both inputs and classes distributions.
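The change-detection activity at the heart of JIT classifiers can be illustrated with a deliberately simple window-based error-rate monitor; the class name, window size and thresholds below are illustrative, not the paper's actual tests (which monitor both input and class distributions).

```python
from collections import deque

class SimpleDriftMonitor:
    """Window-based error-rate monitor: a toy stand-in for the paper's
    change-detection tests."""
    def __init__(self, window=50, baseline=0.1, tolerance=0.15):
        self.errors = deque(maxlen=window)
        self.baseline = baseline
        self.tolerance = tolerance

    def update(self, correct):
        """Feed one prediction outcome; return True when drift is flagged."""
        self.errors.append(0 if correct else 1)
        full = len(self.errors) == self.errors.maxlen
        rate = sum(self.errors) / len(self.errors)
        return full and rate > self.baseline + self.tolerance

monitor = SimpleDriftMonitor()
# 100 mostly-correct outcomes, then a concept change that halves accuracy:
stream = [True] * 100 + [i % 2 == 0 for i in range(100)]
flags = [monitor.update(c) for c in stream]
```

A JIT classifier would react to the first True flag by discarding the current classification setup and activating one that matches the new (or recurrent) concept.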

  18. Pedoinformatics Approach to Soil Text Analytics

    Science.gov (United States)

    Furey, J.; Seiter, J.; Davis, A.

    2017-12-01

The several extant schemata for the classification of soils rely on differing criteria, but the major soil science taxonomies, including the United States Department of Agriculture (USDA) and the international harmonized World Reference Base for Soil Resources systems, are based principally on inferred pedogenic properties. These taxonomies largely result from compiled individual observations of soil morphologies within soil profiles, and the vast majority of this pedologic information is contained in qualitative text descriptions. We present text mining analyses of hundreds of gigabytes of parsed text and other data in the digitally available USDA soil taxonomy documentation, the Soil Survey Geographic (SSURGO) database, and the National Cooperative Soil Survey (NCSS) soil characterization database. These analyses implemented IPython calls to Gensim modules for topic modelling, with latent semantic indexing completed down to the paragraphs of the lowest taxon level (soil series). Via a custom extension of the Natural Language Toolkit (NLTK), approximately one percent of the USDA soil series descriptions were used to train a classifier for the remainder of the documents, essentially by treating soil science words as comprising a novel language. While location-specific descriptors at the soil series level are amenable to geomatics methods, unsupervised clustering of the occurrence of other soil science words did not closely follow the usual hierarchy of soil taxa. We present preliminary phrasal analyses that may account for some of these effects.
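The latent semantic indexing step described above amounts to a truncated SVD of a term-document matrix. To keep this sketch dependency-light it uses NumPy's SVD rather than Gensim's `LsiModel`, and the soil-series vocabulary and counts are made up for illustration.

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = soil-series paragraphs.
terms = ["loamy", "sandy", "drainage", "horizon", "clay"]
docs = np.array([
    [2, 0, 1],   # hypothetical term counts in three series descriptions
    [0, 3, 0],
    [1, 0, 2],
    [1, 1, 1],
    [0, 2, 0],
], dtype=float)

# Latent semantic indexing = truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T   # documents in k-dim topic space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 0 and 2 share "loamy"/"drainage" vocabulary, so they should sit
# closer together in topic space than either does to the "sandy" document 1.
sim_02 = cos(doc_topics[0], doc_topics[2])
sim_01 = cos(doc_topics[0], doc_topics[1])
```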

  19. Portable digital video surveillance system for monitoring flower-visiting bumblebees

    Directory of Open Access Journals (Sweden)

    Thorsdatter Orvedal Aase, Anne Lene

    2011-08-01

Full Text Available In this study we used a portable event-triggered video surveillance system for monitoring flower-visiting bumblebees. The system consists of a mini digital video recorder (mini-DVR) with a video motion detection (VMD) sensor which detects changes in the image captured by the camera; an intruder triggers the recording immediately. The sensitivity and the detection area are adjustable, which may prevent unwanted recordings. To the best of our knowledge this is the first study using a VMD sensor to monitor flower-visiting insects. Flower-visiting insects have traditionally been monitored by direct observation, which is time-consuming, or by continuous video monitoring, which demands a great effort in reviewing the material. A total of 98.5 monitoring hours were conducted. For the mini-DVR with VMD, a total of 35 min were spent reviewing the recordings to locate 75 pollinators, i.e. ca. 0.35 s of review per monitoring hour. Most pollinators in the order Hymenoptera were identified to species or group level; some were only classified to family (Apidae) or genus (Bombus). The use of the video monitoring system described in the present paper could result in more efficient data sampling and reveal new knowledge in pollination ecology (e.g. species identification and pollinating behaviour).
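The VMD principle, triggering only when enough pixels change between consecutive frames, can be sketched as a frame-differencing check. This is a generic illustration of how such sensors typically work, not the commercial sensor's algorithm; both thresholds are the adjustable parameters the abstract mentions.

```python
import numpy as np

def motion_detected(prev, curr, pixel_thresh=25, area_frac=0.01):
    """Frame-differencing motion detection in the spirit of a VMD sensor:
    trigger when more than area_frac of the pixels change by more than
    pixel_thresh grey levels between consecutive frames."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > pixel_thresh
    return changed.mean() > area_frac

# A bumblebee entering the top of an otherwise static scene:
prev = np.zeros((10, 10), dtype=np.uint8)
curr = prev.copy()
curr[:2, :] = 200
```

Raising `pixel_thresh` or `area_frac` makes the trigger less sensitive, which is how unwanted recordings (e.g. from moving shadows) can be suppressed.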

  20. Knowledge Representation in Travelling Texts

    DEFF Research Database (Denmark)

    Mousten, Birthe; Locmele, Gunta

    2014-01-01

Today, information travels fast. Texts travel, too. In a corporate context, the question is how to manage which knowledge elements should travel to a new language area or market and in which form? The decision to let knowledge elements travel or not travel highly depends on the limitation and the purpose of the text in a new context as well as on predefined parameters for text travel. For texts used in marketing and in technology, the question is whether culture-bound knowledge representation should be domesticated or kept as foreign elements, or should be mirrored or moulded, or should not travel at all! When should semantic and pragmatic elements in a text be replaced, and by which other elements? The empirical basis of our work is marketing and technical texts in English, which travel into the Latvian and Danish markets, respectively.

  1. Texting while driving: is speech-based text entry less risky than handheld text entry?

    Science.gov (United States)

    He, J; Chaparro, A; Nguyen, B; Burge, R J; Crandall, J; Chaparro, B; Ni, R; Cao, S

    2014-11-01

    Research indicates that using a cell phone to talk or text while maneuvering a vehicle impairs driving performance. However, few published studies directly compare the distracting effects of texting using a hands-free (i.e., speech-based interface) versus handheld cell phone, which is an important issue for legislation, automotive interface design and driving safety training. This study compared the effect of speech-based versus handheld text entries on simulated driving performance by asking participants to perform a car following task while controlling the duration of a secondary text-entry task. Results showed that both speech-based and handheld text entries impaired driving performance relative to the drive-only condition by causing more variation in speed and lane position. Handheld text entry also increased the brake response time and increased variation in headway distance. Text entry using a speech-based cell phone was less detrimental to driving performance than handheld text entry. Nevertheless, the speech-based text entry task still significantly impaired driving compared to the drive-only condition. These results suggest that speech-based text entry disrupts driving, but reduces the level of performance interference compared to text entry with a handheld device. In addition, the difference in the distraction effect caused by speech-based and handheld text entry is not simply due to the difference in task duration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Classifying and explaining democracy in the Muslim world

    Directory of Open Access Journals (Sweden)

    Rohaizan Baharuddin

    2012-12-01

Full Text Available The purpose of this study is to classify and explain democracies in the 47 Muslim countries between the years 1998 and 2008, using liberties and elections as independent variables. Focusing specifically on the context of the Muslim world, this study examines the performance of civil liberties and elections, the variation of democracy practised the most, and the patterns of elections, civil liberties and democratic transitions that followed. Based on quantitative data primarily collected from Freedom House, this study demonstrates the following aggregate findings: first, "not free not fair" elections, "limited" civil liberties and "Illiberal Partial Democracy" were the dominant features of elections, civil liberties and democracy practised in the Muslim world; second, a total of 413 Muslim regimes out of 470 (47 regimes x 10 years) remained at their democratic origin points, without any transition to a better or worse level of democracy, throughout these 10 years; and third, a slow yet steady positive transition of both elections and civil liberties occurred in the Muslim world, with changes in the nature of elections becoming much more progressive compared to the civil liberties' transitions.

  3. Molecular Characteristics in MRI-classified Group 1 Glioblastoma Multiforme

    Directory of Open Access Journals (Sweden)

    William E Haskins

    2013-07-01

Full Text Available Glioblastoma multiforme (GBM) is a clinically and pathologically heterogeneous brain tumor. A previous study of MRI-classified GBM revealed a spatial relationship between Group 1 GBM (GBM1) and the subventricular zone (SVZ). The SVZ is an adult neural stem cell niche and is also suspected to be the origin of a subtype of brain tumor. The intimate contact between GBM1 and the SVZ raises the possibility that tumor cells in GBM1 may be most closely related to SVZ cells. In support of this notion, we found that neural stem cell and neuroblast markers are highly expressed in GBM1. Additionally, we identified molecular characteristics of this type of GBM that include up-regulation of metabolic enzymes, ribosomal proteins, heat shock proteins, and the c-Myc oncoprotein. As GBM1 often recurs at great distances from the initial lesion, the rewiring of metabolism and ribosomal biogenesis may facilitate cancer cells' growth and survival during tumor migration. Taken together, our findings combined with MRI-based classification of GBM1 could offer better prediction and treatment for this multifocal GBM.

  4. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

Full Text Available This paper develops a novel face recognition technique called the Complete Gabor-Fisher Classifier (CGFC). Unlike existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
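The magnitude and phase features the CGFC combines both come from filtering with a complex Gabor kernel. The sketch below builds such a kernel and evaluates a single-location response; the kernel parameters are illustrative defaults, and the random "patch" stands in for a face crop (a real pipeline would convolve a filter bank over the whole image).

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex Gabor kernel: a Gaussian envelope times a complex sinusoid.
    Magnitude and phase responses come from filtering with this kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

# Response at one location of a (random, stand-in) image patch:
rng = np.random.default_rng(5)
patch = rng.standard_normal((21, 21))
response = (gabor_kernel() * patch).sum()
magnitude, phase = np.abs(response), np.angle(response)
```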

  5. Classifying Taiwan Lianas with Radiating Plates of Xylem

    Directory of Open Access Journals (Sweden)

    Sheng-Zehn Yang

    2015-12-01

Full Text Available Radiating plates of xylem are a cambial variant in lianas; 22 families exhibit this feature. This study investigates 15 liana species, representing nine families, with radiating plates of xylem. The features of the transverse section and epidermis in fresh liana samples are documented, including shapes and colors of xylem and phloem, ray width and numbers, and skin morphology. The results indicated that the shape of the phloem fibers in Ampelopsis brevipedunculata var. hancei is gradually tapered and flame-like, in contrast with the other characteristics of this type, including those classified as rays. Both inner and outer cylinders of vascular bundles are found in Piper kwashoense, and the irregular inner cylinder persists yet gradually diminishes. Red crystals are numerous in the cortex of Celastrus kusanoi. Aristolochia shimadai and A. zollingeriana develop a combination of two cambial variants, radiating plates of xylem and a lobed xylem. The shape of the phloem in Stauntonia obovatifoliola is square or truncate, and its rays are numerous, while that of Neoalsomitra integrifolia is blunt and its rays are fewer. As for stem-surface features within the same family, Cyclea ochiaiana is brownish with a deep vertical depression and lenticels, whereas Pericampylus glaucus is greenish with a shallow vertical depression. Within the same genus, Aristolochia shimadai develops lenticels, which are absent in A. zollingeriana; and although the periderm of Clematis grata is a ring bark and tears easily, that of Clematis tamura is thick and soft.

  6. Passive Sonar Target Detection Using Statistical Classifier and Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Hamed Komari Alaie

    2018-01-01

Full Text Available This paper presents the results of an experimental investigation of target detection with passive sonar in the Persian Gulf. Detecting propagated sounds in the water is one of the basic challenges for researchers in the sonar field. This challenge becomes more complex in shallow water (like the Persian Gulf) and with quiet vessels. Generally, in passive sonar, targets are detected by the sonar equation (with a constant threshold), which increases the detection error in shallow water. The purpose of this study is to propose a new method for detecting targets in passive sonars using an adaptive threshold. In this method, the target signal (sound) is processed in the time and frequency domains. For classification, Bayesian classification is used and the posterior distribution is estimated by the Maximum Likelihood Estimation algorithm. Finally, the target is detected by combining the detection points in both domains using a Least Mean Square (LMS) adaptive filter. The results of this paper show that the proposed method improved the true detection rate by about 24% when compared with the best other detection methods.
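The LMS adaptive filter used in the final combination step can be sketched in a few lines. The example below applies it to the classic system-identification setting (recovering an unknown FIR filter from input/output pairs), which illustrates the update rule without claiming to reproduce the paper's detector; the tap count and step size are illustrative.

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.01):
    """Least-mean-squares adaptive filter: adapts weights w so that the
    dot product of w with the recent input window tracks d[n]."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]   # x[n], x[n-1], ...
        e = d[n] - w @ window                  # instantaneous error
        w += 2 * mu * e * window               # stochastic gradient step
    return w

# Identify an "unknown" 4-tap FIR system from input/output pairs:
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h = np.array([0.5, -0.3, 0.2, 0.1])           # the unknown system
d = np.convolve(x, h)[:len(x)]                # desired signal
w = lms_filter(x, d)                          # w converges toward h
```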

  7. An Informed Framework for Training Classifiers from Social Media

    Directory of Open Access Journals (Sweden)

    Dong Seon Cheng

    2016-04-01

    Full Text Available Extracting information from social media has become a major focus of companies and researchers in recent years. Aside from the study of the social aspects, it has also been found feasible to exploit the collaborative strength of crowds to help solve classical machine learning problems like object recognition. In this work, we focus on the generally underappreciated problem of building effective datasets for training classifiers by automatically assembling data from social media. We detail some of the challenges of this approach and outline a framework that uses expanded search queries to retrieve more qualified data. In particular, we concentrate on collaboratively tagged media on the social platform Flickr, and on the problem of image classification to evaluate our approach. Finally, we describe a novel entropy-based method to incorporate an information-theoretic principle to guide our framework. Experimental validation against well-known public datasets shows the viability of this approach and marks an improvement over the state of the art in terms of simplicity and performance.
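The entropy-based guidance the abstract mentions rests on Shannon entropy over a distribution, e.g. the tags attached to retrieved media. The helper below is a generic illustration of that principle, not the paper's actual method.

```python
import math
from collections import Counter

def tag_entropy(tags):
    """Shannon entropy (in bits) of a tag distribution: one plausible
    information-theoretic signal for guiding expanded search queries."""
    counts = Counter(tags)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Uniformly split tags carry maximal uncertainty; a single repeated tag none.
high = tag_entropy(["dog", "dog", "cat", "cat"])   # 1.0 bit
low = tag_entropy(["dog", "dog", "dog"])           # 0.0 bits
```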

  8. Executed Movement Using EEG Signals through a Naive Bayes Classifier

    Directory of Open Access Journals (Sweden)

    Juliano Machado

    2014-11-01

Full Text Available Recent years have witnessed a rapid development of brain-computer interface (BCI) technology. An independent BCI is a communication system for controlling a device by human intention, e.g., a computer, a wheelchair or a neuroprosthesis, depending not on the brain's normal output pathways of peripheral nerves and muscles, but on detectable signals that represent responsive or intentional brain activities. This paper presents a comparative study of the linear discriminant analysis (LDA) and naive Bayes (NB) classifiers for describing both right- and left-hand movement through electroencephalographic (EEG) signal acquisition. For the analysis, we considered the following input features: the energy of the segments of a band-pass-filtered signal with the frequency band in sensorimotor rhythms, and the components of the spectral energy obtained through the Welch method. We also used the common spatial pattern (CSP) filter to increase the discriminatory activity among movement classes. Using the database generated by this experiment, we obtained hit rates up to 70%. The results are compatible with previous studies.
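A Gaussian naive Bayes classifier like the one compared above can be written from scratch in a few lines. This is a minimal sketch (equal class priors assumed), not the authors' pipeline, and the two "band-energy" features below are simulated rather than real EEG.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances,
    prediction by maximum log-likelihood (equal priors assumed)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        return self

    def predict(self, X):
        # log N(x; mu, var) summed over (conditionally independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None] - self.mu) ** 2 / self.var[None])
        return self.classes[ll.sum(axis=2).argmax(axis=1)]

# Simulated band-energy features for left- vs right-hand trials:
rng = np.random.default_rng(2)
left = rng.normal([5.0, 1.0], 0.5, size=(100, 2))
right = rng.normal([1.0, 5.0], 0.5, size=(100, 2))
X = np.vstack([left, right])
y = np.array([0] * 100 + [1] * 100)
acc = (GaussianNB().fit(X, y).predict(X) == y).mean()
```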

  9. Online Feature Selection for Classifying Emphysema in HRCT Images

    Directory of Open Access Journals (Sweden)

    M. Prasad

    2008-06-01

Full Text Available Feature subset selection, applied as a pre-processing step to machine learning, is valuable for dimensionality reduction, eliminating irrelevant data and improving classifier performance. In the classic formulation of the feature selection problem, it is assumed that all the features are available at the beginning. However, in many real-world problems, there are scenarios where not all features are present initially and must be integrated as they become available. In such scenarios, online feature selection provides an efficient way to sort through a large space of features. It is in this context that we introduce online feature selection for the classification of emphysema, a smoking-related disease that appears as low-attenuation regions in High Resolution Computed Tomography (HRCT) images. The technique was successfully evaluated on 61 HRCT scans and compared with different online feature selection approaches, including hill climbing, best-first search, grafting, and correlation-based feature selection. The results were also compared against the 'density mask', a standard approach used for emphysema detection in medical image analysis.
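Hill climbing, one of the baselines listed above, reduces in its simplest form to greedy forward selection: repeatedly add the feature that most improves a score. The sketch below uses a toy correlation criterion and synthetic data; it illustrates the search strategy only, not the paper's emphysema features.

```python
import numpy as np

def forward_select(X, y, evaluate, k=2):
    """Greedy (hill-climbing) forward feature selection: repeatedly add
    the single feature that most improves the evaluation score."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        scored = [(evaluate(X[:, selected + [f]], y), f) for f in remaining]
        best_score, best_f = max(scored)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy criterion: absolute correlation of the averaged features with the label.
def evaluate(Xs, y):
    return abs(np.corrcoef(Xs.mean(axis=1), y)[0, 1])

rng = np.random.default_rng(6)
y = np.repeat([0.0, 1.0], 50)
informative = y + 0.1 * rng.standard_normal(100)   # feature 1 tracks the label
noise = rng.standard_normal((100, 3))
X = np.column_stack([noise[:, 0], informative, noise[:, 1], noise[:, 2]])
picked = forward_select(X, y, evaluate, k=1)
```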

  10. Instance Selection for Classifier Performance Estimation in Meta Learning

    Directory of Open Access Journals (Sweden)

    Marcin Blachnik

    2017-11-01

Full Text Available Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time-consuming but can be accelerated with meta-learning: automatic model recommendation by estimating the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be informative enough to allow precise estimation of prediction accuracy. In this paper, a new type of metadata descriptor is analyzed. These descriptors are based on the compression level obtained from instance selection methods at the data-preprocessing stage. To verify their suitability, two types of experiments on real-world datasets have been conducted. In the first one, 11 instance selection methods were examined in order to validate the compression-accuracy relation for three classifiers: k-nearest neighbors (kNN), support vector machine (SVM), and random forest. From this analysis, two methods are recommended (instance-based learning type 2 (IB2) and edited nearest neighbor (ENN)), which are then compared with the state-of-the-art metaset descriptors. The obtained results confirm that the two suggested compression-based meta-features help to predict the accuracy of the base model much more accurately than the state-of-the-art solution.
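One of the two recommended methods, edited nearest neighbor (ENN, i.e. Wilson editing), and the compression level it produces can be sketched as follows; the data and label noise are synthetic, and this is an illustration of the idea rather than the paper's implementation.

```python
import numpy as np

def enn(X, y, k=3):
    """Edited nearest neighbour: keep only instances whose label agrees
    with the majority of their k nearest neighbours."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nn_labels = y[np.argsort(d)[:k]]
        if (nn_labels == y[i]).sum() > k / 2:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
y_noisy = y.copy()
y_noisy[:5] = 1                                # inject label noise
kept = enn(X, y_noisy)
compression = 1 - len(kept) / len(X)           # the compression meta-feature
```

The fraction of instances discarded (the compression level) is exactly the kind of cheap, dataset-level descriptor the paper feeds into its meta-learner.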

  11. Implementation of a classifier didactical machine for learning mechatronic processes

    Directory of Open Access Journals (Sweden)

    Alex De La Cruz

    2017-06-01

Full Text Available The present article shows the design and construction of a classifier didactic machine based on artificial vision, intended for use as a learning module for mechatronic processes. The project describes the theoretical aspects that relate concepts of mechanical design, electronic design and software management, which together constitute the popular field of science and technology known as mechatronics. The design of the machine was developed based on the requirements of the user, through the concurrent design methodology, to define and materialize the appropriate hardware and software solutions. LabVIEW 2015 was used for high-speed image acquisition and analysis, as well as for establishing data communication with a programmable logic controller (PLC) via Ethernet and an open communications platform known as Open Platform Communications (OPC). In addition, the Arduino MEGA 2560 platform was used to control the movement of the stepper motor and the servo motors of the module. Finally, we assessed whether the equipment meets the technical specifications raised by running specific test protocols.

  12. Salient Region Detection via Feature Combination and Discriminative Classifier

    Directory of Open Access Journals (Sweden)

    Deming Kong

    2015-01-01

Full Text Available We introduce a novel approach to detect salient regions of an image via feature combination and a discriminative classifier. Our method, which is based on hierarchical image abstraction, uses the logistic regression approach to map the regional feature vector to a saliency score. Four saliency cues are used in our approach: color contrast in a global context, center-boundary priors, spatially compact color distribution, and objectness, which is an atomic feature of a segmented region in the image. By mapping a four-dimensional regional feature to a fifteen-dimensional feature vector, we can linearly separate the salient regions from the cluttered background by finding an optimal linear combination of feature coefficients in the fifteen-dimensional feature space, and we finally fuse the saliency maps across multiple levels. Furthermore, we introduce the weighted salient image center into our saliency analysis task. Extensive experiments on two large benchmark datasets show that the proposed approach achieves the best performance over several state-of-the-art approaches.
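The core mapping, logistic regression from a regional feature vector to a saliency score, can be sketched with plain gradient descent. The four feature names and the synthetic data below are hypothetical stand-ins for the paper's cues.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Plain gradient-descent logistic regression: maps a regional
    feature vector to a saliency probability in [0, 1]."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))     # current predictions
        w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss
        b -= lr * (p - y).mean()
    return w, b

# Hypothetical regional features: [global contrast, center prior,
# color compactness, objectness]; salient regions score high on all four.
rng = np.random.default_rng(4)
salient = rng.uniform(0.6, 1.0, size=(50, 4))
background = rng.uniform(0.0, 0.4, size=(50, 4))
X = np.vstack([salient, background])
y = np.array([1.0] * 50 + [0.0] * 50)
w, b = train_logistic(X, y)
scores = 1 / (1 + np.exp(-(X @ w + b)))        # per-region saliency scores
```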

  13. Active Learning for Text Classification

    OpenAIRE

    Hu, Rong

    2011-01-01

Text classification approaches are used extensively to solve real-world challenges. The success or failure of text classification systems hinges on the datasets used to train them: without a good dataset it is impossible to build a quality system. This thesis examines the applicability of active learning in text classification for the rapid and economical creation of labelled training data. Four main contributions are made in this thesis. First, we present two novel selection strategies to cho...
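A common baseline selection strategy in active learning (the thesis's own strategies are not specified here) is uncertainty sampling: query labels for the pool examples the current classifier is least sure about. A minimal binary sketch:

```python
import numpy as np

def uncertainty_sample(probs, n=2):
    """Pool-based uncertainty sampling: pick the n unlabelled examples
    whose predicted positive-class probability is closest to 0.5."""
    uncertainty = -np.abs(np.asarray(probs) - 0.5)
    return np.argsort(uncertainty)[-n:][::-1].tolist()

# A hypothetical classifier's probabilities for five unlabelled texts:
probs = [0.95, 0.52, 0.10, 0.47, 0.88]
query = uncertainty_sample(probs, n=2)   # indices of 0.52 and 0.47
```

Labelling only the queried texts concentrates annotation effort where it changes the decision boundary most, which is exactly the "rapid and economical" labelling the abstract targets.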

  14. Text Mining Applications and Theory

    CERN Document Server

    Berry, Michael W

    2010-01-01

    Text Mining: Applications and Theory presents the state-of-the-art algorithms for text mining from both the academic and industrial perspectives.  The contributors span several countries and scientific domains: universities, industrial corporations, and government laboratories, and demonstrate the use of techniques from machine learning, knowledge discovery, natural language processing and information retrieval to design computational models for automated text analysis and mining. This volume demonstrates how advancements in the fields of applied mathematics, computer science, machine learning

  15. Surveillance and threat detection prevention versus mitigation

    CERN Document Server

    Kirchner, Richard

    2014-01-01

    Surveillance and Threat Detection offers readers a complete understanding of the terrorist/criminal cycle, and how to interrupt that cycle to prevent an attack. Terrorists and criminals often rely on pre-attack and pre-operational planning and surveillance activities that can last a period of weeks, months, or even years. Identifying and disrupting this surveillance is key to prevention of attacks. The systematic capture of suspicious events and the correlation of those events can reveal terrorist or criminal surveillance, allowing security professionals to employ appropriate countermeasures and identify the steps needed to apprehend the perpetrators. The results will dramatically increase the probability of prevention while streamlining protection assets and costs. Readers of Surveillance and Threat Detection will draw from real-world case studies that apply to their real-world security responsibilities. Ultimately, readers will come away with an understanding of how surveillance detection at a high-value, f...

  16. The Role of Hackers in Countering Surveillance and Promoting Democracy

    Directory of Open Access Journals (Sweden)

    Sebastian Kubitschko

    2015-09-01

Full Text Available Practices related to media technologies and infrastructures (MTI) are an increasingly important part of democratic constellations in general and of surveillance tactics in particular. This article does not seek to discuss surveillance per se, but instead opens a new line of inquiry by presenting qualitative research on the Chaos Computer Club (CCC), one of the world's largest and Europe's oldest hacker organizations. Despite the longstanding conception of hacking as infused with political significance, the scope and style of hackers' engagement with emerging issues related to surveillance remain poorly understood. The rationale of this paper is to examine the CCC as a civil society organization that counteracts contemporary assemblages of surveillance in two ways: first, by deconstructing existing technology and by supporting, building, maintaining and using alternative media technologies and infrastructures that enable more secure and anonymous communication; and second, by articulating its expertise related to contemporary MTI to a wide range of audiences, publics and actors. Highlighting the significance of "privacy" for the health of democracy, I argue that the hacker organization is co-determining "interstitial spaces within information processing practices" (Cohen, 2012, p. 1931), and by doing so is acting on indispensable structural features of contemporary democratic constellations.

  17. Intimate Surveillance: Normalizing Parental Monitoring and Mediation of Infants Online

    Directory of Open Access Journals (Sweden)

    Tama Leaver

    2017-05-01

    Full Text Available Parents are increasingly sharing information about infants online in various forms and capacities. To more meaningfully understand how parents decide what to share about young people, and how those decisions are being shaped, this article focuses on two overlapping areas: parental monitoring of babies and infants, through the example of wearable technologies, and parental mediation, through the example of the public sharing practices of celebrity and influencer parents. The article begins by contextualizing these parental practices within the literature on surveillance, with particular attention to online surveillance and the increasing importance of affect. It then gives a brief overview of work on the mediation and monitoring of pregnancy on social media and via pregnancy apps, the obvious precursor to examining parental sharing and monitoring practices regarding babies and infants. The examples of parental monitoring and parental mediation then build on the idea of "intimate surveillance", which entails close and seemingly invasive monitoring by parents. Parental monitoring and mediation contribute to the normalization of intimate surveillance to the extent that surveillance is (re)situated as a necessary culture of care. The choice not to surveil infants is thus positioned, worryingly, as a failure of parenting.

  18. Will smart surveillance systems listen, understand and speak Slovene?

    Directory of Open Access Journals (Sweden)

    Simon Dobrišek

    2013-12-01

    Full Text Available The paper deals with the spoken language technologies that could enable so-called smart (intelligent) surveillance systems to listen, understand and speak Slovene in the near future. Advanced computational methods of artificial perception and pattern recognition enable such systems to be at least to some extent aware of the environment, the presence of people and other phenomena that could be subject to surveillance. Speech is one such phenomenon, with the potential to be a key source of information in certain security situations. Technologies that enable automatic recognition of speech, of speakers, and of their psychophysical state by computer analysis of acoustic speech signals add an entirely new dimension to the development of smart surveillance systems. Automatic recognition of spoken threats, screaming and crying for help, as well as of a suspicious psychophysical state of a speaker, provides such systems with a degree of intelligent behaviour. The paper examines the current state of development of these technologies, the requirements for and feasibility of applying them to spoken Slovene, and various possible security application scenarios. It also addresses the broader legal and ethical issues raised by the development and use of such technologies, especially since audio surveillance is one of the most sensitive issues in privacy protection.
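
    The abstract's reference to detecting screams or cries for help in acoustic signals can be illustrated with a minimal short-time-energy detector. This is a hedged sketch, not the paper's method: real systems use trained acoustic models, and the frame length, threshold factor, and synthetic test signal below are all illustrative assumptions.

    ```python
    import math

    def short_time_energy(samples, frame_len=256):
        """Mean squared amplitude per non-overlapping frame."""
        return [
            sum(x * x for x in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)
        ]

    def flag_loud_frames(samples, frame_len=256, factor=4.0):
        """Flag frames whose energy exceeds `factor` times the median
        frame energy -- a crude stand-in for scream/shout detection."""
        energies = short_time_energy(samples, frame_len)
        median = sorted(energies)[len(energies) // 2]
        return [e > factor * median for e in energies]

    # Synthetic 8 kHz signal: quiet 220 Hz background with one loud 880 Hz burst.
    quiet = [0.01 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(4096)]
    burst = [0.8 * math.sin(2 * math.pi * 880 * t / 8000) for t in range(256)]
    signal = quiet[:2048] + burst + quiet[2048:]
    flags = flag_loud_frames(signal)
    print(flags.index(True))  # the burst lands in frame 8
    ```

    An energy threshold of this kind would at best serve as a cheap first-stage trigger in front of a proper speech or speaker recognizer.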

  19. Legionella spp. and legionellosis in southeastern Italy: disease epidemiology and environmental surveillance in community and health care facilities

    Directory of Open Access Journals (Sweden)

    Barbuti Giovanna

    2010-11-01

    Full Text Available Abstract Background Following the publication of the Italian Guidelines for the control and prevention of legionellosis, environmental and clinical surveillance has been carried out in Southeastern Italy. The aim of the study is to identify the risk factors for the disease, allowing better programming of the necessary prevention measures. Methods During the period January 2000 - December 2009 the environmental surveillance was carried out by water sampling of 129 health care facilities (73 public and 56 private hospitals) and 533 buildings within the community (63 private apartments, 305 hotels, 19 offices, 4 churches, 116 gyms, 3 swimming pools and 23 schools). Water sampling and microbiological analysis were carried out following the Italian Guidelines. From January 2005, all facilities were subject to risk analysis through the use of a standardized report; the results were classified as good (G), medium (M) and bad (B). As well, all the clinical surveillance forms for legionellosis, which must be compiled by physicians and sent to the Regional Centre for Epidemiology (OER), were analyzed. Results Legionella spp. was found in 102 (79.1%) health care facilities and in 238 (44.7%) community buildings. The percentages for the contamination levels <1,000 cfu/L, 1,000-10,000 cfu/L and >10,000 cfu/L were respectively 33.1%, 53.4% and 13.5% for samples from health care facilities and 33.5%, 43.3% and 23.2% for samples from the community. Both in hospital and community environments, Legionella pneumophila serogroups (L. pn sg) 2-14 were the most frequently isolated (54.8% and 40.8% of positive samples, respectively), followed by L. pn sg 1 (31.3% and 33%, respectively). The study showed a significant association between an M or B score at the risk analysis and Legionella spp. positive microbiological test results. Conclusions Our experience suggests that risk analysis and environmental microbiological surveillance should be carried out more frequently to control the environmental spread of Legionella.
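
    The three-band reporting of contamination levels in this abstract can be sketched as a small tallying routine. The band boundaries at 1,000 and 10,000 cfu/L are an assumption reconstructed from the three reported percentages (the comparison signs were stripped from the abstract), and the sample counts below are purely illustrative, not the study's data.

    ```python
    def contamination_band(cfu_per_litre):
        """Assign a Legionella count (cfu/L) to one of three bands,
        assuming thresholds at 1,000 and 10,000 cfu/L."""
        if cfu_per_litre < 1_000:
            return "<1,000"
        if cfu_per_litre <= 10_000:
            return "1,000-10,000"
        return ">10,000"

    def band_percentages(counts):
        """Percentage of samples falling in each contamination band."""
        bands = [contamination_band(c) for c in counts]
        return {b: round(100 * bands.count(b) / len(bands), 1)
                for b in ("<1,000", "1,000-10,000", ">10,000")}

    # Illustrative counts only; the study's raw sample data are not given.
    samples = [500, 800, 2_000, 5_000, 9_000, 20_000]
    print(band_percentages(samples))
    ```

    Tallies of this shape are what the Results section summarizes separately for health care facilities and community buildings.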

  20. Gender and classifiers in concurrent systems: Refining the typology of nominal classification

    Directory of Open Access Journals (Sweden)

    Sebastian Fedden

    2017-04-01

    Full Text Available Some languages have both gender and classifiers, contrary to what was once believed possible. We use these interesting languages as a unique window onto nominal classification. They provide the impetus for a new typology based on the degree of orthogonality of the semantic systems and the degree of difference of the forms realizing them. This nine-way typology integrates traditional gender, traditional classifiers and, importantly, the many recently attested phenomena lying between them. Besides progress specifically in understanding nominal classification, our approach provides clarity on the wider theoretical issue of single versus concurrent featural systems.